Identification of paramagnetic centers in irradiated Sn-doped silicon dioxide by first-principles
We present a first-principles investigation of Sn paramagnetic centers in Sn-doped vitreous silica based on calculations of the electron paramagnetic resonance (EPR) parameters. The present investigation provides evidence of an extended analogy between the family of Ge paramagnetic centers in Ge-doped silica and the family of Sn paramagnetic centers in Sn-doped silica for SnO2 concentrations below phase separation. Taking into account the larger spin-orbit coupling of Sn atoms with respect to Ge atoms, we infer that a peculiar, highly distorted three-fold coordinated Sn center (i.e. the Sn forward-oriented configuration) should give rise to an orthorhombic EPR signal, a fingerprint of which we suggest is present in the EPR spectra recorded by Chiodini et al (2001 Phys. Rev. B 64 073102). Given its structural analogy with the E′α and Ge(2) centers, we here name it the 'Sn(2) center'. Moreover, we show that a single electron trapped at a SnO4 tetrahedron constitutes a paramagnetic center responsible for the orthorhombic EPR signal reported by Chiodini et al (1998 Phys. Rev. B 58 9615), refuting the early assignment to a distorted variant of the Sn-E′ center. We hence relabel the latter orthorhombic EPR signal as the 'Sn(1) center' owing to its analogy with the Ge(1) center in Ge-doped silica.
Introduction
Ion implantation is a widely exploited experimental way to introduce point defects into a solid matrix so as to functionalize the host material [1]. For instance, Sn implantation of silica constitutes a way to make silica glass a luminescent material that can be interesting for optoelectronic or even photonic applications [2][3][4]. Moreover, Sn-doped silica glass has recently been proved to work as a dosimetric material based on the photoluminescence of Sn point defects [5]. Incidentally, we also mention that Sn-related point defects have been found [6] in irradiated, and thermally treated, nanostructured tin-silicate glass ceramics, which nowadays constitute a very active research field [7,8].
About two decades ago, it was remarked that Sn-doped silica-based fibers exhibit extremely high photosensitivity when exposed to UV light such as the 248 nm KrF excimer laser [9,10]. Such photosensitivity stems from the presence of optically active point defects [11][12][13]. In particular, absorption bands at about 4.5 eV, 4.9 eV, and 5.9 eV have been attributed to the occurrence of Sn color centers in silica [14][15][16], of which the band at 4.9 eV is universally attributed to the occurrence of twofold Sn centers, alternatively known as tin oxygen-deficient centers (Sn-ODC) of the second type, shortly Sn-ODC(II) [17][18][19]. As for the Sn-ODC of the first type, i.e. the Sn-ODC(I), its structure should consist of a ≡Sn−Si≡ dimer. Yet there is no evidence concerning the optical absorption fingerprint of either Sn-ODC(I) or Ge-ODC(I) centers [20,21]. Similarly to the 5.1 eV absorption band in Ge-doped glass [22], the 4.9 eV absorption band of Sn-doped silica is bleached under UV radiation together with a concomitant increase of absorption bands at about 4.5 eV and 5.5 eV [10,15].
Besides the 4.9 eV absorption band, twofold Sn centers in silica glass, and in other glasses, are responsible for singlet-singlet and triplet-singlet emissions at ∼4.2 eV and ∼3.1 eV, which currently make them interesting for luminescence applications [3,5,[23][24][25][26][27]]. Yet, it is not so clear what kind of paramagnetic centers could be generated in irradiated Sn-doped silica containing twofold Sn centers [28]. In the experimental study of [11], two variants of Sn-related paramagnetic centers were identified: an orthorhombic one with g1 = 1.994, g2 = 1.986, and g3 = 1.975, and an axial one with g∥ = 1.994 and g⊥ = 1.977. These centers show a huge hyperfine coupling constant (434 mT), as also observed for Sn centers in other compounds [29]. Chiodini et al [11] proposed an identification of these two kinds of paramagnetic centers with Sn-E′ centers, consisting of unpaired spins in sp3 orbitals of three-fold coordinated Sn sites, eventually decorated with a varying number of next-nearest-neighbor Sn atoms. By contrast, Nakanishi et al [15], by investigating the absorption of point defects in irradiated Ge-Sn-SiO2 glass, have shown that two distinct kinds of paramagnetic centers, Sn-E′ and (Sn4+)− centers, are induced by the irradiation. The absorption band of the Sn-E′ is indicated to be at ∼6 eV, similarly to Ge-E′ centers [30], while the absorption bands of the (Sn4+)− center are located at about ∼4.4 and ∼5.4 eV, suggesting an analogy with the Ge electron center, which features absorption bands at ∼4.5 and ∼5.8 eV [15,22], and thus supporting a non-Sn-E′ origin for the orthorhombic signal of [11]. The model originally proposed for the (Sn4+)− center by Kawazoe et al consists of an electron trapped at an octahedral Sn site [31], in analogy with the Sn coordination in the SnO2 crystal. The spin density envisaged for the (Sn4+)− center has a rather spherical symmetry, reflecting the high s-character of the center analysed in [31], and is not consistent with an orthorhombic center such as the one found by Chiodini et al [11]. Moreover, a model based on octahedral coordination does not seem very suitable for Sn point defects in silica at the very low Sn-doping levels (⩽0.4 mol%) considered in [11,16], thus further prompting a dedicated theoretical investigation of Sn paramagnetic centers in silica.
First-principles electron paramagnetic resonance (EPR) calculations, e.g. based on the linear-response approach of [32], have become a standard tool in many computational investigations of point defects in solids and have allowed for safely grounded interpretations of experimental data, leading to the formulation of new models, as in the case of the E′α and Ge(2) centers in silica [33,34]. Concerning Sn-related paramagnetic centers in silica, as far as we know only the so-called H(III) centers, i.e. paramagnetic hydrogenated twofold Sn centers (= Ṡn−H), have been studied by means of a first-principles approach [35]. In this work, we present a first-principles investigation of the EPR parameters of Sn paramagnetic centers (with spin S = 1/2) in vitreous silica (v-SiO2), aiming at identifying the origin of the orthorhombic EPR signal reported in [11]. We deduce, by analysing the g-value distributions obtained for a large set of defect configurations, that the orthorhombic EPR signal [11] arises from a single trapped electron (STE) center at a SnO4 tetrahedron, which is an analogue of the Ge(1) center [36], and hence we here name it the 'Sn(1) center'. In such a configuration, the unpaired spin is localized at a distorted SnO4 tetrahedron. On the basis of our calculations we also suggest that Sn forward-oriented (Sn-FO) configurations should give rise to a paramagnetic center, here named the 'Sn(2) center', in which the unpaired spin is localized at a three-fold Sn atom featuring a weak long bond with a three-fold O atom, and of which a fingerprint may be discernible in the EPR spectra published in [11,37,38]. The Sn-FO may alternatively be thought of as a structure obtained through the relaxation of an ionized twofold Sn atom [33]. We thus establish not only that Sn-E′ centers are the analogue of the E′γ (i.e. Si-E′) and Ge-E′ centers in silica, but also that the orthorhombic EPR signal [11], i.e. the Sn(1) center, has to be considered the Sn counterpart of the Ge(1) center in silica [33], and is not due to a variant of the Sn-E′ center as supposed in [11]. Furthermore, the present investigation brings strong support in favor of an extended structural analogy between paramagnetic centers along the isoelectronic series in pure silica, Ge-doped silica, and Sn-doped silica [15,17,33].
Theoretical methodology and modeling details
The calculations carried out in this work are based on density functional theory. The Perdew-Burke-Ernzerhof exchange-correlation functional has been adopted [39]. Norm-conserving Troullier-Martins pseudopotentials with gauge-including projector augmented wave (GIPAW) reconstruction are used [40,41], and the Kohn-Sham wavefunctions are expanded in a basis of plane waves up to a cutoff energy of 80 Ry. The Γ point was used for sampling the Brillouin zone [42]. Geometry optimizations and EPR parameters have been obtained by means of spin-polarized calculations as implemented in the Quantum-Espresso (QE) package [43,44]. The EPR parameters are calculated by exploiting the GIPAW method as available in the QE package [32]. Tin isotropic hyperfine couplings (Fermi contacts) are obtained by including a scalar relativistic correction [45][46][47], while core-relaxation effects are not considered, as they bring just a minor correction of ∼2%. Hyperfine calculations have been carried out using, for the nuclear g-factors, the values given for stable isotopes as reported in [48] (in particular, for 119Sn we used gN = −2.09456). In nature, tin is found as several stable isotopes, of which those carrying a 1/2 nuclear spin are 115Sn with 0.34% natural abundance, 117Sn with 7.68% natural abundance, and 119Sn with 8.59% natural abundance. Yet, as the nuclear g-factors of 117Sn (−2.00208) and 119Sn (−2.09456) are quite close, and the concentration of 115Sn is negligible, in this paper we report Fermi contact results based on 119Sn only. In fact, the average Fermi contacts obtained with all isotopes weighted according to their natural abundance show only minor differences (e.g. ∼2% in the case of Sn-E′ centers) with respect to results assuming the occurrence of the 119Sn isotope only.
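As a quick numerical check of this statement, the following sketch (ours, not from the paper) computes the abundance-weighted nuclear g-factor over the two dominant spin-1/2 tin isotopes; since the Fermi contact Aiso scales linearly with gN, the relative change directly gives the quoted ∼2% effect.

```python
# Abundance-weighted nuclear g-factor of the spin-1/2 Sn isotopes quoted above.
# 115Sn (0.34% abundance) is neglected, as in the paper.
abundance = {"117Sn": 7.68, "119Sn": 8.59}         # natural abundance (%)
g_factor = {"117Sn": -2.00208, "119Sn": -2.09456}  # nuclear g-factors [48]

w_total = sum(abundance.values())
g_avg = sum(abundance[i] * g_factor[i] for i in abundance) / w_total

# A_iso is proportional to g_N, so the relative shift of the averaged Fermi
# contact with respect to a pure-119Sn calculation is:
rel_shift = (g_avg - g_factor["119Sn"]) / g_factor["119Sn"]
print(f"<g_N> = {g_avg:.5f}, shift vs 119Sn = {rel_shift:.1%}")  # about -2%
```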
Preliminary g-tensor test calculations have been carried out on the free radical SnH3, for which we calculate Δg⊥ = g⊥ − ge = 46 780 ppm, which overestimates, by ∼9.6%, the DFT result of [49] based on the gauge-including atomic orbitals approach, and is in excellent agreement with the ZORA result (47 031 ppm) obtained in [50]. In the presence of such considerably large deviations of the g principal values from the free-electron value ge, reference [51] suggests that the non-perturbative 'converse' approach [52,53] could show an improved convergence with system size with respect to linear-response based methods. We have therefore run several test calculations for selected Sn-E′ and STE configurations, and found that the g principal values obtained through the linear-response approach [32] overestimate by a shift of ∼1000 ppm the g-values obtained with the non-perturbative approach [52] (see table S1 in the supplementary data file). The latter shifts, which may roughly correspond to the finite-size effect on the g-tensor, only weakly affect the relative differences g12 = g1 − g2 and g13 = g1 − g3 employed here for the identification of the defects. In fact, g12 and g13 are identical between the two approaches [32,52] within ∼3%. Yet, [49] points out that for a correct treatment of spin systems involving high atomic numbers [54][55][56], higher orders in the spin-orbit splitting should be included, instead of terms up to first order in the magnetic field only. For the latter reason we may expect both methods [32,52] to present sizeable (a few ppt) deviations for the g-tensor of an S = 1/2 spin system involving an atomic species with a rather large atomic number such as tin. Other sources of systematic errors in the calculation of the gi values (up to ∼2000 ppm) and of the hyperfine constant (a few percent) of the tin EPR centers under investigation may come from the choice of the DFT flavour and from pseudopotential generation details, e.g. the inclusion in valence of semicore states (see table S2 in the supplementary data file). Moreover, for the smallest supercell size used in this work, minor size effects on the hyperfine couplings and g-tensor principal values are assessed to be up to ∼4% and ∼1000 ppm, respectively (see tables S3, S4 and figures S1, S2 in the supplementary data file). However, we consider the full analysis of the systematic errors mentioned above to be beyond the scope of the present paper.
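The insensitivity of g12 and g13 to such rigid offsets can be verified trivially; in the sketch below (ours; the principal values used are hypothetical), a uniform ∼1000 ppm shift of all three principal values leaves both relative differences unchanged.

```python
# A rigid shift applied to all g principal values cancels exactly in the
# relative differences g12 = g1 - g2 and g13 = g1 - g3.
g = (1.9960, 1.9863, 1.9766)   # hypothetical principal values
shift = 1000e-6                # ~1000 ppm offset between the two approaches

g12 = lambda t: t[0] - t[1]
g13 = lambda t: t[0] - t[2]

g_shifted = tuple(x + shift for x in g)
assert abs(g12(g) - g12(g_shifted)) < 1e-12
assert abs(g13(g) - g13(g_shifted)) < 1e-12
print(f"g12 = {g12(g):.4f}, g13 = {g13(g):.4f} (unchanged by the shift)")
```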
The two periodic Sn-doped silica supercells used in this work contain 108 atoms and 144 atoms at the experimental density (2.2 g cm−3) and are hereafter referred to as model I and model II, respectively. The original silica models I and II were previously employed to investigate silicon oxygen-deficient centers (Si-ODC) in v-SiO2 [34,57,58] and later used in [33] for the discussion of Ge paramagnetic centers in Ge-doped silica. On the one hand, the chosen models allow us to carry out test comparisons along the isoelectronic series Si, Ge, Sn for the very same configurations; on the other hand, by using two models we considerably increase the number (statistics) of forward-oriented and STE configurations with respect to those analysed in [33,58].
In particular, we generated Sn-ODC configurations of the Sn-E′ kind by replacing with tin the silicon (or germanium) atom at the three-fold coordinated site carrying the unpaired spin in the ODC configurations of previous investigations [33,58]. In this way we obtained more than a hundred Sn-E′ (puckered and unpuckered [58]) configurations and about twenty Sn-FO configurations. Moreover, to generate tin STEs in model I and model II, we took as starting configurations the LDA ground states of [59] and [60,61], respectively. By replacing each Si with a Sn atom and adding an electron to the system we obtain, after a first-principles relaxation of the structure, a tin STE configuration, so that in total we generated 84 tin STE configurations in these models [62]. We relaxed the structure of each configuration using the Broyden-Fletcher-Goldfarb-Shanno algorithm [43]. A force threshold of 0.0005 Ry/bohr has been adopted. Tests carried out on Sn-E′ configurations using a variable-cell relaxation scheme to relax the simulation cell lead to just minor changes in the g values (e.g. for g12 and g13 we register variations of ∼5%), not affecting the conclusions of the present work. We have also carried out EPR parameter calculations for a few positively charged Sn-Si dimer configurations, obtained as explained above from the Si-Si dimer [i.e. Si-ODC(I)] configurations of [20,58], which show Aiso(119Sn) ⩽ 100 mT, thus indicating that Sn-Si dimers are not relevant to the discussion of the Sn-E′ variants of [11,37].
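For illustration, the substitution-plus-relaxation workflow described above could be scripted as in the following hedged sketch using ASE (this is not the authors' actual tooling; the file names, the substituted atom index, and the pseudopotential labels are hypothetical, while the 80 Ry cutoff, the spin polarization, and the added electron follow the setup described in the methods).

```python
# Build a tin STE input: substitute one Si with Sn in a relaxed silica model,
# add one electron (tot_charge = -1), and request a spin-polarized relaxation.
from ase.io import read, write

atoms = read("silica_model_I.xyz")                      # hypothetical model file
si_sites = [a.index for a in atoms if a.symbol == "Si"]
atoms[si_sites[0]].symbol = "Sn"                        # hypothetical site choice

qe_input = {
    "control": {"calculation": "relax"},
    "system": {
        "ecutwfc": 80.0,           # Ry, as in the paper
        "nspin": 2,                # spin-polarized, S = 1/2
        "tot_magnetization": 1.0,  # one unpaired electron
        "tot_charge": -1.0,        # one extra electron (the trapped electron)
    },
}
pseudos = {"Si": "Si.pbe-tm-gipaw.UPF",  # hypothetical pseudopotential files
           "O": "O.pbe-tm-gipaw.UPF",
           "Sn": "Sn.pbe-tm-gipaw.UPF"}
write("sn_ste.pwi", atoms, format="espresso-in",
      input_data=qe_input, pseudopotentials=pseudos)
```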
Table 1. Geometrical parameters (Sn-O and Sn-H bond lengths, and O-Sn-O angle) and absolute values of the Aiso(H) and Aiso(119Sn) hyperfine couplings of H(III) paramagnetic centers as calculated in this work (TW) and as obtained in [35] using DFT-B3LYP for the atomic cluster [(H3SiO)2Sn-H]. We also report the experimental hyperfine splitting given in [17] for H(III) centers. Standard deviations are given in parentheses.
EPR parameters of H(III) centers
As far as we know, Sn-related paramagnetic defects, e.g. Sn-E′ centers in silica, have not been theoretically studied (in particular not using large periodic supercells) apart from [35], where twofold Sn and H(III) centers [17] were investigated using DFT/HF-based calculations on the atomic cluster [(H3SiO)2Sn-H]. In particular, the H(III) centers were attributed to the occurrence of Sn atoms coordinated by two oxygens and a hydrogen atom [35,63]. In this work, prior to the study of Sn-E′ centers, we investigate H(III) centers as a preliminary study for which comparisons can be carried out with both experiments and previous theoretical calculations [17,35]. Using the twofold Ge/H(II) configurations discussed in [34], we could generate H(III) configurations in silica model I after replacing the Ge atom with a Sn atom, as mentioned above in the methods section.
Concerning the geometrical parameters of the ground state of H(III) paramagnetic centers, the all-electron DFT-B3LYP results of [35] indicate a Sn-O bond length of 1.95 Å, close to the one reported for the twofold Sn atom (∼1.9 Å) [35], while the Sn-H length is 1.79 Å, with an O-Sn-O angle of 94°. The hyperfine coupling constant Aiso(H) calculated at the DFT-B3LYP level in [35] is 14.7 mT, close to the experimental value of 15.0 mT from [17]. In table 1 we show the results of our calculations for H(III) centers and compare them to [35] and [17]. In our calculation, the Sn-O bond length is slightly (∼3%) longer (i.e. 2.02 Å), while the Sn-H bond length is slightly (∼1%) shorter than in [35], whereas the O-Sn-O angle is appreciably wider (∼5%) than given in [35]. The average Aiso(H) calculated for our H(III) configurations (15.4 mT) is in fair agreement with the experimental findings (15.0 mT) of [17] and with the cluster calculations of [35], thus supporting our DFT setup for the investigation of the Sn-E′ centers discussed in the next sections. Furthermore, the calculated hyperfine coupling constant Aiso(119Sn), although quite large (∼300 mT), consistent with ESR data of tin-centered radicals [64,65] which show very large isotropic hyperfine splitting constants (∼200 mT), is considerably smaller than reported by Chiodini et al [11,16], thus further ruling out the possible occurrence of H(III) centers in those experiments. By contrast, the average g principal values we calculate for our H(III) configurations, g1 = 2.0029, g2 = 1.9932, g3 = 1.9774, correspond to g12 = 0.0096 and g13 = 0.0255, values not so different from those calculated for the STE and Sn-E′ centers (see next section); thus, if no other information on the hyperfine coupling constants were available, they would not allow excluding the presence of H(III) centers in the experiments of [11,16].
Structural properties of the Sn-E′-like and Sn-FO configurations
In Sn-doped v-SiO2, the Sn-O bond length in the Sn-E′-like configurations is found to be on average ∼2.00 Å, with a standard deviation (std) of 0.02 Å, slightly shorter than the experimental estimate of ∼2.05-2.09 Å in rutile SnO2 [66,67], which features edge- and corner-sharing SnO6 octahedra. The O-Sn-O bond angles are slightly narrower (∼102°) than reported for Ge-E′ in silica [33].
As compared to the case of the Si-FO [58,68], the structure of the Sn-FO configurations is quite strained and, at first sight, considerably more similar to that of a twofold coordinated Si/Ge atom [33]. The difference between the Si-FO and Sn-FO structures is especially evident when looking at the O[3]-Sn and O[3]-Si bond distances formed by the three-fold coordinated oxygen (O[3]): in the Sn-FO structure, the O[3]-Si distances are slightly longer (∼1.75 Å) than the usual Si-O bond in silica (1.6 Å), while the O[3]-Sn distance is remarkably longer (∼2.3 Å). By contrast, for the Si-FO structure all the O[3]-Si bond distances are ∼1.8 Å [58]. The two normal Sn-O bonds of the Sn-FO have a bond length of 1.98 (0.03) Å, while the bond between the three-fold Sn and the three-fold O is remarkably longer (∼15%) (figure 1(b)). The spread of the latter bond length (0.06 Å) is double that found (0.03 Å) for the corresponding bond in Si-FO configurations in pure silica [58], thus suggesting an increased structural disorder of the Sn-FO configurations. The average O-Sn-O angle in Sn-FO configurations is about 97.6°, considerably smaller than the value associated with sp3 hybridization (i.e. 109.47°). More specifically, in these configurations the lowest O-Sn-O angle is about 87.4°, with a standard deviation of 6.6°. The largest O-Sn-O angle (between the two normal Sn-O bonds) and the remaining one are on average ∼106.9° and 98.4°, with standard deviations of ∼6.9° and 7.8°, respectively.
The spin density of the Sn-E′-like configuration is mainly localized on the three-fold Sn atom and, to a minor extent, equally shared among the three oxygen nearest neighbors (figure 1(a)), consistently with the typical shape of the spin density in Si-E′ and Ge-E′ centers in v-SiO2 [33,58,69]. By contrast, for the Sn-FO configuration, the sharing of spin density differs among the three oxygen nearest neighbors (figure 1(b)), and it is indicative of an orthorhombic center.

EPR parameters of Sn-E′-like and Sn-FO configurations

In table 2 we show the results of the 119Sn Fermi contact [Aiso(119Sn)] and g-tensor calculations of Sn-E′-like and Sn-FO configurations in models I and II of Sn-doped silica. The Sn-E′-like configurations (figure 1(a)) give rise to broad distributions of gi values (figure S2 in the supplementary data file), whose averages are compared in table 2 to the experimental data reported for the Sn-E′ (axial) center in Sn-doped silica [11,24,37]. At variance with [11,37], our results indicate for the Sn-E′ center a slight orthorhombicity (near axiality), analogous to the one previously observed for Ge-E′ centers in Ge-doped silica glass [71] and quartz [72]. The average g1, g2, and g3 values suffer from an overestimation of ∼5000 to ∼10 000 ppm with respect to the experimental estimates. Incidentally, we note that these overestimations exhibit a tendency to increase along the isoelectronic series of Si-E′ [58], Ge-E′ [33], and Sn-E′ centers. Considerably large (∼1000 to ∼10 000 ppm) overestimations of gi principal values have also been reported for calculations of other S = 1/2 centers in wide-bandgap oxides/nitrides involving atomic species of the fourth and fifth rows of the periodic table [73,74]. The overestimation errors on gi, however, tend to cancel out when considering the g12 and, above all, the g13 values, for which we find a good agreement with the most recent experiment [24] and still a fair agreement with the axial Sn-E′ of [11]. The calculated Aiso(119Sn) of Sn-E′-like configurations on average underestimates by only ∼5% (21 mT) the huge experimental value of 434 mT given in [11] (table 2). The dipolar (anisotropic) terms for Sn-E′ centers are quite small (|Adip|/Aiso ∼6%) as compared to the isotropic (Fermi contact) term, as is typical of E′ centers in silica [58].
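For reference, the experimental principal values quoted in the introduction [11] translate into the g12 and g13 descriptors used throughout this paper as in the short sketch below (ours; the input numbers are those reported in [11]).

```python
# g12 and g13 for the two experimental Sn signals of Chiodini et al. [11].
signals = {
    "orthorhombic [11]": (1.994, 1.986, 1.975),  # g1, g2, g3
    "axial [11]": (1.994, 1.977, 1.977),         # g_par, g_perp, g_perp
}
for name, (g1, g2, g3) in signals.items():
    print(f"{name}: g12 = {g1 - g2:.4f}, g13 = {g1 - g3:.4f}")
# orthorhombic: g12 = 0.0080, g13 = 0.0190
# axial:        g12 = 0.0170, g13 = 0.0170
```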
In figures 2(a) and (b), for each Sn-E′ and Sn-FO configuration we show Aiso(119Sn) plotted versus the g12 and g13 values. The plots, as previously found for Ge-doped silica [33], reveal the existence of two distributions of EPR parameters: the first one pertains to the Sn dangling bond as found e.g. in puckered configurations [58] and corresponds quite well to the axial Sn-E′ center [11]; a second distribution, with markedly different average g12 and g13 values (table 2), arises from Sn-FO configurations. Both distributions show large variations in the hyperfine couplings: Aiso(119Sn) varies from ∼320 to ∼550 mT. The spreads of the distributions shown in figure 2 reflect the different local bonding environments experienced by each configuration [33,58].
In analogy with the case of Ge-doped silica [33], figures 2(a) and (b) strongly suggest that Sn-E′-like and Sn-FO configurations should give rise to two well-distinct EPR signals. In particular, the Sn-FO configurations should manifest themselves through a not-yet-identified EPR signal, hereafter named the Sn(2) center. Moreover, figure 2 indicates that the orthorhombic signal of [11,37] cannot be attributed to Sn-FO configurations. In fact, for the latter, g13[Sn-FO] is about twice that of the orthorhombic EPR signals of [11,37]. Furthermore, for the vast majority of our Sn-E′-like configurations, the calculated g12[Sn-E′] is ∼30%-50% larger than reported for the orthorhombic Sn-E′ variant [11,37], thus making a Sn-E′ center (figure 1(a)) a not so likely explanation of the orthorhombic signal [11,37]. In addition, we have also checked the hypothetical origin of the orthorhombic signal as due to the distortions arising when other Sn atoms are included in the second-nearest-neighbor shell of the central threefold Sn atom [11]. In fact, we have calculated the g-tensor for a representative Sn-E′ configuration where we considered a varying number (i.e. zero, one, and two) of Sn next-nearest neighbors of the central threefold Sn atom. Upon replacing a Si next-nearest neighbor with a Sn atom, the Sn-O bonds of the Sn-E′ center are uniformly elongated by ∼0.01 Å, while the average O-Sn-O angle only slightly decreases, by ∼1°. Upon a second Si-with-Sn replacement we register again similar variations, with a final average Sn-O bond of ∼2.02 Å and an O-Sn-O angle of 101.1°. As shown in table 3, the symmetry of the g-tensor is only weakly affected by the number of Sn neighbor atoms. In particular, both g12 and g13 are always larger than ∼0.0162, which undermines the interpretation of the orthorhombic Sn-E′ as due to a varying number of Sn next-nearest neighbors of a threefold Sn center (figure 1(a)).
STE configurations at a fourfold coordinated Sn site: the Sn(1) center
In the tin STE configurations (figure 3), the SnO4 tetrahedron is distorted from the ideal geometry (point group Td) as a consequence of the localization of an extra electron, in close analogy with the electronic structure of the Ge(1) center in Ge-doped silica [33,36]. The vast majority (∼80%) of the tin STEs calculated here are markedly of the orthorhombic kind, hereafter also called the 'Griscom' kind [75,76], shown in figure 3(a). In the Griscom STE, following the capture of the electron at a SnO4 tetrahedron, one O-Sn-O angle opens up (∼140-170°), with the two Sn-O bonds becoming slightly longer (∼2.1 Å) with respect to the usual bond length in Sn-E′ (∼2 Å). In figure 3(b) we show the structure and spin density of another kind of geometry for the tin STE configuration. Here the SnO4 tetrahedron is also quite strained, resembling a pyramid, with oxygen atoms at the vertexes and the Sn atom at the center of the basis (i.e. lying on the O-O-O plane, with O-Sn-O angles smaller than 140°), and with the spin density of the unpaired electron localized around the Sn atom, on the basis (O-O-O) of the pyramid.

In table 2 we give the average g-values and the Aiso(119Sn) Fermi contact calculated for the STE configurations in our Sn-doped silica models (figure 4). As seen for the Sn-E′ and Sn-FO, the absolute values of Aiso(119Sn) calculated for the tin STE configurations are huge, on average ∼490 mT, about 20% larger than those calculated for the Sn-E′ configurations. Such a relative difference corresponds to the relative difference between the Fermi contacts of the Ge(1) and Ge-E′ centers [33]. The dipolar (anisotropic) terms calculated for the tin STE configurations are even smaller (|Adip|/Aiso ∼1.3%) than those found for the Sn-E′ centers. This is very close to the weight of the dipolar terms as calculated for the Ge(1) center in silica (∼2%) [33,69]. The Aiso values given in table 2 indicate a strong component of the Sn 5s orbital in the unpaired electron wave function. In terms of the total wave function, the contribution of the 5s orbital (based on the formula Cs² = Aiso/As, where As refers to the free-atom value [77]) is around ∼30%-40% [11,31] for the tin STE, to be compared with the estimate of 35% given by Watanabe et al [77] for the Ge center.
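Since the free-atom reference As is not quoted in the paper, one can simply invert the numbers given above; the sketch below (ours) shows which As values are implied by the quoted 5s weights.

```python
# Invert C_s^2 = A_iso / A_s using the quoted average STE Fermi contact.
A_iso = 490.0                    # mT, average tin STE value (this work)
for cs2 in (0.30, 0.35, 0.40):   # quoted range of 5s weights
    print(f"C_s^2 = {cs2:.2f} -> implied A_s = {A_iso / cs2:.0f} mT")
# The implied free-atom value falls in the ~1200-1600 mT range.
```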
Figure 4(b) shows the existence of a trend between the Aiso(Sn) and g13 values in the STE configurations. Such a trend is related to a structural variation of the SnO4 tetrahedron, in analogy with what was previously remarked for Ge-doped silica [33]. In fact, in figures 5(a) and (b) we show the existence of a dependence of the g12, g13, and Aiso(Sn) values on the widening of an O-Sn-O angle of the tin STE (figure 3(a)). Figures 5(a) and (b) confirm and generalize the analogous relation described for the Ge(1) centers in [33], which was, however, based on a very small number of configurations. For O-Sn-O angles smaller than 140°, one can find several STE configurations that can be classified as quasi-axial or nearly axial, as shown in figure 3(b). Quasi-axial STE configurations tend to show larger Aiso and also larger g1 values (by ∼1000 ppm) than Griscom's STEs. On average, quasi-axial tin STEs feature the following g-tensor principal values: g1 = 2.0035, g2 = 1.9936, and g3 = 1.9881.
Discussion
In Ge-doped silica, the Ge(1) and Ge-E′ centers are universally attributed to an electron localized at a four-fold Ge atom and to a Ge paramagnetic defect analogous to the E′γ in SiO2, respectively [33,75,78]. The origin of the Ge(2) defect, detected both in Ge-doped SiO2 [22,71,[79][80][81][82][83] and in high-purity v-GeO2 [84], has been very controversial for a long time and was only recently clarified thanks to first-principles calculations [33]. In the present paper, by exploiting the paramagnetic center configurations generated in the adopted models I and II of v-SiO2 [33,58], we show that the EPR signals due to the Sn paramagnetic defects discussed in [11,37] have to be ascribed to two different kinds of Sn point defects. A first kind, corresponding to the axial EPR signal in [11,37], has to be attributed to a defect analogous to the E′γ in SiO2, i.e. a Sn-E′ center, which is a hole trap (figure 1(a)). A second kind, responsible for the orthorhombic EPR signal in [11,37], has to be attributed to a point defect analogous to the Ge(1) center, consisting of an electron trapped at a SnO4 tetrahedron, i.e. a Sn(1) center (figure 3). We note that such an electron center, the Sn(1) center, provides a more likely explanation for the absorption bands at ∼4.4 and ∼5.4 eV than the model put forward by [15], i.e. the (Sn4+)− center, which requires a sixfold coordination for the tin atom that is never realized in our calculations and seems very unlikely at low tin concentrations in silica. The considerable disorder broadening in figure 3(c) of [11] was regarded as attributable to the expected occurrence of a subset of low-symmetry Sn-E′ variants arising from Sn sites coordinated through bridging oxygen atoms bound to three inequivalent atoms, in particular to combinations such as two Si and one Sn, or one Si and two Sn. We note that the former would involve two close Sn atoms (the central one hosting the unpaired spin) and the latter a cluster of three nearby Sn atoms, which at low Sn concentrations appears very questionable. Moreover, by direct testing (table 3) we could rule out the relevance of these configurations as far as the orthorhombic EPR signal of [11,37] is concerned. The Sn(1) center adds to the known series of electron centers trapped at a tetrahedral impurity in silica, together with e.g. the P2 and Ge(1) centers [33,76,85]. The existence of another similar electron center, which we here label the Si(1) center, was put forward on the basis of theoretical calculations by El-Sayed et al [78], but has not yet been confirmed by experimental EPR investigations.
As far as the results of [11,24,37] are concerned, we also suggest that the analogy between Sn-doped silica and Ge-doped silica can be pushed further, and that a Sn(2) center may exist (figure 1(b)), though less abundant than the Sn(1) and Sn-E′ centers, being the tin analogue of the Ge(2) and E′α centers. From an experimental point of view, the EPR spectrum of the Sn(2) center could be obtained after the subtraction of an appropriate fraction of the Sn(1) and Sn-E′ signals, similarly to the procedure usually adopted for obtaining the Ge(2) spectrum. The width of the EPR signal provides the relative difference g13 = g1 − g3 between the g1 and g3 values:

g13 = g1 − g3 = (hν/β)(1/H1 − 1/H3),    (1)

where β is the Bohr magneton, ν is the microwave frequency, and H1 and H3 are the field positions of the main positive peak and of the farthest negative peak, respectively. Hence, the g13 value can be known with very good accuracy, as it corresponds to the whole width of the Ge(2), or Sn(2), EPR signal and is not biased by offset errors that could affect the experimental estimate of the g principal values. By contrast, the g12 = g1 − g2 value is obtained by finding the first zero-crossing value on the high-field side of the main peak [76,83]. This procedure, however, can lead to a considerable position error in the g2 values seen in experiments [83,86]. From Chiodini's data of [37], by using equation (1), and by assuming that the small dimple located at ∼350 mT in figure 2 of [37] is a fingerprint of the Sn(2) center, we estimate, with ν = 9.6 GHz, a g13 value of ∼0.0360, which is within one std of the average g13 value (0.0394) given in table 2 for the Sn-FO configurations. We note that in [33] the calculated g13 of Ge-FO configurations coincides with the experimental g13 value of the Ge(2) center and is clearly larger (double) than the g13 values of other centers like the Ge-E′ and Ge(1) centers. Hence, we infer that the calculated g13 values obtained in this work for the Sn-FO should be very close to those that one could extract from the experimental spectra for the here-identified Sn(2) center.

Table 4. Isoelectronic series (X = Si, Ge, Sn) of several point defects in silica as nowadays known from the present investigation and from [11,17,33,35,58,78,88]. The dotted Ẋ is used to indicate the atom hosting the unpaired electron. The dashes − and -- indicate the bond with a bridging oxygen atom and with a threefold-coordinated oxygen atom, respectively. The * indicates centers still under debate, not yet observed with EPR.

The larger spin-orbit coupling λSn of Sn with respect to λGe of Ge suggests that

g13[Sn(2)] = (λSn/λGe) g13[Ge(2)],

and also that [11,33]

λSn/λGe ≈ g13[Sn-E′]/g13[Ge-E′] ≈ 2.5.

Taking into account that for Ge(2) the g13 is 0.0143 [71], we may roughly estimate for the Sn(2) center:

g13[Sn(2)] ≈ 2.5 × 0.0143 ≈ 0.036.

This value is quite close to the g13 values calculated from first principles for the Sn-FO configurations (table 2) and also to the estimate obtained above using equation (1). The larger spin-orbit coupling of Sn with respect to Ge also supports a non-zero value for the g23 = g2 − g3 difference in Sn-E′ centers.
In fact, for Ge-E′ centers an experimental estimate [71] of g23 = 0.0010 was given, and, following similar reasoning as above, one could then estimate for Sn-E′ centers a g23 ∼ 0.0025, which is not so far from the experimental g23 = 0.0042 of [24]. Moreover, our calculations (table 2) further support a slightly orthorhombic (nearly axial) g-tensor for the Sn-E′ centers, rather than the ideal axial symmetry that was assumed in [11].
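As a worked illustration of equation (1) (a sketch of ours): with ν = 9.6 GHz and the ∼350 mT dimple taken as H3, a main-peak position H1 near 344 mT (a hypothetical value, since H1 is not quoted in [37]) reproduces a g13 of the magnitude estimated above.

```python
# g13 from the two resonance fields via g_i = h*nu / (beta * H_i), eq. (1).
h = 6.62607e-34        # J s, Planck constant
beta = 9.27401e-24     # J/T, Bohr magneton
nu = 9.6e9             # Hz, X-band frequency used in [37]

def g(H_mT):
    """g-value at resonance field H (in mT)."""
    return h * nu / (beta * H_mT * 1e-3)

H1, H3 = 343.7, 350.0  # mT (H1 hypothetical; H3 is the dimple discussed above)
print(f"g13 = {g(H1) - g(H3):.4f}")  # ~0.036, matching the estimate in the text
```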
Concerning the (neutral) defect precursors of Sn-E′ centers, Hayakawa et al [24] suggested that a conversion of twofold Sn centers to Sn-E′ centers could take place in x-ray irradiated 5SnO2-95SiO2 glass. Hayakawa et al [24] also supposed that the conversion from twofold Sn centers to Sn-E′ centers should be similar to the one observed in irradiated Ge-doped SiO2 glass, where the optical absorption due to the twofold Ge, i.e. the germanium lone pair centers (GLPC), is bleached under irradiation with the concomitant generation of Ge(1), Ge-E′, and Ge(2) centers [22,33]. The present investigation further supports the radiation-induced generation mechanisms proposed by [24], since a strict analogy exists in silica between the family of Ge paramagnetic centers and that of Sn paramagnetic centers (table 4). The Sn-FO, in analogy with the Si-FO and Ge-FO configurations, should be generated by the ionization of a twofold coordinated Sn atom and the subsequent relaxation of the defect structure [33,58]. It is, however, quite likely that the relative concentration ratios of the Sn(1), Sn-E′, and Sn(2) centers will differ from those observed for the Ge centers. A precise evaluation of these ratios is beyond the scope of the present paper.
Conclusion
In conclusion, our study, by analyzing the EPR parameter distributions of a large number of defect configurations, provides evidence for an assignment of the orthorhombic EPR signal found by Chiodini et al [11] to a Sn(1) center, structurally analogous to the Ge(1) center [33] and consisting of an unpaired electron trapped at a distorted SnO4 tetrahedron. Furthermore, we argue that a Sn(2) center arising from Sn-FO configurations should also appear in the EPR spectra, and in particular we suggest its presence in the EPR spectra of [37]. More specifically, the Sn(2) center arises from an unpaired spin localized at a three-fold Sn atom which is bonded to a three-fold oxygen atom through a weak Sn-O bond, ∼0.3 Å longer than the usual Sn-O bonds of Sn-E′ centers in Sn-doped silica.
The successful identification, by means of first-principles calculations of EPR parameters, of the two Sn paramagnetic defects in Sn-doped silica, the Sn(1) and Sn(2) centers, allows us to infer that an extended analogy exists between the family of Ge paramagnetic defects and that of Sn paramagnetic defects at low Sn doping content. Moreover, the larger number of Sn(1) configurations analyzed here with respect to the Ge(1) configurations of [33] allows us to infer a better description also of the electronic structure of the Ge(1) center, which, similarly to the Sn(1) center, should arise mainly from markedly orthorhombic STE configurations (figure 3(a)).
Yet, given the higher ionicity of the Sn-O bond with respect to the Ge-O bond, and given the likely smaller energy barriers for defect diffusion and interconversion in Sn-doped silica with respect to Ge-doped silica, the thermal and aging behavior of the Sn(1) and Sn(2) centers may differ substantially from that observed for the analogous Ge-related centers. We hope that the latter issue, as well as the optical activity (absorption/emission, etc.) of the Sn-related defects, will be the subject of future investigations, which will be necessary for a complete overview and understanding of Sn point defects in silica and will eventually support a wider usage of Sn-doped silica for applications in fiber optics and microelectronics.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Figure 1. Ball-and-stick models [70] and spin densities (shadowed) of (a) a Sn-E′ configuration and (b) a Sn forward-oriented (Sn-FO) configuration. O atoms (red), Si atoms (light brown), and Sn atoms (light grey) are shown. The principal directions of the g-tensor are shown with blue arrows. For plotting the spin density, an isovalue corresponding to 5% of the grid maximum was used. Approximately, the local symmetry (rotation) axis of the threefold-coordinated Sn center provides the principal direction (v1) relative to g1 ∼ ge in table 2.
Figure 2. Calculated |Aiso(119Sn)| Fermi contacts of Sn-E′-like (blue squares) and Sn-FO (green filled squares) configurations in models I and II, plotted vs (a) g12 = g1 − g2 and (b) g13 = g1 − g3, where gi are the g-tensor principal values. Experimental data for the orthorhombic EPR signal (vertical dot-dashed line [11] and dotted line [37]) and the axial EPR signal (red disc [11], red filled square [37]) are shown.
Figure 3. Spin densities (shadowed) and principal directions (blue arrows) of the g-tensor of (a) a tin STE configuration of the orthorhombic 'Griscom' type, where the principal direction (v1) relative to g1 ∼ ge is along the bisector of the wide O-Sn-O angle (159.7°), and (b) a tin STE configuration of the 'quasi-axial' kind featuring a widest O-Sn-O angle of 131.8°, where the principal direction (v1) relative to g1 ∼ ge is almost parallel to a Sn-O bond.
Figure 4. |Aiso(119Sn)| vs g12 and g13, as calculated here by first principles for tin STE configurations in model I and model II (blue squares). Experimental data for the axial Sn-E′ (red [11] and blue [37] discs) and the orthorhombic EPR signals (red [11] and blue [37] vertical lines) are shown.
Figure 5. Calculated (a) g12 (empty squares) and g13 (filled squares) values, and (b) Fermi contact Aiso(119Sn), plotted vs the widest O-Sn-O angle of the Sn tetrahedron in the tin STE configurations of model I and model II.
Table 3. Calculated g principal values for a representative Sn-E′-like configuration in model I, where we considered a varying number of Sn next-nearest neighbors (NNSn) of the central threefold Sn atom.
"Materials Science",
"Physics"
] |
Effects of Organotins on Crustaceans: Update and Perspectives
Organotins (OTs) are considered some of the most toxic chemicals introduced into aquatic environments by anthropogenic activities. They are widely used for agricultural and industrial purposes and as antifouling additives in paints for boat hulls. Even though the use of OTs was banned in 2008, elevated levels of OTs can still be detected in aquatic environments. The deleterious effects of OTs upon wildlife and experimental animals are well documented and include endocrine disruption, immunotoxicity, neurotoxicity, genotoxicity, and metabolic dysfunction. Crustaceans are key members of zooplankton and benthic communities and play vital roles in food chains, so the endocrine-disrupting effects of tributyltin (TBT) on crustaceans can affect other organisms. TBT can disrupt the carbohydrate and lipid homeostasis of crustaceans by interacting with retinoid X receptor (RXR) and crustacean hyperglycemic hormone (CHH) signaling. Moreover, it can also interact with other nuclear receptors, disrupting methyl farnesoate and ecdysteroid signaling, thereby altering growth and sexual maturity, respectively. This compound also interferes with the cytochrome P450 system, disrupting steroid synthesis and reproduction. Crustaceans also support important fisheries worldwide, and their consumption can pose risks to human health. However, some questions remain unanswered. This mini review aims to update information about the effects of OTs on the metabolism, growth, and reproduction of crustaceans; to compare them with known effects in mammals; and to point out aspects that still need to be addressed in future studies. Since both macrocrustaceans and microcrustaceans are good models to study the effects of sublethal TBT contamination, novel studies should be developed using multibiomarkers and omics technology.
lobsters) are major constituents of the zooplankton and have a vital role in the trophic transfer of nutrients and xenobiotics (17,22,25,26). Decapod crustaceans, which support important fisheries worldwide, are usually marine, with a few freshwater (crayfishes) and terrestrial (land crabs) species (17). Since decapods live on the sea floor, they can accumulate OTs dissolved in the water, in their food, or in the sediment (8,27,28). However, there is still little information about the mechanisms of OTs' effects in crustaceans. This mini review aims to update information about the effects of OTs on the metabolism, growth, and reproduction of crustaceans; to compare them with known effects in mammals; and to point out aspects that still need to be addressed in future studies.
Both macrocrustaceans and microcrustaceans are considered good animal models to study the ecological and toxicological effects of xenobiotics (16,25,26,(35)(36)(37). Acute toxicity assays of xenobiotics, useful to assess environmental risks, usually evaluate endpoint parameters such as mortality, egg hatching, development, growth, and reproduction (16,25,37,38). These endpoints are usually expressed as median-lethal or median-effect concentrations (LC50 and EC50) and no-observed-effect levels, which can be compared with predicted environmental concentrations in exposure media for purposes of risk assessment (17,19,39). Decapod crustaceans exhibit higher LC50 values for TBT than mysid shrimps, copepods, amphipods, and branchiopods (16,26,35,40). This higher tolerance of decapods to TBT can be related to a faster rate of TBT elimination and/or inactivation (16). However, larval forms of decapods are highly sensitive to TBT (41). The LC50 for TBT of the shrimp Penaeus japonicus increased progressively during the initial larval stages (nauplius to mysis) and sharply after metamorphosis (41). When the larvae were exposed to hyperosmotic or hypo-osmotic stress, their osmoregulatory capacity was compromised by TBT (41).
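As an illustration of how such median-effect endpoints are typically extracted (a generic sketch of ours, not tied to any study cited above; the dose-response data are synthetic placeholders), an LC50 can be obtained by fitting a log-logistic curve to mortality data.

```python
# Estimate an LC50 by fitting a two-parameter log-logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def mortality(log_c, log_lc50, slope):
    """Fraction of dead animals as a function of log10(concentration)."""
    return 1.0 / (1.0 + np.exp(-slope * (log_c - log_lc50)))

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # ug/L, synthetic
dead = np.array([0.02, 0.08, 0.25, 0.60, 0.90, 0.99])  # fractions, synthetic

popt, _ = curve_fit(mortality, np.log10(conc), dead, p0=[0.0, 2.0])
print(f"LC50 ~ {10 ** popt[0]:.2f} ug/L")
```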
Organotins can enter the crustacean hemolymph from water, sediment, or food via the gills and stomach (28,42). Once inside the animal, their fate depends on the processes of accumulation, biotransformation (metabolism), and elimination (16,28,42,43). In the hermit crab Clibanarius vittatus, assimilation of a single dose of TBT from food was higher than from water, and the levels of TBT in the tissues decreased progressively after 15 days, reaching null values after 75 days (44). In this study, dibutyltin (DBT) was also detected, indicating an active metabolism of TBT (44). The hepatopancreas of crustaceans is an important metabolic organ that combines functions equivalent to those of the vertebrate pancreas and liver: digestive enzyme synthesis, uptake and storage of nutrients, and xenobiotic metabolism (42,(45)(46)(47)(48)(49). According to their physicochemical properties, xenobiotics can be metabolized in two distinct phases: phase I, oxidation, reduction, and hydrolysis of the substance by the cytochrome P-450 (CYP) system family of proteins; and phase II, conjugation with polar groups to become soluble (28,42,50). The crustacean hepatopancreas has an active CYP-dependent monooxygenase system that oxidizes TBT to a series of hydroxylated derivatives that are dealkylated to form DBT and/or monobutyltin (MBT) (42,(50)(51)(52)(53). When blue crabs Callinectes sapidus were fed TBT-contaminated food, TBT levels in the whole abdomen peaked at 0.12 µg g−1 after 4 days of feeding, while DBT and MBT peaked at 0.39 and 0.35 µg g−1 after 8 and 12 days of feeding, respectively (54). In another study in which C. sapidus was fed TBT-contaminated food, TBT levels were higher in the hepatopancreas than in the gills and muscle (43). In a third study in which C. sapidus was fed TBT-contaminated food, the respiration rate and the expression of P-450 3A (CYP3A) and heat shock proteins (HSPs) in the hepatopancreas increased, indicating that the crabs were stressed by TBT (51). An active heat shock response, especially with increased HSP70 expression, occurs when crustaceans are exposed to many types of environmental stress such as heat (55)(56)(57)(58), metals (59,60), and salinity alterations (61,62). Therefore, increased expression of HSPs could be a useful indicator of BT/TBT contamination that should be studied in other crustacean species (Figure 2).
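The progressive depuration described above is often summarized by a one-compartment, first-order elimination model; the sketch below is ours, with a hypothetical rate constant chosen only to mimic the qualitative time course reported in (44).

```python
# One-compartment toxicokinetics: C(t) = C0 * exp(-k * t).
import math

k = 0.06   # 1/day, hypothetical first-order elimination rate (not fitted to data)
C0 = 1.0   # normalized initial TBT body burden

for t in (0, 15, 75):
    print(f"day {t:>2}: C/C0 = {C0 * math.exp(-k * t):.3f}")
# With k = 0.06/day the burden falls to ~41% by day 15 and ~1% by day 75,
# qualitatively matching the progressive decline reported in (44).
```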
Reactive oxygen species (ROS), byproducts of the cellular respiratory chain, are kept at physiological levels by a balance between oxidant and antioxidant agents (63,64). Liver phase I metabolism also generates ROS as byproducts, leading to oxidative stress (OS) (37). Many drugs, pesticides, and metals induce OS in crustaceans, either by altering the expression and activity of antioxidant enzymes such as catalase, superoxide dismutase (SOD), and glutathione peroxidase (GPx) or by decreasing non-enzymatic antioxidants such as glutathione (37,65,66). In mammals, BTs increase ROS by decreasing the concentration and activity of SOD, GPx, and glutathione reductase (GR), while simultaneously increasing lipid peroxidation in the liver, testis, and kidney (67). Since decapod crustaceans such as the green crab Carcinus maenas, C. sapidus, and Macrobrachium rosenbergii are considered good sentinel species, OS biomarkers should be monitored in bioassays with sublethal concentrations of BTs.
Stressed animals usually develop hyperglycemia. In vertebrates, it is considered a secondary response to the increase in catecholamine and corticosteroid blood levels (68,69). In crustaceans, the main hormone responsible for triggering hyperglycemia during stress is CHH (29,34,70,71). Injection of 10 μmoles of tripalmitin, fentin, and fenbutatin increased glucose levels in the hemolymph of the crab Oziotelphusa senex senex (72). Since this effect did not occur in eyestalk-ablated crabs, it is possible that the OT injections caused CHH secretion (72). In M. rosenbergii, treatment with TBT (10, 100, and 1000 ng L−1) dissolved in water for 90 days also increased glucose levels in the hemolymph (73). Therefore, the synthesis, release, and secretion of CHH and its signaling are processes that could be disrupted as a result of OT exposure and need to be further investigated.
In mammals, TBT disrupts both glucose and lipid homeostasis: it increases body weight, inflammation, adipogenesis, and blood glucose and insulin levels (2,74,75). These effects are mediated by alterations in the insulin signaling cascade and in nuclear receptors such as the estrogen receptor, peroxisome proliferator-activated receptor γ (PPARγ), and retinoid X receptor (RXR) (2,74,75). RXR can form homodimers or heterodimers with many other nuclear receptors, including PPARs, and thereby bind to DNA response elements, inducing the transcription of genes involved in xenoprotection, lipid homeostasis, and development (19,76). Since TBT is recognized as a potent agonist of RXR, this binding can be considered a key step in TBT's mechanism of action (19,77).
The main sites of glycogen and lipid storage in decapod crustaceans are the hepatopancreas, gonads, and muscle, and these energy reserves fluctuate in distinct species according to seasonality, reproductive stage, molt cycle, and the type and regularity of the diet (46,49,78). These metabolites are distinctively mobilized during diverse types of stress, reflecting homeostasis alterations that can be used as biomarkers of health and stress condition (31,37,46,47,79). In the freshwater prawn M. rosenbergii, TBT treatment (10, 100, and 1,000 ng L−1) reduced the hepatosomatic index (HSI) and the content of proteins, glycogen, and lipids in the hepatopancreas in a dose-dependent manner (73). In the cladoceran Daphnia magna, lipids are stored in spherical lipid droplets scattered throughout the body, and treatment with 0.036 or 0.36 µg L−1 TBT increased lipid fluorescent staining (80). In female D. magna, both doses of TBT decreased the levels of triglycerides, cholesteryl esters, and phosphocholines, increased diacylglycerol levels, and altered the expression of many genes, including RXR (Figure 2) (80).
OTs Effects on Growth
Crustacean growth, as in other ecdysozoans, proceeds through repeated cycles of molting (81). Molting is regulated by a negative feedback mechanism involving CHH, MIH, and ecdysteroids (Figure 1) (81,82). Ecdysone and 25-deoxyecdysone, inactive ecdysteroids, are secreted by the Y-organ and converted to 20-hydroxyecdysone (20-HE) and ponasterone A, the active forms, in peripheral tissues (33,81). Ecdysteroids bind to the arthropod ecdysteroid receptor (EcR), which forms a complex with RXR (22,80). The EcR:RXR heterodimer binds to ecdysteroid response elements, regulating the transcription of genes involved in development, growth, and reproduction, as well as of the genes involved in the pathways of ecdysone synthesis (17,22,80). Incomplete ecdysis leading to death occurs when D. magna is exposed to exogenous 20-HE (22). TBT alone does not alter the incidence of incomplete ecdysis; however, in combination with 20-HE, this incidence is increased. Therefore, TBT synergizes with 20-HE, leading to mortality associated with molting (22). In TBT-treated daphnids, the expression of RXR and EcR increases, disrupting the ecdysteroid pathways (22,80). In the brown shrimp Crangon crangon, it was demonstrated that TBT fits into the ligand-binding pocket of RXR, affecting the expression of RXR and EcR and probably of downstream genes (83). This genomic action of TBT was also demonstrated in the larvae of the insect Chironomus riparius, where TBT also increased the expression of RXR and EcR, as well as of the estrogen-related receptor gene and E74 (84).
Besides ecdysteroids, the sesquiterpenoids methyl farnesoate (MF) and juvenile hormone are also important during arthropod growth and metamorphosis (85). MF, synthesized in the MOs, is the main sesquiterpenoid of crustaceans (Figure 1) (86). The major function of MF in crustaceans is the regulation of reproductive maturation (86). MF binds to methoprene-tolerant (MET), which forms a heterodimer with the steroid receptor coactivator (SRC), activating the transcription of downstream genes, such as sex-determining genes involved in oocyte maturation (87). In D. magna, TBT also affected the expression of genes related to the MF signaling pathway, such as MET and SRC (80). Considering that TBT may also affect MF signaling in other crustaceans, and therefore alter their growth and development, a serious impact on both planktonic and benthic communities can be expected.
OTs Effects on Reproduction
Imposex in female gastropods is one of the best-known effects caused by TBT on invertebrates. Imposex is characterized by the formation of male sexual organs such as a penis and vas deferens in these females (19,86). Although some studies show an early sexual reversal (intersex) in crustaceans exposed to TBT, these changes are less marked than those occurring in mollusks (31,88). Nevertheless, other detrimental effects on the reproductive system of different species of crustaceans were found in both females and males (27,[88][89][90]). The mechanism by which TBT causes these damages is still unclear, and there are different possible sites of action (80,86,89).
Unlike in mollusks, when female crustaceans are exposed to TBT there is no formation of complete male sex organs (31). Nevertheless, in M. rosenbergii, treatment with TBT (10, 100, and 1000 ng L−1) for 45 days altered ovarian morphology and induced spermatogonia and ovotestis (with spermatocytes and structures similar to seminiferous tubules) (88). In the hermit crab C. vittatus, TBT induced several degrees of ovarian disorganization, with follicular atresia and irregular oocytes, although there was no formation of male sexual structures (27). Besides damage to reproductive organs, TBT may impair reproductive rates in further generations. Juvenile female D. magna exposed to TBT (100 and 1,000 ng L−1) produced smaller newborn neonates than those of unexposed females and suffered higher mortality during adulthood, which resulted in lower reproductive output and fitness. The reproductive rates of the exposed females' first clutch were also lower than those of controls (80).
Although the main described effect of TBT is the masculinization of females, it also causes damage to male reproductive organs. In M. rosenbergii, exposure to TBT (10, 100, and 1,000 ng L−1) for 45 or 90 days caused several types of damage to the gametes and to the gonadal tissue itself. The gonadosomatic index of the testes was reduced, and the seminiferous tubule architecture was compromised by an increase in connective tissue and immature cells (spermatogonia and spermatocytes) (73,90). Spermatozoa count and length were reduced (73,90). The activity of the antioxidant enzymes SOD, GPx, and GR was reduced in the testes, while DNA damage increased (89). These results are in line with studies in mammals such as the hamster Mesocricetus auratus, in which TBT also caused alterations in testicular histology and reductions in spermatogenesis and in enzymatic and non-enzymatic antioxidants (67).
Since sex steroids are the major regulators of vertebrate reproduction, many steroidogenic enzymes and steroid receptors seem to have co-evolved (91,92). However, the role of vertebrate-type sex steroids in invertebrate reproduction is not well determined (19). In mollusks, TBT-induced imposex correlates with increased free testosterone (T) levels, probably induced by inhibition of acyl-CoA:testosterone acyltransferase, which conjugates T with fatty acids, and/or of CYPs, reducing T clearance (19,93). The stimulatory effects of steroids on crustacean reproduction are well recognized; however, it was only with the development of modern omics technology that genes of steroidogenic enzymes and putative steroid receptors were identified (31,39,94-98). In female M. rosenbergii, TBT reduced 17β-estradiol in the hemolymph and ovary and increased T levels in the ovary (88), while in males TBT reduced T levels in the testis (73,90) (Figure 2) (53,94). In crustaceans, an alternative proposed action is that TBT could block T excretion, but results are still inconclusive (18,93,99,100).
The synthesis and release of steroids in crustaceans are controlled mainly by GIH and CHH, released from the ES-SG system (Figure 1) (32,39). As already mentioned, OTs can stimulate CHH release and probably also interfere with other peptides of the CHH family, such as GIH (72). Gonad-stimulating hormone, released from the brain and thoracic ganglion, monoamines, and MF also participate in the control of crustacean reproduction (32,33,39). GIH and MIH also regulate a peptide hormone called the insulin-like androgenic gland hormone, synthesized by the androgenic gland, which is responsible for male sexual differentiation (39,97). Therefore, there are many sites at which TBT may affect the neuroendocrine regulation of crustacean reproduction.
CONCLUSION
Crustaceans form a large group of aquatic animals that are important from both the economic and the ecological perspectives. They are important members of zooplankton and benthic communities and play vital roles in food chains, so the endocrine-disrupting effects of TBT on crustaceans can affect other organisms. They also support important fisheries worldwide; therefore, human consumption of TBT-contaminated crustaceans can pose risks to human health. In summary, TBT can disrupt carbohydrate and lipid homeostasis in crustaceans by interacting with RXR and CHH signaling, and it can interact with other nuclear receptors, such as EcR, MET, and SRC, disrupting MF and ecdysteroid signaling and thereby altering growth and sexual maturity, respectively. This compound also interferes with the cytochrome P450 system, disrupting steroid synthesis and reproduction. Both macrocrustaceans and microcrustaceans are good models for studying the effects of the sublethal TBT concentrations usually found in natural environments. Multibiomarker studies focusing on TBT's effects on molecular, biochemical, cellular, morphological, physiological, and behavioral endpoints can be developed with crustaceans. Recent advances in omics technology, with the development of transcriptomes, lipidomes, and proteomes, are providing a novel set of information. Knowledge of the genes involved in the growth, development, and reproduction of crustaceans will certainly provide novel insights into TBT effects.
AUTHOR CONTRIBUTIONS
EV wrote the Sections "Introduction," "OTs Effects on the Metabolism," and "OTs Effects on Growth." JM wrote the Sections "OTs Effects on Reproduction" and "Conclusion" and elaborated the figures. AV reviewed the manuscript.
ACKNOWLEDGMENTS
The authors gratefully acknowledge support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for the fellowship to EV and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for the fellowship to JM.
FUNDING
This work was supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
Gamma-ray spectroscopy for the characterization of uranium contamination in nuclear decommissioning
— Decommissioning is the last step in the life cycle of a nuclear facility. After the evacuation of the facility components, the remaining structures, such as concrete walls and floors, must be surveyed to ensure that no residual contamination remains. This is a costly and time-consuming activity, for which CEA develops fast alpha and beta detection methods allowing a full scan of very large areas (hundreds of thousands of square meters) in legacy uranium enrichment plants. To support these developments, we present here complementary high-resolution gamma-ray spectroscopy analyses of a contaminated area at the gaseous diffusion uranium enrichment facility UDG, currently under decommissioning at the Pierrelatte nuclear plant, France. Long measurements were performed with a High-Purity Germanium (HPGe) detector on the contaminated surface and in a clean area, the latter to assess the natural gamma background of the concrete ground. The surface activity of uranium is 16.6 ± 6.0 Bq.cm-2, mainly due to 234U and 238U, with most of the uncertainty stemming from the non-uniform distribution of the contamination on the ground. These measurements also allowed us to estimate the uranium enrichment of the contamination, which amounts to (0.80 ± 0.13) % of 235U mass fraction, consistent with the range of the Low Enrichment Plant where the measurement was performed. Finally, the background spectrum allowed us to determine the mass fractions of natural uranium, thorium, and potassium in the concrete ground, which amount to 3.8 ± 0.2 ppmU (i.e. 3.8 mg of uranium per kg of concrete), 7.4 ± 0.7 ppmTh, and (2.6 ± 0.1) %K, respectively.
I. INTRODUCTION
The gaseous diffusion plant (UDG) of the Pierrelatte nuclear site was the first uranium enrichment facility in France. It provided enriched uranium both for pressurised water reactors and for the development of nuclear weapons. The facility was operated from 1960 until its shutdown in 1996. Shortly after, the decommissioning operations began, and the enrichment equipment has since been evacuated. As of today, a complete decommissioning of the UDG has yet to be approved by the nuclear safety authorities. Before the site is given clearance for decommissioning, a verification of the radiological cleanliness of the remaining concrete structures must be carried out to ensure the absence of residual uranium contamination. A full scan of UDG soils and walls, equivalent to 700,000 m2, has to be performed to verify that the radioactivity level stays below a clearance alpha activity threshold of 0.4 Bq.cm-2, established by the French Nuclear Safety Authority (ASN).
This calls for the development of fast measurement methods capable of reaching low detection limits in very short acquisition times, so that large areas can be measured at a high rate. This can be achieved with alpha and beta contamination monitors, which are used as first-level detectors and detect a contamination within a few seconds; however, once a contamination is found, a more precise quantification of the surface activity is needed, as large interpretation uncertainties may occur due to the short range of alpha and beta particles in matter, especially in concrete. Therefore, the Nuclear Measurement Laboratory of the IRESNE Institute, at CEA Cadarache, studies the use of gamma spectroscopy as an upper-level quantification method for uranium contamination.
Gamma rays emitted by uranium contamination are mainly due to the 238U, 235U and 234U isotopes and their direct descendants having a half-life short enough for radioactive equilibrium to be reached. Taking a decay period of 40 years as reference, corresponding to the UDG service life, the radioisotopes present in the UDG contamination are thus 238U, 234Th, 234mPa, 235U, 231Th and 234U. Their main gamma emissions (energy and intensity with their respective uncertainties) are presented in Table I. These gamma emissions compose the contamination signal and are predominantly located in the low-energy region of the gamma spectra, between 40 keV and 210 keV (see further Fig. 6). Only the gamma rays of 234mPa fall outside this interval. During the gaseous uranium enrichment process, natural uranium is filtered through porous membranes to increase the proportion of 235U and 234U with respect to that of 238U [2]. While the mass proportion of 234U is very small, from about 0.005-0.006 % in natural uranium to 1 % in highly enriched uranium (HEU) [3], it contributes more than 50 % of the alpha activity at low enrichment and up to almost 100 % for HEU, as a result of its short half-life of 245×10³ years, compared to 704×10⁶ years for 235U and 447×10⁷ years for 238U [1]. The UDG plant is made up of facilities of increasing enrichment: the Low Enrichment Plant, up to about 2 % (235U mass fraction), the Medium Enrichment Plant (MP), up to about 8 % 235U, the High Enrichment Plant (HP), up to about 25 %, and the Very High Enrichment Plant (VHP), up to more than 90 % [4]. Depending on the Enrichment Plant, the main contributors to alpha radioactivity differ significantly, as do the gamma emission rates.
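The dominance of 234U in the alpha activity follows directly from the specific-activity relation A = ln2 · N_A / (T_1/2 · M). The following Python sketch reproduces the specific activities quoted later in this paper and the >50 % alpha share of 234U; the isotopic fractions used here are hypothetical values for low-enriched uranium, not measured data:

import numpy as np

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

# Half-lives (years) and molar masses (g/mol) from the text
half_lives = {"U234": 245e3, "U235": 704e6, "U238": 447e7}
molar_mass = {"U234": 234.0, "U235": 235.0, "U238": 238.0}

def specific_activity(iso):
    """Specific activity in Bq/g: lambda * N = (ln2 / T_half) * (N_A / M)."""
    t_half_s = half_lives[iso] * SECONDS_PER_YEAR
    return np.log(2) / t_half_s * AVOGADRO / molar_mass[iso]

# Hypothetical low-enriched composition: 2 % 235U, 0.015 % 234U by mass
fractions = {"U234": 1.5e-4, "U235": 2.0e-2, "U238": 1.0 - 2.0e-2 - 1.5e-4}
activity = {iso: f * specific_activity(iso) for iso, f in fractions.items()}
total = sum(activity.values())
for iso, a in activity.items():
    print(f"{iso}: {a:.3e} Bq/g ({100 * a / total:.1f} % of alpha activity)")

Running this yields about 12.4×10³ Bq/g for 238U, 79.9×10³ Bq/g for 235U and 2.3×10⁸ Bq/g for 234U, with 234U carrying roughly 70 % of the alpha activity of this hypothetical mixture, consistent with the statement above.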
Gamma detection is also subject to an important background emitted by the natural radioisotopes found in concrete, either in the natural decay chains of 238U, 235U, and 232Th (uranium and thorium are present in concrete in ppm quantities, and their radioactive chains are in secular equilibrium) or by 40K, naturally present in potassium. Table II shows the most intense natural gamma emissions. We will show in the next sections that this natural gamma background of concrete may represent a significant contribution to certain gamma rays, which must be subtracted to correctly estimate the activity of the uranium surface contamination. The measurement of this background and the detection of uranium contamination with a high-resolution gamma spectroscopy detector in the UDG Low Enrichment Plant are reported in this work.
A. Measured areas
The measurements took place in a diffusion group of the UDG Low Enrichment Plant, in particular on the reference area presented in Fig. 2, chosen because of the presence of a proven uranium contamination deposited on the concrete floor, which allows different detection methods (alpha, beta, gamma) to be compared. The measurement area was divided into several measurement points (marked by the blue squares in Fig. 2) with an area of 25 cm × 25 cm each, which corresponds to the detection surface seen by our instruments. A label was attributed to each point, and the uranium contamination is located inside measurement points # 6, 7, 8 and 9. Other measurements were carried out on points outside this region, which are assumed to be free of contamination, to estimate the concentration of natural gamma emitters in concrete (U, K, Th).
B. Experimental setup and MCNP model
High-resolution gamma spectroscopy studies were conducted using a Falcon 5000 High-Purity Germanium (HPGe) detector. This model includes a planar germanium crystal with a 6.5 cm diameter and 3 cm thickness, as well as an integrated multichannel analyzer and an electrical cooling system [6]. Two measurement configurations were used. The first was used to measure uranium-contaminated areas with the detector lifted 12.4 cm above the floor by means of a manual stacker (configuration # 1, see Fig. 3). This setup led to a detection surface of 530 cm2, similar to that of the larger alpha and beta detectors used during the same measurement campaign, so that results could be compared. However, positioning the HPGe detector at this height above the ground reduces its detection efficiency. In addition, 5 cm thick lead rings were used to shield the germanium crystal against background radiation coming from neighboring soil surfaces and from concrete walls. In the second configuration, the HPGe detector is closer to the floor, with a height of 6.7 cm between its entrance window and the ground (configuration # 2). Configuration # 2 allowed us to increase the detection efficiency for the soil background measurements while keeping the detector shielding. Monte Carlo simulation models of both setups were developed with the MCNP computer code [7] in order to calculate the detection efficiency of gamma rays emitted by the uranium contamination at the surface, or by the bulk concrete soil for the natural background. These efficiencies are then used for the activity calculations, as detailed further in (1). Fig. 5 shows a graphical representation of these MCNP models. For configuration # 1, the thin contaminated area (1 mm thickness) above the concrete ground is not visible in Fig. 5; it is also made of concrete and is modeled as a volume source in MCNP, while for configuration # 2, the source is the bulk concrete soil (50 cm thickness). The model of the detector and lead shielding is the same for both configurations.
C. Contamination activity estimation and enrichment percentage
As mentioned above, MCNP calculations are used to determine the gamma detection efficiencies that allow the net measured count rates to be converted into activities (in Bq). Using detection setup # 1, an acquisition of 64.3 hours was performed on contaminated point # 7, located inside the contaminated area of Fig. 2. The obtained spectrum is given in Fig. 6. Characteristic uranium rays are visible on the spectrum, such as the 63.3 keV and 1001 keV lines emitted by 234Th and 234mPa, respectively. Both isotopes are part of the 238U contamination decay chain (see Table I). Peaks emitted by 235U are also visible, for instance at 143.8 keV and 185.7 keV. A small 53.2 keV gamma ray from 234U is also present (see Fig. 7).
Fig. 7. Uranium contamination gamma spectrum measured in point # 7 with the HPGe detector, zoomed between 20 and 100 keV to show the peak fit obtained with the Genie2000 gamma analysis software [8] for the 53.2 keV peak.
Uranium contamination
The activity of the emitting isotopes is estimated by analyzing gamma rays at different energies using the following equation:

$$A_S \;=\; \frac{S_n(E)}{\mathrm{Eff}(E)\, I(E)\, T_c\, S_{\mathrm{contamination}}} \qquad (1)$$

With:
- A_S the calculated surface activity (in Bq.cm-2) obtained with the peak of energy E (it is thus possible to calculate an average activity for the isotopes emitting several detectable gamma rays),
- S_n(E) the net area under the gamma peak, after subtraction of the Compton continuum under the peak and of the net area of the natural background presented in Fig. 8 when it is significant, such as for the 185.7 keV peak of 235U, which is interfered with by the 186.2 keV peak of 226Ra in the background,
- Eff(E) the simulated detection efficiency (counts per source particle at energy E), calculated with the first model of Fig. 5. The migration depth of the uranium contamination was set to 1 mm to simulate a surface contamination. Other depths from a few µm to a few mm were also considered, but with a limited effect on the simulated efficiency, the maximum relative difference being 4 % at low energy (53.2 keV),
- I(E) the emission intensity at energy E (number of gammas emitted per disintegration),
- T_c the active counting time (in seconds),
- S_contamination the area of the contamination taken into account in the MCNP model, here 25 cm × 25 cm (see Fig. 5).
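A direct numerical transcription of (1) is straightforward; in the following Python sketch all input values (net counts and MCNP efficiency) are hypothetical placeholders, chosen only to illustrate the unit bookkeeping, not values from the measurement:

def surface_activity(net_counts, efficiency, intensity, t_count_s, area_cm2):
    """Surface activity in Bq/cm^2 from a net gamma peak area, eq. (1)."""
    return net_counts / (efficiency * intensity * t_count_s * area_cm2)

# Hypothetical example for the 63.3 keV line of 234Th: 64.3 h acquisition,
# 25 cm x 25 cm source, 4.1 % assumed emission intensity, and an assumed
# MCNP efficiency of 1e-3 counts per source gamma.
a_s = surface_activity(net_counts=1.2e5, efficiency=1e-3, intensity=0.041,
                       t_count_s=64.3 * 3600, area_cm2=25.0 * 25.0)
print(f"A_S = {a_s:.1f} Bq/cm^2")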
This led us to an estimation of the weighted average activity from the analysis of multiple gamma rays emitted by each of the 238U and 235U isotopes. The average activities were calculated with the formula below:

$$\bar{A}_S \;=\; \frac{\sum_i A_{S,i}/\sigma_{S_n,i}^2}{\sum_i 1/\sigma_{S_n,i}^2} \qquad (2)$$

With:
- A_S,i the calculated surface activity from the gamma peak at energy E_i (in Bq.cm-2, refer to (1)) emitted by 238U or 235U,
- S_n the net area of the gamma peak at energy E_i,
- σ_Sn the absolute statistical standard deviation for the gamma peak at energy E_i, taking into account the Compton continuum B under the peak: $\sigma_{S_n} = \sqrt{S_n + 2B}$, with S_n the net area and B the background.

From (2), we obtain the following weighted average activities: A_238 = 8.4 ± 4.5 Bq.cm-2 and A_235 = 0.44 ± 0.23 Bq.cm-2. The estimation of the 234U activity relies on the only exploitable gamma peak, at 53.2 keV, with an emission intensity of 0.13 %. Its net area fluctuates between 3711 and 5146 counts depending on the fit parameters used to estimate the background under the peak (pink area in Fig. 7), with a relative standard deviation of 11 % and an average net area of 4090 counts resulting from 10 different fits. We added this standard deviation, through a quadratic sum, to the statistical standard deviation $\sqrt{S_n + 2B}$, and thus estimate a total relative standard deviation of 17 % on the net area of the small 53.2 keV peak. Finally, we obtain the following activity for 234U: A_234 = 7.8 ± 4.1 Bq.cm-2. We list below the different causes of uncertainty taken into account to estimate the above confidence intervals on the 234U, 235U, and 238U activities:
- the representativeness of the MCNP model, especially the real distribution of the contamination, which was measured as non-uniform by autoradiography [9]. We therefore calculated efficiencies for multiple distribution hypotheses, ranging from a hot spot (i.e. a point source in the middle of the collimator solid angle) to a 30 cm × 30 cm surface exceeding the field of view of the detector inside the collimator (a disk with a diameter of about 20 cm, i.e. a detection surface of around 300 cm²). These calculations (not reported here) show that if the contaminated area is smaller than the detector field of view (for instance a hot spot or a 10 cm × 10 cm surface), the efficiency can be almost twice that used for interpretation (calculated for a contaminated area of 25 cm × 25 cm) or those calculated with surfaces larger than the field of view (we studied from 20 cm × 20 cm to 30 cm × 30 cm). From this study, and owing to the lack of knowledge of the real contamination distribution, we consider an arbitrary relative uncertainty of 50 % on the efficiency; this value also includes the uncertainty on the HPGe detector MCNP model, which is however smaller than 10 % from our experience [10]. Even though this uncertainty is much smaller than the previous one, we will check the HPGe detector model through MCNP vs. experiment comparisons of precise measurements with calibration point sources over the wide energy range of interest (from the 53.2 keV peak of 234U up to the 1001 keV peak of 234mPa). The other uncertainties on the MCNP model, such as the limited knowledge of the concrete block density and chemical composition, are smaller and also included in the abovementioned 50 % relative standard deviation,
- statistical uncertainties, including the variability associated with the net area extraction for the small 53.2 keV peak of 234U, and the dispersion of the activities obtained with the different peaks of multi-gamma emitters, calculated through the weighted average (2).

Summing the individual 238U, 235U and 234U activity values, we obtain a total uranium activity of 16.6 ± 6.0 Bq.cm-2. The HPGe measurements also allowed for an estimation of the enrichment percentage of the contamination. The individual activities of 238U, 235U, and 234U are divided by their respective specific activities of 12.4×10³ Bq.g-1, 79.9×10³ Bq.g-1 and 230×10⁶ Bq.g-1 [1], to obtain the following masses: m_238 = 0.44 ± 0.05 g, m_235 = (3.5 ± 0.4)×10⁻³ g and m_234 = (2.1 ± 0.3)×10⁻⁵ g. A 235U enrichment of 0.80 ± 0.13 % is thus deduced using (3):

$$E_{235} \;=\; \frac{m_{235}}{m_{234}+m_{235}+m_{238}} \times 100~\% \qquad (3)$$

This 0.8 % enrichment is consistent with the range of the Low Enrichment Plant, which extends from depleted uranium up to about 2 % [4], as mentioned in Section I. Since m_235 and m_238 are estimated from the measurement, only statistical uncertainties are considered, not those due to the possible non-uniformity of the contamination.
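The inverse-variance weighting of (2) and the enrichment formula (3) can be checked with a few lines of Python; the masses below are those reported above, while the per-peak activities fed to the weighted average (shown here with hypothetical values) would come from (1):

import numpy as np

def weighted_average(activities, sigmas):
    """Inverse-variance weighted mean and its standard error, as in eq. (2).

    Note: the paper's quoted uncertainties also fold in the MCNP-model and
    peak-dispersion terms, which are not included here.
    """
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * np.asarray(activities)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# Hypothetical per-peak 238U activities (Bq/cm^2) and their uncertainties
a238, s238 = weighted_average([8.0, 9.1], [5.0, 6.0])

# Enrichment from the reported masses, eq. (3); masses in grams
m238, m235, m234 = 0.44, 3.5e-3, 2.1e-5
enrichment = 100.0 * m235 / (m234 + m235 + m238)
print(f"235U enrichment = {enrichment:.2f} %")   # ~0.79 %, i.e. 0.80 % rounded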
D. Natural radiation measurements and U, K and Th concentration estimations
A 23-hour background measurement was performed in a clean area using the HPGe detector in experimental configuration # 2 (see Fig. 4 and Fig. 5). The background spectrum is shown in Fig. 8, in which the well-known natural gamma rays are identified, such as the 609 keV peak of 214Bi (bottom of the 238U decay chain), the 1460 keV peak of 40K, and the 2614 keV peak of 208Tl (end of the 232Th chain). The efficiency is calculated with MCNP and the second model of Fig. 5, which considers a 200 cm × 200 cm × 50 cm concrete block of density 2.35 g.cm-3 as the gamma source; this volume is sufficient to provide an almost infinite-equivalent geometry, i.e. no significant additional signal would come from a larger volume. We will investigate other geometries in future work, in particular to take into account new information about the real thickness of the ground at the UDG Low Enrichment Plant. Note that this geometry leads to similar calculated activities for all gamma peaks (over a wide energy range) of the isotopes present in the 238U and 232Th natural chains, which are in secular equilibrium. The individual activity of each isotope is calculated as follows:

$$A_V \;=\; \frac{S_n(E)}{\mathrm{Eff}(E)\, I(E)\, T_c} \qquad (4)$$

With:
- A_V the volume activity (in Bq) of 238U or 232Th calculated with the peak of energy E,
- S_n(E) the net area under the gamma peak, after subtraction of the Compton continuum,
- Eff(E) the simulated detection efficiency (counts per source particle at energy E), calculated with the second model of Fig. 5,
- I(E) and T_c as in (1).

Using (2), we obtain the following weighted average activities with the different peaks of the 238U and 232Th chains: Anat_238U = 219 ± 14 kBq and Anat_232Th = 141 ± 14 kBq. The activity of 40K, estimated with the 1460 keV peak, is Anat_40K = 3726 ± 192 kBq. The relative uncertainties here range between about 5 and 10 %, much smaller than for the contamination activity, because the natural U, Th and K elements are assumed to be uniformly distributed in the volume of concrete; the confidence intervals are therefore mainly due to counting statistics. The lack of knowledge of the real density and composition of the concrete ground has an effect smaller than 5 % over the energy range used to assess the above activities. From these activities, we can deduce the masses of the U, Th and K elements present in the simulated concrete block, using the specific activities of the 238U, 232Th and 40K isotopes (12.4×10³, 4.07×10³ and 265×10³ Bq.g-1, respectively, from [1]) and their natural abundances (99.3 % for 238U, 99.9 % for 232Th and 0.012 % for 40K): mU = 17.8 ± 1.1 g, mTh = 34.6 ± 3.5 g and mK = 120.4 ± 6.2 kg.
Taking into account the mass of the simulated concrete block, these masses correspond to mass fractions of 3.8 ± 0.2 ppmU for uranium (1 ppm is 1 mg of uranium per kg of concrete), 7.4 ± 0.7 ppmTh for thorium and (2.6 ± 0.1) %K for potassium. These mass fractions are within the range of typical U, K and Th proportions in the continental crust [11].
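As a consistency check, the chain from measured activities to element mass fractions can be scripted. The activities and nuclear data below are those quoted above; a 40K abundance of 0.0117 % (rounded to 0.012 % in the text) reproduces the reported potassium mass:

# Element masses and mass fractions in the simulated concrete block
block_mass_g = 200 * 200 * 50 * 2.35          # 4.7e6 g of concrete

data = {  # element: (chain activity Bq, isotope specific activity Bq/g, abundance)
    "U":  (219e3,  12.4e3, 0.993),
    "Th": (141e3,  4.07e3, 0.999),
    "K":  (3726e3, 265e3,  0.000117),
}
for elem, (act, spec_act, abund) in data.items():
    mass_g = act / (spec_act * abund)         # element mass in the block
    frac = mass_g / block_mass_g
    print(f"{elem}: {mass_g:.3g} g -> {frac * 1e6:.3g} ppm "
          f"({frac * 100:.3g} % by mass)")

This yields about 17.8 g of U (3.8 ppm), 34.7 g of Th (7.4 ppm) and 120 kg of K (2.6 %), matching the values reported above.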
III. CONCLUSION
Experimental tests have been performed at the UDG gaseous uranium enrichment plant for the detection of residual uranium contamination, using high-resolution gamma spectroscopy with an HPGe detector. This 64-hour measurement allowed us to evaluate the surface activity of a contaminated area at 16.6 ± 6.1 Bq.cm-2. The large uncertainty is mainly due to the limited knowledge of the real contamination distribution, which has been demonstrated to be non-homogeneous by autoradiography analysis [9]. We also measured the 235U mass enrichment, estimated at (0.80 ± 0.13) %, and the natural background over a clean area of the facility. From the study of this background spectrum, we evaluated the natural U, Th and K mass fractions at 3.8 ± 0.2 ppmU, 7.4 ± 0.7 ppmTh and (2.6 ± 0.1) %K in the facility concrete, which is consistent with typical continental crust levels [11]. Future work will focus on low-resolution gamma spectroscopy with a NaI(Tl) scintillator on the same contaminated area, the goal being a faster characterization (about 15 min) of the contaminated surfaces detected by alpha or beta contamination monitors.
Fig. 6. Uranium contamination gamma spectrum measured in point # 7 with the HPGe detector. The main gamma peaks due to uranium contamination are indicated with arrows.
"Physics",
"Environmental Science",
"Engineering"
] |
The identification of high-performing antibodies for RNA-binding protein FUS for use in Western Blot, immunoprecipitation, and immunofluorescence
RNA-binding protein Fused-in Sarcoma (FUS) plays an essential role in various cellular processes. Mutations in the C-terminal domain region, where the nuclear localization signal (NLS) is located, cause the redistribution of FUS from the nucleus to the cytoplasm. In neurons, neurotoxic aggregates are formed as a result, contributing to neurodegenerative diseases. Well-characterized anti-FUS antibodies would improve the reproducibility of FUS research, thereby benefiting the scientific community. In this study, we characterized ten commercial FUS antibodies for Western Blot, immunoprecipitation, and immunofluorescence using a standardized experimental protocol based on comparing read-outs in knockout cell lines and isogenic parental controls. We identified many high-performing antibodies and encourage readers to use this report as a guide to select the most appropriate antibody for their specific needs.
Introduction
Fused-in Sarcoma (FUS) encodes a DNA/RNA-binding protein involved in numerous cellular processes, including transcriptional regulation, RNA splicing, RNA transport and DNA repair. 1 Predominantly localized in the nucleus, FUS can shuttle between the nucleus and cytoplasm. 2 The FUS protein is reported to have multiple domains, including an N-terminal Gln-Gly-Ser-Tyr-rich region, an RNA-recognition motif, Arg-Gly-Gly repeat regions, a zinc finger motif and a highly conserved C-terminal NLS. [3][4][5] Variants in the FUS gene have been identified as potential causative factors for amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD) and frontotemporal lobar degeneration (FTLD). [6][7][8][9] FUS mutations found in familial ALS/FTD patients are clustered in the C-terminal NLS, causing FUS to be mislocalized and to accumulate as aggregates in the cytoplasm of neurons, initiating a pathway that contributes to neurodegeneration. 6,7 FUS function is reduced when aggregates form, but it is not yet known whether this loss of function initiates the pathogenic process or whether the aggregates themselves are pathogenic. 10 Mechanistic studies would be greatly facilitated by the availability of high-quality antibodies.
Here, we compared the performance of a range of commercially available antibodies for RNA-binding protein FUS and validated several antibodies for Western Blot, immunoprecipitation and immunofluorescence, enabling biochemical and cellular assessment of FUS properties and function.
REVISED Amendments from Version 1
In the introduction, we added frontotemporal lobar degeneration (FTLD) to the list of neurodegenerative diseases to which the FUS gene may contribute.
Any further responses from the reviewers can be found at the end of the article.

Results and discussion

Our standard protocol involves comparing readouts from wild-type (WT) and knockout (KO) cells. [11][12][13][14][15] To identify a cell line that expresses adequate levels of FUS protein to provide a sufficient signal-to-noise ratio, we examined public proteomics databases, namely PaxDB 16 and DepMap. 17 HeLa was identified as a suitable cell line and was therefore modified with CRISPR/Cas9 to knock out the FUS gene (Table 1).
For Western Blot experiments, we resolved proteins from WT and FUS KO cell extracts and probed them side-by-side with all antibodies in parallel 12-15 (Figure 1).
For immunoprecipitation experiments, we used the antibodies to immunopurify FUS from HeLa cell extracts. The performance of each antibody was evaluated by detecting the FUS protein in the extracts, in the immunodepleted extracts and in the immunoprecipitates 12-15 (Figure 2). For immunofluorescence, as described previously, antibodies were screened using a mosaic strategy. 18 In brief, we plated WT and KO cells together in the same well and imaged both cell types in the same field of view to reduce staining, imaging and image analysis bias (Figure 3).
In conclusion, we have screened commercial FUS antibodies by Western Blot, immunoprecipitation and immunofluorescence and identified several high-quality antibodies under our standardized experimental conditions. The underlying data can be found on Zenodo. 19,20

Methods
Antibodies
All FUS antibodies are listed in Table 2, together with their corresponding Research Resource Identifiers, or RRID, to ensure the antibodies are cited properly. 21 Peroxidase-conjugated goat anti-rabbit and anti-mouse antibodies are from
Antibody screening by Western Blot
Western Blots were performed as described in our standard operating procedure. 23

Antibody screening by immunoprecipitation

Immunoprecipitation was performed as described in our standard operating procedure. 24 Antibody-bead conjugates were prepared by adding 1.0 μg of antibody to 500 μL of phosphate-buffered saline (PBS) (Wisent, cat. number 311-010-CL) with 0.01% Triton X-100 (Thermo Fisher Scientific, cat. number BP151-500) in a 1.5 mL microcentrifuge tube, together with 30 μL of protein A- (for rabbit antibodies) or protein G- (for mouse antibodies) Sepharose beads. Tubes were rocked overnight at 4°C, followed by two washes to remove unbound antibodies.
HeLa WT cells were collected in HEPES buffer (20 mM HEPES, 100 mM sodium chloride, 1 mM EDTA, 1% Triton X-100, pH 7.4) supplemented with protease inhibitor. Lysates were rocked for 30 min at 4°C and spun at 110,000 × g for 15 min at 4°C. One-mL aliquots of lysate at 1.0 mg/mL were incubated with an antibody-bead conjugate for ~2 hours at 4°C. The unbound fractions were collected, and the beads were subsequently washed three times with 1.0 mL of HEPES lysis buffer and processed for SDS-PAGE and Western Blot on 5-16% polyacrylamide gels.
Antibody screening by immunofluorescence

Immunofluorescence was performed as described in our standard operating procedure. 12-15,18 HeLa WT and FUS KO cells were labelled with a green and a far-red fluorescent dye, respectively. The fluorescent dyes used are from Thermo Fisher Scientific (cat. numbers C2925 and C34565). WT and KO cells were plated on glass coverslips as a mosaic and incubated for 24 hrs in a cell culture incubator at 37°C, 5% CO2. Cells were fixed in 4% paraformaldehyde (PFA) (Beantown Chemical, cat. number 140770-10ml) in PBS for 15 min at room temperature and then washed 3 times with PBS. Cells were permeabilized in PBS with 0.1% Triton X-100 for 10 min at room temperature and blocked with PBS containing 5% BSA, 5% goat serum (Gibco, cat. number 16210-064) and 0.01% Triton X-100 for 30 min at room temperature. Cells were incubated with IF buffer (PBS, 5% BSA, 0.01% Triton X-100) containing the primary FUS antibodies overnight at 4°C. Cells were then washed 3 × 10 min with IF buffer and incubated with the corresponding Alexa Fluor 555-conjugated secondary antibodies in IF buffer at a dilution of 1.0 μg/mL for 1 hr at room temperature, together with DAPI. Cells were washed 3 × 10 min with IF buffer and once with PBS. Coverslips were mounted on microscope slides using fluorescence mounting media (DAKO).
Imaging was performed using a Zeiss LSM 880 laser scanning confocal microscope equipped with a Plan-Apo 40× oil objective (NA = 1.40). Analysis was done using the Zen navigation software (Zeiss).
Kathleen Southern
Dear Ryota Hikiami, Thank you for your thorough review of this Data Note, which analyses the performance of commercial antibodies for RNA-binding protein FUS, a potential causative gene in several neurodegenerative diseases.
To answer your first point of feedback, literature reference 9, which refers to the publication by Van Langenhove et al., reports a mutational analysis of FUS in patients with FTLD. To make this explicit, we will be submitting a revised version with modified text that includes FTLD in the list of neurodegenerative diseases potentially caused by FUS gene variants.
As for the second point, we always include a dataset in our reports, containing all underlying raw data for the experiments performed (reference 20). This increases the transparency and reproducibility of our work. It also allows viewers to better understand the results and to see what was not included in the figures. For immunofluorescence, the dataset includes the .czi files acquired for each antibody under the microscope.
We hope this provides the additional image coverage you were looking for.
Thank you again for your suggestions! Competing Interests: No competing interests were disclosed.
Inhibitory Effects of Ethanol in the Neonatal Rat Hippocampus In Vivo
Ethanol-induced neuroapoptosis in the developing brain has been suggested to involve suppression of neuronal activity. However, ethanol acts as a potent stimulant of neuronal activity by increasing the frequency of depolarizing GABA-dependent giant depolarizing potentials in neonatal rat hippocampal slices in vitro. Here, we show that ethanol strongly inhibits, in a dose-dependent manner (1–6 g/kg), sharp waves and multiple unit activity in the hippocampus of neonatal (postnatal days P4–6) rats in vivo. Thus, the effects of ethanol on developing hippocampal network activity differ fundamentally in vitro (stimulation) and in vivo (inhibition).
Introduction
Ethanol and general anesthetics induce massive neuroapoptosis in the developing brain [1,2]. Considerable evidence obtained in the neocortex indicates that the neuroapoptotic effects of these drugs involve suppression of early activity patterns and of neuronal activity [3][4][5]. However, ethanol has also been shown to increase the frequency of depolarizing GABA-driven giant depolarizing potentials and to act as a potent stimulator of neuronal activity in neonatal rat hippocampal slices in vitro [6]. These findings argue against the hypothesis that the adverse effects of ethanol in the neonatal hippocampus involve suppression of activity. Yet the effects of ethanol on early hippocampal activity have so far not been addressed in vivo. The aim of this study was to characterize the effects of ethanol on the electrographic activity and the neuronal firing in the hippocampus of neonatal rat pups in vivo.
Material and Methods
This work has been carried out in accordance with EU Directive 2010/63/EU for animal experiments, and all animal-use protocols were approved by the French National Institute of Health and Medical Research (INSERM, protocol N007.08.01) and by Kazan Federal University rules on the use of laboratory animals (ethical approval by the Institutional Animal Care and Use Committee of Kazan State Medical University N9-2013). Wistar rats at postnatal days (P) P4-P6 were used. Surgery was performed under isoflurane anesthesia, and the animals were left to recover from anesthesia for more than one hour before the recordings. Preparation of the animals for the head-restrained recordings and the recording setups were as described previously [3]. Recordings of sharp waves (SPWs) in the local field potential (LFP) and of multiple unit activity (MUA) were performed from the hippocampus using linear silicone probes (16 channels, 100 μm separation between the recording sites; Neuronexus Technologies, USA). The signals were amplified and filtered (×10,000; 0.1-10 kHz) using a DigitalLynx (Neuralynx, USA) amplifier, digitized at 32 kHz and saved on a PC for post-hoc analysis using custom-written functions in Matlab (MathWorks, USA), as described previously [3]. The Mann-Whitney and t tests were used for group data comparisons, with the level of significance set at p < 0.05.
Results and Discussion
The electrical activity of the hippocampus was characterized by SPWs, which are the predominant electrographic activity pattern during the early postnatal period (Fig. 1a, b) [7]. In keeping with these previous studies, the SPWs synchronized CA1 units, reversed polarity at the pyramidal cell layer, as evidenced by current source density analysis, and occurred irregularly at a frequency of 3.7 ± 0.2 min-1 (n = 9; P4-6). The average MUA in the CA1 pyramidal cell layer, estimated from a 1-h recording session prior to ethanol administration, was 3.5 ± 1.2 spikes/s. Ethanol was administered intraperitoneally at three dosage regimens: 1 g/kg + 3 g/kg (with a 1-h interval), 4.5 g/kg, and 6 g/kg. At the maximal dosage (6 g/kg), ethanol induced rapid and profound inhibition of the hippocampal activity (Fig. 1a, c, d). Thirty minutes after ethanol administration, SPWs and CA1 MUA were almost completely suppressed, with SPW frequency and MUA reduced to 7.0 ± 4.8 % and 2.3 ± 1.8 % of the control values, respectively (n = 3, p < 0.05, t test). This was followed by a partial recovery of the activity 3 h after ethanol administration (SPW frequency recovered to 32 ± 6 % and MUA to 15 ± 11 % of the control values; n = 3). Ethanol at lower doses (1 + 3 g/kg with a 1-h interval, and 4.5 g/kg) produced a less rapid and less pronounced, but also long-lasting, suppression of the hippocampal activity. Thirty minutes after ethanol administration at a dosage of 4.5 g/kg, MUA decreased to 30 ± 26 % and SPW frequency to 25 ± 4 % of the control values (n = 3, p < 0.05, t test). At a dosage of 1 g/kg, MUA showed a tendency to decrease to 61 ± 13 % of the control values, but this was not significant (n = 3, p > 0.05, t test), whereas SPW frequency was reduced to 29 ± 2 % of the control values (n = 3, p < 0.05, t test). These results indicate that ethanol induces rapid suppression of hippocampal activity in rat pups and that its effects are dose-dependent. The inhibitory effects of ethanol were long-lasting, so that 3 h after the injection the electrical activity was still suppressed and significantly lower than in control before the ethanol injection (Fig. 1c, d).

Fig. 1. a Hippocampal layers stratum pyramidale, stratum oriens, and stratum radiatum are indicated as sp, so, and sr, respectively. b Example of the SPW and its current source density. c-d Time course of the CA1 pyramidal cell layer MUA frequency (c) and the SPW frequency (d) after ethanol administration at 1 g/kg + 3 g/kg, 3 g/kg, 6 g/kg (i.p.) during the 3-h recordings. Significant differences from the control values (p < 0.05) are indicated by the asterisks (Mann-Whitney test).
Thus, ethanol at the doses inducing massive neuroapoptosis in the developing brain exerts powerful inhibitory actions on SPWs and neuronal firing in the neonatal rat hippocampus. These results differ from the stimulation of activity described in hippocampal slices of neonatal rats in vitro, where ethanol was shown to increase the frequency of depolarizing GABA-driven giant depolarizing potentials [6]. Although the reasons for this sharp discrepancy between the ethanol actions in vivo and in vitro are unknown, it plausibly involves the increase in GABAergic transmission by ethanol [6], which exerts, during the neonatal period, complex excitatory and inhibitory network actions in the hippocampus in vitro [8,9], but mainly inhibitory network actions in cortical circuits of neonatal rodents in vivo [10][11][12]. Another mechanism could involve respiratory acidosis, which was shown to suppress giant depolarizing potentials in neonatal rat hippocampal slices [13]. However, in our previous study, ethanol at the maximal dosage of 6 g/kg did not affect oxygen saturation (SpO2), breathing rate or heart rate in neonatal rats under similar experimental conditions [4]. Independently of the underlying mechanisms, our results support the hypothesis that ethanol-induced hippocampal neuroapoptosis during the neonatal period involves suppression of neuronal activity [14,15].
Conclusion
Our main finding is that ethanol strongly suppresses activity in the neonatal rat hippocampus in a dose-dependent manner, similarly to the ethanol actions previously described for the somatosensory cortex [4]. These results provide mechanistic support to the hypothesis that the neuroapoptotic actions of ethanol and general anesthetics involve severe suppression of the early activity patterns, which are particularly vulnerable to these drugs.
"Biology",
"Medicine"
] |
Biodiversity research in the “big data” era: GigaScience and Pensoft work together to publish the most data-rich species description
With the publication of the first eukaryotic species description combining transcriptomic, DNA barcoding, and micro-CT imaging data, GigaScience and Pensoft demonstrate how the classical taxonomic description of a new species can be enhanced by applying new-generation molecular methods and novel computing and imaging technologies. This 'holistic' approach to the taxonomic description of a new species of cave-dwelling centipede is published in the Biodiversity Data Journal (BDJ), with a coordinated data release in the GigaScience GigaDB database.
Background
The challenge

While much has been written about the data deluge in genomics, biodiversity research has undergone a similar explosion in the throughput and volume of data produced. With increasingly threatened habitats, free and open access to these data is essential for informed decision-making on conservation issues. Much of this growth has been led by advances in DNA barcoding, and by combining bulk sampling with genomic technology, the technique of metabarcoding will increase this flood of data even further. With growing sampling intensities via mass sampling of arthropods, mass detection of environmental DNA in aquatic environments, and broad overviews of plant communities, these sophisticated analyses allow temporal and spatial assessment of biodiversity across varied environments at previously unobtainable levels of detail.
These new ecoinformatics and biomonitoring techniques are able to work quantitatively [1], so in addition to ecosystem assessment, they also allow biodiversity surveys and the discovery of new species, even inside metropolitan areas that should be comparatively well sampled [1].
Traditional descriptive taxonomy has failed to keep pace with the explosive growth of sequencing. As a consequence, there has been a huge increase in the number of "dark taxa" within public sequence databases. These are taxa that are not identified to a known species, either because they are new to science or because the specimen has never been identified. In many cases dark taxa are already represented within museum collections and have published descriptions. However, there is no mechanism by which taxonomists can easily verify the identity of dark taxa, and even if there were, describing them quickly and efficiently was impossible until recently, due to the nomenclatural rules prohibiting the description of new species in electronic-only publications. The increasing pace of species extinction, coupled with the shrinking pool of taxonomic expertise, means that there is an urgent need to speed up the process of investigating biodiversity.
Potential solutions
From September 2012 the process of describing animal species joined the electronic era, with the acceptance of electronic taxonomy publication and registration with ZooBank, the official registry of the ICZN (International Commission on Zoological Nomenclature). The genomic explosion has led to a rapid increase in the number of reference genomes, and the production of transcriptomes is becoming an even faster and more cost-effective alternative for producing massive amounts of gene sequence data for genetic and phylogenomic studies. The pace of traditional taxonomy is, in some instances, catching up with genome sequencing, as was demonstrated with a new Strepsiptera genome [2], which was published back-to-back with its species description in ZooKeys [3].
While the barcoding community has produced workarounds for the lack of species descriptions, such as the use of interim taxonomic nomenclature (operational taxonomic units) in their sample registries, DNA-based classifications were initially restricted to 'taxonomy-free' groups such as bacteria and fungi. The new Barcode Index Number (BIN) system allows clustering of sequences into BINs and can aid revisionary taxonomy by flagging possible cases of synonymy [4].
On top of advances in sequencing technology, new imaging techniques are providing ways to study morphology and animal behavior in unprecedented and reproducible detail, and in a non-destructive manner. Subrobotic digital imaging can rapidly process stacks of images through collections. Digital video allows for archiving of in-situ behavior, while the use of X-ray micro-computed tomography scanning (micro-CT) supports three-dimensional virtual representations of materials. The use of these data as virtual type specimens has been promoted through the concept of "cybertypes". These digital representations of exemplar specimens create the potential for new forms of collections that can be openly accessed and used without the physical constraints of loaning specimens or visiting natural history collections [5].
Some have suggested a 'turbo-taxonomy' approach, combining all of these techniques to address a perceived decline in taxonomic expertise [6,7]. This putative pipeline has recently been demonstrated with large series of parasitic wasps [6] and Trigonopterus weevils [7]. While these examples have focused on taxonomic throughput, less attention has been given to the potential to integrate these different data types.
The example
GigaScience and Pensoft Publishers present the results of a pilot study aiming to demonstrate how the classical taxonomic description of a new species can be enhanced by utilizing the latest molecular methods and novel computing and imaging technologies. A new species of cave-dwelling centipede, Eupolybothrus cavernicolus Komerički & Stoev (Chilopoda: Lithobiomorpha: Lithobiidae) [8], recently discovered underground in a Croatian cave, is the first eukaryotic species description for which, in addition to the traditional morphological description, the authors provide a fully sequenced transcriptome, DNA barcodes and BIN entries, detailed anatomical X-ray micro-CT scans, and a movie of the living specimen documenting important traits of its behavior [9].
Communicating the results of next-generation sequencing effectively requires the next generation of data publishing. The description published in the newly launched Biodiversity Data Journal (BDJ) aims to provide a gold standard not just for the quantity and diversity of the data available, but for the quality and amount of metadata that make these data reusable and interoperable. It also demonstrates the benefits of an integrated scholarly publishing workflow that allows authors, curators and editors to write, peer-review, publish, and disseminate biodiversity data within a single web-based platform [10]. GigaScience's contribution to the pilot is the use of the GigaDB database for large-scale data handling, management, curation and storage (see [9]). The data are also available in the relevant community-specific databases, with the transcriptomic sequencing data in both ENA and ArrayExpress, plus annotation data made publicly available through ArrayExpress to the most stringent (MINSEQE) metadata standards. Imaging data are deposited in morphological databases, and biodiversity data in the Barcode of Life databases. All data are made available with no restrictions on reuse under the most open CC0 public domain waiver. The publication of Stoev et al. in this manner provides a significant step forward, from integrating small datasets in the article text in both computer- and human-readable formats into the world of big-data publishing.
To tackle complex and novel scientific questions, datasets and metadata from different sources need to be harmonized and made interoperable. Working with the ISA community, we have provided metadata in the interoperable ISA-TAB format to maximize the discovery, exchange and informed integration of these diverse datasets. Until recently there has been a lack of incentives for data producers to make their data available, but this data note provides an example of how credit can be obtained for this effort. While the focus is on providing data rather than analysis, there are interesting questions to be asked, such as how the species evolved, how its segmented body structure develops, and how it has adapted to its dark cave environment. By providing such a diverse range of phenotypic and molecular data in an integrated and reusable form, we hope to enable other researchers to explore these and other questions. While this new species' subterranean lifestyle may protect it from some of the growing threats that surface habitats face, this new type of species description also provides an example of how much previously uncharacterized information on its behavior, internal structure, physiology and genetic make-up can be preserved for future generations.
Competing interests: SCE and CIH are employed by GigaScience and BGI Hong Kong. VS is Editor-in-Chief of BDJ. PS and LP are employed by Pensoft.
"Biology",
"Computer Science",
"Environmental Science"
] |
A combined on-the-fly/interpolation procedure for evaluating energy values needed in molecular simulations
We propose an algorithm for molecular dynamics or Monte Carlo simulations that uses an interpolation procedure to estimate potential energy values from energies and gradients evaluated previously at points of a simplicial mesh. We chose an interpolation procedure which is exact for harmonic systems and considered two possible mesh types: Delaunay triangulation and an alternative anisotropic triangulation designed to improve performance in anharmonic systems. The mesh is generated and updated on the fly during the simulation. The procedure is tested on two-dimensional quartic oscillators and on the path integral Monte Carlo evaluation of HCN/DCN equilibrium isotope effect.
(Dated: November 19, 2021)
I. INTRODUCTION
Accurate evaluation of the Born-Oppenheimer potential energy surface of a molecular system is essential for predicting its dynamical and equilibrium properties. Numerous advances in the algorithms used for the problem 1,2 combined with increasing computational power available to researchers have made it possible to combine on-the-fly ab initio evaluation of the potential energy even with path integral 3-5 or semiclassical 6-11 dynamics algorithms.
Unfortunately, such approaches are still computationally expensive and, for long simulations requiring a very large number of potential energy values in the same region of configuration space, it is reasonable to instead generate a mesh of points at which accurate ab initio calculations are made and then fit a function to reproduce their potential energy values, or other quantities that become bottlenecks of the calculation, such as Hessians of the potential energy. 12,13 For that purpose, a multitude of methods has been proposed, from modified Shepard interpolation [14][15][16][17] to more sophisticated approaches, 18 including those based on interpolating moving least squares, 19,20 Gaussian process regression, 21,22 and neural networks. [23][24][25][26] We aimed for a procedure that would interpolate energies from stored data evaluated at points of a simplicial mesh and that would be comparable to Shepard interpolation in terms of simplicity and generality. To that end, we investigated interpolation from the points of the mesh that constitute a simplex containing the point of interest, an approach already applied to some lower-dimensional systems. 27,28 Compared to Shepard interpolation, the downside of this approach is the necessity to generate a triangulation of the mesh, whose size grows very fast with the number of dimensions, 29 but the upside is the logarithmic scaling of the interpolation procedure with the number of mesh points, as well as an extra order of accuracy for a given number of derivatives available at the mesh points. In comparison to the method of Ref. 28, the main differences in the approach presented here are an alternative triangulation of the mesh and a different choice of the interpolant, along with a procedure for updating the mesh during the simulation.
The theory behind the proposed algorithm is explained in Sec. II, while Sec. III presents numerical tests for model anharmonic potentials and for the HCN/DCN equilibrium isotope effect. While we focus on classical Monte Carlo and path integral Monte Carlo applications, similar interpolation procedures can be also used with molecular dynamics or path integral molecular dynamics methods.
II. THEORY
Running a Monte Carlo simulation requires knowledge of the potential energy function V(r), where r is the D-dimensional vector of the system's internal coordinates. Let us assume that we can access a number of previously stored points together with their potential energy and gradient values, as well as a triangulation of their mesh; we want to use that information to estimate the potential energy value at a new point r̄. If the mesh currently contains fewer than D + 1 points, V(r̄) is evaluated exactly and r̄ is added to the mesh, which will be triangulated (in the only possible way) once D + 1 points have been added. If the mesh has already been triangulated, the following algorithm is used for estimating V(r̄):

1. Find the simplex S̄ containing r̄ or verify that r̄ lies outside the convex hull C_mesh of all mesh points.
2. If S̄ was found, calculate the value of the interpolant Ṽ(r̄) and estimate whether the interpolation error |Ṽ(r̄) − V(r̄)| is below a predefined threshold.
3. If r̄ ∉ C_mesh, or if V(r̄) cannot be estimated with sufficient accuracy, add more points to the mesh to allow for an accurate estimate of V(r̄).
We will discuss each part of the algorithm separately in the following subsections.
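Before turning to the individual parts, a minimal Python sketch of this driver logic is given below. It assumes a user-supplied quadratic-exact interpolant (`interpolate`) and an exact potential evaluator (`potential`), both hypothetical placeholders, and, for brevity, rebuilds the triangulation with scipy's Delaunay instead of maintaining it incrementally; it also omits the outward push of Sec. II B:

import numpy as np
from scipy.spatial import Delaunay

def estimate_energy(r, pts, vals, grads, potential, interpolate, dv_max):
    """Return V(r), interpolating whenever a containing simplex allows it.

    pts/vals/grads hold previously evaluated points; `interpolate` returns
    (V_tilde, dV) from the simplex vertex data; dv_max is the reliability
    threshold delta-V_max of the text.
    """
    if len(pts) >= len(r) + 1:
        tri = Delaunay(np.asarray(pts))      # rebuilt here for simplicity
        s = int(tri.find_simplex(np.asarray(r)))
        if s >= 0:                           # r lies inside the convex hull
            idx = tri.simplices[s]
            v_tilde, dv = interpolate(r, [pts[i] for i in idx],
                                      [vals[i] for i in idx],
                                      [grads[i] for i in idx])
            if dv < dv_max:
                return v_tilde               # reliable interpolation
    v, g = potential(r)                      # fall back to the exact call
    pts.append(np.asarray(r)); vals.append(v); grads.append(g)
    return v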
A. Interpolation procedure and reliability estimate
Suppose r̄ is inside the simplex S̄ with vertices r_jS̄ (j = 1, ..., D + 1) and we want to estimate V(r̄) based on the values of the energy and its gradient at the D + 1 points r_jS̄. Previously, Clough-Tocher interpolants 30,31 were used for this problem in up to three dimensions; 27,28 these interpolation schemes are exact for cubic potentials and have derivatives that are continuous up to second order, but they have two disadvantages: they use Hessians, whose evaluation increases enormously the cost of an ab initio calculation, and their generalization to higher-dimensional systems is not straightforward. Perpendicular interpolation 32 is another powerful approach which, for an arbitrary number of dimensions and an arbitrary number of derivatives q available at all vertices, produces an interpolant that is exact for a polynomial of order q + 1 and has q continuous derivatives; however, it scales exponentially with the dimensionality D, making potential applications to higher-dimensional systems problematic. In this work we used an interpolant that exhibits a better scaling with D at the cost of having discontinuous derivatives. (If this is a problem, the interpolants of Ref. 32 should be used instead.) To define this interpolant we introduce the barycentric coordinates λ_j (j = 1, ..., D + 1) of r̄, which are defined by the D + 1 equations

$$\sum_{j=1}^{D+1} \lambda_j\, r_{j\bar S} = \bar r, \qquad \sum_{j=1}^{D+1} \lambda_j = 1.$$

The interpolant we propose is defined in terms of "partial" interpolants Ṽ_j, all of which are exact for quadratic potentials. One way to combine them into a single interpolant symmetric with respect to vertex permutations is the combination proposed in Ref. 33 (based on Refs. 34 and 35); in that expression, the term on the second line vanishes because its second factor is zero. This combination of the Ṽ_j, however, would not reproduce the potential energy gradient at the vertices, which is a waste since each Ṽ_j reproduces the gradient at vertex j. We therefore use an alternative expression Ṽ that does reproduce the gradients at all vertices. It is impossible to get a reliable estimate of the interpolation error without any knowledge of the third derivatives of V(r) in the simplex, and the application of Bayesian approaches, as in Shepard interpolation, 36 is complicated by Ṽ_j(r̄) containing data from all vertices of the simplex at once. One exception is the one-dimensional case, where an appropriate definition of δV (with D = 1) yields an exact bound |Ṽ(r̄) − V(r̄)| ≤ δV(r̄). The estimate seems to perform qualitatively correctly for a large number of higher-dimensional potentials as well, so we decided to deem the interpolation result reliable if δV(r̄) lies below some predetermined threshold δV_max. This makes δV_max a parameter whose only relation to the interpolation error is that both are proportional to the magnitude of the third derivatives in a small enough simplex.
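Obtaining the barycentric coordinates amounts to solving a small (D+1)×(D+1) linear system; a minimal numpy sketch (with vertices stored as a (D+1, D) array):

import numpy as np

def barycentric_coordinates(r, vertices):
    """Solve sum_j lambda_j * vertices[j] = r together with sum_j lambda_j = 1.

    vertices: (D+1, D) array of simplex vertices; r: point of shape (D,).
    All coordinates lie in [0, 1] iff r is inside the simplex.
    """
    vertices = np.asarray(vertices, dtype=float)
    d = vertices.shape[1]
    a = np.vstack([vertices.T, np.ones(d + 1)])   # (D+1) x (D+1) system
    b = np.append(np.asarray(r, dtype=float), 1.0)
    return np.linalg.solve(a, b)

# Example: the midpoint of one edge of a 2D triangle
lam = barycentric_coordinates([0.5, 0.0], [[0, 0], [1, 0], [0, 1]])
print(lam)   # [0.5, 0.5, 0.0]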
The lack of a more precise relation between the two quantities forced us to estimate the exact interpolation error for a small number of randomly chosen interpolation results, thereby determining whether the chosen δV_max is adequate. For small enough simplices, the leading contributions to both the interpolation error and δV(r̄) depend linearly on the tensor of third derivatives of the potential energy, so one possible rule of thumb for fixing an unacceptable interpolation error is a proportional decrease of δV_max, e.g. halving δV_max if the root mean square error (RMSE) of the interpolation should be halved.
B. Updating the mesh and its triangulation
If we either find r̄ to be outside the convex hull C_mesh or find that δV ≥ δV_max, we add a carefully chosen point r_add to the mesh and update the triangulation. Before describing the algorithm, let us introduce several definitions. Firstly, the boundary of C_mesh is a set of faces referred to as F_mesh. Secondly, we will often use the signed distance n_F(r) from the plane containing the face F ∈ F_mesh, with the sign defined to be non-negative at the mesh points. Lastly, both systems that will be considered in Sec. III have certain restrictions on the values r can take: in the symmetric two-dimensional quartic oscillator, the symmetry makes it possible to consider values of the coordinates in only one quadrant (e.g., both x and y non-negative), while in HCN our choice of internal coordinates implies that two of them take only non-negative values. We thus consider a situation where r needs to satisfy one or more linear constraints of the form

$$v_i \cdot r \ge c_i,$$

where i is the index of the constraint, c_i is a scalar constant, and v_i is a constant vector.
When r̄ was inside C_mesh, we found that adding the point r_add = r̄ to the mesh worked well enough. However, when r̄ was outside C_mesh, we "pushed" r_add further out, i.e., chose r_add further away from C_mesh than r̄, primarily to avoid creating nearly singular simplices. The procedure, referred to as "outward push," works as follows:
1. Find F_min ∈ F_mesh that minimizes n_F(r̄).
2. Set r_add by displacing r̄ further away from the plane of F_min, so that r_add lies further from C_mesh than r̄.
3. If r_add violates one of the constraints (9), move it onto the corresponding constraining surface.
Step 3 is designed to move the mesh point (pushed away in Steps 1-2) onto one of the "constraining surfaces" defined by Eq. (9), instead of rejecting a move that would violate the constraint. Several such rejections would otherwise lead to a mesh that approaches infinitely close to one of the constraining surfaces during the simulation, as illustrated in Subsec. III A.
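A possible reading of the outward push in code (a hedged sketch: the exact push magnitude of Step 2 was typeset as an equation in the original and is not reproduced here; faces are hypothetical (point, inward normal) pairs and constraints are the (v_i, c_i) pairs of Eq. (9)):

import numpy as np

def outward_push(r_bar, faces, constraints, push=1.0):
    # Step 1: face whose plane r_bar lies furthest below (most negative n_F)
    def signed_dist(face):
        point, normal = face  # normal chosen non-negative at mesh points
        return np.dot(r_bar - point, normal)
    F_min = min(faces, key=signed_dist)
    d = signed_dist(F_min)  # negative when r_bar is outside C_mesh
    # Step 2 (assumed form): move r_add further out along the face normal
    r_add = r_bar + push * d * F_min[1]
    # Step 3: if a constraint v . r >= c is violated, project r_add onto
    # the constraining surface v . r = c instead of rejecting the move
    for v, c in constraints:
        v = np.asarray(v, dtype=float)
        if np.dot(v, r_add) < c:
            r_add = r_add + (c - np.dot(v, r_add)) / np.dot(v, v) * v
    return r_add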
Once r_add is chosen, we need to update the triangulation of the mesh. Here we only present the main ideas of the employed algorithms; the details are in the Appendix. The starting point of this work was the use of Delaunay triangulation 37 following its previous successful applications. 27,28 There exist several algorithms 29 that update a Delaunay triangulation at a cost that does not increase with the number of simplices. The approach outlined in this subsection uses Lawson flips 37 defined via "parabolic lifting" 38 instead of the more conventional empty circumsphere test. 37,38 One considers sets of D + 2 points, which can be triangulated in at most two different ways. 37 Whenever such a set is already triangulated using a simplex array S and an alternative triangulation S′ is available, one compares the values G(S) and G(S′), where the function G is defined in terms of a per-simplex cost g(S) and the volume v(S) of simplex S. The cost function g(S) that yields Delaunay triangulation, g_Delaunay(S), follows from the parabolic lifting construction. If G_Delaunay(S) > G_Delaunay(S′), which is equivalent to S failing the empty circumsphere test used to define Delaunay triangulation, 37,38 the simplices of S are replaced with those of S′.
One performs Lawson flips until they fail to change the triangulation regardless of the initial S. Since each Lawson flip decreases G_Delaunay(S_mesh), where S_mesh is the array of all simplices in C_mesh, the algorithm is bound to stop at a certain point, and it can be proven 37 that the resulting final triangulation is unique to the mesh. It can also be shown that G(S) ≠ G(S′) for the two triangulations of D + 2 points unless the points lie on a sphere or in a hyperplane; the treatment of these singular cases is discussed in the Appendix.
The expression for g_Delaunay(S) underlines one problem with Delaunay triangulation: it treats all dimensions equivalently, necessitating a choice of internal coordinates that makes the properties of V(r) approximately isotropic, which tends to be non-trivial. A Bayesian approach to bypassing the problem for Shepard interpolation is discussed in Ref. 36, while for simplex interpolation one can use higher-order derivatives to define a Riemannian metric 39 that can then be used to construct the triangulation optimal for the current interpolation procedure. [40][41][42] Unfortunately, for quadratic interpolation the latter option would involve calculating third derivatives of the potential, which is rather expensive; therefore, we instead used Lawson flips with a modified g(S). Obviously, the procedure still stops at a certain triangulation regardless of the choice of g(S), even though we will not be able to guarantee the triangulation's uniqueness without restrictions on the potential V(r). The anisotropic g(S) proposed in this work, g_anisotr(S), is a qualitative estimate of the upper bound of the interpolation error in a given simplex; unlike g_Delaunay, g_anisotr is invariant with respect to linear transformations of coordinates. To avoid entering infinite loops when G(S) = G(S′), we modify the flipping criterion to be G(S) − G(S′) > δG_min, where δG_min is a small predefined parameter.
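Since the explicit formulas for G and g were given as equations in the original, the sketch below (ours) only assumes that G aggregates a per-simplex cost over the affected simplices; the flip loop with the δG_min guard then reads:

def lawson_flips(triangulation, alternatives, g, dG_min=1e-12):
    # triangulation: mutable set of simplices; alternatives(t) is a
    # hypothetical enumerator yielding (old_simplices, new_simplices)
    # pairs for every flippable set of D+2 points; g is g_Delaunay or
    # g_anisotr.  G is assumed here to be the sum of per-simplex costs.
    changed = True
    while changed:
        changed = False
        for old, new in alternatives(triangulation):
            if sum(g(S) for S in old) - sum(g(S) for S in new) > dG_min:
                for S in old:
                    triangulation.remove(S)
                for S in new:
                    triangulation.add(S)
                changed = True
                break  # re-enumerate the flippable sets after a change
    return triangulation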
g_anisotr should be applicable to any simplex interpolant that uses the potential and its gradient and is exact for quadratic potentials. To understand why quadratic interpolation is special in this context, consider interpolating from D + 1 vertices with q derivatives available, which in general can yield an interpolant exact for polynomials up to degree q + 1. In the special case of D = 1, polynomials of degree q + 1 can be constructed from the 2(q + 1) available parameters; for q = 1, g_anisotr arises naturally as the degree to which the two cubic polynomials disagree. The q = 1 case is special because for larger values of q, with more than two polynomials of degree q + 1, several analogues of g_anisotr are possible, while for q = 0 the (linear) interpolant is uniquely defined, making it impossible to define a similar g_anisotr.
From now on, the triangulation that results from using g_anisotr with Lawson flips will be referred to as the "anisotropic triangulation".
C. Search for the simplex
The last task is finding the simplex S̄ that contains r̄. We used the stochastic walk, 43 which iteratively updates S̄ from an initial guess S̄_init by calculating the barycentric coordinates λ_j [Eq. (1)] of r̄ with respect to the current simplex and, whenever some λ_j is negative, stepping to the neighboring simplex across the corresponding face.
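A minimal sketch of such a walk (assuming the barycentric_coords helper from the earlier sketch and hypothetical accessors for the simplex vertices and neighbors):

import numpy as np

def find_simplex(r, simplex, vertices_of, neighbor_across, rng=None):
    # Stochastic (visibility) walk: while some barycentric coordinate of
    # r is negative, step to the neighbor across the corresponding face;
    # choosing randomly among the negative coordinates avoids cycling.
    rng = np.random.default_rng() if rng is None else rng
    while True:
        lam = barycentric_coords(r, vertices_of(simplex))
        negative = np.flatnonzero(lam < 0)
        if negative.size == 0:
            return simplex  # all lambda_j >= 0: r lies inside this simplex
        simplex = neighbor_across(simplex, rng.choice(negative))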
A. Anharmonic oscillator
A large number of molecular systems are close to harmonic in the most relevant part of their configuration space, so as a model problem we chose a system of two harmonic vibrational modes with a "very anisotropic" anharmonic perturbation, whose strength is set by the parameter ε_anharm [Eq. (13)]. In all examples presented here we ran 2^24 (≈ 1.68 · 10^7) step Monte Carlo simulations with inverse thermodynamic temperature β = 1 and δV_max = 3.125 · 10^−2; the mesh was constructed in |x| and |y| rather than x and y to capitalize on the potential's symmetry. The results are presented in Figs. 1 and 2 and in Table I. As seen in Table I, the improvement is definitely less drastic for anharmonic potentials, where the simplex size is also determined by the magnitude of δV(r̄).
The motivation behind the anisotropic triangulation introduced in this work is illustrated by Fig. 2, which compares the meshes and interpolation error distributions obtained using the two different triangulations in Monte Carlo simulations for ε_anharm = 0.01. The simplices obtained with the Delaunay and anisotropic triangulations are plotted in panels (a) and (b); switching to the anisotropic triangulation "elongates" triangles along the |x| axis, as it should, judging by the form of the anharmonic part of the potential (13). While in this case the distribution of interpolation errors is not significantly affected, the number of mesh points added is decreased by more than a factor of two (see Table I).
We also checked how the tendencies observed for ε_anharm = 0.01 change at larger anharmonicities; the interpolation error always remained smaller than δV_max, a tendency we observed for a wide range of potentials.
Lastly, recall that running the Monte Carlo simulations presented here involved 2^24 + 1 ≈ 1.68 · 10^7 potential energy evaluations, and instead of exact calculations in each instance we used mere thousands of mesh points to reproduce these exact calculations with great precision (see Table I). This demonstrates the potential of our method for speeding up practical calculations, a point elaborated further in the next subsection.
B. HCN/DCN equilibrium isotope effect
In this subsection we combine our interpolation procedure with the path integral Monte Carlo method, 46,47 which accounts for nuclear quantum effects by replacing each atom of the simulated molecule with P replicas connected by harmonic forces. 48 We combined the path integral Monte Carlo method with the free energy perturbation approach 49 (direct estimators 50) for isotope fractionation to calculate the HCN/DCN equilibrium isotope effect, defined in terms of the ratio of partition functions Q of the two isotopologues. The potential energy surface of HCN was taken from Ref. 51. The interpolation algorithm used three internal coordinates defined in terms of the atomic radius vectors r_D/H, r_C, and r_N. It is necessary to use the "outward push" procedure to avoid the mesh approaching infinitely close to the x_3 = 0 surface due to the x_3 ≥ 0 constraint, for the reasons illustrated in Subsec. III A. The isotope-effect estimator was evaluated only every several Monte Carlo steps (to avoid wasting computational effort on calculating correlated samples); the statistical error of its average was estimated as the root mean square error evaluated with block averaging. 45 The number of replicas P was chosen as 256 and 32 for 200 K and 1000 K, respectively; it was verified with separate calculations that doubling P did not change the isotope effect by more than 1%. For the other temperatures, P was assigned by linear interpolation of these P values as a function of 1/T. We set δV_max = 10^−4 a.u., and after each successful interpolation the algorithm had a 10^−5 probability of carrying out an additional exact potential energy calculation in order to estimate the RMSE of interpolation.
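With the two anchor values quoted above, the linear interpolation of P in 1/T amounts to the following small illustrative helper (ours; rounded to the nearest integer):

def replicas(T, T1=200.0, P1=256, T2=1000.0, P2=32):
    # linear interpolation of the number of replicas P as a function of 1/T
    x, x1, x2 = 1.0 / T, 1.0 / T1, 1.0 / T2
    return int(round(P2 + (P1 - P2) * (x - x2) / (x1 - x2)))

print(replicas(500.0))  # 88 replicas at 500 K under this interpolation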
Results and discussion
In Table II, isotope effects calculated with our interpolation algorithm are compared to benchmark values calculated with the original force field and with harmonic-approximation [54][55][56] values. Interpolation reproduces the benchmark isotope effect values with an error below 1%; the decent agreement between the harmonic-approximation values and the formally exact path integral results is expected, considering that HCN is a fairly harmonic molecule.
The RMSEs of interpolation and the numbers of mesh points generated during the simulations are displayed in Table III. If we used an expensive ab initio procedure for the exact potential, the speedup due to the interpolation method would equal the ratio of the number of interpolated potential energy evaluations to the number of exact potential energy evaluations, which approximately equals the number of mesh points generated in the simulation. In this three-dimensional problem this ratio is always of the order of 10^4, indicating a large potential speedup. As discussed in Subsec. II A, we made an additional number of exact potential energy calculations to make sure that the choice of δV_max guarantees an adequate interpolation accuracy; the number of these additional calculations (approximately the number of potential evaluations during the calculation times 10^−5, see Subsubsec. III B 1) was always small compared to the number of mesh points, but still large enough to estimate the RMSE of interpolation with high precision. As was the case for the quartic oscillator, the mean square interpolation error is significantly smaller than |δV_max|^2.
However, because HCN is a very harmonic system, the anisotropic triangulation loses its advantage over the Delaunay triangulation. Both approaches behave similarly and, in fact, the anisotropic triangulation yields slightly higher interpolation errors and generates slightly more mesh points. We have also investigated how our method performs if the mesh created during one simulation is reused for simulations at other temperatures. One would expect the best starting point to be the mesh generated during the highest temperature simulation, which, in accordance with the classical Boltzmann distribution, tends to visit a larger region of configuration space during a fixed number of simulation steps. However, in our calculations of the HCN isotope effect we found that using the lowest temperature was preferable. 57 The results are presented in Table IV, which shows clearly that the number of extra mesh points that must be added to the mesh generated during the lowest temperature simulation is relatively small for simulations at all other temperatures.
IV. CONCLUSION
We have proposed an algorithm for interpolating potential energy values from the values of the potential energy and its gradient calculated and stored at the points of a mesh generated during a Monte Carlo simulation. The interpolation procedure is exact in harmonic systems, while in anharmonic systems its accuracy depends on the triangulation procedure chosen for the mesh. For the latter, we considered two choices: the previously used Delaunay triangulation and an anisotropic triangulation designed to decrease the interpolation error. Both triangulations, combined with the subsequent interpolation, resulted in a very large reduction in the number of potential energy evaluations in comparison with a purely on-the-fly approach. Moreover, we found that for nearly harmonic systems the two triangulations give similar results, with Delaunay triangulation demonstrating superior performance in some cases, but for more anharmonic systems the proposed anisotropic triangulation achieves similar interpolation errors with significantly fewer mesh points. The ad hoc procedure used for the construction of such anisotropic triangulations may be used to improve the performance of other interpolants, [31][32][33]35 even though a different definition of δV(r̄) [Eq. (6)] may prove more convenient.
To combine our interpolation algorithm with classical or semiclassical molecular dynamics simulations, one may need to use a different interpolant, as mentioned in Subsec. II A; the "outward push" procedure would also need to be extended to points added inside C_mesh to avoid forming nearly degenerate simplices. By contrast, as mentioned in Subsec. II C, searching for the simplex used in interpolation should become even simpler.
It is important to discuss the scaling of our interpolation procedure with respect to two parameters: the number of points in the mesh and the dimensionality. For the former, adding new points to the mesh is done at a cost that does not depend on the number of points already in the mesh, the cost of finding the simplex used in interpolation scales logarithmically with the number of mesh points, and evaluating the interpolant costs the same regardless of the number of mesh points. This behavior compares favourably to Shepard interpolation and Gaussian process regression, which utilize functions whose evaluation cost is proportional to the number of mesh points; triangulating the mesh is a natural way to avoid the issue.
However, the cost of storing and updating the triangulation increases dramatically with dimensionality 29 even if one does not take into account the increase in the needed number of mesh points (which also grows quickly with dimensionality, at least for lower dimensions).
This problem is likely to be decisive if one wanted to apply our method to systems of dimensionality six and higher (corresponding to molecules with four atoms and more). Yet, in this work we demonstrated a significant potential speedup achieved by such algorithms in simulations of two- and three-dimensional systems.
This counterintuitive observation can be explained as follows: the lower the temperature, the greater the quantum delocalization of the ring polymer and the greater the explored region of configuration space. Although this delocalization diminishes at higher temperatures, it is eventually replaced by the ever increasing motion of the center of the ring polymer associated with the classical Boltzmann distribution. However, we cannot see this transition yet in Table III, most likely because the quantum delocalization at lower temperatures is further increased by the use of mass-scaled direct estimators, which stretch the ring polymer by a mass-dependent factor.

We still use the "expand_convex_hull" subroutine to make the triangulation computationally cheaper, but we will use the notion of virtual simplices a little later.
As mentioned in Sec. II, it is beneficial to place some mesh points on the constraining surfaces (9). In this case each point of the mesh is assigned a logical variable array whose ith element indicates whether the point lies in the constraining plane with index i. To account for constraints, the corresponding modifications should be made to Algorithms 1-2.
ALGORITHM 2. Procedure for expanding C_mesh once a new point r_add is added outside of it.
procedure expand_convex_hull(r_add, S_new)
    find a face F_start such that n_{F_start}(r_add) < 0;
    |* F_conf will consist of all "conflicting" faces F such that n_F(r_add) < 0, e_bound will consist of edges of F_conf not shared by two faces in the array *|
    call expand_conflict_zone(r_add, F_start, F_conf, e_bound)
    for each (F ∈ F_conf) do
        create a simplex S from r_add and F;
        add S to the current triangulation and S_new;
"Physics"
] |
Investigation of the Variation of Near-Circular Polarization in Scarabaeoidea Beetles
Variation in the reflection of circularly polarized (CP) light by a substantial number of beetles, of both the Hybosoridae and Scarabaeidae families, is discussed. Classifications of the spectral shapes were made for Cetonia aurata aurata beetles, which were related to variations within the chiral chitin structure and have been computationally modelled. It was seen that single-peaked spectra were not the predominant spectral shape and that more complex structures are responsible for the spectra observed. Two structural perturbation methods applied to the single-pitched structure are proposed to be responsible for the more complex spectral shapes. Further CP analysis of the genus Rutelinae:Chrysina was undertaken, with variations in broadband reflection observed within the species Chrysina optima.
Introduction
Circularly polarized (CP) light is relatively uncommon in nature. One of the first described examples was a golden beetle (Chrysina resplendens), which Michelson described as having a "screw-like structure" [1]. From the plant world, the fruit of the genus Pollia, which is a metallic blue, reflects both left CP (LCP) and right CP (RCP) light [2]. It has also been observed that LCP light leads to faster growth in pea and lentil plants [3]. From the animal world, the mantis shrimp (Odontodactylus) is able to visually detect and also signal with reflections of CP light [4][5]. Another group which interacts with CP light is the Sapphirinidae copepods, where light passing through their bodies becomes CP [6]. There are several species of firefly (Photuris lucicrescens and Photuris versicolor) whose larvae are CP bioluminescent and emit LCP and RCP light from opposite lanterns [7]. In the non-living world, arrangements such as a water-air interface can produce CP light via total internal reflection [8], and CP light is also present in the light emitted by some stars [9].
Since Michelson's first observation of such a reflection, there has been interest in CP reflections from scarabs across many different disciplines [10]. Preliminary work by Pye showed the CP reflection distribution in nine different subfamilies (Fig. 1) [11]. Fossil records of structural color in beetles stretch back 15-47 million years [12].
The origin of the CP reflection is within the beetle's shell (epi-exocuticle layers). It is formed of layers of chitin, which are thread-like molecules, embedded in a protein matrix, all of which have the same orientation. Between adjacent layers the direction of the molecules gradually alters. As the layers build up, eventually a pitch (a full 360° rotation) in the orientation of the molecules is achieved [13]. The thread-like nature of the molecules means there is a birefringence in the material, which is enhanced by the presence of uric acid [14]. The pitch (p) of the structure is related to the reflection peak observed at wavelength λ by λ = n̄p, where n̄ is the average refractive index of chitin [15].
Most previous studies on CP reflection in beetles have focused on a few specimens, often single examples of a species [15,16]. In this study, the near-normal CP reflection from a large number of beetles, mainly of the Cetoniinae and Rutelinae, but also of the Ceratocanthinae, Scarabaeinae, Trichiinae, Phaenomeridinae, Dynastinae, Melolonthinae and Euchirinae subfamilies, is studied. In this way the variation, or 'fingerprint', of the spectra of several species and genera can be quantified. While a number of beetles in each group have been examined, in this report typical examples of each of the subfamilies are presented, followed by a study of variation amongst 191 Cetonia aurata aurata beetles and a study of color variation amongst Chrysina optima.
Specimens
Specimens from all nine subfamilies seen in Fig. 1 were investigated. The collection location of each specimen can be seen on the world map in Fig. 2. The length of the beetles, from the bottom of their elytra to the top of their pronotum, varied from 2 mm (Ceratocanthinae:Ceratocanthus sp.) to 52 mm (Euchirinae:Cheirotonus battareli). Specimens of the Rutelinae:Chrysina genus, all collected in Costa Rica, were also investigated; these varied from 23-30 mm in length. Cetoniinae:Cetonia aurata aurata specimens, collected from various sites throughout Romania and Poland, were also investigated; these measured between 14 and 21 mm in length.
Reflection Spectroscopy Experimental Setup
The measurements were taken using the system shown schematically in Fig. 3. A halogen light source (DH2000-BAL, 430-1000 nm) [19] was coupled into an optical fiber (QP600-2-SR-BX); the beam was then collimated and polarized to become CP light. The CP light was produced by first passing unpolarized light through a polarizing cube, which was set to give ±45° linearly polarized light with respect to the axis of the Fresnel rhomb. In the Fresnel rhomb two internal reflections occur, which produce a π/2 phase shift, converting the linearly polarized light into CP light [20]. The handedness of the CP light (LCP or RCP) is determined by the angle of polarization of the input linear light. The light spot size is ~1 mm in diameter and was usually focused on the top of the beetle's right elytron. The reflected beam was then collected with a lens, and the CP light was passed through the Fresnel rhomb followed by the polarizing cube in a similar arrangement to the input stage. The input and collecting arms are at 45° (22.5° incident angle) to each other to allow space for the optical components. The orientation of the polarizing cubes could be changed between ±45° to analyze LCP and RCP light. The spectrum was measured using an Ocean Optics HR4000 spectrometer. Further details of the experimental setup can be found in [21]. A reference was obtained using a flat front-silvered mirror, with the reflectance from the beetle calculated as

Reflectance = [Signal(beetle) − Background] / [Signal(mirror) − Background],

where Signal(beetle) is the signal of the light reflected from the beetle and Background is the dark response. In the study four CP combinations were made: LL (LCP light incident, LCP analyzed), LR (LCP light incident, RCP analyzed), RL (RCP light incident, LCP analyzed), and RR (RCP light incident, RCP analyzed).
Modelling
The modelling was used for comparison with the experimental spectral data. The model was implemented as a multilayered transfer matrix method using the Birefringent Film Toolbox described in [22]. The model applied perturbations to the single-pitch structure (which by itself results in a single spectral peak). The perturbations to the spectra were based on two structural changes to the model. The first perturbation was the presence of several different pitch thicknesses, resulting in several peaks in the reflection spectra (Fig. 4a) [16]. The second perturbation is a sudden jump in the orientation angle of the chitin molecules between adjacent layers (Fig. 4b) [23].
The models do not consider any structures within the beetle responsible for the spectra other than a helicoidal structure. The first spectrum (Fig. 5(a)) was created using two different pitch values; the second (Fig. 5(b)) was modelled using the sudden jump in orientation of the molecules. The parameters used in all of the models within this paper are summarised in the Appendix.
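To make the two perturbations concrete, the sketch below (ours, not the Birefringent Film Toolbox code; the layer thickness and pitch values are illustrative) generates the layer-by-layer orientation angle of the chitin director for a stack with several pitch blocks and an optional twist jump; feeding such a profile to a transfer matrix solver yields the multi-peak and split-peak spectra discussed here, with each block's peak near λ = n̄p:

import numpy as np

def helicoid_angles(d_layer, pitch_blocks, jump_after=None, jump=0.0):
    # pitch_blocks: list of (pitch_nm, n_layers); each block twists the
    # chitin direction by 2*pi*d_layer/pitch per layer (a full turn per pitch)
    angles, phi = [], 0.0
    for pitch_nm, n_layers in pitch_blocks:
        for _ in range(n_layers):
            angles.append(phi)
            phi += 2.0 * np.pi * d_layer / pitch_nm
    angles = np.asarray(angles)
    if jump_after is not None:
        angles[jump_after:] += jump  # sudden orientation jump (perturbation 2)
    return angles

# two pitch blocks (perturbation 1) plus a pi/2 twist defect after layer 60
profile = helicoid_angles(10.0, [(350.0, 60), (420.0, 60)],
                          jump_after=60, jump=np.pi / 2)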
Reflectance Spectrometry
The results are presented in the context of three separate but related studies: a broad survey of the families, a statistical study of the variation across 191 Cetonia aurata aurata, and an initial examination of the variation within and between species of Chrysina.
Reflection spectra were obtained with the four CP combinations, and the LCP response was modelled as described above. In the following figures the experimental data are presented with the modelled LL response superimposed. Many different specimens were examined from each of the nine subfamilies shown in Fig. 1. The LL spectral response can be seen to be the dominating feature in all of the spectral examples in Fig. 6. (The spectra of Fig. 6 are from individual beetles chosen to be representative of the spectral shapes observed.) The dominant LL reflectance color was in the green wavelengths (500-560 nm), but other wavelengths relating to the colors red, blue and a broadband gold were observed. The Ceratocanthus sp. (Fig. 6(a)) showed a main LL peak corresponding to a red wavelength, with oscillating peaks trailing into the near infra-red (NIR). The Diabroctis mimas (Fig. 6(b)) species has a broadband reflection (550-700 nm), which produces a gold color on the front of the beetle's pronotum. The Protaetia angustata pyrodera (Fig. 6(c)) beetle's reflection is strong compared to the other specimens and has two peaks, close together, which give the beetle a red/green appearance. The next five beetles (Gnorimus nobilis, Viridimicus aurescens, Phaenomeris besckei, Augoderia nitidula and Melolontha melolontha) (Fig. 6(d)-(h)) showed narrowband LL responses. For the Phaenomeris besckei specimen (Fig. 6(f)) this response was at a lower wavelength (460 nm, blue). The Melolontha melolontha specimen (Fig. 6(h)) showed the weakest of all the CP responses, with the LR and RL responses being stronger than the LL, which was a single peak in the orange/red region. There is some variation in spectral shape, with the Cheirotonus battareli (Fig. 6(i)) specimen having a tail feature at wavelengths higher than the peak. All the reflection spectra were taken from a fixed position at the top of the right elytron of the beetles, except for the beetles Diabroctis mimas, Augoderia nitidula, Melolontha melolontha and Cheirotonus battareli (Fig. 6(b), (g)-(i)), where reflections from the pronotum were taken, as their elytra were not as significantly optically active. Clear variation in the reflection percentage between beetles can be observed; there are several reasons for this, including an increase in the number, or variation in the value, of the pitches within the beetle. Surface structure can inhibit the path of light to and from the helicoidal layer [24]. The larger the curvature of the beetle's shell across the sample area, the more light is scattered away from the detection optics and the lower the measured intensity.
Differences in chemical composition, such as the addition of uric acid, which enhances the birefringence and in turn increases the reflection [14], also contribute. The models in Fig. 6(d)-(h) were based on a single pitch, Fig. 6(a) and (i) were based on multiple pitches, Fig. 6(c) was modelled with pitch defects, and Fig. 6(b) was modelled with a combination of both. More in-depth studies were carried out on the Rutelinae and Cetoniinae subfamilies, which contain many variations in their CP responses. Variations of the single reflection spectral shape can be observed in specimens of Cetoniinae:Cetonia aurata aurata (which were distinguished from Protaetia cuprea by their prosternum [25]) from Poland and Romania (Fig. 7). Previous studies have looked at the CP reflection from C. aurata aurata [26][27] using ellipsometric techniques, from which the LL response can be determined. The current study looked at a much larger sample set, and the experimental arrangement determined the LL response directly. In particular, this study looked at the shapes of the spectra and their relation to the structure. Five spectral classifications were identified by the authors (although other classifications could be justified). Those chosen are single peaked (S), double close (DC), triple peaks (T), oscillations after (OA) and oscillations between peaks (X), whilst the remainder were grouped as unclassified (U). The models included in Fig. 7 are based on the same perturbation approach: Fig. 7(a) was based on a single pitch value, Fig. 7(b)-(d) on pitch defects, and Fig. 7(e) and (f) on multiple pitch values. The distribution of LL spectral shapes of 191 Cetonia aurata aurata specimens (Fig. 8) shows that the single peak structure is not the dominating feature. The most common spectral shape was the OA feature, which occurred in 29.9% of the measured spectra, closely followed by DC (27.2%) and T (19.9%), while single peak spectra occurred in only 13.1%. The CP responses of the genus Rutelinae:Chrysina were very variable both between and within species (Figs. 9-10). The genus has a large geographic range, from the southern United States to Colombia. Fig. 9(a) shows the LL reflection from three distinct color forms of Chrysina optima, where silver is the most common color found (it should be noted that there are more uncommon color forms such as violet [28]). The different color forms relate to the location of the lower-wavelength reflection boundary: the red beetle's lower reflection boundary is 600 nm, the gold boundary is 500 nm, and the silver boundary is lower than 450 nm. Not only did the spectral colors (wavelengths) change between species, the spectral intensity also varied. This was linked to the variation in thickness of the pitches within the structure, as shown in the model (Fig. 9(b)), which was based on the thicknesses and structure of the closely related Chrysina aurigans, modelled to include a chirped structure with slight perturbations to the regular structure [29]. The typical reflection spectrum of a silver Chrysina chrysargyrea (Fig. 10(a)) shows a strong LL response between 450 nm and 900 nm, with weak uniform LR and RL responses and a negligible RR response. This LL spectrum with a long-wavelength cut-off is distinct from the others modelled in this work, and consideration of the structure giving rise to it is the subject of ongoing work. It should be noted that there are several other rare colour forms of C. chrysargyrea, including red and gold [28].
Discussion and Conclusion
The work presented shows variations within the helicoidal structures, responsible for the selective reflection of CP light, between and within some of the nine different subfamilies of Scarabaeoidea. Variations in spectral shape were observed in Cetonia aurata aurata specimens, which showed that the single pitch structure is atypical. The genus Chrysina was investigated, and color differences of the broadband reflector Chrysina optima were also observed. A model was developed to describe two possible perturbations of the helicoidal structure which would give rise to the observed spectra. These were based on a sudden change in the orientation of the chitin molecules and on variation in, and multiplicity of, the pitches. Comparisons of the modelling with the experimental results were made, with good agreement. It should be noted that the reasons for the different reflection spectra within species are not fully understood and could be a combination of genetic [30] and environmental factors [31].
Modelling parameters within this paper are described within this appendix, where n_± are the transverse principal indices, N_i is the number of turns of pitch p_i (nm) for the ith stack, and Δφ_i is the twist discontinuity (in units of π) between stacks i and i+1.
"Physics"
] |
The 3D Genome Browser: a web-based browser for visualizing 3D genome organization and long-range chromatin interactions
Here, we introduce the 3D Genome Browser, http://3dgenome.org, which allows users to conveniently explore both their own data and over 300 publicly available chromatin interaction datasets of different types. We designed a new binary data format for Hi-C data that reduces the file size by at least an order of magnitude and allows users to visualize chromatin interactions over millions of base pairs within seconds. Our browser provides multiple methods for linking distal cis-regulatory elements with their potential target genes. Users can seamlessly integrate thousands of other omics datasets to gain a comprehensive view of both the regulatory landscape and 3D genome structure. Electronic supplementary material The online version of this article (10.1186/s13059-018-1519-9) contains supplementary material, which is available to authorized users.
Background
The three-dimensional (3D) organization of mammalian genomes plays an essential role in gene regulation [1][2][3][4]. At the DNA level, distal regulatory elements such as enhancers have been shown to be in spatial proximity to their target genes. At a larger scale, topologically associating domains (TADs) have been suggested to be the basic unit of mammalian genome organization [5,6]. Several recent high-throughput technologies based on chromatin conformation capture (3C) [7] have emerged (such as Hi-C [8], ChIA-PET [9], Capture-C [10], Capture Hi-C [11], PLAC-Seq [12], and HiChIP [13]) and have provided an unprecedented opportunity to study the genome spatial organization in a genome-wide fashion.
As the volume of chromatin interaction data keeps increasing, efficient visualization and navigation of these data have become a major bottleneck for their biological interpretation. Due to the size and complexity of these interactome data, it is challenging for an individual lab to store and explore them efficiently. To tackle this challenge, several visualization tools have been developed, each with its own unique features and limitations. The Hi-C Data Browser [8] was the first web-based query tool that visualizes Hi-C data as heatmaps; currently, it does not support zoom functionality and hosts only a limited number of datasets. The WashU Epigenome Browser [14,15] can display both Hi-C and ChIA-PET data, and it also provides access to thousands of epigenomic datasets from the ENCODE and Roadmap Epigenome projects. Due to the large file size of Hi-C matrices, which can reach hundreds of gigabytes, its speed for uploading and exploring Hi-C data is still not optimal. Furthermore, it does not offer an option to display inter-chromosomal interaction data as heatmaps. Users can also explore Hi-C data in Juicebox [16] and HiGlass [17] with great speed, but currently neither of them provides other types of chromatin interaction data, such as Capture Hi-C or ChIA-PET. The Delta browser [18] is another visualization tool with many features and can display both a physical view of 3D genome modeling and Hi-C data. However, all the aforementioned tools except for the WashU Epigenome Browser display Hi-C only as a heatmap, which is convenient for visualizing large domain structures such as TADs, but may not be the most informative way to visualize enhancer-promoter interactions.
Here, we present the 3D Genome Browser (www.3dgenome.org), a fast web-based browser that allows users to smoothly explore both published chromatin interaction data and their own. Our 3D Genome Browser features six distinct modes that allow users to explore interactome data tailored to their own needs, from exploring the organization of higher-order chromatin structures at the domain level to investigating high-resolution enhancer-promoter interactions. Our browser provides convenient zoom and traverse functions in real time and supports queries by gene name, genomic locus, or SNP rsid. In addition, users can easily incorporate their UCSC Genome Browser and WashU Epigenome Browser sessions and can therefore simultaneously query and supplement chromatin interaction data with thousands of genetic, epigenetic, and phenotypic datasets, including ChIP-Seq and RNA-Seq data from the ENCODE and Roadmap Epigenomics projects. So far, it has been visited by more than 60,000 unique users from 120 countries, surpassing 600,000 page views. In summary, the 3D Genome Browser represents an invaluable resource and ecosystem for the study of chromosomal organization and gene regulation in mammalian genomes.
Results and Discussion
Overall design and implementation of the system
The overall structure of the 3D Genome Browser is summarized in Fig. 1. Currently, our browser hosts more than 300 chromatin interaction datasets of a variety of different types (Table 1), including Hi-C, ChIA-PET, Capture Hi-C, PLAC-Seq, HiChIP, GAM [19], and SPRITE [20], in both human and mouse across multiple genome assemblies, making it one of the most comprehensive and up-to-date high-quality chromatin interaction data collections (details in Tables S1, S2, S3). To increase their impact and usability, we systematically re-mapped over 100 Hi-C datasets to the most current genome assemblies (GRCh38 and mm10) and generated their interaction matrices using the same in-house data processing pipeline.
One of the important discoveries based on Hi-C data analysis is that mammalian genomes are organized in megabase-scale chromatin domains, termed topologically associating domains (TADs). Therefore, we adopted the pipeline from Dixon et al. [5] and systematically predicted TADs in all cell/tissue types (Fig. 2a, orange/blue bars) in our browser. Hi-C data has been shown to contain systematic noise [21]; therefore, we also performed ICE (iterative correction and eigenvector decomposition) normalization on all the Hi-C datasets in our browser. To further assist users in exploring 3D genome organization and gene regulation events simultaneously, we also collected open chromatin data from the same cell type and display them in the same window (Fig. 2a, red bars). Finally, when users query the chromatin interaction information for a gene, we can also display the expression profiles of this gene across 109 cell/tissue types (Additional file 1: Figure S1), uniformly processed by the ENCODE consortium. In summary, for a given genomic locus, our browser can display TADs, chromatin interactions, RNA-Seq, and open chromatin regions simultaneously, giving users a comprehensive view of the region.
To accommodate users' unique interests, our 3D Genome Browser features six distinct modes for exploring interactome data: (1) intra-chromosomal Hi-C contact matrices as heatmaps, coupled with TADs and available genome annotation in the same cell type; (2) inter-chromosomal Hi-C heatmaps, a mode particularly helpful for visualizing inter-chromosomal interactions and translocations; (3) compare Hi-C matrices: stacked Hi-C heatmaps from different tissues or even different species; (4) virtual 4C, in which Hi-C data is plotted as an arc for a queried gene or locus (bait), with the bait region at the center; this mode is particularly helpful for revealing chromatin interactions between two individual loci; (5) ChIA-PET and other ChIP-based chromatin interaction data such as PLAC-Seq and HiChIP; and (6) Capture Hi-C and other capture-based chromatin interaction data. Below, we use several examples to demonstrate these options and illustrate how the 3D Genome Browser can be used to make novel biological discoveries.
Exploring chromatin interactions using Hi-C data
First, we demonstrate an example of exploring Hi-C data with the 3D Genome Browser for a large genomic region in Fig. 2a. It takes only ~5 s to show a 10-Mb region of the GM12878 Hi-C interaction map on chr12 (~15-25 Mb) at 25-kb resolution. The alternating yellow and blue bars are TADs predicted using the same in-house pipeline as in Dixon et al. [5]. The dark red vertical bars are DNase I hypersensitive sites (DHS) in the same cell type. Users can also adjust the color scale to reduce the background signal and make the TAD structure more visible.
Identifying cell/tissue-specific chromatin interactions is important, as it has been shown that chromatin structure plays an important role in determining cellular identity [22,23]. In Fig. 2b, we notice a chromatin interaction in the 5-kb resolution Hi-C contact map of the K562 cell line [24] (marked by the black arrow). To interpret the biological meaning of this chromatin interaction, we integrated the WashU Epigenome Browser with gene annotation; the histone modifications H3K4me1, H3K4me3, and H3K27ac; and chromHMM [25] in K562 cells. We found that the two interacting loci are the promoter of SLC25A37 and a putative enhancer predicted by histone modification patterns and chromHMM (Fig. 2b, vertical gray bar). This putative enhancer has been confirmed to exhibit enhancer activity regulating SLC25A37 expression during late-phase erythropoiesis [26]. Further, we checked the expression patterns profiled by the ENCODE consortium for SLC25A37 on our browser, and it showed high tissue specificity to K562 cells (Additional file 1: Figure S1).
Discovering high-resolution promoter-enhancer interactions using Capture Hi-C and DHS-linkage
While Hi-C data provides a viable way to suggest promoter-enhancer pairing, most of the currently published Hi-C maps are at 10-40-kb resolution and are therefore not optimal for uncovering enhancer-promoter interactions. Sequence capture- or pull-down-based methods, such as Capture Hi-C or ChIA-PET, generally have higher resolution and are therefore more effective in identifying chromatin interactions between genes and their cis-regulatory elements. In Fig. 3a, we give an example of Capture Hi-C [27], which seeks long-range interactions that involve selected elements of interest captured with pre-determined sequences (in this case, promoters). Chromatin loops identified by Capture Hi-C are presented as green arcs (top track in Fig. 3a). The center of the track is the capture sequence in this region, the PAX5 gene promoter. We observed that the promoter interacts strongly with the nearby regions and that most of the interacting regions are enriched for strong enhancer marks (H3K4me1 and H3K27ac).
To further examine the predicted promoter-enhancer linkages, we also explored the DNase I hypersensitive site (DHS) linkage data in this region (blue curved line, second track in Fig. 3a), which represents another method of linking distal regulatory elements with their target genes. It works by computing Pearson correlation coefficients (PCC) between gene-proximal and distal DHS pairs across more than 100 ENCODE cell types; only the pairs with PCC > 0.7 and within 500 kb are kept as linked pairs [28]. In the example shown in Fig. 3a, we observed several interactions involving the promoter of the PAX5 gene and a potential enhancer (marked by both H3K4me1 and H3K27ac signals) downstream of the ZCCHC7 gene in the naïve B cell Capture Hi-C dataset [27]. One region marked by enhancer-associated histone modifications has indeed been previously determined to be an enhancer for PAX5, and its disruption leads to leukemogenesis [29]. By integrating multiple lines of evidence, our browser provides a valuable resource for investigators to generate hypotheses connecting distal non-coding regulatory elements and their target genes.
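A sketch of that DHS-linkage computation (ours, with hypothetical input arrays; the published method [28] involves additional processing):

import numpy as np

def dhs_links(proximal_signal, distal_signals, distal_pos, gene_pos,
              pcc_min=0.7, max_dist=500_000):
    # proximal_signal: DHS signal of the gene-proximal site across cell types
    # distal_signals: one row per distal DHS site, columns = cell types
    links = []
    for signal, pos in zip(distal_signals, distal_pos):
        if abs(pos - gene_pos) > max_dist:
            continue  # only pairs within 500 kb are considered
        pcc = np.corrcoef(proximal_signal, signal)[0, 1]
        if pcc > pcc_min:
            links.append((pos, pcc))  # kept as a linked pair
    return links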
Investigating potential target genes for non-coding genetic variants
Locus-level resolution is also significant for discovering the functions of non-coding genetic variants, such as single nucleotide polymorphisms (SNPs), which may disrupt transcription factor (TF) binding sites of cis-regulatory elements. In this section, we first demonstrate how to use the virtual 4C mode for such analyses. The 4C (circular chromosome conformation capture [30,31]) experiment is a chromatin ligation-based method that measures one-versus-many interactions in the genome, that is, the interaction frequencies between a "bait" locus and any other loci. Its data is plotted as a line histogram, where the center is the "bait" region and peak signals in distal regions indicate the frequency of chromatin interaction events. In our browser, we use the queried region (gene name or SNP) as the bait and extract Hi-C data centered on the bait region, hence "virtual 4C". To bolster the power of the virtual 4C plot, our browser also supplements it with ChIA-PET and DHS-linkage data. In Fig. 3b, we queried the SNP rs12740374 in the virtual 4C mode. This SNP has been associated with high plasma low-density lipoprotein cholesterol (LDL-C) [32], which can lead to coronary artery disease and myocardial infarction. We plotted virtual 4C and ChIA-PET data from K562 in this region, as high-resolution Hi-C and ChIA-PET data are only available for K562, not for hepatic cell lines. Since LDLs are processed by the liver, we examined the histone modifications in the HepG2 cell line and found that rs12740374 is located within a candidate enhancer region as marked by H3K27ac signals. Hence, virtual 4C, ChIA-PET, and DHS-linkage all support a putative interaction between the enhancer harboring this SNP and the promoter region of SORT1. Further, it has been shown that the rs12740374 minor allele creates a C/EBPα-binding site which enhances SORT1 expression, leading to decreased LDL-C levels, thus suggesting that the minor allele confers a gain-of-function effect [33]. Despite the unusual conclusion reached by that study (most minor alleles are loss-of-function), the virtual 4C mode of our 3D Genome Browser can aid in generating hypotheses not only about cis-regulatory elements and their putative target genes but also about the effects of non-coding variants.
Exploring conservation of chromatin structure across species
Studying the evolutionary conservation of TADs could lead to a deeper understanding of their functional significance. The compare Hi-C mode of the 3D Genome Browser facilitates this endeavor by stacking Hi-C heatmaps from homologous regions of different species for visual contrast. In this mode, we observed the conservation of TADs, and of the genes near or at the TAD boundaries, between human and mouse in the homologous region surrounding the BCL6/Bcl6 genes (Fig. 4), suggesting that chromatin structure may play a conserved role in the regulation of this proto-oncogene. This mode can also help users observe conserved or dynamic Hi-C interactions across different tissue/cell types.
Fig. 2 Examples of using the 3D Genome Browser to explore Hi-C data. a A 10-Mb region of the GM12878 Hi-C interaction map on chr12 (~15-25 Mb) at 25-kb resolution. The alternating yellow and blue bars are predicted TADs. The dark red bars are DHS in the same cell type. b Hi-C interaction map in K562 cells at 5-kb resolution. The black arrow points to a potential tissue-specific interaction between the SLC25A37 promoter and a candidate enhancer region (marked by H3K4me1). The ChIP-Seq tracks for histone modifications and chromHMM are visualized using the WashU Epigenome Browser.
Uncovering structural variations in cancer genomes
It has been shown recently that Hi-C data can be used not only to detect chromatin interactions but also to reveal structural variations [34][35][36][37][38][39]. Certain structural variations, such as deletions, insertions, inversions and translocations, produce signature patterns in Hi-C heatmaps. A striking structural variation is shown in Fig. 5 through the inter-chromosomal heatmap mode: we confirmed the oncogenic BCR-ABL inter-chromosomal translocations in two chronic myelogenous leukemia (CML) cell lines, K562 and KBM7. Such inter-chromosomal interactions are not observed in the karyotypically normal GM12878 cell line. We also noted that this translocation is reciprocal in KBM7 but not in K562 cells and that the breakpoint in ABL differs between the two cell lines. In addition, with the browser's compare Hi-C mode, users can contrast the similarities and differences of chromosomal structure between distinct cells/tissues or even different species. Comparing the cancer cell line K562 to the normal cell line KBM7, we noted deletions specific to K562, one of which encompasses the tumor suppressor genes CDKN2A and CDKN2B (Additional file 1: Figure S2), as previously confirmed [40].

Fig. 3 Linking distal regulatory elements and SNPs with their target genes with the 3D Genome Browser. a Capture Hi-C data in naïve B cells showing potential interactions (green curved lines) with the PAX5 promoter region. The Capture Hi-C interactions are consistent with patterns from the 5-kb resolution Hi-C data in GM12878 cells. b Using virtual 4C, DHS-linkage, and ChIA-PET data to hypothesize the target gene for the non-coding variant rs12740374. Based on the annotation by chromHMM in HepG2, this SNP is located at a putative enhancer region (orange). According to the virtual 4C data, there is a potential interaction between this enhancer and the SORT1 promoter. This linkage is also supported by DHS-linkage, as well as by the H3K4me3 and POL2A ChIA-PET data in the K562 cell line.

Fig. 4 Using the 3D Genome Browser to explore conserved chromatin structure across human and mouse. The similarity between human GM12878 Hi-C data and mouse CH12 Hi-C data at the region surrounding the BCL6/Bcl6 gene indicates evolutionary conservation of the chromatin structure between the two species.
New binary Hi-C data format allows faster data retrieval and visualization of users' own Hi-C datasets
The 3D Genome Browser supports a variety of features that allow users to browse unpublished data. First, our browser encourages integration with customized UCSC or WashU Epigenome Browser sessions, wherein users can add or modify existing tracks or upload their own genomic/epigenomic data. For example, to view a customized UCSC session, a user only needs to enter the UCSC session URL. More importantly, users can view their own Hi-C data by converting the contact matrices into a novel, indexed binary file format that we developed, called Binary Upper TriangULar MatRix (BUTLR). By hosting the BUTLR file on any HTTP-supported server and providing the URL to the 3D Genome Browser, a user can take full advantage of the features of our browser without having to upload the Hi-C data, since the browser queries only the selected region through binary indexing rather than searching through the entire matrix. This capability is similar to the bigWig/bigBed mechanism invented by us and UCSC [41].
Additionally, the BUTLR format dramatically reduces the file size of high-resolution Hi-C data, not only through binarization but also through the omission of redundant values (Additional file 1: Figure S3a; Additional file 2). The BUTLR file encodes genome-wide chromatin interaction data in a binary, indexed format. While 1-kb resolution hg19 intra-chromosomal Hi-C contact matrices in the tab-delimited format require almost 1 TB, the BUTLR format of those same matrices takes only 11 GB (Additional file 1: Figure S3b). More importantly, the binary file format also greatly improves the query speed: using pre-loaded Hi-C datasets, the 3D Genome Browser generally returns the query results as a heatmap in a matter of seconds. We also note that our browser is designed to be query-based to maximize its usability; as a result, it excels at exploring loci of interest and gene-element relationships, but can be a little less dynamic than other tools when navigating Hi-C matrices over larger genomic regions.
Conclusion
In summary, we have developed an interactive 3D Genome Browser that is defined by a simple and easy-to-navigate graphical user interface, fast query-response times, and a comprehensive collection of publicly available chromatin interaction datasets. As our browser simultaneously displays 3D chromatin interactions, functional (epi)genomic annotations, and disease/trait-associated SNPs, it provides an invaluable online tool for investigators worldwide for the study of 3D genome organization and its functional implications in mammalian gene regulation.
Backend and user interface
The 3D Genome Browser is supported by the LAMP (Linux, Apache, MySQL, PHP) stack on the backend. At the user-interface level, the browser depends on HTML5 and JavaScript with the jQuery and D3.js libraries. All displays are rendered on HTML5 Canvas or inline SVG.
In-house Hi-C data processing pipeline
We followed the pipeline in Dixon et al. [22] for Hi-C data processing. Briefly, raw fastq files were aligned to the human reference genome GRCh38 with the BWA aligner (0.7.15-r1140). Only uniquely mapped and properly paired reads on the same chromosome were retained. The genome was binned at different resolutions (e.g., 40 kb and 10 kb) to generate Hi-C matrices, with paired reads counted as chromatin interactions connecting two bins. ICE (iterative correction and eigenvector decomposition) normalization was done using the "iced" Python package.
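As an illustration of what ICE-style balancing does (a bare-bones sketch of the idea, not the "iced" package itself, which adds filtering and convergence checks):

import numpy as np

def ice_balance(matrix, n_iter=50):
    # Iteratively divide a symmetric contact matrix by its row/column
    # coverage so that all rows end up with approximately equal sums.
    m = np.asarray(matrix, dtype=float).copy()
    for _ in range(n_iter):
        coverage = m.sum(axis=1)
        nz = coverage > 0
        coverage[nz] /= coverage[nz].mean()  # normalize the bias vector
        coverage[~nz] = 1.0                  # leave empty bins untouched
        m /= np.outer(coverage, coverage)
    return m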
User query submission
The user may provide genomic coordinates or genome features such as gene symbols, RefSeq IDs, Ensembl IDs, or SNP rsids as queries in all modes of the 3D Genome Browser.
Fig. 5 Using the inter-chromosomal interaction mode of the 3D Genome Browser to discover structural variations in cancer cells. An inter-chromosomal translocation event (BCR-ABL fusion) in the K562 and KBM7 CML cell lines appears as "inter-chromosomal interactions" on Hi-C maps. Such aberrant patterns are frequently observed in Hi-C maps of cancer cells, because the cancer genome itself is not available and Hi-C reads are mapped to the reference genome. We also noted that this translocation is reciprocal in KBM7 but not in K562 cells and that the breakpoint in ABL differs between the two cell lines. Such inter-chromosomal interactions are not observed in the karyotypically normal GM12878 cells.
External genome browser integration and alignment
For the UCSC Genome Browser, we embed its sessions with an iframe and align its content with our tracks by manipulating the scroll bars of the div HTML element containing the iframe. The WashU Epigenome Browser provides a JavaScript function for seamless integration into our browser. For both external browsers, the user can embed a user-defined session consisting of user-selected tracks and options by providing the session URL to the 3D Genome Browser.
Determining homologous regions
For the compare Hi-C mode, we determine the homologous regions between two species by querying for homologous genes from the NCBI HomoloGene database [42] as well as by utilizing known inter-species chains [43].
BUTLR format
The BUTLR file encodes genome-wide chromatin interaction data in a binary, indexed format. To compress the original contact matrices, BUTLR stores only the nonzero values of the upper triangular matrices for the intra-chromosomal data and of the n × m matrices (where n and m are the numbers of interrogated loci and n < m) for the inter-chromosomal data. The locations of each chromosome or chromosome-pair matrix, the row indices of each matrix, and the column indices of the nonzero values, along with the nonzero values themselves, are binarized and indexed within the BUTLR file structure. Perl scripts that encode and decode BUTLR files are available at http://github.com/yuelab/BUTLRTools. All the Hi-C matrices in this manuscript are converted to the BUTLR file format for visualization [5,8,19,20,[22][23][24][44][45][46][47][48][49][50][51][52][53][54].
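The format specification itself lives in the BUTLRTools repository; the toy Python sketch below (ours) is only meant to convey the underlying idea of sparse upper-triangular storage behind a small header, not the actual BUTLR byte layout:

import struct
import numpy as np

def write_sparse_upper(path, matrix):
    # store only nonzero upper-triangular entries of a symmetric contact
    # matrix as (row, col, value) triples behind a minimal header
    m = np.asarray(matrix, dtype=np.float32)
    rows, cols = np.nonzero(np.triu(m))
    with open(path, "wb") as f:
        f.write(struct.pack("<II", m.shape[0], rows.size))  # size, nnz
        for i, j in zip(rows, cols):
            f.write(struct.pack("<IIf", i, j, float(m[i, j])))

def read_sparse_upper(path):
    with open(path, "rb") as f:
        n, nnz = struct.unpack("<II", f.read(8))
        m = np.zeros((n, n), dtype=np.float32)
        for _ in range(nnz):
            i, j, v = struct.unpack("<IIf", f.read(12))
            m[i, j] = m[j, i] = v  # restore symmetry on read
    return m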
Additional files
Additional file 1: Figure S1. Gene expression of SLC25A37 across 109 tissues. Figure S2. Using the 3D Genome Browser to determine intra-chromosomal structural variations. Figure S3. Design and performance of the BUTLR file format. Table S1. List of Hi-C datasets hosted by the 3D Genome Browser. Table S2. List of ChIA-PET, Capture Hi-C, PLAC-Seq and HiChIP datasets. Table S3. List of GAM, DNase Hi-C, and SPRITE datasets. (PDF 1295 kb) Additional file 2: Review history. (DOCX 1354 kb)
Acknowledgements
We thank Dr. Jesse R. Dixon for the help with TADs and compartment calling. We are grateful to the members of Wang lab and Yue lab for useful discussions.
Funding
This work was supported by NIH grants R35GM124820, R01HG009906, and U01CA200060 (F.Y.). F.Y. is also supported by Leukemia Research Foundation, PhRMA Foundation, and Penn State CTSI. T.W. is also supported by NIH grants R01HG007175, R01HG007354, R01ES024992, U24ES026699, and U01HG009391. M.H. is partially supported by NIH U54DK107977. Y.L. is partially supported by NIH R01HG006292 and R01HL129132.
Availability of data and materials
All data are available at http://3dgenome.org. The source code of the website is deposited at https://github.com/yuelab/3dgenome [55] and Zenodo (DOI: 10.5281/zenodo.1402785) [56]. The code for the 3D Genome Browser is freely available under an MIT license. No new experimental datasets were generated within this study. Publicly available datasets included in the browser are listed in the supplementary tables.
Review history
The review history for this manuscript is available as Additional file 2.
Authors' contributions YW, TW, and FY conceived the project. YW, FS, and FY designed and implemented the project. YW, TW, and FY wrote the manuscript with input from MC, YL, MH, and RCH. BZ, LZ, JX, DK, DL, and MC helped with the data processing and integration. All authors read and approved the final manuscript.
Ethics approval Not applicable. | 5,511.8 | 2017-02-27T00:00:00.000 | ["Biology", "Computer Science"] |
On a Reaction-Diffusion Model of COVID-19
Nowadays mathematical models play a major role in epidemiology, since they can help predict the spread and evolution of diseases. Many of them are based on ODEs, under the assumption that the populations being studied are homogeneous sets of fixed individuals; in reality, populations are far from homogeneous and people are constantly moving. Thanks to technological progress, distances are no longer what they used to be, and a disease can travel and reach even the most remote places on the globe in a matter of hours; the HIV and COVID-19 outbreaks are perfect illustrations of how far and how fast a disease can now spread. When it comes to studying the spatio-temporal spread of a disease, reaction-diffusion (R-D) models, which are inspired by Fick's second law in physics and are increasingly used, are better suited than ODE models. In this article we study the spatio-temporal spread of COVID-19. We first present our SEIR dynamic model, find the two equilibrium points and an expression for the basic reproduction number (R0), use additive compound matrices to show that only one condition, instead of the traditional two, is needed to establish the local stability of the two equilibrium points, and study the conditions for the DFE (disease-free equilibrium) and the EE (endemic equilibrium) to be globally asymptotically stable. We then construct a diffusive model from our SEIR model, investigate the existence of a traveling wave connecting the two equilibria using the monotone iterative method, and give an expression for the minimal wave speed. In the last section we use additive compound matrices to show that the DFE remains stable when diffusion is added, whereas Turing instability appears at the EE once diffusion is added. Our conclusion emphasizes the importance of barrier gestures and the fact that the more people get tested, the better governments will be able to handle and tackle the spread of the disease.
Introduction
COVID-19 was declared a Public Health Emergency of International Concern by the WHO on 30 January 2020 and a pandemic on 11 March 2020. The responsible agent is a coronavirus (SARS-CoV-2) that spreads between people through close contact, usually via droplets produced by coughing, sneezing or talking. The droplets usually fall onto surfaces or to the ground rather than remaining in the air, making it also possible for people to be infected by touching a contaminated surface or object. According to the updated information available [27], the incubation period ranges from 2 to 14 days and the main symptoms are fever, loss of appetite, shortness of breath, cough, fatigue, and muscle aches and pain.
The majority of infected individuals are asymptomatic and tend not to be tested, though they do play a role in the spread of the disease. The recovery time, which usually ranges from 2 to 6 weeks, differs from person to person, and even after that period some people still report not being fully recovered.
Depending on their main purpose, dynamic models usually try to encapsulate the most important features of the disease in the simplest way [3][4][5][6][15][16][17][18][19] so as to provide a comprehensive view of the disease dynamics. That is why the majority of current models of COVID-19 are very detailed in classes: for instance, a quarantined class and/or a hospitalization class are often taken into account, leading to models with 5 to 8 classes [20][21][22][23][24][25][26]. Knowing how challenging it is to find front traveling waves for R-D models of three or more equations, we have chosen to build a simpler model with only four classes: the susceptibles, the asymptomatic infected individuals, the symptomatic infected individuals, and the removed. Other approaches are regularly used to investigate the existence of a traveling wave [5,7], but here we use the monotone iterative method by setting up a pair of ordered super- and sub-solutions. We consider that no major action is taken to stop the spreading; therefore there is no quarantine and people are still free to move. The interactions between the four classes are given in Figure 1, and the assumptions we make are the following:
1. Every new-born is susceptible, i.e., there is only horizontal transmission;
2. An asymptomatic infected individual is an infectious person presenting no or very few symptoms;
3. A symptomatic infected individual is an infectious person presenting symptoms of COVID-19;
4. Not every contact with an infectious person leads to a transmission of SARS-CoV-2;
5. After an infectious contact there is always an incubation period, but we do not take it into account here;
6. After a susceptible has been infected by either an asymptomatic or a symptomatic infected individual, he/she goes through an asymptomatic state, which he/she either remains in until totally healed or leaves as soon as sufficient symptoms begin to appear;
7. A symptomatic infected individual can either die of COVID-19 or be healed;
8. We do not take reinfection by COVID-19 into account;
9. The entire population has a per-capita death rate independent of COVID-19.
A Reaction Model of COVID-19
Consider a population of size N. We can divide it into sub-populations and denote their fractions by S, E, I, and R, which respectively represent the fraction of susceptibles, the fraction of asymptomatic infected individuals, the fraction of symptomatic infected individuals, and the fraction of removed. Thus the sub-populations satisfy the identity S + E + I + R = 1. Based on the assumptions made previously, we can set up a reaction model of COVID-19 as system (1), whose coefficients are explained in the parameter table. Since the last equation in (1) does not affect the transmission of the disease, we can simplify the system by reducing it to three equations, system (2). To ensure the well-posedness of the system we consider the proportions of the population in the feasible region G.
2.1. Equilibrium Points and R0
The equilibrium points are ū for the disease-free equilibrium (DFE) and u* for the endemic equilibrium (EE).
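The display of system (1) itself is not reproduced here. As a loudly flagged assumption, a standard SEIR-type system consistent with the classes above and with the parameters Λ, d, β and η used later would read as follows, where the symbols ε, γ₁, γ₂ and δ for the symptom-onset, recovery and disease-induced death rates are hypothetical placeholders:

$$
\begin{aligned}
S' &= \Lambda - \beta S I - \eta S E - d S,\\
E' &= \beta S I + \eta S E - (\varepsilon + \gamma_1 + d) E,\\
I' &= \varepsilon E - (\gamma_2 + \delta + d) I,\\
R' &= \gamma_1 E + \gamma_2 I - d R.
\end{aligned}
$$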
To find R0 we use the next generation operator [18,21]. Our DFE is given by ū = (Λ/d, 0, 0) = (1, 0, 0), due to the fact that at this equilibrium point the entire population is susceptible, i.e., the fraction of healthy people is 1. From this we obtain the sub-matrices F and V, and hence the basic reproduction number R0. Proof. Let us suppose that R0 ≤ 1 and that an endemic equilibrium exists; its first component would be S* = MN/(β + ηM). With the proportion of the population lying entirely in the first component, we get S* = Λ/d, and the unique equilibrium point in this situation is the disease-free one.
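For reference, the next generation method defines R0 as the spectral radius of the next generation matrix built from the new-infection matrix F and the transition matrix V, both obtained by linearizing the infected compartments at the DFE:

$$ R_0 = \rho\!\left(F V^{-1}\right), $$

where ρ denotes the spectral radius; the specific entries of F and V for this model follow from the linearization at ū.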
Both components of the candidate endemic equilibrium must be positive for it to exist; since one of them is negative when R0 ≤ 1, there is none.
Let us now suppose that R0 > 1. Then there exists an endemic equilibrium u* as defined in (5).
Stability of the Equilibria
If all eigenvalues of the Jacobian at the DFE have negative real parts, then the DFE given in (4) is locally asymptotically stable in G; similarly, if all eigenvalues of the Jacobian at the EE have negative real parts, then the EE given in (5) is locally asymptotically stable in G.
Proof: It suffices to show that the eigenvalues of the two Jacobian matrices at the two equilibria have negative real parts. Next we use a property of additive compound matrices to state: Theorem 2.3. Let J_ū and J_u* be the Jacobian matrices at the DFE and the EE. If −|J_ū| > 0 and μ(J_ū^[2]) < 0, then the DFE given in (4) is locally asymptotically stable in G; if −|J_u*| > 0 and μ(J_u*^[2]) < 0, then the EE given in (5) is locally asymptotically stable in G.
Proof: To show that ū is stable we must establish under which conditions −|J_ū| > 0 and μ(J_ū^[2]) < 0. The second additive compound matrix J_ū^[2] is computed, and we use μ1 as our Lozinskiĭ measure. From the first column we obtain one bound, and we proceed the same way for the second and the third columns. Two conditions are therefore necessary for the stability of the equilibrium ū; the first one is the sign of the Jacobian determinant |J_ū|.
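For completeness, the Lozinskiĭ measure associated with the ℓ₁ vector norm, on which the column-by-column estimates above rely, is

$$ \mu_1(A) = \max_{k}\Big( \operatorname{Re} a_{kk} + \sum_{i \neq k} |a_{ik}| \Big), $$

so each column of the second compound matrix contributes one term to the maximum.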
If −|J_ū| > 0 then R0 < 1, and this condition is necessary for the local stability of ū. The condition μ(J_ū^[2]) < 0 merely yields a constraint on the parameters, with little additional meaning or impact for our model. Theorem 2.4. When η ≤ β and R0 ≤ 1, the disease-free equilibrium ū of (2) is globally asymptotically stable. Proof. We use an approach given by Zhisheng Shuai and P. van den Driessche to construct our Lyapunov function [22,29]. Let F, V and V^{-1} be defined as in (7) and (8).
If w^T = (x, y) denotes the left eigenvector of V^{-1}F, then w^T = (1, 1).
Algorithms for the calculation of μ(J^[2]) are given in [8][9][10], and the Lyapunov function is constructed accordingly. From the hypothesis R0 ≤ 1 and from (13), the claim follows. The endemic equilibrium u* of (2) is globally asymptotically stable when R0 > 1. Proof. Our Lyapunov function is built from the associated weighted digraph given in Figure 2, which has three vertices and two cycles. Along each cycle, G21 + G32 + G13 = 0 and G21 + G12 = 0; then the required constants exist.
A Reaction-Diffusion Model of COVID-19
Assume now that the individuals in the population can move (diffuse) with the same diffusion coefficient. While the susceptibles and the asymptomatic infectious are free to move in the same way, we suppose that the symptomatic infectious still have contacts with people able to diffuse. We can then formulate our R-D model as system (18). Using wave coordinates ξ = x + ct in (18) yields system (19), which asymptotically satisfies boundary conditions (20). Linearizing (19) about (Λ/d, 0, 0) = (1, 0, 0) we obtain system (22). The second equation in (22) provides the speed of the wave through its characteristic equation: to ensure the existence of real solutions its discriminant must be non-negative, and hence the minimal speed is given by (23). The roots of the characteristic equations of the first and third equations then yield the remaining components, and together these give the profile of the traveling wave solution to (22).
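A plausible reading of the wave-speed argument, assuming unit diffusion and that the linearized E-equation decouples in the standard form below (both assumptions, since the displays are missing), runs as follows:

$$ E'' - cE' + (\eta - N)E = 0 \;\Longrightarrow\; \lambda^2 - c\lambda + (\eta - N) = 0, $$

whose discriminant c² − 4(η − N) must be non-negative for real roots, giving precisely the minimal speed c* = 2√(η − N) quoted in Theorem 4.1 below.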
Existence of a Traveling Wave
To prove the existence of a front traveling wave solution to (22) we shall use the monotone iterative method which relies on the following principle: Principle 4.1.
[Monotone Iterative Method] Consider the general second-order ODE with Dirichlet boundary values given by (25), with f : I × R² → R a continuous function and A, B ∈ R. If there exist in C²(I) a lower solution u(t) of (25) and an upper solution U(t) of (25) such that u(t) ≤ U(t) on I, then there exists a solution of problem (25) lying between u(t) and U(t). Lemma 4.1. Let X(ξ) = (S(ξ), E(ξ), I(ξ)) = (0, 0, 0); then X is a lower solution to (19).
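Problem (25) is presumably the standard Dirichlet boundary value problem on the interval I = [a, b] (a reconstruction consistent with the stated hypotheses on f, A and B):

$$ u''(t) = f\big(t, u(t), u'(t)\big), \qquad u(a) = A, \quad u(b) = B. $$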
Proof. It is obvious that the last two equations of (19) vanish, and for the first one we have Λ ≥ 0.
For the first, second and third equations of (19) the required inequalities hold when ξ ≤ 0, by the definitions of the parameters; they likewise hold when ξ > 0, where (4.8) and (4.9) follow from the fact that (S*, E*, I*) is an equilibrium of (2). The boundary conditions are also satisfied. Hence, for both ξ > 0 and ξ ≤ 0, X̄(ξ) is an upper solution to (19).
Theorem 4.1. If R0 > 1 then there exists a traveling wave solution to (19) with minimal speed c* = 2√(η − N). If R0 < 1 then no traveling wave solution to (19) exists.
Turing Instability
When diffusion is added to a dynamic model it can radically change the nature of the equilibrium points and generate diffusion-driven (Turing) instabilities [12][13][14]. In this section we again use additive compound matrices to investigate whether Turing instability appears.
Theorem 5.1. Suppose 0 < βN − η < M; then the DFE is locally asymptotically stable for every diffusion matrix D > 0.
Proof
The principal minor matrices of J_ū are computed; from (a) and (b) we know that J_ū satisfies the minor conditions if 0 < βN − η < M, i.e., under that condition the DFE remains locally asymptotically stable even if diffusion is introduced into the reaction model (2).
Theorem 5.2. There will always be a Turing instability at the EE defined in (5), for every diffusion matrix D > 0.
Proof
The principal minor matrices of J_u* are computed, and |K3| = 0 implies that the minor conditions can never be satisfied for J_u*. Hence, if diffusion is introduced into the reaction model (2), we obtain a Turing instability at u*.
Conclusion
The model we built and studied provides crucial information on the dynamics of the pandemic: the expression given in (23) enables us to calculate the minimal speed for the appearance of a wave spreading the disease, and the stability analysis of the equilibrium points has shown the different situations that can occur.
If the DFE is stable, then the dynamic model remains stable under the condition R0 < 1 even if a diffusion term is introduced, and there will be no traveling wave solution, provided some conditions are fulfilled: • The contact rate β must be reduced; quarantine and barrier gestures remain the best means so far; • In the absence of an effective vaccine, it is important to reinforce immunity within populations by keeping symptomatic individuals alive long enough for them to acquire immunity; • Both the transfer rate and the contact rate η indicate that more people should be tested, especially the asymptomatic infectious, keeping in mind that they seem to be the most dangerous in the spreading of the pandemic since they show no symptoms of the disease.
When R0 > 1 we have the existence of an endemic equilibrium point, the EE, and when diffusion is introduced a Turing instability appears.
We then have fluctuations in the number of infectious individuals even when R0 > 1, i.e., the EE somehow loses its asymptotic stability.
Simulations
We suppose an asymptomatic infected individual is likely to be in contact with more people than a symptomatic one, because he shows no signs that would make people suspicious. The two infection rates are obtained as β = s1·p and η = s2·p, where p is the percentage of contamination by an infected individual in one day, s1 is the number of contacts that a symptomatic infected person can have in one day, and s2 is the number of contacts that an asymptomatic infected person can have in one day. The initial condition is X0 = ((N − 100)/N, 50/N, 30/N, 20/N) and from the literature (see [20]-[26]) we take the estimated values of the parameters. We can see that even if the number of contacts of symptomatic infected individuals is strongly reduced, we still have many infections due to the asymptomatic infected contact number, which remains high; we can also see that reducing it slows down the infection waves.
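As a minimal numerical sketch of such a simulation: the right-hand side below is the hypothetical SEIR-type system written out earlier, and every parameter value is an illustrative stand-in, since the paper's table of literature estimates is not reproduced in this copy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only. beta = s1*p and eta = s2*p as described above.
p, s1, s2 = 0.05, 4, 12                      # hypothetical contamination rate and contacts/day
beta, eta = s1 * p, s2 * p
Lam, d = 0.01, 0.01                          # birth/death rates (assumed; gives S_DFE = Lam/d = 1)
eps, g1, g2, delta = 0.2, 0.1, 0.07, 0.01    # symptom-onset/recovery/disease-death rates (assumed)

def rhs(t, u):
    S, E, I, R = u
    return [Lam - beta*S*I - eta*S*E - d*S,
            beta*S*I + eta*S*E - (eps + g1 + d)*E,
            eps*E - (g2 + delta + d)*I,
            g1*E + g2*I - d*R]

N = 10_000
u0 = [(N - 100)/N, 50/N, 30/N, 20/N]         # initial condition X0 from the text
sol = solve_ivp(rhs, (0, 365), u0, dense_output=True)
```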
"Mathematics"
] |
Acute Metabolic Changes with Thigh-Positioned Wearable Resistances during Submaximal Running in Endurance-Trained Runners
The aim of this study was to determine the acute metabolic effects of different magnitudes of wearable resistance (WR) attached to the thigh during submaximal running. Twenty endurance-trained runners (40.8 ± 8.2 years, 1.77 ± 0.07 m, 75.4 ± 9.2 kg) completed six submaximal eight-minute running trials, unloaded and with WRs of 1%, 2%, 3%, 4% and 5% body mass (BM), in random order. The use of a WR resulted in a 1.6 ± 0.6% increase in oxygen consumption (VO2) for every 1% BM of additional load. Inferential based analysis found that loading of ≥3% BM was needed to elicit any substantial responses in VO2, with an increase that was likely to be moderate in scale (effect size (ES) ± 90% confidence interval (CI): 0.24 ± 0.07). Using heart rate data, a training load score was extrapolated to quantify the amount of internal stress. For every 1% BM of WR, there is an extra 0.17 ± 0.06 estimated increase in training load. A WR ≥3% of BM was needed to elicit substantial responses in lactate production, with an increase which was very likely to be large in scale (ES ± 90% CI: 0.41 ± 0.18). A thigh-positioned WR provides a running-specific overload with loads ≥3% BM, resulting in substantial changes in metabolic responses.
Introduction
Endurance running attracts millions of participants, both recreational and competitive, across the globe. Modifiable factors, such as training, play a vital role in enhancing the qualities that determine running performance. The physiological mechanisms that determine endurance running performance include the maximum volume of oxygen that can be ventilated, delivered and used by the body's cells at sea level (VO2max) [1], the percentage of VO2max that a runner can sustain before blood lactate accumulation exceeds clearance (%VO2max at the second ventilatory threshold, VT2) [2,3], and the metabolic cost of running at a given velocity (RE) [1]. Many trained runners also use resistance training to improve their running performance, be it by indirectly improving RE, muscular power capabilities or running performance itself [4]. Moreover, in recreational distance runners, heavy resistance, explosive resistance and muscle endurance resistance training have been found to significantly improve running performance [5]. It is therefore the objective of the practitioner to fully understand the appropriate training determinants and to be able to program training effectively and efficiently, ensuring optimal transfer to performance and minimization of injuries. At elite levels, the concepts of progressive overload and specificity become even more important.
One means of progressive overload is through wearable resistance (WR), which involves an external load being attached to areas of the body enabling a sport-specific form of loading [6,7].
Previous studies have used a wide range of loads (<1% to 40% body mass (BM)) attached to the trunk or limbs during different sporting actions [6,8]. Though heavier loads (>10% BM) have been used with trunk WR, lighter WR loads (<8.5% BM) have been sufficient to significantly increase metabolic demands compared to unloaded walking and running, indicated by increases in oxygen consumption (VO2), heart rate (HR), energy workload and energy cost [6]. Metabolic demands are greater with lower-body limb loading, and they increase as a comparable load is moved to a more distal position [9][10][11]. Martin [11] found that adding 0.25 kg (0.69% BM) and 0.50 kg (1.39% BM) to each thigh at a running velocity of 12 km·h−1 increased VO2 by 1.7% and 3.5%, respectively. The researchers noted that HR increases were consistent with increases in VO2, but also that HR was less sensitive to lower extremity loading [11]. The WR loads used by Martin [11] were added via packets of lead pellets sewn into pockets within the shorts. Similarly, the majority of previous WR running research that has investigated metabolic cost involved cumbersome methods for attaching the external load, which may have unfavourably affected the running mechanics [6]. Moreover, limited research has been completed in endurance runners, and even more so in trained endurance runners. It is also unknown how a spectrum of incremental WR loads impacts metabolic cost and what this relationship looks like. Understanding the metabolic effects between loads may enable practitioners to prescribe a more appropriate loading %BM to target different aspects of training. Given the improved technology in recent WR loading, and the fact that an incremental spectrum of loading has yet to be investigated, the purpose of this study was to investigate how WR of between 1% and 5% BM attached to the thigh affects the acute metabolic responses to submaximal running in endurance-trained runners. It was hypothesized that the additional loading from the WR would overload the leg musculature, resulting in greater metabolic responses during submaximal running.
Subjects
Twenty endurance-trained runners (two female and 18 male, 40.8 ± 8.2 years, 1.77 ± 0.07 m, 75.4 ± 9.2 kg) were recruited for the current study. All of the runners had no history of any major health issues and had completed a minimum of one half-marathon distance event in the 12 months prior to the commencement of the study. In addition, they were required to be actively engaged in endurance running training at the onset of the study; their average VO2max was 59.6 ± 7.9 mL·kg−1·min−1. Ethical approval for this study was obtained from the AUT University Ethics Committee. Before testing, all of the participants provided their informed consent in writing and completed a pre-exercise health questionnaire (PAR-Q).
Procedure
All running trials were conducted under stable laboratory conditions (ambient conditions: 21 ± 3 °C, <60% relative humidity) on a motorized treadmill (Woodway, Waukesha, WI, USA) with the gradient set at 1% [12]. The HR response data were collected using an HR monitor (Polar A300, China) and the oxygen consumption data were measured using a carbon dioxide and oxygen analyzer (Metalyzer Cortex, Biophysik GmbH, Leipzig, Germany), which was calibrated before each testing session according to the manufacturer's specifications. All capillary samples were drawn from the preferred finger of the runner and lactate (LA) accumulation was measured using a blood LA analyzer (LA Pro 2, Shiga, Japan). Subjective data were measured by way of the rating of perceived exertion (RPE) using a modified Borg 10-point scale [13]. For the WR conditions, subjects wore a pair of compression shorts with associated loads (Lila™, Exogen™, Wilayah Persekutuan Kuala Lumpur, Malaysia). The WR was added via 100 or 200 g increments and the total load for each trial was rounded to the nearest 100 g. The loading schemes placed each WR load alternately from the anterior to the posterior, in a stacked balance of distal to proximal (see Figures 1 and 2). A spectrum of WR loading was used (0%, 1%, 2%, 3%, 4%, and 5% BM), comparable to previous WR studies [11,[14][15][16][17], to examine changes between the loading magnitudes. For each participant, the study was conducted over a maximum of 15 days. This included one familiarization session and three testing sessions (see Figure 3) under laboratory conditions. The purpose of the familiarization session was to allow each runner to become accustomed to treadmill running while wearing both the compression shorts and all the metabolic measuring equipment. The participants completed a self-paced run for 20 min followed by a 10 min recovery. During this recovery time, the graded exercise test (GXT) protocol was discussed and an HR monitor and gas mask were fitted. The participants then completed a further 10 min run, including a sufficient amount of the GXT incremental protocol to feel comfortable with the procedures; no data were collected. The participants were instructed to refrain from training on the day of testing session one and to avoid any strenuous training sessions in the 24 h prior. Testing session one occurred within seven days of completing the familiarization session. Its purpose was to generate a VO2 response profile to graded exercise to establish VT1, VT2 and VO2max.
The participants completed a self-paced 20 min warm-up on a treadmill and were given a recovery period of 10 min prior to the commencement of the GXT. The starting speed was maintained for 1 min, followed by an increase of 0.5 km·h−1 every 30 s until the point of voluntary exhaustion [18]. The starting speed was adjusted on an individual basis to ensure volitional exhaustion between 8 and 12 min. VO2 was tracked continuously at a sampling rate of 0.1 Hz, and the HR and RPE were recorded at each speed increment, with LA measured immediately after completion of the test. The maximum oxygen consumption was averaged over 30 s and was considered to be achieved if any one of the following criteria was met: a plateau in VO2 was reached despite an increase in workload; a respiratory exchange ratio (RER) >1.15 was observed; an HR within five beats of the age-predicted maximum (220 − age) was reached; or a peak exercise blood LA concentration >8 mmol/L was achieved [19].
Testing sessions two and three included all submaximal running trials, to measure metabolic and subjective responses while unloaded and loaded. Testing session two occurred within 2-5 days of testing session one, and testing session three within 2-3 days of testing session two, to ensure no fatigue carried over between the three sessions (Barnett, 2006). Testing session two included three WR loads, testing session three included the final three WR loads, and load order was randomized. At the start of both testing sessions two and three, an 8 min warm-up set at a running speed equivalent to VT1 was completed, followed by a 10 min recovery. VT1 was chosen as it is close to the typical training intensity in endurance sports, in line with the polarized model of training. The polarized training model has been shown to be common practice among elite endurance runners, for whom long, slow distance training at lower intensities (<VT2) makes up 75% of an individual's training volume, with shorter, higher-intensity bouts of effort (>VT2) making up the remainder of the training program [13]. Each submaximal running trial lasted 8 min, with 10 min of seated recovery between subsequent trials. The oxygen consumption and HR were tracked for 2 min prior to each trial, for the 8 min of each trial (with the final 2 min used for analysis) and for 2 min post trial. The rating of perceived exertion and LA were recorded immediately after completion of each 8 min trial.
Statistical Analysis
Descriptive statistics, including means and standard deviations, were calculated for each measure. The statistical aim of this study was to make an inference about the impact on metabolic stress of submaximal running with WR, which requires a determination of the magnitude of an outcome; the traditional sample size estimation and hypothesis testing approach was not appropriate for this study design [20]. Accordingly, inferential statistics were used to examine the qualitative meaning of the observed changes in the metabolic cost (VO2, HR, LA) and perception (RPE) of submaximal running, loaded compared to unloaded. The collected data are presented as the mean value for each measure, with the reported effect size (ES) and percent differences at a 90% confidence interval (CI). The smallest worthwhile change was used to determine whether any observed changes were considered trivial, possible or likely, including the magnitude of each change, calculated as a change in score standardized to 0.2 of the between-subject SD from the unloaded condition [21]. The qualitative probabilities were defined by the scale: <0.5% most likely trivial increase, <5% very likely trivial increase, <25% likely trivial increase, 25-75% possible small increase, >75% likely moderate increase, >95% very likely large increase, >99.5% most likely very large increase; the outcome was deemed unclear where the 5% and 90% CI of the mean change overlapped both positive and negative outcomes [20]. To help quantify the metabolic cost of WR based on relative exercise intensity and duration, HRs were used to extrapolate a training load score (TLS) for each load [22] for 10 min of running. To understand the relationship between the metabolic variables (VO2, HR and TLS) and load, a scatterplot was created in Excel to establish a linear equation and R² value for each variable. The formula used for calculating the training load score (after the Training Stress Score, TSS [22]) was:
TLS = (sec × HR × IF) / (VT2 × 3600) × 100, where IF (intensity factor) = HR/VT2
Key: TLS: training load score; HR: average heart rate during exercise (bpm); IF: intensity factor; VT2: heart rate at the second ventilatory threshold (the point at which LA accumulation exceeds clearance).
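The TLS formula translates directly into code. The sketch below is a minimal transcription; the example numbers in the usage comment are illustrative, not study data.

```python
def training_load_score(duration_s: float, hr_avg: float, hr_vt2: float) -> float:
    """Training load score from heart rate, per the formula above.

    duration_s : exercise duration in seconds
    hr_avg     : average heart rate during exercise (bpm)
    hr_vt2     : heart rate at the second ventilatory threshold (bpm)
    """
    intensity_factor = hr_avg / hr_vt2
    return (duration_s * hr_avg * intensity_factor) / (hr_vt2 * 3600) * 100

# e.g. 10 min at 160 bpm with VT2 at 175 bpm (illustrative numbers):
tls = training_load_score(600, 160, 175)   # ~= 13.9
```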
The mean HR response to submaximal running with a load of 1% BM was 158 bpm (±13.42), a 0.4% (±0.01) increase, which constituted a very likely trivial increase (0.05 ± 0.11) (Table 2). A possible small increase was witnessed at 2% and 3% BM (0.17 ± 0.15 and 0.2 ± 0.13, respectively), with mean values of 159.50 (±13.42) and 160 bpm (±12.35) and increases of 1.5% (±0.01) and 1.8% (±0.01), respectively. There was a mean HR response of 162 bpm (±11.99) and a 2.9% (±0.01) increase at 4% BM, a likely moderate increase (0.32 ± 0.16). At 5% BM, a very likely large increase (0.33 ± 0.12) was seen, with a mean HR response of 162 bpm (±11.36) and a 2.9% (±0.01) increase. Figure 5 contains the percentage change in HR responses from unloaded to loaded (±90% CI). Linear regression showed a positive relationship (R² = 0.94), representing an additional 0.63% (±0.32) increase in HR response for every 1% BM of additional load. Figure 6 represents the relationship between load and the TLS extrapolated from the HR data for the equivalent of 10 min of running at VT1. The regression equation showed a positive linear relationship (R² = 0.96), representing an additional 0.17 (±0.06) of internal training stress for every 1% BM of additional load for 10 min of running.
The blood LA responses post submaximal running with a load of 1% BM resulted in a mean accumulation of 2.77 mmol/L (±1.90); however, an unclear effect was exhibited, with more data needed (0.0 ± 0.28) (Table 3). A likely trivial increase at 2% BM (0.08 ± 0.15), with a mean accumulation of 4.83 mmol/L (±2.04), was observed. The loads at 3% and 4% BM showed very likely large increases (0.41 ± 0.18 and 0.42 ± 0.19, respectively), with mean accumulations of 3.27 (±1.79) and 3.30 mmol/L (±2.03), respectively. Loads at 5% BM produced a mean accumulation of 3.52 mmol/L (±2.35) and a most likely very large increase (0.49 ± 0.15).
Submaximal running with a load of 1% BM resulted in a possible small increase (0.28 ± 0.25) in RPE, with a mean reported score of 3.35 (±1.16) (Table 4). There was a likely moderate increase at 2% BM (0.43 ± 0.23), with a mean reported score of 3.68 (±1.44), and a very likely large increase at 3% BM (0.52 ± 0.26), with a mean reported score of 3.73 (±1.33). Both 4% and 5% BM showed most likely very large increases (0.82 ± 0.29 and 0.86 ± 0.28, respectively), with mean reported scores of 4.20 (±1.26) and 4.38 (±1.57), respectively.
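The per-load slope can be reproduced with an ordinary least-squares fit. The snippet below uses the loaded HR means reported above; the unloaded mean of 157.4 bpm is a hypothetical placeholder, since it is not restated in this passage.

```python
import numpy as np

loads = np.array([0, 1, 2, 3, 4, 5])                  # % body mass
hr = np.array([157.4, 158, 159.5, 160, 162, 162])     # mean HR (bpm); first value assumed
pct_change = (hr - hr[0]) / hr[0] * 100

slope, intercept = np.polyfit(loads, pct_change, 1)
r2 = np.corrcoef(loads, pct_change)[0, 1] ** 2
# With these inputs: slope ~= 0.64 %/1% BM and r2 ~= 0.95,
# close to the reported 0.63% per 1% BM and R^2 = 0.94.
print(f"{slope:.2f}% HR increase per 1% BM (R^2 = {r2:.2f})")
```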
Discussion
The aim of this study was to understand the acute metabolic effects of thigh WR during submaximal running in endurance-trained runners. It was found that for every 1% BM of additional load there is an expected 1.59% (±0.62) and 0.63% (±0.32) increase in VO2 and HR response, respectively. A thigh WR of at least 3% BM was needed to elicit a likely moderate increase (0.24 ± 0.07) in VO2 response, with a most likely very large increase (0.43 ± 0.07) exhibited at 5% BM. A loading of at least 2% BM was needed to elicit a possible small increase (0.17 ± 0.15) in HR response, with a very likely large increase (0.33 ± 0.12) at 5% BM. This translated into a predicted 0.17 (±0.06) increase in internal training stress for every 1% BM of additional load for 10 min of running at a speed equivalent to VT1. For the mean RPE, a loading of 2% BM resulted in likely moderate increases, with 3% BM needed for very likely large increases. These findings provide practitioners with information across a spectrum of thigh WR loads, to help integrate WR effectively into training regimes.
The VO2 and HR data collected in this study agree with previously reported findings showing that limb loading during locomotion can increase the metabolic cost compared to unloaded locomotion [9,10]; however, those studies investigated more distal lower-extremity loading (the feet). Martin's [11] is the only other metabolic study to use thigh loading; it reported increases in oxygen consumption of 1.7% and 3.5% when the equivalents of 0.69% and 1.39% BM, respectively, were added to the thighs of highly trained male distance runners, with increases in VO2 response to load reaching statistical significance (p < .05). In the present study, the incremental loading of 1% to 5% BM resulted in linear increases in VO2 from 1.7% to 8.1%. Comparatively, an increase in VO2 of 1.59% for every 1% BM (equivalent to 0.75 kg when extrapolated from the mean weight of our participants) of additional load was observed. Accordingly, the increase in cost is slightly less than in the findings of Martin [11]. The statistical method used in the current study (inferential based analysis) demonstrated that thigh WR loading of at least 3% BM was needed to produce a likely moderate increase (0.24 ± 0.07).
Martin [11] reported statistically significant increases in oxygen consumption at loads lower than 3% BM on the thighs; however, no effect sizes were reported to establish the magnitude of this change. In studies that used heavier WR attached to the trunk, non-significant, small-effect-size increases (0.1%-0.3%) in VO2 were found during 3 min of running at four speeds (9.6-13.1 km/h) with 5% and 10% BM loading [23], while moderate, significant increases (6%-17%) were found during walking with heavier WR (10%-20% BM) over 4 min stages at speeds of 3-6.4 km/h [24]. Therefore, despite the lighter loads used in this study (1%-5% BM), when WR is attached to the thighs, greater VO2 changes occur compared to trunk placement, most likely due to the greater inertial demands of distal thigh loading.
In terms of HR responses, Martin [11] reported only mean values but showed a similar trend to that of VO2, in that HR increased slightly with additional loads to the thighs. These changes, however, did not reach statistical significance, and the researchers suggested that HR is a less sensitive measure of thigh loading under 1.39% BM. Comparatively, in this study an increase in HR of 0.63% was found for every 1% BM (equivalent to 0.75 kg when extrapolated from the mean weight of our participants) of additional load, which is less than half the corresponding VO2 increase (1.59%) for the same load. Inferential based analysis demonstrated that thigh loading of at least 2% BM was needed to produce a possible small increase (0.17 ± 0.15) in HR response, with 1% BM producing a very likely trivial increase (0.05 ± 0.11). Using the HR data collected, a TLS was extrapolated to help quantify the amount of internal stress each loaded trial would impose over a 10 min running period: based on the linear regression equation for TLS plotted against load, for every 1% BM of additional load there is an extra 0.17 increase in internal stress. A TLS was calculated as it is a commonly used method of quantifying training stress for endurance athletes and may help put into context the impact that added load has on HR. Linear increases were found in mean RPE (3.08 to 4.38), with a loading of 2% BM resulting in likely moderate increases and 3% BM in very likely large increases. The authors note that the following limitations of the research should be considered for future investigations: this study can only suggest the responses to the repeated application of such loading schemes during short-term submaximal running at a speed equivalent to VT1; due to the subject inclusion criteria (male and female endurance-trained runners), the findings may only apply to this population; and trials were performed under laboratory conditions, which are not directly comparable to the traditional training environment of endurance-trained runners.
Conclusions
The current findings suggest that using thigh-loaded WR while running at a speed equivalent to VT1 elicits an increase in metabolic response compared to unloaded running. There is an expected increase in VO2 and HR response of 1.59% (±0.62) and 0.63% (±0.32), respectively, for every 1% BM of additional load, and an increase in exercise stress of 0.17 (±0.06) for the equivalent of 10 min of running for every 1% BM of additional load. However, loads of at least 2% and 3% BM are needed to see substantial increases in HR and VO2 responses, respectively. The findings from the current study provide some evidence for quantifying the potential increases in both VO2 and HR responses to thigh WR loading during short-term submaximal running, as well as a means of estimating an expected TLS for loaded submaximal running of a given duration. WR attached to the legs enables a running-specific form of resistance training to be incorporated into training programming. Practitioners may be interested in moving the load distally away from the hip, as this placement seems to have a greater impact on metabolic cost compared to commonly used trunk loading. However, this evidence is based on only 8 min of running; the effects of longer-duration WR running under these conditions are still unknown, and, more importantly, how this stimulus carries over to performance should be considered in future research. Building an understanding of how WR acutely affects the body is important to guide an evidence-based approach to programming; ultimately, however, the potential longitudinal adaptations to WR require future investigation. | 6,556.2 | 2019-08-01T00:00:00.000 | ["Biology"] |
MIRI MRS Observations of β Pictoris. I. The Inner Dust, the Planet, and the Gas
We present JWST MIRI Medium Resolution Spectrograph (MRS) observations of the β Pictoris system. We detect an infrared excess from the central unresolved point source from 5 to 7.5 μm which is indicative of dust within the inner ∼7 au of the system. We perform point-spread function (PSF) subtraction on the MRS data cubes and detect a spatially resolved dust population emitting at 5 μm. This spatially resolved hot dust population is best explained if the dust grains are in the small grain limit (2πa ≪ λ). The combination of unresolved and resolved dust at 5 μm could suggest that dust grains are being produced in the inner few astronomical units of the system and are then radiatively driven outwards, where the particles could accrete onto the known planets in the system, β Pictoris b and c. We also report the detection of an emission line at 6.986 μm that we attribute to [Ar ii]. We find that the [Ar ii] emission is spatially resolved with JWST and appears to be aligned with the dust disk. Through PSF-subtraction techniques, we detect β Pictoris b at the 5σ level in our MRS data cubes and present the first mid-infrared spectrum of the planet from 5 to 7 μm. The planet’s spectrum is consistent with having absorption from water vapor between 5 and 6.5 μm. We perform atmosphere model grid fitting of the spectra and photometry of β Pictoris b and find that the planet’s atmosphere likely has a substellar C/O ratio.
Introduction
Debris disks are planetary systems that consist of dust, gas, planetesimals, and planets and that typically correspond to the late stages of planetary system formation (Wyatt 2008; Hughes et al. 2018). They provide a unique laboratory to study the processes involved in the later stages of planet formation and evolution. Unlike protoplanetary disks, the dust seen in debris disks is thought to be constantly replenished through collisional processes between minor bodies in the system (Hughes et al. 2018). This is because the dust grains in debris disks are subject to radiation pressure and Poynting-Robertson drag, which remove dust from orbits around the central star on timescales that are short compared to the age of the system (Guess 1962; Krivov et al. 2006; Wyatt 2008). The detection of these dust grains in debris disks points to ongoing stochastic and steady-state collisions between planetesimals that actively replenish the small dust grains (Backman & Paresce 1993).
The particles in debris disks range in size from the parent planetesimals to the collisionally produced dust, but it is the dust that is observable, both in thermal emission at mid-infrared to millimeter wavelengths (e.g., Holland et al. 1998; Koerner et al. 1998; Telesco et al. 2005; MacGregor et al. 2018) and in scattered light at optical and near-infrared wavelengths (e.g., Smith & Terrile 1984; Kalas et al. 2006; Esposito et al. 2020; Ren et al. 2023). Infrared spectra and the spectral energy distributions (SEDs) of debris disks can reveal the temperature of the dust (e.g., Ballering et al. 2013), which depends on the location, size, and composition of the dust particles. SED modeling as well as multiwavelength imaging have revealed some debris disks to have multiple populations of dust at various stellocentric distances (e.g., Chen et al. 2014; Jang-Condell et al. 2015; Gáspár et al. 2023). Knowledge of the innermost regions of debris disks has been mostly limited to infrared spectroscopic and photometric analyses, mainly because of the low spatial resolution of previous space-based infrared observatories like Spitzer. JWST provides a unique opportunity to study the spectra and structure of dust in debris disks at high angular resolution with the MIRI Medium Resolution Spectrograph (MRS).
β Pictoris (β Pic hereafter) is a ∼23 Myr (Mamajek & Bell 2014) A6V star that is host to the first ever imaged debris disk (Smith & Terrile 1984). The β Pic disk is oriented close to edge on from our line of sight and has been studied in scattered light as well as thermal emission at mid-infrared and millimeter wavelengths (e.g., Telesco et al. 2005; Golimowski et al. 2006; Matrà et al. 2019; Rebollido et al. 2024). At a distance of 19.6 pc (Gaia Collaboration et al. 2023), β Pic provides a great laboratory for studying the spatial structure of its debris disk. Scattered light imaging with the Hubble Space Telescope revealed a warp in the inner disk (Golimowski et al. 2006) that was later attributed to interactions with the now confirmed giant planet β Pic b (∼9 M_J, a = 9.9 au; Lagrange et al. 2009; Dawson et al. 2011; Lagrange et al. 2012; GRAVITY Collaboration et al. 2020; Nowak et al. 2020). Radial velocities provided evidence for the presence of a second planet in the system, β Pic c (Lagrange et al. 2019; ∼8 M_J, a = 2.7 au), which was later directly confirmed by interferometric observations with the Very Large Telescope (VLT)/GRAVITY (Nowak et al. 2020).
As a young nearby system with both giant planets and a debris disk, β Pic provides a unique opportunity to study the interactions between the dust in the disk and the giant planets in the system. Giant planets present in debris disks are thought to impart structure on the dust (e.g., Dawson et al. 2011; Crotts et al. 2021); however, it is unclear if, and to what extent, the dust and minor bodies in debris disks can affect giant planets via accretion. Giant planets likely form within ∼10 Myr, before the debris disk evolutionary stage (Williams & Cieza 2011; Li & Xiao 2016), although it is still possible for material from debris disks to accrete onto giant planets (e.g., Frantseva et al. 2020; Kral et al. 2020). In our solar system, for instance, the impact of comet Shoemaker-Levy 9 in 1994 delivered refractory material to Jupiter's atmosphere (Harrington et al. 2004; Fletcher et al. 2010). Marley et al. (2012) suggested that micron- and submicron-sized silicate grains can remain at higher altitudes in the atmospheres of young, low surface gravity giant planets and brown dwarfs and thus affect their observed spectra. If young giant planets are accreting dust from their debris disks, it is then possible for the dust grains to remain in their atmospheres and affect the observed spectrum of the planet. For young giant planets in systems with debris disks, the amount of dust accreted from the disks onto the planets is not well constrained observationally.
Mid-infrared spectroscopy of β Pic has revealed the presence of small submicron silicate grains in the system (Okamoto et al. 2004; Li et al. 2012; Lu et al. 2022). By modeling silicate features at 10, 18, and 23 μm detected with Spitzer IRS, Lu et al. (2022) found that the features are best produced by dust grains in the Rayleigh limit (2πa ≪ λ), indicating the presence of submicron-sized silicate grains that are subject to blowout by radiation forces. Depending on the location of the dust, these sub-blowout-sized grains have the potential to interact with the known planets in the system as they are radiatively driven outwards. Lu et al. (2022) also found tentative evidence for an infrared excess at 5 μm and the presence of ∼600 K dust in the system. Because of the limited angular resolution of Spitzer, the location of this hot dust was not clear.
In this paper, we present JWST MIRI MRS observations of the β Pic system, which provide higher spatial and spectral resolution space-based data in the mid-infrared than previously obtained with Spitzer. The observations and data reduction are described in Section 2. In Section 3, we present our main findings, including (1) an infrared excess from 5 to 7.5 μm, (2) the discovery of a spatially resolved hot dust population, (3) the first detection of [Ar II] at 6.986 μm in the β Pic disk, and (4) the extraction of a low-resolution MRS spectrum of β Pic b. In Section 4, we discuss the implications of the new spatially resolved hot dust population and how, along with the 5 μm excess, it provides evidence for an outflowing wind of small dust grains that could be blown onto the known planets, where they may be accreted. In Section 5, we state our conclusions and summarize.
Data Acquisition
As a part of GTO program 1294, we observed β Pic (K = 3.48; Ducati 2002) with MIRI MRS (Wells et al. 2015; Argyriou et al. 2023) on 2023 January 11, preceded by dedicated background observations and followed immediately by observations of the nearby star N Car (A0II, K = 4.218; Houk 1978; Cutri et al. 2003), which we used as a point-spread function (PSF) reference star. N Car is offset from β Pic by 7°.6. All observations were taken in all three grating settings (short, medium, and long) in all four channels with the FASTR1 readout pattern to cover the wavelength range 4.9-28 μm. We used a four-point point-source dither pattern with the negative dither orientation to observe β Pic and N Car. We observed both stars with target acquisition to ensure that both stars were well aligned on the detector to optimize reference PSF subtraction. Both β Pic and N Car were used themselves as target acquisition stars. The aperture position angle of JWST was 23°.8 for the observations of β Pic, to ensure that the MRS image slicers were orthogonal to the edge-on disk. This was so that the MRS cross artifact (Argyriou et al. 2023) would not mimic disk signatures in the integral field unit (IFU) data cubes.
For the dedicated background observations, we used a two-point dither pattern. The position of the dedicated background field was R.A., decl. = 05 47 6.3034, −51 02 55.85, which is offset by 1′.7 from β Pic. The exposure time for N Car was longer than for β Pic to ensure that the N Car observations had a similar signal-to-noise ratio (S/N) as the β Pic observations. For β Pic, the number of groups and integrations for each dither position for channels 1 and 2 (4.9-11.71 μm) was 5 and 14, respectively (exposure time = 921 s). The number of groups and integrations for each dither position for channels 3 and 4 (11.55-28.1 μm) was 15 and 5, respectively (exposure time = 877 s). For N Car, the number of groups and integrations for each dither position for channels 1 and 2 was 5 and 28, respectively (exposure time = 1854 s). The number of groups and integrations for each dither position for channels 3 and 4 was 15 and 10, respectively (exposure time = 1765 s). For the dedicated background observation, the number of groups and integrations for each dither position for channels 1 and 2 was 5 and 16, respectively (exposure time = 264 s). For channels 3 and 4, the number of groups and integrations for the background observations was 15 and 6, respectively (exposure time = 264 s).
Data Reduction
We processed the raw detector files for β Pic and N Car using v1.11.0 of the JWST calibration pipeline (Bushouse et al. 2023) using Calibrated Reference Data System context "jwst_1094.pmap."The pipeline consists of three stages: Detector1, Spec2, and Spec3.We processed the β Pic and N Car raw files with the exact same pipeline setup.
The Detector1 stage converts the raw ramp images to uncalibrated slope images and also includes a jump detection step that flags jumps in the ramp between consecutive groups.This step mitigates ramp jumps that are often caused by cosmic-ray hits.We changed the three group rejection threshold to be 100σ in the jump detection step because the default setting in the pipeline overflags jumps in the raw data and creates artifacts in the final spectrum.The default three group rejection threshold in the pipeline was 6σ.The Spec2 pipeline takes the uncalibrated slope images and applies calibrations and instrumental corrections, including a fringe correction.The MRS is known to suffer from fringing that can have effects of up to 30% of the spectral baseline (e.g., Argyriou et al. 2020).In the Spec2 pipeline, we applied both fringe corrections available in the pipeline, the fringe flat and the residual fringe correction step, where the residual fringe correction is not turned on by default.In Spec2, we also performed the stray light subtraction step, which is necessary to subtract the cross artifact.
We used the Spec3 pipeline to combine all four dither positions into spectral cubes for each of the 12 subbands. We created the spectral cubes with the IFUalign coordinate system so that the data cubes were aligned with the MRS image slicers, and the PSFs of β Pic and N Car were well aligned for optimal PSF subtraction. The background subtraction from the dedicated background observations was done in the Spec3 pipeline along with the outlier rejection step. We built the spectral cubes using the drizzle algorithm in the pipeline (Law et al. 2023). We left the pixel size at the pipeline default, which is 0″.13 for channel 1, 0″.17 for channel 2, 0″.20 for channel 3, and 0″.35 for channel 4. The output of the Spec3 pipeline was calibrated spectral cubes, which we used for the analysis in the following sections.
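A corresponding hedged sketch of the Spec3 stage, where cube_build.coord_system = "ifualign" is the real pipeline parameter selecting slicer-aligned cubes and the association file is a placeholder:

from jwst.pipeline import Spec3Pipeline

spec3 = Spec3Pipeline()
spec3.cube_build.coord_system = "ifualign"  # align cube axes with the MRS image slicers
spec3.outlier_detection.skip = False        # keep the outlier rejection step on
spec3.save_results = True
spec3.run("beta_pic_mrs_asn.json")          # placeholder association (science + background)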
Point-spread Function Subtraction
To search for resolved disk structure as well as β Pic b in our MRS data, we used N Car to perform classical reference differential imaging PSF subtraction on the calibrated spectral cubes. Because of position repeatability issues of the dichroic grating wheel assembly (DGA) of the MRS (Patapis et al. 2023), the science star β Pic and reference star N Car were not exactly centered in the same location on the detector. The position repeatability of the DGA wheel is ∼30 mas radially (Patapis et al. 2023). To achieve the best contrast performance from PSF subtraction, we aligned the PSF reference N Car with the PSF of β Pic prior to subtracting.
We first measured the position of the center of the PSF of β Pic and N Car in the spectral cubes by fitting a 2D Gaussian to each wavelength slice and taking the median of the best-fit centers over all the wavelength slices. We did this for each subband separately (1A, 1B, etc.) because not all subbands were observed simultaneously, leading to position changes of the PSF in the spectral cube between the different subbands. We then calculated the difference between the PSF center positions of β Pic and N Car and shifted and interpolated the N Car PSF to the center location of the β Pic PSF using the SciPy ndimage.shift function. After aligning the N Car PSF to the β Pic PSF, we scaled the N Car PSF to match the spectrum of the unresolved point source of β Pic. We scaled to the spectrum of β Pic rather than a photosphere model because there is an unresolved infrared excess in the central point source, as shown in Figure 1, and scaling to just the photosphere model does not completely subtract the β Pic PSF.
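A minimal sketch of this alignment step is given below, assuming each subband cube is a (nlam, ny, nx) NumPy array; the cube variable names are placeholders.

import numpy as np
from scipy import ndimage
from astropy.modeling import models, fitting

def median_psf_center(cube):
    """Fit a 2D Gaussian to every wavelength slice; return the median (y, x) center."""
    fitter = fitting.LevMarLSQFitter()
    ny, nx = cube.shape[1:]
    yy, xx = np.mgrid[:ny, :nx]
    centers = []
    for sl in cube:
        g0 = models.Gaussian2D(amplitude=np.nanmax(sl), x_mean=nx / 2, y_mean=ny / 2,
                               x_stddev=2.0, y_stddev=2.0)
        g = fitter(g0, xx, yy, np.nan_to_num(sl))
        centers.append((g.y_mean.value, g.x_mean.value))
    return np.median(centers, axis=0)

y_sci, x_sci = median_psf_center(sci_cube)  # beta Pic subband cube (placeholder)
y_ref, x_ref = median_psf_center(ref_cube)  # N Car subband cube (placeholder)
# Shift the reference cube slice-by-slice onto the science PSF center.
aligned_ref = ndimage.shift(ref_cube, (0, y_sci - y_ref, x_sci - x_ref), order=3)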
Once the N Car PSF was aligned and scaled to the flux level of β Pic, we subtracted the N Car PSF from β Pic in each wavelength slice of the spectral cube, producing a PSF-subtracted spectral cube. We were left with large PSF-subtraction residuals at the center location of the star, so to search for faint resolved disk structure as well as β Pic b, we masked out spaxels within a 3 spaxel radius of the center of the PSF in channel 1, where the spaxel size is 0″.13. We also binned every 100 image slices in the data cube to increase the S/N of the individual image slices.
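Under the same assumed array shapes, the subtraction and binning reduce to a few lines, with scale a hypothetical per-slice flux ratio between β Pic and the aligned N Car PSF:

sub_cube = sci_cube - scale[:, None, None] * aligned_ref  # per-slice scaled subtraction
nbin = 100
nlam = (sub_cube.shape[0] // nbin) * nbin                 # drop leftover slices
binned = sub_cube[:nlam].reshape(-1, nbin, *sub_cube.shape[1:]).mean(axis=1)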
Spectrum of the Central Point Source
To extract the spectrum of the unresolved point source of β Pic, we performed aperture photometry on each wavelength slice of the spectral cubes. To find the center of the PSF, we collapsed each spectral cube along the wavelength axis and fit a 2D Gaussian to the collapsed PSF. We placed the center of the circular aperture for each wavelength slice at the center of the PSF determined from the Gaussian fit. We then used a wavelength-dependent aperture radius of 1.5 times the PSF FWHM at each wavelength, where we determined the FWHM of the PSF by taking a line cut through the center of the PSF in the direction orthogonal to the disk. We fit a Gaussian to the line cut of the PSF to measure the FWHM. At 5 μm, the FWHM of the PSF from the Gaussian fit was 0″.325. This is larger than a diffraction-limited PSF at 5 μm, which would have a FWHM of 0″.2. Law et al. (2023) similarly found that the size of the MRS PSF at 5 μm is larger than what is expected from a diffraction-limited PSF. After we extracted the flux within the circular aperture in each wavelength slice of the spectral cube, we applied an aperture correction to account for flux missed outside the extraction aperture. We applied the aperture correction for an extraction radius of 1.5 times the PSF FWHM derived in Argyriou et al. (2023) to the extracted spectra. We also applied a correction to the 1D spectrum, as described in Gasman et al. (2023), for the spectral leak at 12.2 μm.
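A sketch of the extraction loop, assuming photutils and a hypothetical per-slice FWHM array fwhm_per_slice (in pixels):

import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def extract_spectrum(cube, center_xy, fwhm_per_slice, aper_corr=1.0):
    """Wavelength-dependent circular-aperture photometry (radius = 1.5 x FWHM)."""
    flux = np.empty(cube.shape[0])
    for i, sl in enumerate(cube):
        aper = CircularAperture(center_xy, r=1.5 * fwhm_per_slice[i])
        flux[i] = aperture_photometry(np.nan_to_num(sl), aper)["aperture_sum"][0]
    return flux * aper_corr  # correct for flux falling outside the aperture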
After performing aperture photometry on the spectral cubes, there were still noise sources, like fringing, present in the extracted spectra that were not completely removed by the pipeline processing. To refine the calibration systematics, mitigate noise sources, and obtain the highest S/N spectrum of the unresolved point source of β Pic, we used the observations of N Car to create a relative spectral response function (RSRF) and applied it to the spectrum of the unresolved point source of β Pic. The RSRF was also used to align the different subbands.
We extracted the spectrum of N Car using the exact same method as for β Pic described above. We then fit the UBVGJHKs (Cutri et al. 2003; Reed 2003; Gaia Collaboration 2018) photometric points for N Car with a T = 8800 K, log(g) = 4.0, and [Fe/H] = 0 BT-NextGen stellar photosphere model (Allard et al. 2011) using synthetic photometry, and then divided this photosphere model by the N Car spectrum. This removed the stellar continuum and photospheric absorption lines from the N Car spectrum and provided an RSRF (consisting of detector effects that remain in the spectra after pipeline processing). We then multiplied the RSRF into the β Pic spectrum to remove the noise sources captured in the RSRF. In channel 1 (∼5-7.5 μm), applying the RSRF increased the S/N of the spectrum from ∼130 to ∼230.
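Because the instrumental residuals are common to both stars, the correction amounts to a ratio and a product (variable names assumed, with both spectra on the same wavelength grid):

rsrf = ncar_model / ncar_spectrum            # stellar features cancel; fringes etc. remain
betapic_corrected = betapic_spectrum * rsrf  # multiply out the shared instrumental residuals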
The final MRS spectrum of the unresolved point source of β Pic is shown in Figure 1. A surprising difference between the MIRI MRS spectrum shown here and the Spitzer IRS spectrum of β Pic is that the sharp 18 μm silicate feature seen with Spitzer (Lu et al. 2022) is not present in our MRS spectrum. Here, we show the MRS spectrum and leave the analysis of the properties of the silicate feature and the discussion of the disappearance of the 18 μm silicate feature for a future paper (C.H. Chen et al. 2024, in preparation).
5 μm Infrared Excess
We compared our extracted MRS spectrum to a model of the β Pic photosphere from Lu et al. (2022) to search for an IR excess at the shortest MRS wavelengths. This photosphere model is a BT-NextGen model fit to UBVRJHKs photometry as well as an Infrared Telescope Facility (IRTF) SpeX spectrum of β Pic from 0.7 to 2.5 μm. This photosphere model is used throughout the rest of the analysis.
At 5 μm, we find an infrared excess that is 4% above the stellar photosphere model (see Figure 1), potentially indicating the presence of dust within a stellocentric radius of ∼6.5 au (from the size of the extraction aperture) in the system. To determine the temperature of the dust emitting from 5 to 7.5 μm, we subtracted the photosphere model from the MRS spectrum, smoothed it with a boxcar smoothing filter with a kernel of three data points, and then fit the channel 1 spectrum (4.9 to 7.5 μm) with a blackbody. We also included the L-band photometric point of β Pic from Bonnefoy et al. (2013) of 3.454 ± 0.003 mag. To subtract the photosphere from the L-band point, we calculated synthetic photometry using the stellar photosphere model of β Pic and subtracted this value from the measured L-band photometry. Figure 2 shows the best-fit blackbody to the 5-7.5 μm excess. This blackbody fit gives a dust temperature of 500 ± 20 K. There appears to be some residual structure in the spectrum shown in Figure 2. This structure is likely due to an imperfect subtraction of the absorption lines in the stellar photosphere model. We think this because the features in the spectrum (such as the dips at ∼5.6 and ∼7.5 μm) occur at the same wavelengths as the stellar photosphere lines shown in Figure 1.
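A minimal sketch of this fit, assuming lam_um, mrs_flux_jy, and photosphere_jy are NumPy arrays on a common wavelength grid (all names hypothetical):

import numpy as np
from astropy import units as u
from astropy.modeling.models import BlackBody
from scipy.ndimage import uniform_filter1d
from scipy.optimize import curve_fit

# Photosphere-subtracted excess, boxcar-smoothed with a 3-point kernel as in the text.
excess_jy = uniform_filter1d(mrs_flux_jy - photosphere_jy, size=3)

def bb_jy(lam_um_arr, T, scale):
    """Scaled blackbody in Jy; 'scale' absorbs the unknown emitting solid angle."""
    bb = BlackBody(temperature=T * u.K)
    return scale * bb(lam_um_arr * u.um).to_value(u.Jy / u.sr)

mask = (lam_um >= 4.9) & (lam_um <= 7.5)
(T_fit, s_fit), pcov = curve_fit(bb_jy, lam_um[mask], excess_jy[mask],
                                 p0=(500.0, 1e-16))  # rough guesses for T and scale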
Even if the absolute flux calibration of the MRS spectrum were incorrect by 4%, the shape of the spectrum still indicates an excess between 5.5 and 7.5 μm. If we pin the shortest wavelength of the MRS spectrum (4.9 μm) to the stellar photosphere model, there is still an ∼3% infrared excess at 5.5 μm. Performing the same photosphere subtraction and blackbody fitting analysis, but with the spectrum pinned to the photosphere model at 4.9 μm, gives a dust temperature of 370 K instead of 500 K. This still indicates the presence of dust in the inner few astronomical units of the system, just at a lower temperature. However, given that the dust emission is spatially resolved at 5 μm (see Section 3.3), the excess we see in the spectrum at 5 μm is likely real and not due to uncertainties in the flux calibration. We are mainly interested in the temperature of the hot dust, so we only fit a blackbody to the 5-7.5 μm excess and exclude longer wavelengths, including the silicate feature from ∼8 to 12 μm. Fitting the MRS spectrum at wavelengths longer than 12 μm would require a two-component, two-temperature model, and here we are only interested in the hot component. Adding a cold dust component does not affect the blackbody fit from 5 to 7.5 μm. A complete modeling of the entire MRS spectrum including the silicate features will be done in an upcoming paper (C.H. Chen et al. 2024, in preparation).
Assuming that the dust grains are blackbodies in radiative equilibrium, and using the luminosity of β Pic of 8.13 L☉ (Bonnefoy et al. 2013), we calculated the blackbody location of the 500 K dust around β Pic to be 0.9 au. This blackbody distance is a lower limit to the stellocentric distance of the grains, as smaller dust grains can be at higher temperatures further away from the star than blackbody grains. This is further discussed in Section 3.3.

Figure 1 (caption excerpt). The MRS spectrum is compared to the photosphere model of Lu et al. (2022) that was best fit to a near-infrared spectrum. This shows an infrared excess from 5 to 7 μm, which can be explained by hot dust in the system. The vertical dashed gray line shows the detection of [Ar II] emission at 6.986 μm. The other spikes in the spectrum do not appear to be real emission lines: they only cover one wavelength bin, unlike the [Ar II] line, which covers five; furthermore, these spikes only appear after applying the relative spectral response function (RSRF) and are well aligned to noise features seen in the spectrum of N Car. They thus likely result from noise that is injected by performing the RSRF correction with N Car. The [Ar II] line, however, is present both before and after applying the RSRF.
Spatially Resolved Hot Dust
With Spitzer, Lu et al. (2022) found a tentative infrared excess at 5 μm that could indicate the presence of hot dust within the inner parts of the system. Interferometric observations in the H band found evidence for an infrared excess that could either be from a population of hot dust (1500 K) at a distance of less than 4 au from the star, or from scattered light from the outer part of the edge-on disk (Defrère et al. 2012). Here, we found additional evidence for the 5-7.5 μm infrared excess seen by Lu et al. (2022) in the spectrum of the unresolved point source (see Figure 1). We confirmed that this population of hot dust emitting at 5 μm is spatially extended in our PSF-subtracted image cubes.
The binned and PSF-subtracted image slice at 5.2 μm is shown in Figure 3 with the beam of Spitzer and the JWST extraction aperture overlaid. The purple line shows a 5σ contour of the detected dust emission, indicating that we detect spatially resolved hot dust emitting at 5 μm at the 5σ level. Figure 3 illustrates that the excess thermal dust emission at 5 μm was spatially unresolved given the angular resolution of Spitzer, but is now spatially resolved with JWST out to ∼20 au, where the disk flux density drops below the 5σ level. By applying PSF subtraction to the MRS cubes, we have discovered a new population of spatially extended hot dust in the β Pic debris disk emitting at 5 μm.
For dust at 10-20 au to emit at 5 μm, the dust grains are likely to be submicron to micron sized. In radiative equilibrium, the energy absorbed by a dust grain at a certain stellocentric distance is equal to the energy emitted from that dust grain, which can be written as

$$\int Q_{\rm abs}(\lambda)\, J_\lambda(T_*)\, d\lambda = \int Q_{\rm abs}(\lambda)\, B_\lambda(T_d)\, d\lambda, \qquad (1)$$

where Q_abs is the absorption coefficient at each wavelength for a specific material, J_λ(T_*) is the mean intensity of the star, and B_λ(T_d) is the Planck function representing emission from a dust grain of a certain temperature T_d. In the small-grain limit, Q_abs can be approximated as Q_abs ∝ 1/λ, where λ is the wavelength of light. Using this small-grain approximation and solving for the dust grain temperature as a function of stellocentric distance as in Jura et al. (1998), we get

$$T_d = T_* \left(\frac{R_*^2}{4 D^2}\right)^{1/5}, \qquad (2)$$

where T_d is the temperature of the dust, R_* is the stellar radius, D is the stellocentric distance of the dust, and T_* is the effective temperature of the star. In the blackbody limit for larger dust grains, where Q_abs = 1 at all wavelengths, the equation for dust temperature as a function of distance has the same form; however, the exponent changes from 1/5 to 1/4. We calculated the temperature as a function of stellocentric distance from β Pic for dust grains in the blackbody and small-grain limits assuming stellar parameters of β Pic of T_* = 8000 K (Lu et al. 2022) and R_* = 1.73 R☉ (Kervella et al. 2004). The temperature as a function of distance for dust grains around β Pic in both the small-grain and blackbody approximations is shown in Figure 4.
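Equation (2) and its blackbody analog reduce to one line per limit; the following sketch reproduces the two curves in Figure 4 from the quoted stellar parameters.

import numpy as np

T_star = 8000.0                  # K (Lu et al. 2022)
R_star = 1.73 * 6.957e10         # cm (Kervella et al. 2004)
au = 1.496e13                    # cm

D = np.linspace(1.0, 30.0, 300) * au
T_small = T_star * (R_star**2 / (4.0 * D**2)) ** (1.0 / 5.0)  # Q_abs ~ 1/lambda, Eq. (2)
T_bb = T_star * (R_star**2 / (4.0 * D**2)) ** (1.0 / 4.0)     # Q_abs = 1 (blackbody)
# At D = 10 au this gives ~350 K (small grain) and ~160 K (blackbody),
# consistent with the ranges quoted in the text.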
To test whether the dust is better explained by the small-grain limit (∼300-400 K, see Figure 4) versus the blackbody approximation (∼100-160 K), we first summed the flux from all the pixels in the extended disk within the 5σ contour shown in Figure 3. This gives a total flux density from the extended emission at 5.2 μm of 0.03 ± 0.01 Jy. This is about 10 times less than the flux density of the spatially unresolved excess at the same wavelength shown in Figure 2. We then estimated the number of dust particles that would be required to produce an observed flux density of 0.03 Jy at 5.2 μm if the dust were in the blackbody approximation or the small-grain limit. At a stellocentric distance of 10 au, the dust temperature predicted by the blackbody approximation is 150 K. The observed flux from a single dust grain at a given temperature is given by

$$F_{\rm grain} = \frac{\pi a^2}{d^2} B(T, \lambda), \qquad (3)$$

where B(T, λ) is the Planck function, a is the radius of the dust grain, and d is the distance of the dust to Earth. The number of dust particles required to produce the observed flux density from the extended emission at 5.2 μm is then N_dust = F_obs/F_grain, where F_obs = 0.03 Jy. For a dust temperature of 150 K in the blackbody limit and assuming a grain size equal to the wavelength of light of 5 μm, we found N_dust = 1 × 10³⁴. We then used Equation (3) to calculate the predicted flux density at 20 μm from this number of dust particles at 150 K. We found that the predicted 20 μm flux is 450 Jy, which is much greater than, and thus inconsistent with, the observed flux at 20 μm shown in Figure 1. We performed the same calculation, but with dust in the small-grain limit (a = 1 μm) and at a temperature of 350 K (corresponding to a distance of 10 au, as shown in Figure 4). We found that the number of dust particles required to produce the 0.03 Jy flux density was N_dust = 8 × 10³⁰ and that the predicted 20 μm flux from this number of dust grains at a temperature of 350 K was 0.2 Jy. This predicted flux density is less than the observed excess at 20 μm and thus consistent with the observations. Because the number of dust particles in the blackbody approximation required to produce the observed flux density from the spatially extended dust at 5 μm is inconsistent with the observed flux density at longer wavelengths, the spatially extended 5 μm excess is best explained by hotter submicron-sized grains in the small-grain limit. This calculation yields the same conclusion regardless of the assumed grain size and dust temperature for all temperatures in the small-grain and blackbody approximations within the red colored area in Figure 4.

Figure 2 (caption excerpt). Best-fit blackbody to the photosphere-subtracted 5-7.5 μm excess, including the L-band point from Bonnefoy et al. (2013). The best-fit blackbody temperature is 500 ± 20 K.
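The grain-number argument can be checked numerically; the sketch below evaluates Equation (3) in cgs units for the blackbody case (150 K, a = 5 μm) and scales to the observed 0.03 Jy. The result depends on the adopted temperature and size, so this only reproduces the order of magnitude quoted above.

import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
d = 19.6 * 3.086e18                        # distance to beta Pic in cm
a = 5.0e-4                                 # grain radius in cm (blackbody case)

def planck_nu(lam_cm, T):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    nu = c / lam_cm
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

F_grain = np.pi * a**2 * planck_nu(5.2e-4, 150.0) / d**2            # Eq. (3)
N_dust = (0.03 * 1.0e-23) / F_grain                                 # observed 0.03 Jy in cgs
F_20um = N_dust * np.pi * a**2 * planck_nu(20.0e-4, 150.0) / d**2   # predicted 20 um flux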
We thus cannot explain the spatially extended thermal emission at 5 μm out to ∼20 au with blackbody grains, because at temperatures of 100-160 K, they do not emit significant flux at 5 μm. The spatially extended dust seen at 5 μm is better explained by dust grains in the small-grain approximation. In the small-grain limit (2πa = λ) at 5 μm, the grains would have submicron radii and could be below the blowout size for the system (see Section 4.2 for a calculation of the blowout size).
The Detection of [Ar II] Emission
We searched for emission from atomic and molecular gas in our MRS spectrum of the unresolved point source. We did not have any clear detections of molecular gas emission (we searched for H2O, CO, CO2, and CH4). We do, however, detect an emission line at 6.986 μm that we believe to be due to [Ar II] (rest wavelength of 6.9853 μm; Yamada et al. 1985). To verify this emission line further, we reduced each dither position independently and found that the emission line is present in all four dither positions. We also did not detect the emission line in N Car's spectrum, suggesting that the emission line is not due to an instrument systematic or pipeline artifact. We fit the spatially unresolved component of the [Ar II] line with a Gaussian profile to determine its line flux, line width, and radial velocity. The argon emission line with the best-fit Gaussian is shown in Figure 5.
From the best-fit Gaussian, we measured the FWHM of the line to be 86 ± 7 km s⁻¹ and the line center to be at a velocity of 21.0 ± 2.4 km s⁻¹. At the end of channel 1C (7.65 μm), the spectral resolution of the MRS is ∼83 km s⁻¹ (Argyriou et al. 2023), so the [Ar II] line is consistent with being spectrally unresolved. The measured barycentric radial velocity of β Pic is 20.0 ± 0.7 km s⁻¹ (Gontcharov 2006), which is consistent with the center velocity of the [Ar II] line. We calculated a line flux of 2.4 × 10⁻¹⁴ erg s⁻¹ cm⁻².
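A sketch of the line fit, assuming lam_um and flux are the extracted wavelength and flux arrays (hypothetical names):

import numpy as np
from scipy.optimize import curve_fit

c_kms = 2.998e5
lam_rest = 6.9853   # [Ar II] rest wavelength in um (Yamada et al. 1985)

def gauss(lam, amp, cen, sig, cont):
    return cont + amp * np.exp(-0.5 * ((lam - cen) / sig) ** 2)

sel = (lam_um > 6.97) & (lam_um < 7.00)
p0 = (np.max(flux[sel]), 6.986, 0.001, np.median(flux[sel]))
popt, pcov = curve_fit(gauss, lam_um[sel], flux[sel], p0=p0)

rv = (popt[1] - lam_rest) / lam_rest * c_kms          # line-center radial velocity
fwhm_kms = 2.3548 * popt[2] / popt[1] * c_kms         # Gaussian FWHM converted to km/s
line_flux = popt[0] * popt[2] * np.sqrt(2.0 * np.pi)  # integrated Gaussian area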
We checked whether the emission from [Ar II] is spatially resolved, to determine if it is from circumstellar gas, by subtracting a slice in the spectral cube outside of the argon line (6.9824 μm) from the slice of the spectral cube at the peak of the [Ar II] line (6.9856 μm). Doing this subtracted off the PSF of the unresolved point source as well as the continuum emission from the dust and planet (β Pic b), so that only emission from [Ar II] gas remains. The continuum-subtracted cube slice of the [Ar II] emission is shown in Figure 6. In the 6.9856 μm slice, there is a ∼5-10σ detection of spatially resolved [Ar II] emission, where σ is the standard deviation of the pixels in an annulus at this wavelength slice, centered on the star, with an inner radius of 1″ and an outer radius of 2″.5. By visual inspection, the argon appears to have a similar spatial distribution as the dust (see Figure 6), indicating that the argon is indeed a part of the β Pic disk. To the best of our knowledge, this is the first detection of argon in the β Pic disk as well as in any debris disk. We used the line flux from the unresolved and resolved emission to calculate the [Ar II] mass using

$$M_{\rm [Ar II]} = \frac{4 \pi d^2 F m_s}{h \nu A_{ul} \chi_u}, \qquad (4)$$

where M_[Ar II] is the mass of [Ar II], d is the distance to the star, F is the line flux, h is Planck's constant, ν is the frequency of the light, A_ul is the Einstein A coefficient, χ_u is the fraction of atoms in the upper state, and m_s is the mass of argon. We used an Einstein A coefficient of 5.3 × 10⁻² s⁻¹ from Yamada et al. (1985). We calculated χ_u assuming the argon is in local thermodynamic equilibrium (LTE) and that it has an excitation temperature equal to the radiative equilibrium temperature profile shown for blackbody grains in Figure 4. These assumptions of the excitation temperature and LTE make the estimated [Ar II] mass highly uncertain because, with only one [Ar II] line, we have no knowledge of the excitation temperature of the gas or whether it is in LTE.
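Equation (4) translates directly into code; the sketch below uses cgs units and the numbers quoted in the text, with chi_u left as the assumed LTE input.

import numpy as np

h = 6.626e-27                       # erg s
d = 19.6 * 3.086e18                 # cm
F_line = 2.4e-14                    # erg s^-1 cm^-2
nu = 2.998e10 / 6.9853e-4           # line frequency in Hz
A_ul = 5.3e-2                       # s^-1 (Yamada et al. 1985)
m_ar = 40.0 * 1.661e-24             # argon atom mass in g

def mass_arII(chi_u):
    """Equation (4): optically thin [Ar II] mass for an assumed upper-state fraction."""
    return 4.0 * np.pi * d**2 * F_line * m_ar / (h * nu * A_ul * chi_u)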
Since we expect the gas temperature to change as a function of disk radius, we calculated the line flux separately for the spatially resolved [Ar II] component. We did this by using rectangular extraction apertures 2 pixels in height, with the width set by the 3σ contour of the [Ar II] emission shown in Figure 6. We placed these apertures centered at stellocentric distances of 10 and 15 au on both the northeast and southwest sides of the disk. We then fit the spectra from each aperture with a Gaussian and integrated the best-fit Gaussian to obtain a line flux, as described above. We then summed together the line fluxes from the two sides of the disk at the same stellocentric distance, giving a total [Ar II] line flux at 10 and 15 au. We then used Equation (4) and excitation temperatures at 10 and 15 au equal to the radiative equilibrium temperatures shown in Figure 4 (150 and 125 K, respectively) to calculate the [Ar II] mass at each stellocentric distance.
For the unresolved [Ar II] emission, the location and LTE temperature of the gas are unknown because the spectral resolution of the MRS is not high enough to resolve the gas kinematics. We therefore assumed that the unresolved [Ar II] has a temperature (180 K) corresponding to a stellocentric distance equal to the PSF FWHM at 6.986 μm, which is ∼7 au. This assumption is likely incorrect; however, we made it because it then gives an upper limit to the total gas mass in the unresolved point source. We then computed the total mass by summing the calculated mass at each stellocentric distance. This gives a total [Ar II] mass of 1 × 10⁻³ M⊕. Given the uncertainty on the inner edge of the argon disk and the excitation of the gas, there is likely at least 1 to 3 orders of magnitude of uncertainty on this [Ar II] mass.
β Pictoris b
We also searched for β Pic b in our binned PSF-subtracted spectral cubes, as it was predicted to be at an angular separation of 0″.54 on 2023 January 11, the day of our MRS observations (Lacour et al. 2021; Wang et al. 2021). We detect a point source in channels 1A, 1B, and 1C in the PSF-subtracted cubes. Slices from the PSF-subtracted cubes from all three subbands of channel 1 showing the detection of a point source are shown in Figure 7. We detect this point source throughout channels 1A, 1B, and about half of channel 1C (up to ∼6.8 μm). At longer wavelengths, thermal emission from the edge-on disk becomes too bright and we cannot recover the point source from the emission of the cospatial disk (see Figure 8). At the three different wavelength slices shown in Figure 7, we fit a 2D Gaussian to the point source and measured its separation and position angle from the PSF center of the star in each subband. We computed the median of the position angle and angular separation of the point source for the three image slices from the three different subbands and took their standard deviations to be the uncertainty. We measured a separation of the point source from the star of 0″.55 ± 0″.02 and a position angle of 31° ± 1°. The predicted location of β Pic b on the date of our observations, based on high-precision GRAVITY astrometry measurements, was a separation of 0″.540 ± 0″.003 and a position angle of 31°.571 ± 0°.006 (Lacour et al. 2021), which are consistent with the location of the detected point source in our PSF-subtracted spectral cubes.
We computed contrast curves by masking out the planet and then calculating the standard deviation (1σ) in concentric annuli of 2 pixel width in the binned PSF-subtracted image slices as a function of angular separation from the center of the PSF. We then divided these standard deviations as a function of angular separation by the spectrum of the unresolved point source to estimate the contrast performance. We did this both in the direction of the disk, to determine how well we recover β Pic b, and also outside of the disk, in order to show the general contrast performance of our PSF subtraction with the MRS. To compute the contrast curves in the direction of the disk, we included all columns of pixels that included disk emission at the 3σ level. The contrast curves in the direction of the disk and outside of the disk for the three wavelength slices in Figure 7 are shown in Figure 9. The contrast curves computed in the direction of the disk show that we achieve a 5σ contrast of ∼1 × 10⁻⁴ at a separation of 1″.0 at 5.3 and 5.7 μm. At 6.5 μm, we achieve a 5σ contrast of 3 × 10⁻⁴. The decrease in the contrast performance from 5.7 to 6.5 μm between separations of 0″.6-2″.5 seen in the left plot of Figure 9 is likely because the thermal emission from the disk is brighter and extended out to larger stellocentric distances at longer wavelengths. We also computed contrast curves with the extended disk emission masked out. This was done to estimate the general contrast performance of our PSF-subtraction technique with the MRS without contamination from the extended disk emission. These contrast curves are shown in the right panel of Figure 9. With the disk emission masked, our PSF subtraction and binning method achieves 5σ contrast levels of ∼1 × 10⁻⁴ at all three wavelengths at separations ≳1″.
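A sketch of the annulus-based contrast computation, for one binned PSF-subtracted slice (img) with known star center and unresolved point-source flux (hypothetical inputs):

import numpy as np

def contrast_curve(img, center, star_flux, width=2):
    """1-sigma noise in concentric annuli of 'width' pixels, normalized by the star."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    radii = np.arange(0, int(r.max()), width)
    sigma = np.array([np.nanstd(img[(r >= r0) & (r < r0 + width)]) for r0 in radii])
    return radii + width / 2.0, sigma / star_flux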
We estimated the contrast of β Pic b by dividing its spectrum, extracted from aperture photometry in Section 3.7, by that of the central unresolved point source (star + disk) from Section 3.1. We then compared this contrast to the calculated contrast curves shown in Figure 9 to estimate the S/N of β Pic b. The S/N of β Pic b at 5.3, 5.7, and 6.5 μm in our binned MRS image cubes is 5.1, 6.3, and 5.5, respectively.
Cross-correlation
We also searched for β Pic b using cross-correlation methods with a template planetary atmosphere spectrum. This method involves cross-correlating the spectrum of each spaxel of the data cube with a planetary atmosphere model containing various molecular absorption lines to search for molecules present in the atmosphere of the planet. This creates a cross-correlation map, where each spaxel in the IFU cube has one computed cross-correlation value. At the location of the planet in the spectral cube, where there are molecular absorption lines from its atmosphere, the cross-correlation function will peak, while the spaxels in the cube that do not contain a planet with the absorption features of the specific molecule being tested will appear as a featureless background.
For the atmosphere template spectra, we generated models with petitRADTRANS (Mollière et al. 2019). For each model we generated, we only included absorption from one molecular species, to search specifically for each individual species in the atmosphere of β Pic b. We searched for H2O, CO, CH4, CO2, and NH3 in all wavelength channels. For each molecule, we input a mass fraction of 1 × 10⁻⁴ into the atmosphere model. We tried a range of mass fractions from 1 × 10⁻³ to 1 × 10⁻⁶ and found that the result of the cross-correlation detections did not change for any molecule.
We first removed the continuum from the atmosphere model by fitting the model with a B-spline with 10 breakpoints and subtracting off this continuum fit, leaving only the molecular absorption lines. The same was done for the spectrum of each spaxel in the spectral cube, to remove the continuum. Once we removed the continuum of the atmosphere model and each spaxel, we cross-correlated the atmosphere model templates with the continuum-subtracted spectra of each spaxel to calculate a cross-correlation value. We then estimated the S/N per spaxel of any potential cross-correlation detection by dividing each spaxel's cross-correlation value by the standard deviation of the cross-correlation values of all the spaxels, similar to what is done by Mâlin et al. (2023).
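A sketch of the per-spaxel cross-correlation, assuming cube has shape (nlam, ny, nx) and the template is sampled on the same wavelength grid (names hypothetical); the B-spline continuum removal mirrors the 10-breakpoint fit described above.

import numpy as np
from scipy.interpolate import splrep, splev

def remove_continuum(lam, spec, n_break=10):
    """Fit a smooth B-spline through 'n_break' breakpoints and subtract it."""
    knots = np.linspace(lam[0], lam[-1], n_break)[1:-1]  # interior knots only
    tck = splrep(lam, spec, t=knots)
    return spec - splev(lam, tck)

tmpl = remove_continuum(lam, template_spectrum)
ccf = np.zeros(cube.shape[1:])
for iy in range(cube.shape[1]):
    for ix in range(cube.shape[2]):
        ccf[iy, ix] = np.dot(remove_continuum(lam, cube[:, iy, ix]), tmpl)
snr_map = ccf / np.std(ccf)  # per-spaxel S/N, as in Malin et al. (2023)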
This produces a S/N detection map where each spaxel in the image cube has a S/N value. Figure 10 shows the cross-correlation map of the full channel 1 cube with a petitRADTRANS atmosphere model containing H2O as the only absorbing molecular species. We find a bright pixel in the cross-correlation map at the same location as the brightest pixel of the point source recovered in the PSF-subtracted image slices. Out of all the molecules and MRS channels we tested, H2O in channel 1 was the only one that resulted in at least one pixel above a S/N value of 5. The bright pixel in our detection map has a S/N value of 5.7. Previous studies that use cross-correlation techniques with near-infrared ground-based data have detections of planets that cover multiple IFU spaxels (Hoeijmakers et al. 2018), while here we only detect one spaxel with a significant cross-correlation value, which lines up with the position of the planet from the PSF-subtracted image slices. Applying the same cross-correlation method with all the other molecular species listed above did not result in any spaxel in the cubes having a cross-correlation S/N greater than five in any of the MRS channels (1-4), so finding a spaxel with a S/N value of 5.7 using H2O does not appear to be random chance. Furthermore, with 1498 spaxels in the channel 1 data cube, it is unlikely that the cross-correlation method would have randomly selected the same spaxel as the point source in the PSF-subtracted cubes to have a significant cross-correlation value.
The detection of a point source through PSF subtraction at the predicted location of β Pic b based on the orbit fits from Lacour et al. (2021) in three MRS subbands, as well as the tentative detection of water through cross-correlation at the same location as the point source, provides enough confidence for us to conclude that we are in fact detecting β Pic b in the MRS data.
Spectrum of β Pictoris b
We performed aperture photometry on β Pic b in each slice of the binned PSF-subtracted spectral cubes to extract a spectrum of the planet. We found that the colocated spatially extended dust emission contaminates the spectrum of the planet. In order to determine the best strategy for subtracting the contribution of the dust, we performed an injection-and-recovery test using the N Car PSF. We first scaled the PSF of N Car in each wavelength slice such that the brightest pixel of the N Car PSF matched the brightest pixel of β Pic b in the first wavelength slice of each subband. In the last wavelength slice of each subband, the brightest pixel in the injected point source was within 5% of that of β Pic b. The spectrum of N Car was not changed aside from the scaling. We then injected the scaled N Car PSF on the opposite side of the disk at the same separation as β Pic b before performing PSF subtraction. We PSF subtracted each spectral cube with N Car injected in the same way as described in Section 2.3 and produced a binned and PSF-subtracted spectral cube, which is shown in Figure 11.
We performed aperture photometry on the injected planet and subtracted off the disk background by placing a background aperture of the exact same size at a further separation of 0″.8 on the same side of the disk as the injected planet. This location of the background aperture was chosen because it is the closest location to the injected planet without the extraction and background apertures overlapping. We used an aperture radius of 0.5 times the FWHM of the PSF because we found that using a larger aperture includes more thermal emission from the disk, which results in a worse recovery of the injected spectrum. We created our own aperture correction for this aperture size using N Car. We binned the spectral cube of N Car the same way as for the PSF-subtracted cubes, and then extracted the spectrum of N Car the same way as described above for the unresolved point source of β Pic. We also extracted the binned spectrum of N Car using an aperture of 0.5 times the FWHM and divided the two to derive an aperture correction for an aperture size of 0.5 times the FWHM. Using this background-subtraction method and extraction aperture size, we were able to recover the general shape of the spectrum of the injected point source; however, we were unable to recover the absolute flux density values. The injected and recovered spectra for channel 1B are shown in Figure 12. At the same wavelength at which we were unable to detect β Pic b (∼7 μm), we were unable to recover the injected point source as well (see Figure 11). This is potentially because beyond this wavelength, the thermal emission from the disk is bright enough that it dominates over the planet flux, and we cannot recover the planet at longer wavelengths. We calculated the difference between the injected and recovered spectra and then subtracted this off of the extracted spectrum of β Pic b. This was to remove the excess flux from the cospatial disk that was not subtracted with the background aperture, as indicated by the injection-and-recovery test.

Figure 9. Left: contrast curves in the direction of the disk from the binned PSF-subtracted image cubes at wavelength slices in channels 1A, 1B, and 1C (the same slices shown in Figure 7). The red point shows the contrast of β Pic b calculated at 5.7 μm from our data. We do not show the contrast of β Pic b at the other two wavelengths for clarity. Right: contrast curves from the binned PSF-subtracted image cubes at the same wavelength slices as in the left plot but over the entire field of view, with the extended emission from the disk masked. This shows the contrast performance of our PSF subtraction with the MRS without contamination from the extended disk emission.
We used the same background aperture, the same aperture size, and the same aperture corrections for extracting the spectrum of β Pic b as we did for the injected fake planet. We placed the extraction aperture on the center of β Pic b measured by the 2D Gaussian fit for each subband. We estimated the uncertainty on the spectrum of β Pic b from the contrast curves shown in Figure 9. That is, the standard deviation of the annulus containing the separation of β Pic b in each wavelength slice was used as the uncertainty. The final spectrum of β Pic b, along with an ExoREM (Charnay et al. 2018; Blain et al. 2021) planetary atmosphere model, is shown in Figure 13.
Atmospheric Model Fitting
We used the Species package (Stolker et al. 2020) along with three different sets of model grids of synthetic spectra to fit the spectra of β Pic b. The first was the ATMO model grid (Tremblin et al. 2015) presented in Petrus et al. (2023). The second model grid we used was the DRIFT-PHOENIX model grid (Helling et al. 2008), and the third was the ExoREM grid (Charnay et al. 2018; Blain et al. 2021). The model parameters and the prior ranges for all three model grids are shown in Table 1. We used uniform priors for each parameter in each of the model grids. In the model fitting, we included our MRS spectrum, the GRAVITY K-band spectrum from GRAVITY Collaboration et al. (2020), and the GPI YJH spectra from Chilcote et al. (2017). We also included photometric measurements from Magellan and Gemini NICI (Males et al. 2014) and VLT/NACO (Bonnefoy et al. 2013). Since we were unable to recover the absolute flux density of the injected planet in the injection-and-recovery test because of the cospatial thermal dust emission, we left the overall flux scaling of the MRS spectrum as a free parameter in the model fitting, similar to what was done in Kammerer et al. (2021) to align spectra from GPI and GRAVITY for the substellar object HD 206893 B. We weighted each spectrum and photometric point in the log-likelihood function in the Species model fitting such that each data set, spectral and photometric, had the same weighting. This was to ensure that the model fitting is not dominated by the GRAVITY spectrum, which contains the most data points. We then inferred the posterior distribution of the parameters using nested sampling with PyMultiNest (Feroz et al. 2009; Buchner et al. 2014; Feroz et al. 2019). The MRS spectrum of β Pic b is shown in Figure 13 with the best-fit ExoREM model. The posterior distributions for the model fits are shown in the Appendix.
Figure 14 shows the best-fit DRIFT-PHOENIX, ATMO, and ExoREM models to all of the spectra and photometry points. Table 1 shows the best-fit model parameters for each of the model grids. The ATMO and DRIFT-PHOENIX model grids give measurements of log(g) and effective temperature that are consistent within the uncertainties. The ExoREM grid gives an effective temperature that is lower by ∼200 K than the other two model grids. Fits with ExoREM models have given lower effective temperatures than other model grids in other fits in the literature as well (e.g., GRAVITY Collaboration et al. 2020; Kammerer et al. 2021). The three model grids produce inconsistent metallicities. The ATMO model grid gives a best-fit metallicity that is subsolar, while the DRIFT-PHOENIX model grid gives a supersolar metallicity and the ExoREM grid gives a larger supersolar metallicity (see Table 1). The uncertainty in the metallicity from the ATMO grid does include solar metallicity, although the uncertainties of the metallicity between the three model grids do not overlap, suggesting that we do not constrain the metallicity of the planet well with this method of grid model fitting.
To test how the new MRS data affect the constraints on the atmosphere of β Pic b from our modeling, we repeated the grid model fitting excluding the MRS spectrum. We find that the addition of the MRS spectrum does not significantly change the results from the grid model fitting. All of the atmospheric parameters from fitting each of the three model grids are consistent with each other (within the uncertainties) whether we performed the model fits with or without the MRS spectrum. The C/O ratio from the ExoREM and ATMO model fits is similarly not significantly changed. Excluding the MRS spectrum with the ExoREM grid yielded a C/O ratio of $0.38^{+0.10}_{-0.02}$, which is consistent with the C/O ratio obtained when including the MRS spectrum (see Table 1). Excluding the MRS spectrum from the ATMO model grid fit gives a C/O ratio of $0.38^{+0.10}_{-0.06}$, which is consistent with the ATMO fit that included the MRS spectrum. The shape of the MRS spectrum of β Pic b is therefore consistent with what is predicted by the atmosphere model fits to the near-infrared spectra and photometry.
β Pictoris b's Atmosphere Composition
The composition of a giant planet's atmosphere, usually the C/O ratio, can potentially be used as a tracer to infer formation mechanisms and location in the parent protoplanetary disk (Öberg et al. 2011). There are two main hypotheses for forming a giant planet. The first is through gravitational collapse, which is similar to star formation (Bodenheimer 1974). In this scenario, a region of the protoplanetary disk becomes gravitationally unstable and quickly collapses to form a planet that then cools over time. The second is core accretion, where a solid core is formed that slowly accretes gas from the surrounding disk until the mass of the accreted gas is similar to that of the core. Then, a phase of runaway gas accretion occurs and the planet accretes a significant amount of gas over a short period (Lissauer & Stevenson 2007). As discussed by GRAVITY Collaboration et al. (2020), in the gravitational collapse scenario, the C/O ratio of the planet is expected to match that of the host star because the formation happens quickly and all solid and gaseous material in the disk that collapses into the planetary atmosphere has a combined stellar C/O ratio. The core accretion scenario takes longer than the gravitational collapse scenario, which provides more time for the accretion of solid materials, including planetesimals. In the core accretion scenario, without enrichment from solids before the runaway gas accretion phase, the atmosphere of the planet is not comprised of a combination of gas and solids as in the gravitational collapse scenario, but is composed only of the gas. Therefore, without the enrichment of solids, the C/O ratio of the atmosphere in this scenario is expected to match the C/O ratio of the gas in the disk, which is superstellar. However, with enrichment from solids before the runaway gas accretion stage, the C/O ratio of the atmosphere can be lowered to substellar values, as there is more time to accrete solid material in the core accretion scenario than in the gravitational collapse scenario (GRAVITY Collaboration et al. 2020).

Figure 12. Channel 1B spectrum of the injected point source (orange) and the spectrum we recover from the injected point source (blue). We generally recover the overall slope of the injected spectrum; however, we are unable to recover the absolute flux density of the spectrum. Even after subtracting off the flux density in a background aperture in an attempt to remove contamination from the spatially extended disk, the recovered spectrum still has a greater flux density than the injected spectrum by a factor of ∼1.3-1.4.

Figure 13. MIRI MRS spectrum of β Pic b (black) with the best-fit ExoREM model overplotted (blue) and a best-fit blackbody (orange). The broad absorption bands from 5.1 to 6.0 μm and from 6.4 to 6.8 μm are due to absorption from water vapor. The local maximum in the spectrum at 6.3 μm is due to a relative absence of water vapor lines near this wavelength. The spectrum is better fit by the atmosphere model that contains water absorption than by the blackbody with no molecular features, since the blackbody cannot reproduce the shape of the spectrum between 6 and 6.5 μm. The signs of water absorption in the spectrum are consistent with the tentative detection of water from the cross-correlation technique.

Figure 14 (caption excerpt). Middle: same as the top panel but with the best-fit model from the ATMO grid. Bottom: same as the top panel but with the ExoREM grid. The gray spectral models are sampled from the posterior distribution. The residuals from the best-fit model are shown in the bottom panel of all plots, with the two dotted lines representing ±5σ residuals. All three models appear to fit well, but the ExoREM grid yields a systematically lower temperature than the other two.

Table 1 note. The χ²_red value reported here is for all the photometry and spectra shown in Figure 14, not just the MRS spectra.
The wavelength range of our MRS spectrum of β Pic b contains a water absorption band, which, along with the CO band in the GRAVITY spectrum of β Pic b, could potentially help constrain the C/O ratio of the planet's atmosphere. Our modeling with the ATMO grid of all the spectra and photometry in Figure 14 yields the C/O ratio reported in Table 1. Using the C/O ratio of a planetary atmosphere to infer its formation history requires knowledge of the C/O ratio of the host star; however, there is no published value for the C/O ratio of β Pic in the literature. Because of this, we compared to the C/O ratio measured for the β Pic moving group member HD 181327, as in Reggiani et al. (2024). The C/O ratio of HD 181327 was used as a proxy for β Pic since the two stars likely formed out of the same molecular cloud. The C/O ratio of HD 181327 is 0.62 ± 0.08 (Reggiani et al. 2024). The C/O ratio for β Pic b we obtained from the ATMO grid fitting is substellar and does not contain the stellar C/O in its uncertainty. Similarly, the best-fit C/O ratio from the ExoREM grid does not contain the stellar C/O. The C/O ratios from the two model grids are within the uncertainties of each other and suggest a substellar C/O ratio for the planet.
The two C/O ratios we inferred from the grid model fitting are both consistent within the uncertainties with the C/O ratio measured from a free retrieval of the GRAVITY and GPI spectra of β Pic b (GRAVITY Collaboration et al. 2020).
Dust Accretion onto β Pictoris b and c
The infrared excess seen at 5 μm suggests that there is dust produced in the inner few astronomical units of the system. The dust emitting at 5 μm that we see to be spatially extended out to ∼20 au in the PSF-subtracted image slices is best explained by grains in the small-grain limit (2πa = λ). This spatially extended dust seen at 5 μm is then likely to be below the blowout size for the system (see Figure 15 for the blowout size) and can be driven outwards over time by radiation pressure. The blackbody stellocentric distance for the 500 K dust (from our best-fit blackbody to the 5-7.5 μm excess) is 0.9 au. If small dust grains are being produced at 0.9 au and are then radiatively driven outwards, some will likely collide with the planets β Pic b and β Pic c, which have semimajor axes of 9.9 ± 0.05 au and 2.72 ± 0.02 au (Nowak et al. 2020), respectively. We used our MIRI MRS data to perform an order-of-magnitude estimate of the dust accretion rate from the inner hot dust onto the two known planets in the β Pic system.
To estimate the dust accretion rate onto β Pic b and β Pic c, we first calculated the dust mass from the 5 μm unresolved excess using the following equation from Lisse et al. (2009):

$$F_\lambda = \frac{S}{D^2} \int_{a_{\rm min}}^{a_{\rm max}} \pi a^2\, Q_{\rm abs}(\lambda, a)\, B_\lambda(T)\, \frac{dn}{da}\, da, \qquad (5)$$

where F_λ is the flux density from the dust at a given wavelength, D is the distance to the star (19.6 pc; Gaia Collaboration et al. 2023), B_λ(T) is the blackbody function at a given temperature T, Q_abs(λ) is the absorption coefficient of the material at a given wavelength, S is a scaling factor for the particle size distribution, a is the grain radius, and dn/da is the particle size distribution. We assumed the particle size distribution that is expected for collisional equilibrium, $dn/da \propto a^{-3.5}$ (Dohnanyi 1969). We assumed a dust temperature of 500 K based on the fit to the 5-7 μm excess, and for the absorption coefficient, we calculated Q_abs(λ) assuming Mie theory and the optical constants of amorphous olivine (Mg2SiO4) from Jäger et al. (2003) for the different dust grain sizes, spanning a_min = 0.03 μm to a_max = 10 μm. Because we do not know the composition of the dust grains producing the 5 μm excess, we assume they are amorphous silicate grains, since these were detected by Spitzer around β Pic (Lu et al. 2022). Because of this uncertainty in composition, we also repeated this calculation assuming the optical constants of amorphous pyroxene.

Figure 15 (caption excerpt). The black dashed horizontal line shows where β = 0.5. Grain sizes with β > 0.5 become unbound and are blown out of the system. The vertical black dashed-dotted line shows a = λ/(2π) for λ = 5 μm, which is a = 0.8 μm. The small-grain limit is defined as 2πa = λ.
At 5 μm, the flux density from the excess dust emission is 0.2 Jy (see Figure 2). We determined a dust mass estimate by scaling the particle size distribution (using the scaling factor S) such that the dust flux density calculated on the right-hand side of Equation (5) is equal to the observed dust flux density at 5 μm. We then calculated the dust mass by integrating over the scaled particle size distribution, using

$$M = S \int_{a_{\rm min}}^{a_{\rm max}} \frac{4}{3} \pi a^3\, \rho\, a^{-3.5}\, da, \qquad (6)$$

where M is the dust mass, S is the scaling factor from Equation (5), ρ is the bulk density, and a is the grain radius. The (4/3)πa³ term is the volume of a spherical grain, and a^{-3.5} is from the assumed particle size distribution. We assumed a bulk density of ρ = 3.3 g cm⁻³. Integrating Equation (6) gives a dust mass from the 5 μm excess of 7 × 10²¹ g, assuming that all of the excess is produced by olivine. Doing the same calculation for pyroxene grains gives a dust mass of 1 × 10²² g.
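A sketch of the mass estimate (Equations (5) and (6)) in cgs units; qabs_5um is a hypothetical array of Mie absorption efficiencies Q_abs(5 μm, a) for the olivine optical constants, which we do not compute here.

import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16
D = 19.6 * 3.086e18                                         # cm
a = np.logspace(np.log10(0.03e-4), np.log10(10.0e-4), 200)  # grain radii in cm
dnda = a ** (-3.5)                                          # Dohnanyi (1969) distribution

def planck_nu(lam_cm, T):
    nu = c / lam_cm
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# Equation (5): flux per unit scaling S at 5 um for T = 500 K grains.
flux_per_S = np.trapz(np.pi * a**2 * qabs_5um * planck_nu(5.0e-4, 500.0) * dnda, a) / D**2
S = (0.2 * 1.0e-23) / flux_per_S                            # match the observed 0.2 Jy

# Equation (6): integrate grain volume times bulk density over the scaled distribution.
rho = 3.3                                                   # g cm^-3
mass_g = S * np.trapz((4.0 / 3.0) * np.pi * a**3 * rho * dnda, a)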
We next calculated the blowout size for olivine and pyroxene silicate grains around β Pic. The ratio of the radiation force to the gravitational force can be written as

$$\beta = \frac{F_{\rm rad}}{F_{\rm grav}} = \frac{3 L_* \langle Q_{\rm pr} \rangle}{16 \pi G M_* c\, \rho a}, \qquad (7)$$

where L_* is the stellar luminosity, M_* is the mass of the star, c is the speed of light, G is the gravitational constant, a is the radius of the dust grains, ρ is the bulk density of the material, and ⟨Q_pr⟩ is the average radiation pressure coupling coefficient for a given material. We used a mass of β Pic of 1.75 M☉ (Crifo et al. 1997). ⟨Q_pr⟩ is given as

$$\langle Q_{\rm pr} \rangle = \frac{\int Q_{\rm pr}(\lambda)\, F_\lambda\, d\lambda}{\int F_\lambda\, d\lambda}, \qquad (8)$$

where F_λ is the flux density of the star at a given wavelength.
For F_λ, we used the photosphere model from Lu et al. (2022), and we calculated Q_pr assuming Mie theory and amorphous olivine and pyroxene grains. Dust grains with β-values >0.5 become unbound and are blown outwards, setting the blowout size for the system (Krivov et al. 2006). Figure 15 shows the calculated β-values as a function of grain size, along with a horizontal dashed black line indicating where β = 0.5. Grain sizes with β-values above the horizontal black line are blown out of the system, which sets the range of grain radii blown out of the system to be between 0.03 and 1.14 μm, assuming olivine grains. For pyroxene grains, the blowout range is between 0.03 and 1.24 μm.
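The β calculation (Equations (7) and (8)) is a ratio of two integrals. In the sketch below, qpr_lam is a hypothetical array of Mie radiation-pressure efficiencies and f_lam the photosphere model flux density on the wavelength grid lam_cm (cgs):

import numpy as np

G, c = 6.674e-8, 2.998e10
L_star = 8.13 * 3.828e33      # erg s^-1 (Bonnefoy et al. 2013)
M_star = 1.75 * 1.989e33      # g (Crifo et al. 1997)
rho = 3.3                     # g cm^-3

def beta_grain(a_cm, lam_cm, f_lam, qpr_lam):
    q_avg = np.trapz(qpr_lam * f_lam, lam_cm) / np.trapz(f_lam, lam_cm)         # Eq. (8)
    return 3.0 * L_star * q_avg / (16.0 * np.pi * G * M_star * c * rho * a_cm)  # Eq. (7)

# Scanning beta_grain over a grid of radii brackets the unbound (beta > 0.5) range,
# i.e., the 0.03-1.14 um blowout range quoted for pure-Mg olivine.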
Using the β-values of the dust grains, we calculated their terminal velocity and the time it takes for the dust grains to reach the planets, assuming that their initial location is at the blackbody distance of the 5 μm excess (0.9 au, since we do not know the physical location of the grains producing the unresolved excess). We also assumed that the dust has a distribution that is azimuthally symmetric, and that the dust grains travel at their terminal velocity until they accrete onto the planets. We calculated the terminal radial velocity of the dust grains using the expression from Su et al. (2005)

$$v_r = \sqrt{\frac{(2\beta - 1)\, G M_*}{r_{\rm init}}}, \qquad (9)$$

where v_r is the terminal radial velocity and r_init is the stellocentric distance where the dust grains are released (we assumed the minimum blackbody distance of r_init = 0.9 au).
To calculate the dust accretion rate onto β Pic b and β Pic c, we binned the scaled particle size distribution we determined above into radius bins of length 0.1 μm from 0.03 to 1.14 μm (the blowout range for olivine) and calculated a dust mass in each radius bin by integrating Equation (6) over each radius bin. We then calculated the terminal velocity of the dust grains in each radius bin by calculating β for each bin center radius using Equation (7). Assuming that the dust grains start at 0.9 au, and using the semimajor axes of β Pic b and β Pic c of 9.9 and 2.72 au, respectively, we then calculated the time it takes for the dust to travel from its initial location to the distance of the planets, assuming that the grains travel the entire distance at the terminal velocity. We checked this assumption by integrating the equation of motion of a dust grain in orbit around β Pic in two dimensions, assuming that the dust grain starts out on a circular orbit with Keplerian velocity at 0.9 au. The force in the radial direction for a dust grain under gravitational and radiation forces, as shown in Lisse et al. (1998) and Krivov et al. (2006), is

$$F_r = -\frac{(1 - \beta)\, G M_* m}{r^2}, \qquad (10)$$

where r is the distance from the dust grain to the central star and m is the grain mass.
By numerically integrating Equation (10) in two dimensions with a β-value of β = 0.5, we found that the dust grain reaches its terminal velocity in about a tenth of the time it takes to reach the semimajor axis of β Pic b, and in about 40% of the time it takes to get to β Pic c. Dust grains with larger β-values reach their terminal velocity more quickly. Thus, the assumption that the dust grains travel the entire distance at the terminal velocity likely does not have a large effect on the estimated dust accretion rate.
We estimated the dust accretion rate in each radius bin of the particle size distribution using

$$\dot{M} = \frac{M_d}{t}\, \frac{\pi R_p^2}{\Omega d^2}, \qquad (11)$$

where Ṁ is the accretion rate, M_d is the dust mass in each size bin, R_p is the planet radius, t is the time it takes the dust in each radius bin to reach the planet from its origin location, d is the distance between the dust origin radius and the planet semimajor axis, and Ω is the solid angle subtended by the disk. We used planetary radii of 1.36 R_J for β Pic b (GRAVITY Collaboration et al. 2020) and 1.2 R_J for β Pic c (Nowak et al. 2020). We summed this equation across all radius bins in the particle size distribution to get a total dust accretion rate.
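The per-bin sum reduces to a few lines. In this sketch, m_bin and beta_bin are hypothetical arrays of the dust mass and β in each size bin (with β > 0.5 so Equation (9) is real-valued):

import numpy as np

G = 6.674e-8
M_star = 1.75 * 1.989e33
au = 1.496e13
r_init, a_planet = 0.9 * au, 9.9 * au     # release radius and beta Pic b semimajor axis
R_p = 1.36 * 7.149e9                      # 1.36 R_J in cm
Omega = 1.92                              # disk solid angle in sr

v_r = np.sqrt((2.0 * beta_bin - 1.0) * G * M_star / r_init)   # Eq. (9)
d = a_planet - r_init                     # origin-to-planet distance
t = d / v_r                               # travel time at terminal velocity
mdot = np.sum(m_bin / t * np.pi * R_p**2 / (Omega * d**2))    # Eq. (11), summed over bins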
To estimate Ω, we first measured the vertical profile of the spatially resolved dust at 5 μm by collapsing the disk along the midplane to create a single integrated vertical profile of the disk with a high S/N. Then, we fit the vertical profile with a Lorentzian to measure the FWHM of the disk. The disk vertical structure and the best-fit Lorentzian are shown in Figure 16. To measure the FWHM of the disk, we subtracted in quadrature the FWHM of the PSF (measured as described above) from the best-fit FWHM of the Lorentzian. The FWHM of the disk measured from the best-fit Lorentzian, after deconvolving with the PSF FWHM, is 11.5 ± 0.3 au.
We calculated the solid angle Ω using the vertical profile of the 5 μm spatially extended disk determined above. We calculated the opening angle of the 5 μm disk by using half the FWHM of the vertical profile (11.5 au) and the outer radius of the disk (20 au) and then solving tan θ = h/r, where h is 5.75 au and r is 20 au. This gives an opening angle of θ = 16°. We then integrated the differential solid angle dΩ = sin θ dθ dφ over the opening angle and over 2π radians azimuthally. This gives a solid angle of Ω = 1.92 sr. Then, using Equation (11), we calculated dust accretion rates for β Pic b and β Pic c of Ṁ = 2 × 10⁻¹⁷ M_J yr⁻¹ and Ṁ = 2 × 10⁻¹⁵ M_J yr⁻¹, respectively, assuming olivine dust grains. Repeating the calculation with pyroxene dust grains gives dust accretion rates onto β Pic b and β Pic c of Ṁ = 3 × 10⁻¹⁷ M_J yr⁻¹ and Ṁ = 3 × 10⁻¹⁵ M_J yr⁻¹, respectively.
We also performed this calculation for an Fe/Mg ratio of 1:1 for olivine (MgFeSiO4) and pyroxene (Mg0.5Fe0.5SiO3), and we found that the β-values increase for these more iron-rich grains compared to the pure-magnesium grains. For pyroxene with a composition of Mg0.5Fe0.5SiO3, the β-values increased by about a factor of 1.4 from those shown in Figure 15. The blowout size for Mg0.5Fe0.5SiO3 is a ≈ 1.8 μm. For this composition of 50% Fe and Mg, the value of β does not come back below 0.5 at small particle sizes as it does for MgSiO3 and Mg2SiO4 (as Figure 15 shows). Instead, all particles below 1.8 μm are blown outwards, so we have to estimate the smallest grains in the particle size distribution to set a lower limit on the particle sizes present in the disk. Krijt & Kama (2014) estimated the smallest particle size in a debris disk and related it to the blowout size with the equation

$$\frac{s_{\rm min}}{s_{\rm blow}} = \frac{16 \pi\, \gamma\, a\, c}{\eta f^2 L_*}, \qquad (12)$$

where s_min is the smallest grain size, s_blow is the blowout size, a is the semimajor axis of the dust, L_* is the luminosity of the star, f is the ratio of the relative velocity of the colliding bodies to their Keplerian velocity, η is the fraction of the precollision kinetic energy of the colliding bodies that is converted into making new surface, and γ is the surface energy per unit surface area of the material. As done in Krijt & Kama (2014), we assumed that the collisions are below the hypervelocity regime, with f = 10⁻² and η = 10⁻². We used γ = 0.05 J m⁻² for silicate grains (Krijt & Kama 2014), and we assumed the dust grains are at the blackbody distance inferred from the blackbody fit above of 0.9 au. We find that for silicate grains, the minimum dust particle radius in the disk is 0.028 μm. We then used this size as the minimum particle size for the more iron-rich grains (MgFeSiO4 and Mg0.5Fe0.5SiO3) and calculated the dust accretion rate using the same method as above. We estimated a dust accretion rate onto β Pic b of 6 × 10⁻¹⁷ M_J yr⁻¹ for Mg0.5Fe0.5SiO3 and 5 × 10⁻¹⁷ M_J yr⁻¹ for MgFeSiO4. For β Pic c, we obtained dust accretion rates of 6 × 10⁻¹⁵ M_J yr⁻¹ and 5 × 10⁻¹⁵ M_J yr⁻¹ for Mg0.5Fe0.5SiO3 and MgFeSiO4, respectively. In this calculation, we did not assume that the grains have any porosity; however, Arnold et al. (2019) showed that for silicate grains with 97.5% porosity around β Pic, the blowout size increases to ∼10 μm, meaning that more dust particles are blown outwards, increasing the dust accretion rates onto the planets.
Given that the masses of these planets are 9.0 ± 1.6 M_J for β Pic b and 8.2 ± 0.8 M_J for β Pic c (Nowak et al. 2020), these dust accretion rates do not contribute significantly to the total masses of these planets, even over the ∼23 Myr age of the system. If, however, the small dust grains that are accreted can remain aloft in the planet's atmosphere, they could potentially impact the observed spectra of the planet at near-infrared and mid-infrared wavelengths. Based on near-infrared photometry of β Pic b, Bonnefoy et al. (2013) found that the atmosphere of β Pic b is likely dusty, which suggests that there could be an absorption feature in the spectrum of β Pic b due to silicates at 10 μm. However, since the thermal emission of the disk at 10 μm dominates over the flux of the planet in our MRS data (see Figure 8), we were unable to search for silicate absorption in the atmosphere of β Pic b. Based on these dust accretion rate estimates, we expect the spectrum of β Pic c to be more affected by dust than that of β Pic b, although we cannot test this here because we cannot recover β Pic c with the MRS, since it is too close to the host star.
[Ar II]
The detection of circumstellar [Ar II] emission around β Pic with the MRS is surprising, so here we speculate on possible explanations for this detection of argon. Similar to the dust in the disk, the gas around β Pic is also subject to blowout from radiation pressure (Beust et al. 1989). The radiation pressure for the atomic gas around β Pic depends on the number and strength of ground-state transitions for a given species (see Fernández et al. 2006). Fernández et al. (2006) found that singly ionized argon experiences zero radiation force and is not subject to radiative blowout because it does not have any ground-state transitions in the 0.1-5 μm range. It is then possible for Ar II to accumulate over time in the β Pic disk if it is continuously produced through the destruction of minor bodies in the system. Other atomic species detected around β Pic are C, O, Na, Mg, Al, Si, S, Ca, Cr, Mn, Fe, and Ni (Brandeker et al. 2004; Roberge et al. 2006), which are thought to be of secondary origin (Fernández et al. 2006), meaning not from the protoplanetary disk but rather created from the destruction of minor bodies. Similarly, the molecular CO in the β Pic disk is thought to be of secondary origin (Matrà et al. 2017).
If the argon is also produced from the destruction of minor bodies, then it could persist in the disk since it does not feel radiation pressure. If we assume that the argon production rate is constant over the age of the system (∼20 Myr), we estimate an argon production rate of 3 × 10^14 kg of argon per year to produce the total argon mass seen with the MRS. For reference, this is roughly equivalent to producing the mass of Halley's Comet (2.2 × 10^14 kg; Hughes 1985) in Ar II each year in the β Pic system. This estimate is highly uncertain given the uncertainties in the argon mass and the likely incorrect assumption that the gas can persist over millions of years in the disk because it does not feel radiation pressure.
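The steady-production estimate is simple enough to reproduce in a few lines. In the sketch below, the total Ar II mass is an illustrative value back-solved from the quoted rate and the assumed ∼20 Myr age, not a number taken from the paper's tables:

```python
yr_age = 20e6                 # assumed system age [yr]
M_ar_total = 6e21             # illustrative total Ar II mass [kg],
                              # back-solved from the quoted 3e14 kg/yr rate
M_halley = 2.2e14             # mass of Halley's Comet [kg] (Hughes 1985)

rate = M_ar_total / yr_age    # constant-production assumption
print(f"production rate ~ {rate:.0e} kg/yr "
      f"~ {rate / M_halley:.1f} Halley masses per year")
```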
In the solar system, minor bodies such as comets and meteorites have been found to contain argon. The argon found in solar system meteorites is thought to be primordial presolar gas that has become "trapped" within the matrices of meteorites (Huss et al. 1996; Patzer & Schultz 2002; Vogel et al. 2003). Argon has also been detected in the comae of comets 67P/Churyumov-Gerasimenko (Balsiger et al. 2015) and Hale-Bopp (Stern et al. 2000). Argon sublimates at a temperature of 40 K, so the detection of argon in comet 67P/Churyumov-Gerasimenko was interpreted to mean that the comet formed in a cold outer region of the protosolar nebula (Balsiger et al. 2015). If the minor bodies in the β Pic system also contain trapped primordial argon like those in the solar system, it is possible that the argon detected around β Pic is from the destruction of minor bodies.
Although the production of argon in the β Pic system could potentially be explained by the destruction of minor bodies, the mechanism of ionization is unclear. Using a photoionization code, Fernández et al. (2006) found that argon should remain mostly neutral throughout the β Pic disk, with an ionization fraction on the order of 10^-6. Their calculation considered UV photons from the star, the interstellar radiation field, and ionization from cosmic rays, and they found that the UV radiation field dominated the ionization. They did not consider, however, coronal X-ray emission from β Pic, which was detected by Hempel et al. (2005) and Günther et al. (2012). The ionization fraction of argon predicted by Fernández et al. (2006) would require an unphysically large amount (thousands of Earth masses) of neutral argon in the disk to produce the detected [Ar II] emission seen with the MRS. The ionization of neutral argon could then potentially be due to X-ray emission from the corona of β Pic. However, confirmation of this would require a photoionization model that considers coronal emission from the star, which is beyond the scope of this paper.
Conclusion
As a part of JWST GTO program 1294, we present MIRI MRS observations of the β Pic exoplanetary system. Our main findings are:
1. We detect an infrared excess from the unresolved point source from 5 to 7 μm that is best fit by a 500 K blackbody, indicating the presence of hot dust in the inner few astronomical units of the system.
2. Through PSF subtraction, we detect a spatially resolved dust population emitting at 5 μm and extending out to stellocentric distances of ∼20 au. This dust population is best explained by small subblowout-sized grains, because larger blackbody grains in radiative equilibrium are not hot enough between 10 and 20 au to emit significant radiation at 5 μm.
3. We detect and spatially resolve circumstellar [Ar II] emission for the first time around β Pic. This is also the first argon detection in a debris disk.
4. The unresolved and resolved hot dust populations suggest that dust is produced in the inner few astronomical units of the system, where the subblowout-sized grains are radiatively driven outward and could accrete onto β Pic b and β Pic c. We use our MRS data to estimate dust accretion rates onto β Pic b and β Pic c of Ṁ ∼ 10^-17 M_J yr^-1 and Ṁ ∼ 10^-15 M_J yr^-1, respectively.
5. We detect β Pic b with both PSF subtraction and cross-correlation with atmosphere models. We find that water is the only molecule we detect in cross-correlation. We also present the first mid-infrared spectrum of the planet from ∼5 to 7 μm, which includes a water absorption band.
6. Grid model fitting for β Pic b reveals that the metallicity of the planet is entirely model dependent and that the C/O ratio is likely between 0.3 and 0.5. The atmospheric modeling with both the ExoREM and ATMO model grids appears to favor a substellar C/O ratio for β Pic b.
Figure 1. Left: MRS spectrum of the β Pic unresolved point source in channels 1-4. The feature from ∼8 to 12 μm is emission from silicate grains. Right: channel 1 MRS spectrum of the β Pic central point source compared with a stellar photosphere model of β Pic from Lu et al. (2022) that was best fit to a near-infrared spectrum. This shows an infrared excess from 5 to 7 μm, which can be explained by hot dust in the system. The vertical dashed gray line shows the detection of [Ar II] emission at 6.986 μm. The other spikes in the spectrum do not appear to be real emission lines: they each cover only one wavelength bin, unlike the [Ar II] line, which covers five. Furthermore, these spikes appear only after applying the relative spectral response function (RSRF) and are well aligned with noise features seen in the spectrum of N Car; they therefore likely result from noise injected by performing the RSRF with N Car. The [Ar II] line, however, is present both before and after applying the RSRF.
Figure 3. Binned 5.2 μm slice of the PSF-subtracted spectral cube showing the extended emission from hot dust. We apply an image normalization with v_min = 200.
Figure 4. Dust temperature as a function of stellocentric distance from β Pic for dust grains in the small-grain approximation (black solid line) and in the blackbody approximation (black dashed line). The red shaded area shows the approximate location of the spatially resolved dust seen at 5 μm (based on where the flux drops below a 5σ detection).
Figure 5. [Ar II] emission line at 6.986 μm from the β Pic unresolved point source with the best-fit Gaussian overlaid. The best-fit blackbody, as shown in Figure 2, was subtracted from the spectrum in this figure.
We used the [Ar II] emission to estimate the total mass of Ar II. Under the assumption that the emission is optically thin, the total mass of Ar II is related to the line flux by an equation of the standard optically thin form.
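The equation itself is truncated in this excerpt. As a sketch of what such an optically thin estimate typically looks like (not necessarily the exact expression used in the paper), the standard relation M = 4π d² F_line m_ion / (A_ul h ν x_u) can be evaluated as below. The Einstein coefficient is an approximate literature value for the 6.985 μm [Ar II] fine-structure line, the upper-level fraction x_u is an assumption, and the flux in the example call is made up:

```python
import numpy as np

h = 6.626e-34              # Planck constant [J s]
c = 2.998e8                # speed of light [m s^-1]
pc = 3.086e16              # parsec [m]
m_ar = 39.95 * 1.661e-27   # mass of one argon atom [kg]

def line_mass(F_line, d_pc, lam_um=6.985, A_ul=5.3e-2, x_u=0.5):
    """Optically thin emitting mass from an integrated line flux.

    F_line : integrated line flux [W m^-2]
    d_pc   : distance to the source [pc]
    A_ul   : Einstein A coefficient [s^-1] (approximate literature value)
    x_u    : fraction of ions in the upper level (assumed)
    """
    nu = c / (lam_um * 1e-6)
    # Number of ions in the upper level needed to produce the observed flux
    N_u = 4.0 * np.pi * (d_pc * pc) ** 2 * F_line / (A_ul * h * nu)
    return N_u * m_ar / x_u

# Illustrative call with a made-up flux; beta Pic lies at ~19.4 pc.
print(f"M(Ar II) ~ {line_mass(1e-17, 19.4):.1e} kg")
```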
Figure 6. Left: [Ar II] image created by subtracting the 6.9824 μm slice (outside the argon line) from the 6.9856 μm slice (the peak of the argon line). The red contours shown are the 3, 5, and 10σ contours. The black circle shows the FWHM of the JWST PSF at this wavelength, indicating that the [Ar II] emission is spatially resolved out to ∼20 au. The red cross shows the position of the central star. The north-east orientation of the images is the same as in Figure 3. Right: binned PSF-subtracted cube slice at 6.5 μm with the 3, 5, and 10σ contours of the continuum-subtracted [Ar II] emission overlaid. The point source on the northeast side is β Pic b.
We used atmosphere models at their full spectral resolution with absorption from various molecules, adopting the physical parameters of β Pic b determined by GRAVITY Collaboration et al. (2020): T_eff = 1742 K, log(g) = 4.34.
Figure 7. Binned PSF-subtracted image slices showing the detection of a point source at the predicted location of β Pic b (separation = 0″.54, PA = 31°.57; Lacour et al. 2021; Wang et al. 2021). The left panel shows a binned slice from channel 1A, the middle panel one from 1B, and the right panel a binned slice from 1C. The red star indicates the position of the central star determined from the 2D Gaussian fit to the PSF. The pixels within a 3 pixel radius of that center position are masked for clarity. The S/N of the brightest pixel in the point source, going from left to right, is 5.1, 6.3, and 5.5.
Figure 8. PSF-subtracted and binned slice of the channel 2B data cube at 10 μm. The red star indicates the position of the central star and the red cross indicates the position of β Pic b at the position angle and separation we found from the channel 1 data cubes. The pixels within a 2 pixel radius of the PSF center have been masked for clarity. The image slice is dominated by thermal emission from the disk, and we do not detect a clear point source on one side of the disk as we did in channel 1. The orientation of this image relative to north is the same as in Figure 7.
7. Mâlin et al. (2023) simulated cross-correlation techniques using simulated MRS observations of the β Pic system and similarly found that H2O was the only molecule that yielded a significant S/N value for β Pic b (see Figure C.2 of Mâlin et al. 2023).
Figure 10. Cross-correlation map of the channel 1 IFU cube with an atmosphere template spectrum with water as the only molecular absorber. We detect a bright pixel with a cross-correlation S/N value of 5.7 at the same pixel location as the brightest pixel of the point source shown in Figure 7. The red cross shows the position of the central star.
Figure 11. Left: same as the 5.7 μm slice shown in Figure 7 but with N Car injected as a fake planet on the opposite side of the disk. The point source on the northeast side of the disk is β Pic b. Right: same as the left panel but for the 7 μm slice, where there is no clear detection of the injected point source or of β Pic b.
Figure 14. Top: best-fit DRIFT-PHOENIX model (black) with spectra and photometry of β Pic b. Middle: same as the top panel but with the best-fit model from the ATMO grid. Bottom: same as the top panel but with the ExoREM grid. The gray spectral models are sampled from the posterior distribution. The residuals from the best-fit model are shown in the bottom panel of each plot, with the two dotted lines representing ±5σ residuals. All three models appear to fit well, but the ExoREM grid yields a systematically lower temperature than the other two.
From the grid fitting, we get a C/O ratio for β Pic b of 0.39 (+0.10/-0.06). This is consistent with the C/O ratio determined by GRAVITY Collaboration et al. (2020) from a petitRADTRANS retrieval of the GRAVITY and GPI spectra of β Pic b, which yielded 0.43 (+0.04/-0.03). The ExoREM model grid gives a C/O ratio of 0.36 (+0.13/-0.05), which also contains the C/O ratio from GRAVITY Collaboration et al. (2020) within its uncertainty.
(GRAVITY Collaboration et al. 2020). This substellar C/O ratio was argued to favor the core accretion scenario, with planetesimal enrichment lowering the C/O ratio of the planetary atmosphere. Our C/O ratios from the grid fitting are consistent with what was found by GRAVITY Collaboration et al. (2020). The MIRI MRS data and grid model fitting presented here, however, are not sufficient to constrain the C/O ratio, and thus the formation history of β Pic b, better than what was presented by GRAVITY Collaboration et al. (2020); for a complete discussion of the formation history of β Pic b, see that work.
Figure 15. β (ratio of radiation pressure to gravitational force) as a function of particle radius for olivine (Mg2SiO4) and pyroxene (MgSiO3) silicate grains. The black dashed horizontal line shows where β = 0.5. Grain sizes with β > 0.5 become unbound and are blown out of the system. The vertical black dash-dotted line shows a = λ/(2π) for λ = 5 μm, i.e. a = 0.8 μm. The small-grain limit is defined as 2πa = λ.
Figure 16. Vertical cut of the disk at 5.2 μm with the best-fit Lorentzian profile overlaid.
Figure 17. Posterior distribution for the model fitting of the β Pic b spectra and photometry with the DRIFT-PHOENIX model grid. The values show the 68% confidence intervals around the median.
Figure 18. Same as Figure 17 but for the ExoREM model grid.
Table 1
Best-fit Model Parameters for β Pictoris b
Functional ultrasound localization microscopy reveals brain-wide neurovascular activity on a microscopic scale
The advent of neuroimaging has increased our understanding of brain function. While most brain-wide functional imaging modalities exploit neurovascular coupling to map brain activity at millimeter resolutions, the recording of functional responses at microscopic scale in mammals remains the privilege of invasive electrophysiological or optical approaches, but is mostly restricted to either the cortical surface or the vicinity of implanted sensors. Ultrasound localization microscopy (ULM) has achieved transcranial imaging of cerebrovascular flow, up to micrometre scales, by localizing intravenously injected microbubbles; however, the long acquisition time required to detect microbubbles within microscopic vessels has so far restricted ULM application mainly to microvasculature structural imaging. Here we show how ULM can be modified to quantify functional hyperemia dynamically during brain activation reaching a 6.5-µm spatial and 1-s temporal resolution in deep regions of the rat brain.
Please include a "Data availability" subsection in the Online Methods. This section should inform readers about the availability of the data used to support the conclusions of your study, including accession codes to public repositories, references to source data that may be published alongside the paper, unique identifiers such as URLs to data repository entries, or data set DOIs, and any other statement about data availability. At a minimum, you should include the following statement: "The data that support the findings of this study are available from the corresponding author upon request", describing which data is available upon request and mentioning any restrictions on availability. If DOIs are provided, please include these in the Reference list (authors, title, publisher (repository name), identifier, year). For more guidance on how to write this section please see: http://www.nature.com/authors/policies/data/data-availability-statements-data-citations.pdf CODE AVAILABILITY Please include a "Code Availability" subsection in the Online Methods which details how your custom code is made available. Only in rare cases (where code is not central to the main conclusions of the paper) is the statement "available upon request" allowed (and reasons should be specified).
We request that you deposit code in a DOI-minting repository such as Zenodo, Gigantum or Code Ocean and cite the DOI in the Reference list. We also request that you use code versioning and provide a license.
For more information on our code sharing policy and requirements, please see: https://www.nature.com/nature-research/editorial-policies/reporting-standards#availability-of-computer-code

MATERIALS AVAILABILITY

As a condition of publication in Nature Methods, authors are required to make unique materials promptly available to others without undue qualifications.
Authors reporting new chemical compounds must provide chemical structure, synthesis and characterization details. Authors reporting mutant strains and cell lines are strongly encouraged to use established public repositories.
More details about our materials availability policy can be found at https://www.nature.com/natureportfolio/editorial-policies/reporting-standards#availability-of-materials

ORCID

Nature Methods is committed to improving transparency in authorship. As part of our efforts in this direction, we are now requesting that all authors identified as 'corresponding author' on published papers create and link their Open Researcher and Contributor Identifier (ORCID) with their account on the Manuscript Tracking System (MTS), prior to acceptance. This applies to primary research papers only. ORCID helps the scientific community achieve unambiguous attribution of all scholarly contributions. You can create and link your ORCID from the home page of the MTS by clicking on 'Modify my Springer Nature account'. For more information, please visit www.springernature.com/orcid. Please do not hesitate to contact me if you have any questions or would like to discuss these revisions further. We look forward to seeing the revised manuscript and thank you for the opportunity to consider your work.
Best regards, Nina
Nina Vogt, PhD
Senior Editor
Nature Methods

Reviewers' Comments: [Redacted]

Decision Letter, first revision:

Thank you for your letter detailing how you would respond to the reviewer concerns regarding your Article, "Functional Ultrasound Localization Microscopy reveals brain-wide neurovascular activity on a microscopic scale". We have decided to invite you to revise your manuscript as you have outlined, before we reach a final decision on publication.
In the revised manuscript, please do add the analysis of CBF in different brain regions as well as the additional example of imaging in deep brain regions as shown in your response to my email. Please discuss the other issues brought up by the reviewer in the revision.
Please do not hesitate to contact me if you have any questions or would like to discuss these revisions further.
When revising your paper:
* include a point-by-point response to the reviewers and to any editorial suggestions
* please underline/highlight any additions to the text or areas with other significant changes to facilitate review of the revised manuscript
* address the points listed below to conform to our open science requirements
* ensure it complies with our general format requirements as set out in our guide to authors at www.nature.com/naturemethods
* resubmit all the necessary files electronically by using the link below to access your home page

[Redacted] This URL links to your confidential home page and associated information about manuscripts you may have submitted, or that you are reviewing for us. If you wish to forward this email to coauthors, please delete the link to your homepage.
We hope to receive your revised paper within 4 weeks. If you cannot send it within this time, please let us know. In this event, we will still be happy to reconsider your paper at a later date so long as nothing similar has been accepted for publication at Nature Methods or published elsewhere.
REPORTING SUMMARY AND EDITORIAL POLICY CHECKLISTS
When revising your manuscript, please update your reporting summary and editorial policy checklists.
Reporting summary: https://www.nature.com/documents/nr-reporting-summary.zip Editorial policy checklist: https://www.nature.com/documents/nr-editorial-policy-checklist.zip If your paper includes custom software, we also ask you to complete a supplemental reporting summary.
Software supplement: https://www.nature.com/documents/nr-software-policy.pdf Please submit these with your revised manuscript. They will be available to reviewers to aid in their evaluation if the paper is re-reviewed. If you have any questions about the checklist, please see http://www.nature.com/authors/policies/availability.html or contact me.
Please note that these forms are dynamic 'smart pdfs' and must therefore be downloaded and completed in Adobe Reader. We will then flatten them for ease of use by the reviewers. If you would like to reference the guidance text as you complete the template, please access these flattened versions at http://www.nature.com/authors/policies/availability.html.
Finally, please ensure that you retain unprocessed data and metadata files after publication, ideally archiving data in perpetuity, as these may be requested during the peer review and production process or after publication if any issues arise.
Reviewer #1:
Remarks to the Author: The authors have addressed all my concerns. I think it is an exceptionally nice manuscript, and I congratulate the authors on their work. I fully support its publication in Nature Methods.
Reviewer #2:
Remarks to the Author: This paper, transferred from [Redacted], deals with a modification and technological advancement of ultrasound localization microscopy (ULM) using microbubbles (MB) to visualize blood flow in vessels. The authors were able to increase the spatial and temporal resolution of the method, enabling the dynamic assessment of cerebrovascular flow in superficial and deep brain vessels. Therefore, changes in MB flow induced by cortical activation (whisker deflection or visual stimuli) can be simultaneously detected in pial arterioles at the brain's surface and in intraparenchymal arterioles deep in the cortex and in subcortical brain regions, like the thalamus and colliculi.
This method offers several advantages that may advance the understanding of neurovascular coupling (NVC), including: dynamic assessment of flow in different vascular compartments, the ability to monitor superficial and deep brain regions, sufficient resolution to monitor flow at the arteriole-to-capillary transition, and the potential for non-invasive assessment not requiring a craniotomy. These characteristics are well suited to investigating the microvascular dynamics of the entire microvascular network, as advocated in a recent critical review of the state of the art in NVC.
However, there are also limitations in the method, some of which are mentioned in the text, but need to be more explicitly stated or addressed with experiments to provide proof-of-principle evidence of the full potential of the method to investigate NVC: 1. The data presented are in rat, a species that is no longer the first choice in neuroscience studies. Using this technique in mice would open the way to using genetically modified models and other molecular tools that are essential for mechanistic investigations of NVC.
2. The invasiveness of the large craniotomy is also a drawback. Transcranial imaging, which could be perfected (as indicated in the paper), would be a major advance, since competing imaging approaches currently used do not require craniotomy or can be applied to thin skull preparations.
3. The application to awake animals needs to be explored. Most recent studies on NVC have examined awake mice, since NVC is uniquely sensitive to anesthetics, which may lead to spurious results. The need for large volume injections and repetitive stimulation may also be limitations in awake behaving animals. 4. As noticed by one of the previous reviewers, the method involves injection of a large volume of fluid, the impact of which on the physiological state of the animals is unclear. The assessment of heart rate and breathing is not sufficient for this purpose, since cerebral blood flow is highly sensitive to changes in blood pressure, blood gases, hematocrit, blood volume, brain temperature, etc. Lacking a careful assessment of the impact of the fluid injection on these critical variables would preclude a correct interpretation of the changes in MB flow in hypothesis-testing situations.
5. The need to use repetitive stimulation to enhance SNR in slowly flowing microvessels is also a limitation. More and more, NVC is being studied during natural behaviors engaging the brain as a whole, a situation to which the present method, with its potential to image several brain regions at the same time, would be well suited.
6. To this end, more definitive evidence of the ability of the method to provide reliable vascular signals from slow flowing microvessels of deep brain regions would be desirable.
7. The heterogeneity of cerebral microvascular cells highlighted by single cell RNAseq studies requires microvascular assessment with cell-type specificity, which has an impact on microvascular function. The ability of fULM to monitor the full vascular network in combination with approaches to provide cell-type identification would provide a major advance to the field.
Reviewer #3: Remarks to the Author: I am satisfied with the authors' response to the questions raised. Only a very minor mistake to correct: in the manuscript, "Positron Electron Tomography" was mentioned twice (pages 2 and 38); should it be "Positron Emission Tomography"?
Author Rebuttal, first revision:
Reviewers' Comments:
Reviewer #1: Remarks to the Author: The authors have addressed all my concerns. I think it is an exceptionally nice manuscript, and I congratulate the authors on their work. I fully support its publication in Nature Methods.
We deeply thank reviewer 1 for his positive comments and his help to improve the final manuscript quality.
Reviewer #2: Remarks to the Author: This paper, transferred from [Redacted], deals with a modification and technological advancement of ultrasound localization microscopy (ULM) using microbubbles (MB) to visualize blood flow in vessels. The authors were able to increase the spatial and temporal resolution of the method, enabling the dynamic assessment of cerebrovascular flow in superficial and deep brain vessels. Therefore, changes in MB flow induced by cortical activation (whisker deflection or visual stimuli) can be simultaneously detected in pial arterioles at the brain's surface and in intraparenchymal arterioles deep in the cortex and in subcortical brain regions, like the thalamus and colliculi.
This method offers several advantages that may advance the understanding of neurovascular coupling (NVC), including: dynamic assessment of flow in different vascular compartments, the ability to monitor superficial and deep brain regions, sufficient resolution to monitor flow at the arteriole-to-capillary transition, and the potential for non-invasive assessment not requiring a craniotomy. These characteristics are well suited to investigating the microvascular dynamics of the entire microvascular network, as advocated in a recent critical review of the state of the art in NVC.
We deeply thank reviewer 2 for these positive comments. However, there are also limitations in the method, some of which are mentioned in the text, but need to be more explicitly stated or addressed with experiments to provide proof-of-principle evidence of the full potential of the method to investigate NVC: The data presented are in rat, a species that is no longer the first choice in neuroscience studies. Using this technique in mice would open the way to using genetically modified models and other molecular tools that are essential for mechanistic investigations of NVC.
We thank the referee for this interesting comment. In fact, this technique could be applied straightforwardly in mice. [Redacted] 2. The invasiveness of the large craniotomy is also a drawback. Transcranial imaging, which could be perfected (as indicated in the paper), would be a major advance, since competing imaging approaches currently used do not require craniotomy or can be applied to thin skull preparations.
Here, we are convinced that transcranial propagation should not be an issue. Indeed, we decided to perform a craniotomy in most of our experiments because this work is the proof of concept of a new methodology, and we wanted to study the imaging method in optimal conditions. However, we did not sufficiently emphasize that the technique can also be applied in thinned-skull or fully transcranial configurations. We agree that the experiments on transcranial fULM imaging in rats could be improved, but we have many other ongoing works (some published and some unpublished) with transcranial ultrasound localization microscopy showing that transcranial ULM imaging is feasible with convincing image quality. We now discuss this point carefully in the discussion part of the revised manuscript.
In order to support our opinion, we provide some further examples of transcranial ULM images in rats and mice: • In rats, a recent article was published by Chavignon et al (IEEE TMI 2021), from an independent group led by a former member of our lab, in which a Row-Column Array (RCA) probe was used for transcranial ULM imaging in rats. The image below comes from this article (Chavignon et al, IEEE Transactions on Medical Imaging, 2021) and clearly shows that transcranial localization microscopy is feasible in rats. [Redacted] • Transcranial fULM will also strongly benefit from the addition of aberration correction techniques and further improvement of localization algorithms.
In conclusion, in the near future, these fULM experiments will be performed transcranially both in rats and mice.
3.
The application to awake animals needs to be explored. Most recent studies on NVC have examined awake mice, since NVC is uniquely sensitive to anesthetics, which may lead to spurious results. The need for large volume injections and repetitive stimulation may also be limitations in awake behaving animals.
We are very confident that awake functional ULM could be applied in future work using head-fixed experiments. Indeed, recent studies, using exactly the same probe and electronics as the ones used in these experiments, have shown that the sensitivity of functional ultrasound imaging of brain activity (even without contrast agents) is sufficient for transcranial, head-fixed, awake configurations (see for example Bertolo et al, Whole-Brain 3D Activation and Functional Connectivity Mapping in Mice using Transcranial Functional Ultrasound Imaging, Journal of Visualized Experiments (JoVE), 2021). Other independent groups are also performing head-fixed awake functional ultrasound imaging with comparable technologies (see for example: Macé E. et al, Whole-brain functional ultrasound imaging reveals brain modules for visuomotor integration, Neuron 100 (5), 1241-1251.e7; see also: Brunner C. et al, A platform for brain-wide volumetric functional ultrasound imaging and analysis of circuit dynamics in awake mice, Neuron 108 (5), 861-875.e7).
Note also that these publications performed in functional ultrasound imaging on awake mice imaging in a head fixed setup are performed with the same type of acquisition parameters, but without microbubbles. The application of transcranial functional ULM in mice would only require the simultaneous addition of microbubble injection. [Redacted] It would therefore be quickly adapted to future head fixed imaging experiments in awake mice.
In conclusion, such head fixed setups could be used to perform fULM in awake animals. This goes beyond the scope of the proof-of-concept paper, but it will be carried out in further works.
We agree this is an important point that needs to be discussed. We have now added it to the discussion of the revised manuscript.
4.
As noticed by one of the previous reviewers, the method involves injection of a large volume of fluid, the impact of which on the physiological state of the animals is unclear. The assessment of heart rate and breathing is not sufficient for this purpose, since cerebral blood flow is highly sensitive to changes in blood pressure, blood gases, hematocrit, blood volume, brain temperature, etc. Lacking a careful assessment of the impact of the fluid injection on these critical variables would preclude a correct interpretation of the changes in MB flow in hypothesis-testing situations.
We agree this is an important point and we are now discussing this particular point both in the discussion and in the material and methods sections. As detailed below, we have strong arguments to suggest that the volume injected was moderate and had a limited impact on the physiological state of the animal and the measures performed.
1.
First of all, international recommendations for dose volume guidelines exist. The 3Rs Translational and Predictive Sciences Leadership Group - Contract Research Organization Working Group of IQ (International Consortium for Innovation and Quality in Pharmaceutical Development) has provided such international recommendations for dose volume guidelines (document attached) based on an extensive literature. This document includes dose volume guidelines that have been researched and published, as well as standards that have gained acceptance through empirical use across multiple members of the IQ 3Rs Leadership Group (LG) and partner CROs. The recommendation of this international committee for the maximal volume of an intravenous injection in rats is 20 ml/kg with a slow injection (3 to 10 minutes long). Our rats' weight was 300 g, meaning the maximum injection volume should not exceed 6.0 ml. In our experiments, we used a continuous slow injection at 3.5 ml/h during 20 min, corresponding to 1.1 ml (~1/6 of the maximum dose) at a rate 3 to 8 times slower than the slow injection described in these international recommendations (20 ml/kg in 5 to 10 minutes). Therefore, we humbly think that the total fluid injection is not as critical as it may seem.
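For transparency, the arithmetic behind this comparison is a one-liner (a sketch of the numbers quoted above):

```python
weight_kg = 0.300                 # rat body weight
max_volume = 20.0 * weight_kg     # guideline ceiling of 20 ml/kg -> 6.0 ml
injected = 3.5 * (20.0 / 60.0)    # 3.5 ml/h for 20 min -> ~1.17 ml

print(f"injected {injected:.2f} ml of a {max_volume:.1f} ml maximum "
      f"({injected / max_volume:.0%} of the guideline)")  # ~19%
```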
2.
Secondly, in addition to heart rate and breathing, our data also give us precise access to possible changes of CBF during the experiment, by measuring the flow of microbubbles. Our results clearly show that the MB/s baseline does not vary significantly during the 20-minute continuous injection. This was already shown in our results (in Supplementary Figure 1). However, it is true that we did not sufficiently insist on the importance of this figure and its interpretation for this particular argument.
In the revised manuscript, the mention of this point and Supplementary Fig. 1 were improved by adding the baseline temporal signal of MB/s in different regions of the brain, rather than just the global signal (see the proposed Supplementary Figure 1 below). We also provide further comments in the discussion section, reinforcing this point.
Furthermore, regarding your comment on the hematocrit, our injection volume of 1.1 ml against a 21 ml total blood volume (assuming 70 ml/kg for a 300 g rat) corresponds to only a 5% change in the total blood volume due to the injection, suggesting a very limited change in the hematocrit.
Supplementary Figure 1: Evolution of the microbubble injection flow profile. Continuous perfusion of MBs provides stable delivery over time: MB detection count per ultrafast image (representative of the cerebral blood flow) for the whole brain, cortical and thalamic regions in a representative animal over the whole acquisition (23 minutes).
3.
Finally, the fULM method is just at its early stages and there is much room for improvement to decrease the injection volume in the next years. On the processing side: to date, the number of detected bubbles per ultrafast image is typically N = 80. Nevertheless, we keep only roughly 38% (N ~ 30) of these detected events during the tracking process. There is definitely some room for improvement in this high rejection rate. Increasing the number of detected microbubbles would allow us to decrease the injected volume.
On the contrast agent side: the gas concentration of microbubbles used in these experiments was 5 µl/ml. While keeping such a very low gas concentration, the number of detectable microbubbles could be strongly increased by making them smaller. A decrease in diameter by a factor of 2 would allow us to increase the number of microbubbles by a factor of 8 with the same total gas content and injection volume. So, the size of microbubbles could easily be decreased from a few micrometers down to 0.5-1 micrometer in order to strongly increase the number of MBs per ml (one to two orders of magnitude) without increasing the gas content.
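The factor of 8 follows directly from the cubic dependence of single-bubble volume on diameter at fixed total gas content, as the short sketch below checks:

```python
import numpy as np

def bubble_count(gas_volume_ul, diameter_um):
    """Number of microbubbles packing a fixed total gas volume.

    1 microliter = 1e9 cubic micrometers.
    """
    r = diameter_um / 2.0
    v_single = (4.0 / 3.0) * np.pi * r ** 3   # volume of one bubble [um^3]
    return gas_volume_ul * 1e9 / v_single

# Same 5 ul/ml gas content, half the diameter -> 8x more scatterers
print(bubble_count(5.0, 1.0) / bubble_count(5.0, 2.0))  # -> 8.0
```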
We propose to add some comments in the discussion and in the materials and methods of the manuscript, and to cite this document and other publications.
5.
The need to use repetitive stimulation to enhance SNR in slowly flowing microvessels is also a limitation. More and more, NVC is being studied during natural behaviors engaging the brain as a whole, a situation to which the present method, with its potential to image several brain regions at the same time, would be well suited.
As the reviewer knows, no brain imaging modality is able to image functional blood flow variations at microscopic scale over the whole brain. Asking for this huge challenge (which our study brings forward) to be met additionally in single trials (without stimulus repetition) and during natural behavior is asking for a kind of perfect brain imaging modality. Every imaging modality has its own limitations, and we always have to use different techniques to answer various scientific questions. One single modality cannot be suited to solving all the challenges of neuroimaging.
Of course, we agree that the need for repetitive stimuli is a limitation; however, this limitation holds only for the smallest vessels. For example, in typical 30-micrometer-diameter arterioles, the number of microbubbles per second is sufficient to provide dynamic fULM even without repetition, i.e. in single trials. In smaller pre-capillary arterioles, the number of required repetitions is typically 5, and finally, in the small higher-order capillary branches, the number of repetitions should be around 10, as Suppl. Figure 9 shows that the improvement from 10 repetitions to 20 repetitions remains marginal.
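The diminishing return from extra repetitions is what one expects if trial averaging improves SNR roughly as the square root of the number of trials. The toy simulation below (independent additive noise and made-up SNR values, not our acquisition data) illustrates why going from 10 to 20 repetitions buys relatively little:

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_snr(n_trials, snr_single=0.5, n_samples=2000):
    """Empirical SNR after averaging n_trials repeated stimulations,
    assuming independent additive Gaussian noise (SNR ~ sqrt(n_trials))."""
    signal = 1.0
    noise_std = signal / snr_single
    trials = signal + rng.normal(0.0, noise_std, (n_trials, n_samples))
    avg = trials.mean(axis=0)
    return avg.mean() / avg.std()

for n in (1, 5, 10, 20):
    print(f"{n:2d} repetitions -> SNR ~ {averaged_snr(n):.2f}")
```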
Yet, in functional neuroimaging, N = 10 repetitions is not considered excessive. In fMRI and electrophysiology studies in behaving animals, tens of repetitions are very often required to obtain significant results. This is precisely why the sensitivity of functional ultrasound has been shown to be so interesting: it enables single-trial experiments compared to fMRI and implanted electrode recordings (see the recent work Dizeux A. et al, Nature Communications 2019, in behaving primates) for cognitive studies in non-human primates, and it paves the way to brain-machine interfaces based on brain ultrasound imaging (see Norman S. et al, 2021, Neuron).
Furthermore, although stimulus repetition is necessary for tiny vessels, Fig. 4G shows that the signal processing based on the global SVD data decomposition enables the temporal profile of each single trial to be estimated without requiring data averaging over repetitions.
Here, we managed to provide a functional response at the microscopic scale over the whole deep brain in a limited number of repetitions, similarly to many other functional imaging modalities. Although we fully respect the challenging and interesting comments of the reviewer, we must confess that we cannot solve all problems of functional brain imaging in a single paper.
6.
To this end, more definitive evidence of the ability of the method to provide reliable vascular signals from slow flowing microvessels of deep brain regions would be desirable.
We agree that this would improve the manuscript.
We already provided images of vascular responses in the deep thalamic and colliculus regions. Results in Fig. 3C and former Fig. 12C-D (corresponding to deep thalamic regions) clearly showed the ability of the technique to quantitatively demonstrate reliable vascular measurements in small vessels. These vessels were about 30 µm in diameter, and it was possible to see the blood flow profile within them.
In order to show the same in deep-seated and very small microvessels, we also added a new supplementary figure (Suppl. Fig. 6) showing the vascular response in tiny vessels of the colliculus. For example, we can clearly see the functional vascular response in a vessel with a 15 µm diameter.
7.
The heterogeneity of cerebral microvascular cells highlighted by single cell RNAseq studies requires microvascular assessment with cell-type specificity, which has an impact on microvascular function. The ability of fULM to monitor the full vascular network in combination with approaches to provide cell-type identification would provide a major advance to the field.
We thank the reviewer for this positive comment. It is true that combining fULM with single-cell RNAseq studies is going to be extremely powerful for the neuroscience community. We will mention the interest of this combination in the discussion.
Reviewer #3:
Remarks to the Author: I am satisfied with the authors' response to the questions raised. Only a very minor mistake to correct: in the manuscript, "Positron Electron Tomography" was mentioned twice (pages 2 and 38); should it be "Positron Emission Tomography"?
We corrected this typo.
We deeply thank the reviewer for his positive comments and his help to improve the final quality of the manuscript.
Decision Letter, second revision:
Dear Mickael, Thank you for submitting your revised manuscript "Functional Ultrasound Localization Microscopy reveals brain-wide neurovascular activity on a microscopic scale" (NMETH-A46993B). It has now been seen by one of the original referees and their comments are below. The reviewer finds that the paper has improved in revision, and therefore we'll be happy in principle to publish it in Nature Methods, pending minor revisions to satisfy the referees' final requests and to comply with our editorial and formatting guidelines.
We are now performing detailed checks on your paper and will send you a checklist detailing our editorial and formatting requirements in about a week. Please do not upload the final materials and make any revisions until you receive this additional information from us.
TRANSPARENT PEER REVIEW Nature Methods offers a transparent peer review option for new original research manuscripts submitted from 17th February 2021. We encourage increased transparency in peer review by publishing the reviewer comments, author rebuttal letters and editorial decision letters if the authors agree. Such peer review material is made available as a supplementary peer review file. Please state in the cover letter 'I wish to participate in transparent peer review' if you want to opt in, or 'I do not wish to participate in transparent peer review' if you don't. Failure to state your preference will result in delays in accepting your manuscript for publication.
Please note: we allow redactions to authors' rebuttal and reviewer comments in the interest of confidentiality. If you are concerned about the release of confidential data, please let us know specifically what information you would like to have removed. Please note that we cannot incorporate redactions for any other reasons. Reviewer names will be published in the peer review files if the reviewer signed the comments to authors, or if reviewers explicitly agree to release their name. For more information, please refer to our FAQ page: https://www.nature.com/documents/nr-transparentpeer-review.pdf
ORCID
IMPORTANT: Non-corresponding authors do not have to link their ORCIDs but are encouraged to do so. Please note that it will not be possible to add/modify ORCIDs at proof. Thus, please let your co-authors know that if they wish to have their ORCID added to the paper they must follow the procedure described in the following link prior to acceptance: https://www.springernature.com/gp/researchers/orcid/orcid-for-nature-research

Reviewer #2 (Remarks to the Author): The authors have addressed my comments.
Final Decision Letter:
Dear Mickael, I am pleased to inform you that your Article, "Functional Ultrasound Localization Microscopy reveals brain-wide neurovascular activity on a microscopic scale", has now been accepted for publication in Nature Methods. Your paper is tentatively scheduled for publication in our August print issue, and will be published online prior to that. The received and accepted dates will be September 1st, 2021 and June 14th, 2022. This note is intended to let you know what to expect from us over the next month or so, and to let you know where to address any further questions.
In approximately 10 business days you will receive an email with a link to choose the appropriate publishing options for your paper and our Author Services team will be in touch regarding any additional information that may be required.
You will not receive your proofs until the publishing agreement has been received through our system. Your paper will now be copyedited to ensure that it conforms to Nature Methods style. Once proofs are generated, they will be sent to you electronically and you will be asked to send a corrected version within 24 hours. It is extremely important that you let us know now whether you will be difficult to contact over the next month. If this is the case, we ask that you send us the contact information (email, phone and fax) of someone who will be able to check the proofs and deal with any last-minute problems.
If, when you receive your proof, you cannot meet the deadline, please inform us at <EMAIL_ADDRESS> immediately.
If you have any questions about our publishing options, costs, Open Access requirements, or our legal forms, please contact <EMAIL_ADDRESS>. Once your manuscript is typeset and you have completed the appropriate grant of rights, you will receive a link to your electronic proof via email with a request to make any corrections within 48 hours. If, when you receive your proof, you cannot meet this deadline, please inform us at <EMAIL_ADDRESS> immediately.
Once your paper has been scheduled for online publication, the Nature press office will be in touch to confirm the details.
Content is published online weekly on Mondays and Thursdays, and the embargo is set at 16:00 London time (GMT)/11:00 am US Eastern time (EST) on the day of publication. If you need to know the exact publication date or when the news embargo will be lifted, please contact our press office after you have submitted your proof corrections. Now is the time to inform your Public Relations or Press Office about your paper, as they might be interested in promoting its publication. This will allow them time to prepare an accurate and satisfactory press release. Include your manuscript tracking number NMETH-A46993C and the name of the journal, which they will need when they contact our office.
About one week before your paper is published online, we shall be distributing a press release to news organizations worldwide, which may include details of your work. We are happy for your institution or funding agency to prepare its own press release, but it must mention the embargo date and Nature Methods. Our Press Office will contact you closer to the time of publication, but if you or your Press Office have any inquiries in the meantime, please contact <EMAIL_ADDRESS>.

Please note that Nature Methods is a Transformative Journal (TJ). Authors may publish their research with us through the traditional subscription access route or make their paper immediately open access through payment of an article-processing charge (APC). Authors will not be required to make a final decision about access to their article until it has been accepted. Find out more about Transformative Journals.

Authors may need to take specific actions to achieve compliance with funder and institutional open access mandates. If your research is supported by a funder that requires immediate open access (e.g. according to Plan S principles) then you should select the gold OA route, and we will direct you to the compliant route where possible. For authors selecting the subscription publication route, the journal's standard licensing terms will need to be accepted, including self-archiving policies. Those licensing terms will supersede any other terms that the author or any third party may assert apply to any version of the manuscript.
If you have posted a preprint on any preprint server, please ensure that the preprint details are updated with a publication reference, including the DOI and a URL to the published version of the article on the journal website.
To assist our authors in disseminating their research to the broader community, our SharedIt initiative provides you with a unique shareable link that will allow anyone (with or without a subscription) to read the published article. Recipients of the link with a subscription will also be able to download and print the PDF. As soon as your article is published, you will receive an automated email with your shareable link.
Please note that you and your coauthors may order reprints and single copies of the issue containing your article through Nature Research Group's reprint website, which is located at http://www.nature.com/reprints/author-reprints.html. If there are any questions about reprints please send an email to <EMAIL_ADDRESS> and someone will assist you.
Please feel free to contact me if you have questions about any of these points.
Best regards,
ELABORATION OF THE HIERARCHICAL APPROACH TO SEGMENTATION OF SCANNED DOCUMENT IMAGES
The object of research is the process of recognizing regions in images of scanned documents. The paper proposes a hierarchical approach to the segmentation of scanned document images. This approach represents the scanned document image as a multilevel structure. At each level of the structure, images containing structural regions are identified. Objects of the lower level correspond strictly to a certain region of the upper-level image: photo and graphics regions correspond to the image containing illustrations, while text and background regions correspond to the image containing both text and background. The hierarchical approach allows each image region to be processed separately: first, illustration regions are extracted from the original scanned document image by connected-component analysis. The first level of the hierarchy thus consists of an image containing illustrations and an image containing text with background. Then the illustration regions are divided into photos and graphics by partitioning them into blocks of a certain size, while text regions are separated from the background by processing the neighborhood of each pixel. The second level of the hierarchical structure is thus represented by images containing homogeneous regions: photos, graphics, text, and background. The hierarchical approach to segmentation reduced the processing time by a factor of about 80 on average. This reduction was possible because, at each level and within each separate part of the hierarchical structure, the structural features of the homogeneous region corresponding to that level could be taken into account, and identification features with high computational efficiency could be chosen for these regions, which also reduced the processing time of the scanned document. Keywords: hierarchical approach, scanned documents, image segmentation, image processing, homogeneous regions. Ishchenko A., Zhuchkovskyi V.
Introduction
The rapid development of digital technologies has led to the conversion into electronic form of all types of materials, including documents from archives, libraries, and enterprises to create electronic archives.
An important stage in the processing of images of scanned documents is segmentation, which consists in dividing the image into homogeneous areas that are similar in one feature or set of features.
In the literature, there are two main approaches to the segmentation of scanned document images [1]: descending (top-down) and ascending (bottom-up). Segmentation methods that use a descending approach [2-4] first determine the objects of a higher level of the page structure - text and graphic elements - and then the columns of text, paragraphs, lines, and text symbols. These methods are characterized by high speed but low segmentation quality, since it is not always possible to process non-rectangular areas of text or headings that span several columns of text. Segmentation methods that use an ascending approach [5, 6] begin processing with text characters, which are combined into lines, paragraphs, and columns. Next, the resulting objects are classified as text areas until all text areas are highlighted. These methods are distinguished by high segmentation quality, since they handle regions of complex shape well, but have low speed, because they require processing of each pixel first, and then of the document areas. When digitizing a large number of printed documents, the requirements for the speed of their processing increase. Consequently, an urgent task, which is solved in this work, is to increase the speed of processing scanned document images by reducing their processing time using a hierarchical approach to image segmentation. Therefore, the object of research is the process of recognizing the areas of scanned document images. The aim of research is to use a hierarchical approach to the segmentation of scanned document images in order to reduce image processing time while maintaining sufficient segmentation quality.
Methods of research
The basis of any scanned document is its structure that is the mutual arrangement of graphic material and text. Regions that include uniform content, such as text only, graphics only, or only photos, form structural regions.
One of the main stages of processing scanned document images is image segmentation, during which areas of text, graphics and photos are highlighted in the images. Each of these structural areas has different properties; therefore, it is difficult to select a single feature system for separating text areas, as well as graphics and photo areas, from the background.
The existing methods of segmentation of scanned document images do not simultaneously satisfy the requirements of short processing time and sufficient segmentation quality.
A hierarchical organization of a system becomes necessary when its implementation would otherwise require an unacceptably large amount of time. Therefore, in order to reduce the processing time of a document image, this paper proposes a hierarchical approach to the segmentation of scanned document images. This approach consists in representing an image in the form of a multilevel structure, in which the set of its constituent objects is divided into subsets of different levels with the property of integrity [7]. The hierarchical structure allows individual information arrays to be processed; that is, this approach allows separate processing at each level of image representation. According to the hierarchical approach, the image is first decomposed into areas of illustrations, including both photos and graphics, followed by their classification, and into areas containing text and background, followed by separating text areas from the background. As a result of image segmentation using the proposed approach, the image of the scanned document is presented as separate areas: text, graphics, photo, and background. The hierarchical approach is shown in Fig. 1.
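To make the two-level pipeline concrete, the following minimal Python sketch drives the decomposition of Fig. 1; the three callables are hypothetical stand-ins for the methods of [10], [11], and [13] described below, not implementations taken from those works.

```python
import numpy as np
from typing import Callable, Dict, Tuple

Split = Callable[[np.ndarray], Tuple[np.ndarray, np.ndarray]]

def segment_hierarchically(image: np.ndarray,
                           split_illustrations: Split,
                           classify_illustrations: Split,
                           extract_text: Split) -> Dict[str, np.ndarray]:
    """Two-level hierarchical segmentation of a scanned document image."""
    # Level 1: original image -> illustrations / text-with-background
    illustrations, text_bg = split_illustrations(image)
    # Level 2: each branch is processed independently, so features can
    # be tailored to the homogeneous regions of that branch.
    photo, graphics = classify_illustrations(illustrations)
    text, background = extract_text(text_bg)
    return {"photo": photo, "graphics": graphics,
            "text": text, "background": background}
```

Because each level works on a smaller, more homogeneous image than the original, the per-region features can be kept computationally simple, which is the source of the speed-up reported later.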
According to the hierarchical approach to image segmentation (Fig. 1), the first level corresponds to the original image of the scanned document. The base model most frequently used in the literature for representing this image is the Mixed Raster Content model [8]. According to this model, the document image is represented as a mask image together with foreground and background images. Each of these images contains objects of a certain class and is compressed independently by a suitable encoder. The mask contains information on the relative position of the foreground and background objects in the image. In order to be able to extract information from structural regions, it is necessary to represent these structural regions in separate images. Therefore, in this research, the model of work [9] is used as the image representation. This model differs in that it represents a scanned document image as a set of images, each of which contains one class of homogeneous area: graphics, photos, or text on a uniform background. This image model represents text areas as a structural texture with text symbols as non-derivative elements, and illustration areas as areas of constant intensity. Such a representation of the scanned document image makes it possible to take into account the structural properties of homogeneous areas and to choose identification features with high computational efficiency, which reduces image processing time while maintaining sufficient segmentation quality.
The next step is dividing the scanned document image into an image containing illustrations and an image containing text areas and background. To do this, the method developed in [10] is used to extract illustrations from a scanned document image using averaging filtering. According to this method, homogeneous areas of text and illustrations are distinguished by analyzing connected components. Connected-component analysis can distinguish illustration areas from text areas because the connected components corresponding to text characters differ, in shape and periodicity, from those corresponding to illustrations. That is, the size of the connected components corresponding to text characters acts as a feature for identifying text. Using a hierarchical approach to segmentation makes it possible to compute computationally simple features for text areas and to segment the original scanned document image into two images: one containing illustrations on a uniform background, the other containing text and background. The use of "simple" features reduces image processing time, while the use of averaging filtering when extracting illustration areas preserves a sufficiently high segmentation quality.
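A minimal sketch of the connected-component idea follows, assuming a binarized page in which ink pixels are 1; the area threshold is an illustrative parameter, not the criterion used in [10].

```python
import numpy as np
from scipy import ndimage

def split_by_components(binary: np.ndarray, max_char_area: int = 500):
    """Separate illustrations from text on a binarized page (ink = 1).

    Components whose pixel area is at most `max_char_area` are treated
    as text characters; larger components are treated as illustrations.
    """
    labels, _ = ndimage.label(binary)
    areas = np.bincount(labels.ravel())     # pixel count per component
    is_text = areas <= max_char_area
    is_text[0] = False                      # label 0 is the background
    text_mask = is_text[labels]
    illustration_mask = (labels > 0) & ~text_mask
    return illustration_mask, text_mask
```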
According to the hierarchical approach (Fig. 1), the third level of the hierarchy presents images containing photos, graphics, text, and background, separately from each other.
First, the areas of illustrations are divided into homogeneous areas that differ in their structure: photos and graphics. To do this, consider the part of the hierarchical structure that contains the image with illustration areas. For this purpose, the method proposed in [11] is used to identify graphics and photo areas using statistical and geometric features. To reduce image processing time when identifying photo and graphics areas, the illustration areas are divided into blocks of fixed size. Since block processing usually reduces segmentation quality, it is necessary to use identification features with high computational efficiency, that is, features that provide sufficient segmentation quality within a short processing time. Therefore, [11] proposes to use, as identification features, the aspect ratio of objects and the estimated expectation of the height of the intensity drop at the borders of homogeneous areas. The first feature characterizes a graphics object as an image containing linear objects [12], represented by contours of different widths and lengths. The second feature distinguishes graphics areas from photos, given that graphics areas are characterized by fewer shades of gray than photos, which entails an increase in the height of the intensity drops between gradations (a rough sketch of this feature is given below). Next, the text areas of scanned document images are separated from the background. For this purpose, the method of extracting text areas [13] is used, which takes into account the distances between text characters when processing image rows and the distances between text lines when processing image columns. Processing in the neighborhood of each pixel reduces processing time while maintaining sufficiently high segmentation quality.
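As a rough illustration of the second feature, the sketch below estimates the mean height of the intensity drops in a grayscale block and counts its distinct shades; the thresholds are assumptions for the sketch, and the aspect-ratio feature of [11] is omitted.

```python
import numpy as np

def block_features(block: np.ndarray, edge_thresh: int = 30):
    """Two illustrative features of an illustration block (grayscale)."""
    g = block.astype(np.int32)
    shade_fraction = np.unique(g).size / 256.0   # graphics: few shades
    jumps = np.abs(np.diff(g, axis=1))           # horizontal intensity drops
    strong = jumps[jumps > edge_thresh]
    mean_drop = float(strong.mean()) if strong.size else 0.0
    return shade_fraction, mean_drop

def label_block(block: np.ndarray,
                shade_cut: float = 0.25, drop_cut: float = 80.0) -> str:
    """Label a block as 'graphics' (few shades, tall drops) or 'photo'."""
    shade_fraction, mean_drop = block_features(block)
    return "graphics" if (shade_fraction < shade_cut
                          and mean_drop > drop_cut) else "photo"
```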
Research results and discussion
To estimate the processing time of scanned document images using the hierarchical approach to image segmentation, images of scanned articles and journals from the MediaTeam Oulu document database [14] were used.
Experimental studies were conducted using a computer that has the following technical and system characteristics: Intel Core i5-3210 processor, 2.5 GHz CPU, 6 GB RAM, 64-bit Windows 7 operating system. 92 test images of scanned documents were selected that contained the main text, headlines, photos and graphics. Images of the used document base are 3200 × 2300 pixels in size and scanned with a resolution of 300 dpi, which gives a rather high quality of the scanned document. Examples of analyzed images are presented in Fig. 2.
The results of the experiment were compared with those of [2,3,5], in which the studies were performed using a 2.4 GHz dual-core processor and a 64-bit Windows 7 operating system. The technical and system characteristics of this study and the analyzed works are comparable; therefore, the results of these works can be included in the comparison.
The processing times obtained with the hierarchical approach to segmentation and with known approaches are given in Table 1.
The results presented in Table 1 lead to the conclusion that the proposed hierarchical approach to the segmentation of scanned document images has significant advantages in speed.
Further studies can aim at increasing speed by introducing parallel processing of homogeneous areas within one level of the structure.
Conclusions
In this article it is proposed to use a hierarchical approach to the segmentation of scanned document images. This approach allows homogeneous regions of an image to be represented using a tree structure in which the regions of the lower level correlate strictly with a specific region of the upper-level image. According to this approach, the areas of illustrations and the areas containing text and background are objects of one level of the structure, while the photo, graphics, text, and background areas are presented at another level. This makes it possible to perform processing separately for each image area and to significantly reduce processing time at each level.
Acknowledgments
The author expresses gratitude and deep appreciation for the valuable and constructive advice and comments during the work on this research to colleagues from the Department of Applied Mathematics and Information Technologies of the Odessa National Polytechnic University (Ukraine): Doctor of Technical Sciences, Associate Professor M. Poliakova; Doctor of Technical Sciences, Prof. V. Krylov.
| 2,982 | 2019-06-30T00:00:00.000 | [ "Computer Science" ] |
A Brief Overview of Key Issues in Second Language Writing Teaching and Research
This paper briefly reviews the literature on writing skill in a second language. It commences with a discussion of the importance of writing and its special characteristics. Then, it gives a brief account of the reasons for the weakness of students' writing skill, and addresses some of the most important topics in L2 writing studies, ranging from disciplinary to interdisciplinary to metadisciplinary fields of inquiry. In addition, it presents a historical sketch of L2 writing studies, covering approaches to teaching writing, including the behavioristic and contrastive rhetoric approaches, as well as approaches to the study of writing, including the product-oriented, process-oriented, and post-process ones. It also introduces different types of feedback in writing, consisting of peer feedback, conferences as feedback, teachers' comments as feedback, and self-monitoring. Finally, it deals with the holistic vs. analytic dichotomy in the administration of writing assessment.
IMPORTANCE OF WRITING SKILL
Writing is an important communication skill and has an essential role in the second language learning process (Chastain, 1988, as cited in Simin & Tavangar, 2009). This language skill is assumed to be of great importance to academic success, since it is the commonest assessment measure by which academics evaluate their students, and students' weak writing ability may put their academic success considerably at risk (Tan, 2011). Therefore, most students, more and less proficient alike, see writing as a difficult task that they have to struggle with in order to pass their exams (Yavuz & Genc, 1998, as cited in Yavuz-Erkan & İflazoğlu-Saban, 2011). In addition, due to its active and productive nature, writing in a foreign language is really challenging for students. As Celce-Murcia (1991, as cited in Yavuz-Erkan & İflazoğlu-Saban, 2011) puts it, accurate and coherent expression of ideas in written form in a foreign or second language is a great accomplishment. Hence, for foreign language learners, writing is an intricate activity that necessitates a confident command of writing conventions, linguistic knowledge, grammar, and vocabulary, and requires thinking strategies that let language learners express themselves proficiently in the other language (Yavuz-Erkan & İflazoğlu-Saban, 2011). However, in spite of all the stress laid on writing instruction, learners' writing remains a regular complaint in both first and second language educational environments (Tan, 2011).
CHARACTERISTICS OF WRITING
It seems that a comparison between writing and speech, as two productive language skills, would be beneficial in elucidating the nature of writing. Accordingly, while acknowledging the close relationship between writing and speech, Mullany and Stockwell (2010) noted significant differences between them regarding vocabulary choice, fluency, clause length and complexity, address forms, and so forth. They presented several distinctions between speech and writing. First, writing is more monologic than speech. Second, the basic unit of writing is the clause, while for speech the utterance or turn acts as the building block. Third, writing is more planned, whereas speech is more spontaneous. Fourth, writing tends to have more standardized and socially accepted forms, while speech is more open to variation in accents. Fifth, writing is more likely to be displaced in time and space (free from context), whereas speech tends to be face-to-face. Finally, writing is visual and relies more on shape and structure, while speech is aural (Mullany & Stockwell, 2010, pp. 84-85).
Clearly, writing is a cognitive activity; thus, teachers can have a prominent role in assisting students with improving their writing skills by making them aware of the significance of good writing skills for successful progress in their careers. They can offer and organize efficient courses in writing that will empower students to acquire skills and knowledge in helpful writing strategies (Gupta & Woldemariam, 2011).
However, since writing is an emotional activity as well as a cognitive one, all aspects of the writing process are affected by affective factors (McLeod, 1987, as cited in Yavuz-Erkan & İflazoğlu-Saban, 2011). Some researchers have already investigated the affective variables in writing. For example, Yavuz-Erkan and İflazoğlu-Saban (2011) studied the writing performance of Turkish tertiary-level EFL learners relative to some affective components. As a result, they proposed that learners' beliefs about their writing abilities, their level of apprehension about writing, and their attitudes towards writing are significant indexes of academic writing performance in EFL students. In another study, Gupta and Woldemariam (2011) examined how motivation and attitudes affect the use of writing strategies by undergraduate students in an EFL setting. They suggested that instrumental motivation towards writing was one of the main factors in developing the writing skills of these students. More specifically, their study revealed that high motivation leads to a high level of confidence, enjoyment, perceived ability, a positive attitude towards efficient methods of teaching writing, and more frequent use of writing strategies.
In addition, one more characteristic of writing, which entails pedagogical implications for language teachers, is that the writing activity requires writers to progress recursively through prewriting, writing, evaluating, and revising stages (Stapa & Abdul-Majid, 2009). To emphasize the importance of the prewriting stage, Thompkins (1990, as cited in Stapa & Abdul-Majid, 2009) pointed out that seventy percent of writing time needs to be spent on prewriting. Stapa and Abdul-Majid (2009) investigated the use of the first language in second language teaching and found a significant improvement in the writing of students who used their first language to generate ideas before writing in the second language. Accordingly, they suggested the use of L1 by writing teachers, particularly for generating ideas among low-level ESL students.
REASONS FOR THE WEAKNESS OF STUDENTS' WRITING SKILL
Several factors can account for students' poor writing ability. Some of these reasons are as follows. The first is a reductionist approach to writing, which disregards the integration of writing with other language skills and gives way to a more teacher-centered approach; it thus overemphasizes correcting surface errors in writing and robs students of the chance to select their own favorite writing topics (Clippard, 1998, as cited in Tan, 2011). The second is writing apprehension, or fear of writing, which may stem from the product approach that focuses only on the product of writing without considering its process (Stapa & Abdul-Majid, 2009). The unproductive lecture method of teaching writing can be seen as the third reason (Tan, 2011). The fourth, which is particularly noticeable in EFL/ESL settings, is attributable to the large size of writing classes (Warschauer and Ware, 2006, as cited in Tan, 2011). The last, but not least, reason is the disintegration of print culture with the onset of TV, radio, songs, video games, multimedia, computers, and movies (McLuhan and Fiore, 1967, as cited in Tan, 2011).
TOPICS OF L2 WRITING RESEARCH
The study of second language writing has been extensively recognized as having developed from a disciplinary to an interdisciplinary field of inquiry (over about sixty years), and more recently to a metadisciplinary field of inquiry in applied linguistics and second language research (Fujieda, 2006; Matsuda et al., 2003). The aim of disciplinary inquiry in second language writing is literacy acquisition and instruction, over and above the construction of knowledge about the nature of second language writers and writing (both processes and texts) (Matsuda et al., 2003). On the other hand, due to the interdisciplinary nature of this field, its position is frequently expressed in relation to various other fields, such as foreign language studies, composition studies, bilingual education, Teaching English to Speakers of Other Languages (TESOL), and applied linguistics. A few studies have also investigated the correlation between second language writing and TESOL in general (Matsuda et al., 2003). In addition, during the 1990s a new field of inquiry, named metadisciplinary inquiry, emerged, which is defined as "inquiry into the nature and the historical development of a field of inquiry as well as its philosophical and methodological orientations" (Matsuda, 1998, as cited in Matsuda et al., 2003). Metadisciplinary inquiry goes one stage backward and studies how disciplinary inquiry functions; that is, it highlights queries like "who we are, what we do, and how we do what we do" (Matsuda et al., 2003). Metadisciplinary inquiry in the field of second language writing has taken various forms, one of which is the definition of the field, that is, its characteristics, scope, and status (Matsuda et al., 2003).
Speaking more specifically of the topics of writing studies, although the main focus of foreign language writing studies has been predominantly on syntax, some researchers have investigated other issues, such as the pragmatics of metadiscourse. For instance, Simin and Tavangar (2009) found a positive correlation between proficiency in a foreign language and the use of metadiscourse markers. They also pointed out the facilitative effect of instruction in this regard.
Furthermore, the topics of second language writing studies encompass diverse issues including literacy development, L2 writing theories, reading-writing connections, ideology and politics, text interactions, research methodology, curriculum design, writing assessment, technology-assisted writing, material design, and so on (Fujieda, 2006). In addition, L2 writing researchers need to consider other factors, such as the critical effect of social, cultural, and educational aspects on second language writing investigations (Fujieda, 2006).
As a concise summary, Archibald and Jeffery (2000) categorized contemporary writing studies into four major domains: the process of writing, the product of writing, the context of writing, and the teaching of writing. The process of writing generally deals with the analysis of composing strategies, the modeling of cognitive operations, changes in processes over time, and individual differences. The product of writing, on the other hand, involves error analysis and contrastive analysis, contrastive rhetoric, and text analysis. The context of writing typically investigates social construction, analysis of the individual's knowledge, genre analysis, needs, and motivation. The teaching of writing, as the last but not least area of research into writing, investigates learning strategies, learning processes, classroom procedures, development of language proficiency, and assessment. However, these areas of research should not be seen as mutually exclusive but as related and complementary bodies of research shaping a comprehensive theory of writing in both L1 and L2 (Archibald & Jeffery, 2000).
The growing field of L2 writing still develops pedagogically and theoretically, ranging across practical, pedagogical, methodological, and theoretical perspectives as well as literacy education (Fujieda, 2006). In fact, various changes (technological, disciplinary, and demographic) and L2 writing researchers' attempts to react to those changes drive the changing intellectual currents in the field of L2 writing (Matsuda et al., 2003).
HISTORICAL OVERVIEW OF THE SECOND LANGUAGE WRITING STUDIES
A historical overview of L2 writing yields fruitful knowledge of this field of inquiry, develops important insights, and reaffirms its identity as a legitimate field of intellectual study. Hence, the current section gives a chronological account of the prevalent approaches to the teaching of writing as well as to the study of writing.
Prevalent Approaches to Teaching Writing
Since writing has always been seen as a daunting task, even for learners who are adept at other language skills, researchers have long looked for appealing and practical ways to improve students' writing skill. Consequently, over the decades, approaches to teaching writing have gone through a myriad of changes (Yavuz-Erkan & İflazoğlu-Saban, 2011). In this respect, the current section reflects on the way in which the discipline of second language writing has developed, examines empirical L2 writing investigations chronologically, and specifically elaborates on two prevalent approaches to teaching writing, namely the behaviorist and contrastive rhetoric approaches (Fujieda, 2006).
Behaviorist approach
During the 1950s, only a few studies investigated L2 writing. Teaching English as a second language in North America was almost restricted to Spanish-speaking learners, while teaching English as a foreign language was not considered seriously as an important issue in this period. Oral rather than written proficiency was given disproportionate emphasis by the prevalent teaching method of the 1950s (Fujieda, 2006).
In the 1960s, along with the entrance of a substantial body of foreign students aspiring to study in higher education in the U.S., first language writing teachers noticed considerable divergences in writing between first and second language learners. Perceiving these dissimilarities resulted in a controversial division of writing pedagogy into L1 and L2 issues, which in turn led to the establishment of the "disciplinary division of labor" between L2 studies and composition (Matsuda et al., 2003). Then, compensating for the past neglect of teaching writing in English to ESL learners, ESL writing became a noteworthy subfield of L2 research (Fujieda, 2006). During this decade, writing instruction for L2 learners, following a behaviorist approach, focused on L2 structure via prescriptive controlled practice, as in the Audiolingual Method, which was the dominant mode of instruction during the sixties (Btoosh & Taweel, 2011). As a result, writing was restricted to drills such as fill-ins, substitutions, transformations, and completions. It was used to reinforce or test the correct application of grammatical rules. This kind of focus neglected the enormous complexity of writing (Derakhshan, 1996).
Contrastive rhetoric approach
Afterwards, writing instructors, moving beyond the structural exercises of paragraphs, recognized the necessity of adopting progressive practices of writing above the sentence level. Such a practical application of syntactic structure to paragraph formation resulted in the appearance of Contrastive Rhetoric (CR), introduced by Kaplan (1966), which stresses the fundamental notion of cultural differences and variations, which are, in turn, reflected in students' writing. This approach also concentrates on the expectations of readers outside the classroom. The reader is considered as the representative of a large discourse community, and rhetorical forms, rather than grammatical forms, are regarded as paradigms (Derakhshan, 1996). Kaplan (1966), after investigating 700 L2 students' compositions, claimed that native language and cultural influence cause idiosyncratic rhetorical patterns in ESL writing. He proposed a diagram of five distinct linguistic groups (English, Oriental, Semitic, Russian, and Romance) that he called "cultural thought patterns." Based on his contrastive rhetoric research, English-speaking writers utilized a linear structure with specific details to maintain the theme. In contrast to English speakers, he reported distinctive rhetorical models for other languages. Arabic writers, in their compositions, drew on many more coordinating words than English learners. French and Spanish learners digressed from the theme with unrelated descriptions. Asian students demonstrated an illogical structure, encircling the topic. Empirical investigations beyond CR characteristics (CR in addition to syntactic explorations) were carried out across different languages after Kaplan's contrastive rhetoric research (Fujieda, 2006).
As an example, Kobayashi (1985, as cited in Fujieda, 2006) investigated the variations in writing organization between English and Japanese students. Her study demonstrated that English-speaking students employed general-specific patterns: they cited a general statement at the beginning and went on with details, while Japanese writers utilized specific-general patterns in which they initially mentioned specifics leading to a general description. In addition, Hinds (1983, 1984, as cited in Fujieda, 2006) studied the argumentative writing structures of English and Japanese. The English writers, as in Kaplan's rhetorical model, employed the linear and deductive pattern, while the Japanese students used the Japanese rhetorical model.
Although, according to Scollon (1997), contrastive rhetoric originated from earlier studies on error analysis and on the weak version of the Sapir-Whorf Hypothesis, which postulates a close interrelationship between language and culture, Ying (2000, as cited in Btoosh & Taweel, 2011) argued that Hymes' (1962) ethnography of communication approach should be regarded as an influential chronological predecessor of contrastive rhetoric. Hymes (1966, as cited in Scollon, 1997) suggested that the Whorfian hypothesis ought to be tested again, and claimed that before being able to make any statement about the interrelationship between culture and language, we should first have examined the function of culture and language in a person's life. In order to argue that language learners write with a particular structure because of some other language they speak or, more indirectly, because of a particular culture of which they are members, we should first have developed a clearer portrait of how that language, culture, and group are assimilated (Scollon, 1997).
Although Kaplan's (1966) pioneering study on CR had a strong influence on second language writing research, his investigation caused much debate too. Several specialists criticized the deterministic rhetorical model for perpetuating a negative bias towards the writing patterns of L2 learners (e.g. Zamel, 1997; Kubota, 1997, 1999; Leki, 1991; Ferris and Hedgcock, 2005, as cited in Fujieda, 2006), and for "privileging the writing of native English speakers, as well as for denigrating linguistic and cultural differences in writing among different languages" (Connor, 1996). In other words, Kaplan's diagram overgeneralized L2 writing characteristics. After Kaplan's study, the subject of contrastive rhetoric has continued to spark theoretical and educational debate from different methodological perspectives. Specifically, several composition investigators argued for a critical writing pedagogy to adjust L2 learners to the target discourse community, even as applying contrastive rhetoric research to EFL/ESL writing classrooms drew remarkable criticism from writing researchers (Ramanathan & Atkinson, 1998; Kubota, 1999; Connor, 2001, as cited in Fujieda, 2006). Indeed, Ramanathan and Atkinson (1998, as cited in Fujieda, 2006) warned that overtly teaching EFL/ESL students to follow the English rhetorical pattern in their writing could cause an ideological problem and disparage the values of the students' social as well as cultural identity.
Contrastive rhetoric analysis had a serious impact on second language writing studies. It characterized the nature of L2 writers' texts, and underlined the influence of the writers' cultural contexts on the texts, including their grammatical and lexical characteristics (Fujieda, 2006). Contemporary studies on contrastive rhetoric are being redefined with the innovative prospects of contrastive rhetoric pedagogy. In a more recent definition, Connor (1996) describes contrastive rhetoric as "an area of research in second language acquisition that identifies problems in composition faced by second language writers, and, by referring to the rhetorical strategies of the first language, attempts to explain them" (p. 5). She expounds on how other fields have benefited from and contributed to contrastive rhetoric. Three main viewpoints on the writing process derive from rhetoric and composition theory: the expressionist, cognitive, and social constructivist views. These approaches offer the structure for analyzing the product and process of writing in a second language. According to Connor (1996), contrastive rhetoric has drawn heavily on the methodologies of text linguistics in analyzing such text attributes as coherence, narrative structure, or morphosyntactic features. Contrastive rhetoric research informed by text linguistics has illuminated dissimilarities between first and second language texts, as well as among texts of various genres. Connor (2002, as cited in Btoosh & Taweel, 2011) classified the studies on contrastive rhetoric over the previous three decades into four major categories: studies of writing as a cultural and educational activity, contrastive text linguistic studies, genre-specific investigations, and classroom-based contrastive studies. Regarding the applications of contrastive rhetoric for research and pedagogy, she views contrastive rhetoric as improving learners' writing proficiency as well as enriching their culture, especially for those in EFL contexts. In her view, "contrastive rhetoric is an excellent resource for advanced- or college-level ESL/EFL writing teachers, both for gaining understanding in culturally different writing patterns and for designing writing programs in light of genre, cultural, or rhetorical concerns" (Connor, 1996). As for future implications, Connor (2001, as cited in Fujieda, 2006) states that "future contrastive rhetoric research should be sensitive to the view that writers be seen not as belonging to separate, identifiable cultural groups but as individuals in groups that are undergoing continuous change" (p. 76).
In conclusion, although the behaviorist and contrastive rhetoric approaches to teaching writing each emerged chronologically as a critical reaction to its predecessor, they are not mutually exclusive. Currently these approaches are all widely applied and have somewhat made teaching writing a demanding task, since, as Raimes (1991, as cited in Derakhshan, 1996) puts it, today teachers have to take account of several different approaches, "their underlying assumptions, and the practice that each philosophy generates" (p. 412).
Approaches to the Study of Writing
Traditionally, there are two prevalent approaches to the study of writing: the product-oriented approach and the process-oriented approach (Arefi, 1997). According to Blackmore-Squires (2010), the main distinction between the two is that the process approach deals with the way through which the writer reaches the finished product, whereas the product approach deals with the final product and its evaluation. The two approaches were developed concurrently as reactions to each other. It is also worth mentioning that even the process approach pays attention to the product, or the final piece of work; in this approach, however, more emphasis is laid on how to get there and on skills development along the way (Blackmore-Squires, 2010). More recently, a new stage of development in L2 writing, namely the post-process stage, has commenced, which pays more attention to the social dimensions of the field (Fujieda, 2006) and has consequently dramatically increased the popularity of collaborative writing (Godwin-Jones, 2018).
Study of the written product, according to Arefi (1997), has focused on language and linguistic structures in students' writing (e.g. Hunt, 1977, using the T-unit; Mehan, 1979, using syntactic complexity; Vann, 1980, using syntactic maturity) as well as on more general characteristics such as communicative effectiveness and other traits proposed by rhetorical theories (e.g. Carlisle, 1986; Lloyd-Jones, 1977). Examining the product of writing, Vann (1980, as cited in Arefi, 1997) suggested syntactic length as a measure of writing proficiency. In another study, Cooper (1983, as cited in Arefi, 1997) suggested that in good writing the sentences need to be related to each other. This connection is significant since it "provides the structural and semantic relations between words across sentences, from the link between specific words across sentences to abstract, global thematic and structural patterns" (Arefi, 1997).
In the writing process, according to Flower and Hayes (1981), three elements interact with each other to produce a composition. First, there is the task environment, which consists of everything outside the writer's body, such as the audience and the written text itself. Next, there is the writer's knowledge of the topic as well as of rhetorical problems, plans, genre, and conventions, which the writer can retrieve from memory during the composing process. The final element is the set of writing processes, that is, the major thinking processes that writers employ in complicated ways during their writing.
Considering this final element, and based on protocol analysis¹, Hayes and Flower (1980) put forward a process approach theory to the study of the writing process in the early 1980s. According to this theory, the writer's long-term memory and the task affect writing. Process approach theory has three phases: planning, translating, and reviewing. Planning is producing content, organizing it, and determining goals and procedures for writing. Prior to the beginning of writing, the writer requires a form of mental activity, and therefore planning acts as a thinking activity. Translating is the expression of the content of planning in a composition. Hereby, a writer attempts to produce and develop his or her ideas in meaningful statements. When translating, a writer sometimes has to go back to planning in order to continue translating. Reviewing refers to the evaluation of what has been written and planned. Sometimes, due to an unsatisfactory result of this evaluation, the writer needs to revise the composition (Arefi, 1997). The process-oriented approach, according to Connor (1987), emphasizes the processes of writing; instructs strategies for discovery and invention; takes account of purpose, audience, and the context of writing; stresses recursiveness in the writing process (involving the writers reading back in order to write forward); and differentiates goals and modes of discourse (e.g. expository, expressive, persuasive, classification, narration, evaluation, and description). It is also worth noting that, more recently, Johari (2018) showed that the amalgamation of process writing and a task-based approach significantly improves second language learners' academic writing.
However, Reid (2001, as cited in Fujieda, 2006) considers the controversy between the process and product approaches in ESL/EFL contexts "a false dichotomy," and states that many L2 writers were directed by "process writing strategies to achieve effective written communication (product), with differences occurring in emphasis" (p. 29).
Moreover, although process approach theory has been a commonly acknowledged model of the writing process, it has drawn criticism since it is based on studies with L1 writers, and consequently input from the long-term memory of the L2 writer is not a main concern within the process (Blackmore-Squires, 2010).
In the early nineties, writing scholars noticed the essential difference between the product and process approaches. Process-oriented research gave rise to institutional concerns, stressing specific aims such as EAP (English for Academic Purposes) and ESP (English for Specific Purposes), which value the audience in writing rather than the writer (Fujieda, 2006). After 2000, a line of inquiry into more social issues, the post-process, emerged in the L2 writing context. Atkinson (2003, as cited in Fujieda, 2006) proposed that the earlier view, which treated students' writing purely as a cognitive process, ought to be set aside. Subsequently, he mentioned that in the post-process "we seek to highlight the rich, multifocal nature of the field," and "go beyond now-traditional views of L2 writing research and teaching." In summary, the traditional product-oriented approach focused on expository writing, made style the most significant component in writing, and postulated that the writing process is linear, specified by writers before they begin to write (Kaplan, 1966). The process-oriented approach, on the other hand, emphasizes the activity (process) of writing, investigates the individual writer's approaches to writing, and highlights the way in which students manage to follow the process through writing.
Today, process inquiry in L2 writing has entered the "post-process era," which brings more social aspects to bear on writers and moves beyond cognitive science to exceed the prevalent perspectives in L2 writing studies and teaching (Fujieda, 2006). In other words, the post-process approach considers writing as a collaborative and social act rather than a certain technique that can be codified and taught to individual learners (Kalan, 2014). More specifically, written texts need to be viewed as products of a complicated web of social interactions, cultural practices, discursive conventions, and power differentials (Howard, 2001; Casanave, 2003; Atkinson, 2003, as cited in Kalan, 2014). Moreover, Kalan (2014), reviewing the relevant literature on post-process theory since 1990, presents seven focal arguments in a more comprehensive definition, as follows:
1. Writing cannot be reduced to a single codified process to be taught.
2. Essayist literacy and the rhetoric of assertion should be challenged in order to broaden genre possibilities.
3. Writing should liberate students' agencies.
4. Writing is not an individual activity taught through a simple classroom pedagogy.
5. Teachers need to move beyond the classroom as the only rhetorical situation and beyond their role as the possessor of the technê² of writing.
6. Written texts should be regarded as products of a complicated web of cultural practices, social interactions, power differentials, and discursive conventions.
7. Teaching writing is basically teaching rhetorical sensitivity and hermeneutic guessing through a large number of literate activities.
Most definitions of post-process theory stipulate that post-process does not try to reject process theory but tries to broaden its prospects through critical re-readings of it (Couture, 1999, p. 31; Atkinson, 2003; Foster, 1999, p. 149, as cited in Kalan, 2014), and regard it as the recognition of the multiplicity of second language writing pedagogies and theories (Matsuda, 2003).
Although a few studies (e.g. Chow, 2007; Ahn, 2002, as cited in Fazilatfar et al., 2016) showed an enhancing effect of the post-process, or genre, approach on learners' overall proficiency, awareness of conceptual writing strategies, willingness to apply the strategies, and attitudes toward writing activities, Hashemnezhad and Hashemnezhad (2012), investigating EFL students' writing ability under the three writing approaches of product, process, and post-process, concluded that the post-process approach did not have any significant primacy over the process approach, but that both showed substantial superiority over the product approach. More recently, however, Rusinovci (2015) proposed an integrated approach to the teaching of writing that combines the strengths of the genre and process approaches for use in writing courses. He claims such an approach presents several benefits, including a more focused use of text models without having to exclude components of other approaches.

Furthermore, regarding writing evaluation in the post-process approach, Sukandi (2016) claims that "the quality of the students writings needs to be seen from the process itself and how the students come to the understanding that writing is a social act and a medium of individual expression over academic realm." Investigating self-evaluation essays as post-process pedagogy in teaching writing, he concludes that students are inclined to see the class in different ways: they first see the teacher as what they observe all the time, and then the process of what they learn becomes the aspect they see, which can be taken as further confirmation of the social nature of writing evaluation. It is also worth mentioning that, with the advent of new technological advances in writing pedagogy, Herron (2017) studied the integration of mobile devices into the composition classroom in post(e)-pedagogy³, which he believes brings new opportunities and frameworks for analysis and creation in composition studies. He states that a key element of post(e)-pedagogy is the relinquishing of mastery and a personalized approach to pedagogy for both the teacher and the student, which agrees well with the notion of mobile composition and puts mastery in students' hands. Finally, it should be noted that L2 writing is still a developing field in applied linguistics and second language studies.
FEEDBACK IN WRITING
Feedback is an important and essential issue in writing courses, especially in the process approach, since this approach recommends intervention at all stages of writing (McDonough & Shaw, 2003, p. 166). According to Keh (1990), feedback is defined as "input from a reader to a writer with the effect of providing information to the writer for revision" (p. 294). He classified it into three major types: peer feedback, conferences as feedback, and teachers' comments as feedback.
In peer feedback, students, generally using questions provided by their teacher, examine their peers' written work and give feedback to each other. According to Bartels (2003), the terms "peer review, peer editing, peer evaluation, peer response, and peer critiquing" are used interchangeably for this type of feedback. Byrd (2003) also introduced different techniques of peer feedback, including "classic peer editing, booklet editing, silent editing, slice and dice, post teacher check, colored pencils/highlighters, computer editing, and reader-response editing." Conferences as feedback are divided into student-teacher conferences and group conferences. The former, according to White and Arndt (1991), are carried out on a face-to-face basis, where teachers are able to discuss problems with their students. In the latter, according to Keh (1990), some groups read aloud sections of their own compositions for feedback, while other students read aloud their partner's composition, commenting on where they think the composition goes wrong and suggesting how to improve it.
As Keh (1990) puts it, in teachers' comments as feedback, teachers may take different roles. First, they can comment as readers interacting with a student, i.e., responding to the composition with phrases such as "good point" or "I agree". Second, still holding the role of a reader, they can draw the student's attention to specific points of confusion or to strategies for revision, such as choices for problem solving, options, or an example. Finally, adopting the role of a grammarian, they can point out grammatical mistakes along with the reasons those forms are unsuitable. Sheppard (1992), investigating the efficacy of two kinds of response to students' compositions, "holistic feedback on meaning" and "discrete-item attention to form", showed the superiority of the former in increasing students' awareness of sentence boundaries and grammatical accuracy.
It is also worth noting that all three types of feedback can be provided through three different methods: minimal marking, taped commentary/audio-taped feedback, and online peer review/computer-based commentary (Hyland, 1990; Swaffer et al., 1998).
Finally, self-monitoring is another technique for providing feedback, one that is also useful in developing learners' autonomy. Through this technique, learners make notes in their drafts with questions and comments on their problems before submitting their papers, and teachers then give feedback on them (Charles, 1990).
WRITING ASSESSMENT
There are two types of writing assessment administration: analytic and holistic (impressionistic). In analytic scoring, rather than assigning a single score, raters assess compositions based on several criteria or writing features. Accordingly, "writing samples may be rated on such features as content, organization, cohesion, register, vocabulary, grammar, or mechanics" (Weigle, 2002, p. 114). In contrast, on a holistic scale, a single mark is given to the whole essay. The underlying assumption is that, in the holistic approach, raters will draw on a set of marking scales to direct them in scoring the compositions (Weigle, 2002, p. 72).
The analytic approach to L2 writing has provided a great deal of material for writing studies. The virtues of analytic rating stem from the detailed direction that is presented to the writing judges and the rich criteria-based information that is supplied on particular strengths and weaknesses of the test-takers' performance (Chuang, 2009). Such information generates valuable diagnostic input about testees' writing skills, which is the major advantage of the analytic approach (Vaughan, 1991; Gamaroff, 2000, as cited in Aryadoust, 2010). Consequently, researchers have long constructed analytic writing descriptors, each including several criteria to measure subskills such as vocabulary, grammar, content, and organization. Weir's (1990) long list with seven subcategories is one of the most comprehensive schemes for gauging writing sub-skills, and an instance of a shorter and probably more practical list is Astika's (1993) three proposed rating benchmarks. Another example is the present rating scale in the IELTS writing test, which is founded on a recent exposition of writing assessment and performance (Shaw & Falvey, 2008). Aryadoust (2010) reports examples of crafting skills used in analytic assessment of writing: overall effectiveness, fluency, intelligibility, comprehension, resources, and appropriateness, which affected writing performance the most (McNamara, 1996); relevance and adequacy of content, compositional organization, cohesion, adequacy of vocabulary, grammar, punctuation, and spelling (Weir, 1990); control over structure, organization of materials, vocabulary use, and writing quantity (Mullen, 1977); sentence structure, vocabulary, and grammar (Daiker, Kerek, & Morenberg, 1978); and content, language use, organization of ideas, lexis, and mechanics (spelling and punctuation) (Jacobs et al., 1981). The effectiveness of some of these frameworks has been validated; for instance, Brown and Bailey (1984) examined Jacobs et al.'s (1981) framework and found that drawing on an analytic framework of grammar, organization, style, logical development of ideas, and mechanics of writing is a significantly justifiable scheme for evaluating writing performance.
The holistic approach toward writing and its assessment has also been studied extensively. A holistic evaluation considers an overall impression of the writer's performance. In other words, the raters react to the written production as a whole, and one mark is assigned to a composition. As Chuang (2009) puts it, "the holistic scales are more practical for decision-making since the raters only marked one score: the flexibility allowed many different combinations of strengths and weakness within a level." From a judge's point of view, "holistic rating scales make [scoring easier and quicker] because there is less to read and remember than in a complex grid with many criteria" (Luoma, 2004, as cited in Chuang, 2009). It is postulated that a high portion of the variability in holistic writing scores results from four subclasses of grammar competence, namely sentential connectors, length, errors, and relativization/subordination (Homburg, 1984, as cited in Aryadoust, 2010). Several researchers, especially those studying high-stakes tests, are in favor of the holistic approach. For instance, among IELTS writing investigators, Mickan and Slater (2003) proposed that a more holistic approach to assessing writing would be more sensible than a very meticulous, analytic approach, and claimed that "Highlighting vocabulary and sentence structure attracts separate attention to discrete elements of a text rather than to the discourse as a whole" (p. 86). Consequently, they suggested a more impressionistic approach to assessing writing rather than the analytic one. In another study, Obeid (2017), investigating the attitudes and perceptions of EFL learners toward second language writing evaluation, reports the students' tendency toward a more holistic approach. However, it is also worth noting that some recent research into writing assumes that, due to similarities between writing sub-skills, composite sub-skills in which two or more categories are accommodated in a single rubric are possible (Aryadoust, 2010).
Since the focus of L2 writing has shifted during the last few decades from linguistic accuracy to the communication of ideas, writing assessment, whether with analytic or holistic scales, pays more attention to content (Ruegg & Sugiyama, 2010). The quantity of main ideas, the logical connection between the main ideas and the thesis statement, the use of examples to support the main ideas, and the level of development of the main ideas are among the factors most commonly drawn on by raters in evaluating the content of writing (Ruegg & Sugiyama, 2010). However, in a recent study, Ruegg and Sugiyama (2010) identified the organization score in first place, and essay length in second place, as the main predictors of the content score.
In conclusion, as Chuang (2009) mentioned, it is the purposes of the assessment and the availability of existing instruments that determine the appropriateness of a rating scale. Rating schemes may describe different degrees of competence along a scale or may indicate the presence or absence of a trait. Furthermore, choosing testing procedures should entail finding the best feasible combination of the qualities (validity, reliability, etc.) and determining which qualities are most pertinent in a specified context (Weigle, 2002, as cited in Chuang, 2009).
CONCLUSION
The prominence of writing ability and its serious position in representing students' degree of learning are indisputable in second language teaching and research. In effect, writing is considered a difficult task even for native speakers, though it is much more intimidating for non-natives, especially EFL learners. The special characteristics of writing which give it such importance, as well as some of the factors making writing one of the most difficult language skills to learn, have been enumerated above. Generally, writing research topics in second language studies range from disciplinary to interdisciplinary and finally to metadisciplinary fields of inquiry. In addition, while the behavioristic and contrastive rhetoric approaches are considered the two main approaches to teaching writing, the product-oriented, process-oriented, and post-process approaches can be named as the prevalent approaches to the study of writing. Finally, the administration of writing assessment is categorized into analytic and impressionistic (holistic) approaches, which have long been drawn on by language teachers and researchers.
END NOTES
1. Hayes and Flower, in Gregg and Steinberg, define a protocol as "a description of the activities, ordered in time, that a subject engages in whilst performing a task" (1980, p. 4).
2. "Technê is conceived as techniques for situating bodies in contexts" (Hawk, 2004).
3. Postpedagogy is a more general discussion of the need to abandon prescriptive pedagogy, while post(e)-pedagogy is Ulmer's discussion of such matters with a focus on the electronic (Herron, 2017).
| 8,772.4 | 2018-04-30T00:00:00.000 | [ "Linguistics", "Education" ] |
Douglas-Rachford Algorithm for Control- and State-constrained Optimal Control Problems
We consider the application of the Douglas-Rachford (DR) algorithm to solve linear-quadratic (LQ) control problems with box constraints on the state and control variables. We split the constraints of the optimal control problem into two sets: one involving the ODE with boundary conditions, which is affine, and the other a box. We rewrite the LQ control problems as the minimization of the sum of two convex functions. We find the proximal mappings of these functions which we then employ for the projections in the DR iterations. We propose a numerical algorithm for computing the projection onto the affine set. We present a conjecture for finding the costates and the state constraint multipliers of the optimal control problem, which can in turn be used in verifying the optimality conditions. We carry out numerical experiments with two constrained optimal control problems to illustrate the working and the efficiency of the DR algorithm compared to the traditional approach of direct discretization.
Introduction
Many of the problems we encounter in the world that can be presented as optimal control problems contain constraints on both the state and control variables. Imagine a scheduling problem where the aim is to maximize profits, such as in [26], in which the control variable is the production rate and the state variable is the inventory level. Naively, one may want to greatly increase both the inventory level and the production rate to maximize profits, but in practice there is a limit to the amount of inventory a factory can hold and a limit on how quickly humans or machines can work. In order to accurately model this problem, and many others from a wide range of application areas such as manufacturing, engineering, science, economics, etc., we must use an optimal control problem with both state and control constraints. Although these problems are commonly found in applications, they are much more difficult to solve than optimal control problems which have purely control constraints.
We focus our attention on linear-quadratic (LQ) state- and control-constrained optimal control problems. These are infinite-dimensional optimization problems that involve the minimization of a quadratic objective function subject to linear differential equation (DE) constraints, affine constraints on the state variables, and affine constraints on the control variables. There is extensive literature studying LQ control problems as they model many problems from a wide variety of disciplines; see [1, 14-16, 24, 27, 28]. Though many of the LQ control problems posed in the literature contain control constraints along with the linear DE constraints, it is rarer to see state constraints included since, as mentioned above, they are much more difficult to deal with. This paper proposes a unique approach for solving state-constrained problems by applying the Douglas-Rachford (DR) algorithm.
The DR algorithm is a projection algorithm used to minimize the sum of two convex functions. In order to apply the algorithm one needs the proximal operators of the two convex functions. Splitting and projection methods such as the DR algorithm are a popular area of research in optimization with a variety of applications; see [2,3,6,9,20] for their use in sphere packing problems, protein reconstruction, etc. The use of these methods to solve discrete-time optimal control problems is not new, but there are very few applications of these methods to continuous-time optimal control problems. The paper [7] is one in which projection methods were used to solve the energy-minimizing double integrator problem with control constraints. The papers [11][12][13] address more general energy-minimizing LQ control problems with control constraints. While a collection of general projection methods is used in [11], the DR algorithm is used in [12,13].
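As a reference point, the basic DR iteration for minimizing f + g can be sketched in a few lines of Python; `prox_f` and `prox_g` are generic callables here, and the fixed parameter and stopping rule are illustrative choices rather than those used in the paper.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, tol=1e-8, max_iter=5000):
    """Generic Douglas-Rachford iteration for min f(w) + g(w).

    prox_f, prox_g: proximal mappings of the two convex functions.
    Returns the candidate solution x_k = prox_f(z_k).
    """
    z = z0.copy()
    for _ in range(max_iter):
        x = prox_f(z)               # e.g. projection onto the affine ODE set
        y = prox_g(2.0 * x - z)     # reflect, then apply the second prox
        z_new = z + (y - x)         # update of the governing sequence
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return prox_f(z)
```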
Following the promising numerical results observed in [12,13], the present paper will use the DR algorithm for addressing the more challenging control- and state-constrained LQ control problems.
The current technique for solving these control- and state-constrained LQ problems is a direct discretization approach, where the infinite-dimensional problem is reduced to a large-scale finite-dimensional problem by use of a discretization scheme, e.g. a Runge-Kutta method [5,22]. This discretized problem is then solved through the use of an optimization modelling language such as AMPL [19] paired with a (finite-dimensional) large-scale optimization software such as Ipopt [33].
In the present paper (as was done in [7,11] for control constraints only) we make the following contributions to the numerical solution of LQ control problems with state and control constraints.
The two convex functions mentioned above are defined as follows: the information from the ODE constraints appears in one function, while the information from the control and state constraints, together with the objective functional, appears in the other. We derive the proximal mappings of the two convex functions without reducing the original infinite-dimensional optimization problem to a finite-dimensional one, though we do need to discretize the state and control variables over a partition of time when implementing the DR algorithm, since a digital computer cannot iterate with functions.
The paper is structured as follows. In Section 2 we give the problem formulation and optimality conditions for the optimal control problem. In Section 3 we derive the proximal mappings used in the implementation of the DR algorithm. Section 4 introduces the DR algorithm. Then Section 5 begins with an algorithm for computing one of the proximal mappings (namely the projection onto an affine set) and a conjecture used to obtain the costate variable of the original LQ control problem. Next, this section introduces two example problems, a harmonic oscillator and a mass-spring system. At the end of this section, numerical experiments for the DR algorithm and the AMPL-Ipopt suite, and their comparisons, are given for these two problems. Finally, concluding remarks and comments on future work are provided in Section 6.
Optimal Control Problem
In this section we formulate the general optimal control problem that will be the focus of this paper. We give some necessary definitions and provide conditions for optimality from optimal control theory.
Before introducing the optimal control problem we will give some standard definitions. Unless otherwise stated, all vectors are column vectors. Let $L^2([t_0,t_f];\mathbb{R}^q)$ be the Banach space of Lebesgue measurable functions $z:[t_0,t_f]\to\mathbb{R}^q$ with finite $L^2$ norm, namely
$$\|z\|_2 := \Big(\int_{t_0}^{t_f} \|z(t)\|^2\,\mathrm{d}t\Big)^{1/2} < \infty,$$
and let $W^{1,2}([t_0,t_f];\mathbb{R}^q)$ be the Sobolev space of absolutely continuous functions, namely
$$W^{1,2}([t_0,t_f];\mathbb{R}^q) := \big\{z:[t_0,t_f]\to\mathbb{R}^q \mid z \text{ absolutely continuous, } \dot z \in L^2([t_0,t_f];\mathbb{R}^q)\big\}.$$
With these definitions we can state now the general LQ control problem as follows.
$$\text{(P)}\quad \min\ \frac{1}{2}\int_{t_0}^{t_f}\big(x(t)^T Q(t)\,x(t) + u(t)^T R(t)\,u(t)\big)\,\mathrm{d}t \quad\text{s.t.}\quad \dot x(t) = A(t)x(t) + B(t)u(t),\ x(t_0)=x_0,\ x(t_f)=x_f,\ \underline{x}\le x(t)\le\overline{x},\ \underline{u}\le u(t)\le\overline{u}.$$
The state variable $x \in W^{1,2}([t_0,t_f];\mathbb{R}^n)$ and the control variable $u \in L^2([t_0,t_f];\mathbb{R}^m)$. For every $t\in[t_0,t_f]$, the matrices $Q(t)$ and $R(t)$ are symmetric, and respectively positive semi-definite and positive definite. For clarity of arguments, these matrices are assumed to be diagonal, namely $Q(t) := \mathrm{diag}(q_1(t),\ldots,q_n(t))$ and $R(t) := \mathrm{diag}(r_1(t),\ldots,r_m(t))$. These diagonality assumptions particularly simplify the proximal mapping expressions that appear later. The initial and terminal states are given as $x_0$ and $x_f$, respectively.
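To make the later algorithmic steps concrete, the following minimal sketch (in Python, which we use for all illustrations below) sets up discretized data for a problem of the form (P); the grid size, weights, and bounds are illustrative placeholders rather than the paper's test cases.

```python
import numpy as np

# Illustrative discretization of Problem (P): n = m = 2, uniform grid.
t0, tf, N = 0.0, 1.0, 1000
t = np.linspace(t0, tf, N)
q = np.ones((2, N))           # diagonal of Q(t) >= 0 sampled on the grid
r = np.ones((2, N))           # diagonal of R(t) > 0 sampled on the grid
x_lo, x_hi = -0.025, np.inf   # state box (one-sided bounds are allowed)
u_lo, u_hi = -1.0, 1.0        # control box
x0 = np.array([0.0, 0.0])     # initial state
xf = np.array([0.5, 0.0])     # terminal state
```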
Optimality conditions
In this section we state the Maximum Principle by using the direct adjoining approach from [21]. We start by defining the extended Hamiltonian function
$$H(t,x,u,\lambda,\mu^1,\mu^2) := \frac{1}{2}\big(x^T Q(t)\,x + u^T R(t)\,u\big) + \lambda^T\big(A(t)\,x + B(t)\,u\big) + (\mu^1)^T(\overline{x} - x) + (\mu^2)^T(x - \underline{x}),$$
where the adjoint variable vector $\lambda : [t_0,t_f]\to\mathbb{R}^n$ with $\lambda(t) := (\lambda_1(t),\ldots,\lambda_n(t))\in\mathbb{R}^n$, and the state constraint multiplier vectors $\mu^1(t),\mu^2(t)\in\mathbb{R}^n$ are nonnegative. For brevity, we suppress the dependence on $t$ where no confusion arises. The adjoint variable vector is assumed to satisfy
$$\dot\lambda(t) = -H_x = -Q(t)\,x(t) - A(t)^T\lambda(t) + \mu^1(t) - \mu^2(t) \qquad (1)$$
for a.e. $t\in[t_0,t_f]$, where $H_x := \partial H/\partial x$.
Maximum Principle. Suppose the pair $(x,u)$ is optimal for Problem (P). Then there exists a piecewise continuous adjoint variable vector $\lambda \in W^{1,2}([t_0,t_f];\mathbb{R}^n)$ satisfying the adjoint equation (1). Moreover, the multipliers $\mu^1(t),\mu^2(t)$ must satisfy the complementarity conditions
$$\mu^1_i(t)\,\big(\overline{x}_i - x_i(t)\big) = 0, \quad \mu^1_i(t) \ge 0, \qquad (3)$$
$$\mu^2_i(t)\,\big(x_i(t) - \underline{x}_i\big) = 0, \quad \mu^2_i(t) \ge 0, \qquad (4)$$
for all $i = 1,\ldots,n$, and the optimal control minimizes the Hamiltonian, which yields
$$u_i(t) = \begin{cases} \underline{u}_i, & \text{if } -b_i(t)^T\lambda(t)/r_i(t) \le \underline{u}_i,\\[2pt] -b_i(t)^T\lambda(t)/r_i(t), & \text{if } \underline{u}_i \le -b_i(t)^T\lambda(t)/r_i(t) \le \overline{u}_i,\\[2pt] \overline{u}_i, & \text{if } -b_i(t)^T\lambda(t)/r_i(t) \ge \overline{u}_i, \end{cases} \qquad (8)$$
for $i = 1,\ldots,m$, where $b_i(t)$ is the $i$th column of the matrix $B(t)$ and $r_i(t)$ is the $i$th diagonal element of $R(t)$.
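The componentwise control law (8) is straightforward to evaluate once the costate is known. A minimal sketch, assuming a constant matrix B, the costate stored as an n-by-N array on the time grid, and the diagonal of R(t) stored as above (all names are ours):

```python
import numpy as np

def control_from_costate(lam, B, r, u_lo, u_hi):
    """Evaluate (8) on the grid: u = clip(-R^{-1} B^T lambda, box).

    lam : (n, N) costate samples; B : (n, m); r : (m, N) diagonal of R(t).
    """
    return np.clip(-(B.T @ lam) / r, u_lo, u_hi)
```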
Splitting and Proximal Mappings
In this section, we rewrite Problem (P) as the minimization of the sum of two convex functions f and g, and give the proximal mappings for these functions in Theorems 1-2.
We split the constraints from (P) into two sets A, B given as
$$A := \big\{(x,u) \in W^{1,2}\times L^2 \,:\, \dot x(t) = A(t)\,x(t) + B(t)\,u(t) \text{ for a.e. } t\in[t_0,t_f],\ x(t_0)=x_0,\ x(t_f)=x_f\big\},$$
$$B := \big\{(x,u) \in W^{1,2}\times L^2 \,:\, \underline{x} \le x(t) \le \overline{x},\ \underline{u} \le u(t) \le \overline{u} \text{ for a.e. } t\in[t_0,t_f]\big\}.$$
Despite previously defining $A(t)$ and $B(t)$ as the system and control matrices, we also denote the two constraint sets by A and B; the meaning will always be clear from the context. We assume that the control system $\dot x(t) = A(t)x(t) + B(t)u(t)$ is controllable; in other words, the control system can be driven from any $x_0$ to any other $x_f$ (for a precise definition of controllability and the tests for controllability, see [29]). Then there exists a (possibly not unique) $u(\cdot)$ such that, when this $u(\cdot)$ is substituted, the boundary-value problem given in the expression for A has a solution $x(\cdot)$. In other words, $A \neq \emptyset$. Also, clearly, $B \neq \emptyset$. We note that the constraint set A is an affine subspace. Given that B is a box, the constraints turn out to be two convex sets in Hilbert space. Since every sequence converging in $L^2$ has a subsequence converging pointwise almost everywhere, it is straightforward to see that the set B is closed in $L^2$. The closedness of A will be established later on as a consequence of Theorem 2 (see Remark 2).
Fix $\beta > 0$ and let
$$f(x,u) := \frac{\beta}{2}\int_{t_0}^{t_f}\big(x(t)^T Q(t)\,x(t) + u(t)^T R(t)\,u(t)\big)\,\mathrm{d}t + \iota_B(x,u), \qquad (11)$$
$$g(x,u) := \iota_A(x,u), \qquad (12)$$
where $\iota_C$ is the indicator function of the set $C$, namely
$$\iota_C(x,u) := \begin{cases} 0, & \text{if } (x,u)\in C,\\ +\infty, & \text{otherwise.} \end{cases}$$
Problem (P) is then equivalent to the following problem:
$$\min_{(x,u)\in W^{1,2}\times L^2}\ f(x,u) + g(x,u).$$
In our setting, we assume that we are able to compute the projector operators $P_A$ and $P_B$. These operators project a given point onto each of the constraint sets A and B, respectively. Recall that the proximal mapping of a functional $h$ is defined by [8, Definition 24.1]
$$\operatorname{Prox}_h(z) := \operatorname*{argmin}_{w}\ h(w) + \frac{1}{2}\,\|w - z\|_2^2. \qquad (13)$$
For our setting, for any $(x^-,u^-)$,
$$\operatorname{Prox}_h(x^-,u^-) = \operatorname*{argmin}_{(y,v)}\ h(y,v) + \frac{1}{2}\,\|y - x^-\|_2^2 + \frac{1}{2}\,\|v - u^-\|_2^2.$$
Note that $\operatorname{Prox}_{\iota_C} = P_C$. In order to implement the Douglas-Rachford algorithm we must write down the proximal mappings of $f$ and $g$. The proofs of Theorems 1 and 2 below follow the broad lines of the proof of Lemma 2 in [13]. In both theorems, the major difference from [13] is that the proximal operators in the current paper have two variables $x^-$ and $u^-$. Thanks to separability, the proof of Theorem 1 is a straightforward modification of the corresponding part of the proof of [13, Lemma 2]. We include a full proof of Theorem 1 for the convenience of the reader. On the other hand, the proof of Theorem 2 deals with the solution of a more involved optimal control subproblem, namely Problem (Pg).
Theorem 1. The proximal mapping of $f$ is given as $\operatorname{Prox}_f(x^-,u^-) = (y,v)$ such that the components of $y$ and $v$ are expressed as
$$y_i(t) = \max\Big\{\underline{x}_i,\ \min\Big\{\overline{x}_i,\ \frac{x^-_i(t)}{1+\beta\,q_i(t)}\Big\}\Big\},\quad i=1,\ldots,n, \qquad (14)$$
$$v_j(t) = \max\Big\{\underline{u}_j,\ \min\Big\{\overline{u}_j,\ \frac{u^-_j(t)}{1+\beta\,r_j(t)}\Big\}\Big\},\quad j=1,\ldots,m. \qquad (15)$$
Proof. From (13) and the definition of $f$ in (11) we have that, to find $\operatorname{Prox}_f(x^-,u^-)$, we need to find $(y,v)$ that solves
$$\text{(Pf)}\quad \min_{(y,v)\in B}\ \frac{\beta}{2}\int_{t_0}^{t_f}\big(y^T Q\,y + v^T R\,v\big)\,\mathrm{d}t + \frac{1}{2}\,\|y - x^-\|_2^2 + \frac{1}{2}\,\|v - u^-\|_2^2.$$
Problem (Pf) is separable in the variables $y$ and $v$, so we can consider the problems of minimizing w.r.t. $y$ and $v$ individually and thus solve the two subproblems
$$\text{(Pf1)}\quad y_i(t) = \operatorname*{argmin}_{\underline{x}_i \le w_i \le \overline{x}_i}\ \beta\,q_i(t)\,w_i^2 + \big(w_i - x^-_i(t)\big)^2,\quad i=1,\ldots,n,$$
$$\text{(Pf2)}\quad v_j(t) = \operatorname*{argmin}_{\underline{u}_j \le w_j \le \overline{u}_j}\ \beta\,r_j(t)\,w_j^2 + \big(w_j - u^-_j(t)\big)^2,\quad j=1,\ldots,m.$$
The solution to Problem (Pf1) is the projection of the unconstrained minimizer $x^-_i(t)/(1+\beta q_i(t))$ onto $[\underline{x}_i,\overline{x}_i]$, which, after straightforward manipulations, yields (14). The solution to Problem (Pf2) is obtained similarly, which yields (15) after straightforward manipulations.
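Formulas (14)-(15) reduce Prox_f to a componentwise scaling followed by a clip onto the box, so its cost is linear in the grid size. A minimal sketch under the same discretization conventions as before:

```python
import numpy as np

def prox_f(x_minus, u_minus, q, r, beta, x_lo, x_hi, u_lo, u_hi):
    """Componentwise proximal mapping of f, per (14)-(15).

    x_minus, u_minus : (n, N) and (m, N) arrays holding the discretized state
    and control; q, r hold the diagonals of Q(t), R(t) on the same grid.
    """
    # Unconstrained minimizer of beta*q*w^2 + (w - x^-)^2, then clip to the box.
    y = np.clip(x_minus / (1.0 + beta * q), x_lo, x_hi)
    v = np.clip(u_minus / (1.0 + beta * r), u_lo, u_hi)
    return y, v
```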
Theorem 2. The proximal mapping of $g$ is given as $\operatorname{Prox}_g(x^-,u^-) = (x,v)$, where
$$v(t) = u^-(t) - B(t)^T\lambda(t) \qquad (17)$$
and $x(t)$, $\lambda(t)$ are obtained by solving the two-point boundary-value problem (TPBVP)
$$\dot x(t) = A(t)\,x(t) + B(t)\,u^-(t) - B(t)B(t)^T\lambda(t),\quad \dot\lambda(t) = -A(t)^T\lambda(t) - x(t) + x^-(t),\quad x(t_0)=x_0,\ x(t_f)=x_f. \qquad (18)$$
Proof. Using [8, Example 12.25] and the definition of $g$ in (11)-(12), $\operatorname{Prox}_g = \operatorname{Prox}_{\iota_A} = P_A$, which verifies the very first assertion. From (13) and the definition of $g$, we have that, to find $\operatorname{Prox}_g(x^-,u^-)$, we need to find $(y,v)$ that solves the problem
$$\text{(Pg)}\quad \min\ \frac{1}{2}\int_{t_0}^{t_f}\big(\|y(t) - x^-(t)\|^2 + \|v(t) - u^-(t)\|^2\big)\,\mathrm{d}t \quad\text{s.t.}\quad \dot y(t) = A(t)\,y(t) + B(t)\,v(t),\ y(t_0)=x_0,\ y(t_f)=x_f.$$
Problem (Pg) is an optimal control problem where $y(t)$ is the state variable and $v(t)$ is the control variable. The Hamiltonian for Problem (Pg) is
$$H(t,y,v,\lambda) := \frac{1}{2}\,\|y - x^-(t)\|^2 + \frac{1}{2}\,\|v - u^-(t)\|^2 + \lambda^T\big(A(t)\,y + B(t)\,v\big),$$
and the associated costate equation is
$$\dot\lambda(t) = -H_y = -\big(y(t) - x^-(t)\big) - A(t)^T\lambda(t). \qquad (19)$$
If $v$ is the optimal control for Problem (Pg) then, by the maximum principle, $H_v = v(t) - u^-(t) + B(t)^T\lambda(t) = 0$ for all $t\in[t_0,t_f]$, a re-arrangement of which yields (17). Collecting together the ODE in Problem (Pg) and the ODE in (19), substituting $v(t)$ from (17), and assigning $y(t) = x(t)$, results in the TPBVP in (18).
Remark 1.We note from Theorem 2 that Prox g is the projection onto the affine set A.
Unlike $\operatorname{Prox}_f$, in general we cannot find an analytical solution to (18); we instead compute the projection numerically, as described in Section 5.
The Douglas-Rachford Algorithm
The application of the Douglas-Rachford (DR) algorithm to our problem is slightly different to that in [7,12]. Since we are solving control- and state-constrained optimal control problems, we must define the proximal mappings at the pair $(x,u)$, rather than just at $u$ as in [7,12]. Thus in the implementation of the DR algorithm we give the iterations for the pair of iterates $(x^k,u^k)$ rather than for $u^k$ alone.
Given $\beta > 0$, we specialize the DR algorithm (see [17], [25] and [18]) to the case of minimizing the sum of the two functions $f$ and $g$ as in (11)-(12). Let $X$ be an arbitrary Hilbert space. The DR operator $T : X \to X$ associated with the ordered pair $(f,g)$ is defined by
$$T := \operatorname{Id} - \operatorname{Prox}_f + \operatorname{Prox}_g\big(2\operatorname{Prox}_f - \operatorname{Id}\big),$$
where Id denotes the identity operator. In our case the operator acts on the pair $(x,u)$, with the proximal mappings of $f$ and $g$ provided as in Theorems 1-2. Now fix $(x^0,u^0)\in X$. Given $(x^k,u^k)\in X$, $k\ge 0$, the DR iterations are set as
$$(x^{k+1},u^{k+1}) := T(x^k,u^k),$$
and the 'shadow' iterates $\operatorname{Prox}_f(x^k,u^k)$ furnish the approximate solution of Problem (P) (Algorithm 1); in particular, Step 3 of the algorithm carries out the projection onto A given by $\operatorname{Prox}_g$. In the implementation of the DR algorithm we define a new parameter $\gamma := 1/(1+\beta)$, where $\beta$ is the parameter multiplying the objective as in (11) and Theorem 1. The choice of $\gamma \in\, ]0,1[$ can be made because changing $\beta$ does not affect the solution of Problem (P).
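A sketch of the resulting iteration, treating the pair (x, u) as one stacked array z and assuming prox_f and prox_g are functions acting on such arrays (a simplification of Algorithm 1 in which the stopping test is replaced by a fixed iteration count):

```python
def douglas_rachford(z0, prox_f, prox_g, n_iter=1000):
    """DR iteration z <- z + Prox_g(2 Prox_f(z) - z) - Prox_f(z).

    The 'shadow' point Prox_f(z) is returned as the approximate solution.
    """
    z = z0
    for _ in range(n_iter):
        p = prox_f(z)
        z = z + prox_g(2 * p - z) - p
    return prox_f(z)
```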
Algorithm for projector onto A
We introduce a procedure for numerically projecting onto A, which is an extension of Algorithm 2 from [12] to the case of LQ control problems with state and control constraints. The procedure below (Algorithm 2) can be employed in Step 3 of the DR algorithm. In the procedure, we effectively solve the TPBVP in (18) by implementing the standard shooting method [4,23,31]. Throughout the steps of Algorithm 2 below, we will solve the ODEs in (18), rewritten here in matrix form as
$$\begin{pmatrix}\dot x(t)\\ \dot\lambda_{DR}(t)\end{pmatrix} = \begin{pmatrix}A(t) & -B(t)B(t)^T\\ -I_n & -A(t)^T\end{pmatrix}\begin{pmatrix}x(t)\\ \lambda_{DR}(t)\end{pmatrix} + \begin{pmatrix}B(t)\,u^-(t)\\ x^-(t)\end{pmatrix}, \qquad (21)$$
with various initial conditions (IC): $x(t_0) = x_0$ and $\lambda_{DR}(t_0) = \lambda_0$. In the above equations, we are using $\lambda_{DR}$, instead of just $\lambda$, to emphasize the fact that $\lambda_{DR}$ is the costate variable emanating from solving Problem (Pg) to compute the projection onto A within the DR algorithm. We reiterate that Problem (Pg) is more involved than its counterpart in [13], which leads to the ODE in (21), which in turn is more complicated than its counterpart in [12].
Algorithm 2. (Numerical Computation of the Projector onto A)
Step 0 (Initialization) The following are given: the current iterates $x^-$ and $u^-$, the system and control matrices $A(t)$ and $B(t)$, the numbers of state and control variables $n$ and $m$, and the initial and terminal states $x_0$ and $x_f$, respectively. Define $z(t,\lambda_0) := x(t)$, the state component of the solution of (21) with IC $x(t_0)=x_0$ and $\lambda_{DR}(t_0)=\lambda_0$; the shooting method then seeks $\lambda_0$ such that $z(t_f,\lambda_0) = x_f$.
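Because the system (21) is linear, the shooting map from lambda_0 to z(t_f, lambda_0) is affine, so lambda_0 can be found by solving one n-by-n linear system built from n + 1 integrations. The sketch below uses explicit Euler and constant A, B for brevity; the function names are ours, and Algorithm 2 above remains the authoritative description of the steps.

```python
import numpy as np

def integrate(lam0, x0, t, A, B, u_minus, x_minus):
    """Euler integration of (21) with x(t0) = x0, lambda_DR(t0) = lam0."""
    n, N = x_minus.shape
    x = np.empty((n, N)); lam = np.empty((n, N))
    x[:, 0], lam[:, 0] = x0, lam0
    for k in range(N - 1):
        h = t[k + 1] - t[k]
        x[:, k + 1] = x[:, k] + h * (A @ x[:, k] + B @ u_minus[:, k] - B @ (B.T @ lam[:, k]))
        lam[:, k + 1] = lam[:, k] + h * (-A.T @ lam[:, k] + x_minus[:, k] - x[:, k])
    return x, lam

def project_onto_A(x_minus, u_minus, x0, xf, t, A, B):
    """Shooting for lambda_0: the map lam0 -> x(tf) is affine, so one linear solve suffices."""
    n = x0.size
    xb, _ = integrate(np.zeros(n), x0, t, A, B, u_minus, x_minus)    # base trajectory
    M = np.empty((n, n))
    for i in range(n):                                               # sensitivity matrix, column by column
        xi, _ = integrate(np.eye(n)[i], x0, t, A, B, u_minus, x_minus)
        M[:, i] = xi[:, -1] - xb[:, -1]
    lam0 = np.linalg.solve(M, xf - xb[:, -1])
    x, lam = integrate(lam0, x0, t, A, B, u_minus, x_minus)
    return x, u_minus - B.T @ lam                                    # Prox_g = (x, u^- - B^T lambda), per (17)
```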
A conjecture for the costates for Problem (P)
Recall that the optimal control for Problem (P) is given by the cases in (8). A junction time $t_j$ is a time at which the control $u_j(t)$ falls into two cases of (8) simultaneously, i.e. a point in time where a control constraint transitions from "active" to "inactive," or vice versa. This definition of a junction time becomes important in the following conjecture, which has been formulated and tested by means of extensive numerical experiments.
Conjecture 1. Let $\lambda_{DR}(t)$ be the costate variable emerging from the projector onto A computed in Algorithm 2, and let $\lambda(t)$ be the costate variable emanating from Problem (P). Let $t_j$ be a junction time for some $u_j$, $j = 1,\ldots,m$, such that $b_j(t_j)^T\lambda_{DR}(t_j) \ne 0$. Define
$$\alpha := \frac{-\,r_j(t_j)\,u_j(t_j)}{b_j(t_j)^T\,\lambda_{DR}(t_j)}.$$
Then $\lambda(t) = \alpha\,\lambda_{DR}(t)$ for all $t\in[t_0,t_f]$.
Remark 4. The ability to obtain the costate variable by Conjecture 1 is desirable as a tool for checking that the necessary condition of optimality in (8) is satisfied. Without this conjecture we are unable to verify whether the optimality condition is satisfied when using the DR algorithm, except when a dual version of the DR algorithm is employed, as in [13].
Once we have calculated $\lambda$ in this way, we can also find a multiplier $\mu^k$, $k = 1,2$, numerically for the case when only one state constraint is active at a given time. Suppose that only the $i$th state box constraint becomes active. By rearranging Equation (1), using numerical differentiation to find $\dot\lambda$, and assuming $\mu^2_i(t) = 0$, we compute
$$\mu^1_i(t) = \dot\lambda_i(t) + q_i(t)\,x_i(t) + \big(A(t)^T\lambda(t)\big)_i. \qquad (24)$$
If $\mu^1_i(t) = 0$, then we compute
$$\mu^2_i(t) = -\Big(\dot\lambda_i(t) + q_i(t)\,x_i(t) + \big(A(t)^T\lambda(t)\big)_i\Big). \qquad (25)$$
With (24), or with (25), the complementarity conditions in (3) or (4) can now be checked numerically.
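A sketch of this verification step, combining Conjecture 1 with (24)-(25); we take a constant A, store the diagonal of Q(t) as q, and split the adjoint-equation residual into its positive and negative parts, which reproduces (24) and (25) wherever the respective multiplier is assumed to vanish (names are illustrative):

```python
import numpy as np

def recover_multipliers(lam_dr, x, u, t, A, q, r, b_j, j, k_junction):
    """Scale lambda_DR via Conjecture 1, then recover mu^1, mu^2 from (24)-(25).

    b_j = B[:, j]; k_junction indexes a junction time with b_j^T lambda_DR != 0.
    """
    # alpha = -r_j(t_j) u_j(t_j) / (b_j(t_j)^T lambda_DR(t_j))
    alpha = -r[j, k_junction] * u[j, k_junction] / (b_j @ lam_dr[:, k_junction])
    lam = alpha * lam_dr
    lam_dot = np.gradient(lam, t, axis=1)        # numerical differentiation of lambda
    resid = lam_dot + q * x + A.T @ lam          # equals mu^1 - mu^2 by Equation (1)
    mu1 = np.maximum(resid, 0.0)                 # upper-bound multiplier, assuming mu^2 = 0 there
    mu2 = np.maximum(-resid, 0.0)                # lower-bound multiplier, assuming mu^1 = 0 there
    return lam, mu1, mu2
```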
Numerical Experiments
We will now introduce two example problems. Along with posing the optimal control problems, we also give plots of their optimal controls, states, costates and multipliers, with vertical lines signifying the regions where the state constraints become active.
Harmonic oscillator
Problem (PHO) below contains the dynamics of a harmonic oscillator, which is typically used to model a point mass attached to a spring. The dynamics are given as $\ddot y(t) + \omega_0^2\,y(t) = f(t)$, where $\omega_0 > 0$ is known as the natural frequency and $f(t)$ is some forcing. In a physical system, $y$ represents the position of a unit mass, $\dot y$ is the velocity of said mass, the natural frequency is expressed as $\omega_0 = \sqrt{k}$ (for the unit mass considered here), where $k$ is the stiffness of the spring producing the harmonic motion, and $f$ is the restoring force. In addition to the restoring force we will introduce another force $u_1$ that will affect the velocity $\dot y$ directly. We let $x_1 := y$, $x_2 := \dot y$ and $u_2 := f$ to arrive at
$$\dot x_1(t) = x_2(t) + u_1(t), \qquad \dot x_2(t) = -\omega_0^2\,x_1(t) + u_2(t).$$
In this example problem the objective contains the squared sum of all four variables in the system. It is common to see this problem with the objective of minimizing the energy of the control variable, but in this case we have also included the state variables to test the algorithm with a slightly more involved objective. The focus of this research is control- and state-constrained problems, so the constraints are added as in Problem (P). (PHO)
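For instance, the (PHO) dynamics translate into the following constant matrices; the value of omega0 here is an arbitrary placeholder, since the paper's actual cases are listed in Table 1:

```python
import numpy as np

omega0 = 2.0  # natural frequency; illustrative value only

# dx1/dt = x2 + u1,  dx2/dt = -omega0^2 x1 + u2  =>  xdot = A x + B u
A = np.array([[0.0, 1.0],
              [-omega0**2, 0.0]])
B = np.eye(2)
```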
Simple spring-mass system
The simple spring-mass system is another physical system that can be easily visualised; see [30]. This problem contains two masses and two springs connected in sequence, with dynamics given by
$$m_1\,\ddot y_1(t) = -k_1\,y_1(t) + k_2\,\big(y_2(t) - y_1(t)\big) + f_1(t), \qquad m_2\,\ddot y_2(t) = -k_2\,\big(y_2(t) - y_1(t)\big) + f_2(t),$$
where $m_1, m_2$ are the two masses, $k_1, k_2$ are the spring coefficients (stiffness) and $f_1(t), f_2(t)$ are the forces applied to $m_1, m_2$, respectively. With the masses and the stiffness $k_1$ chosen as in (PSM) and $k_2 = 2$, we retrieve the system in Problem (PSM). This dynamical system furnishes an optimal control problem with four state variables and two control variables. As in (PHO), we add state and control constraints.
Figure 1: (PHO) Case 2 plots (see Table 1) using DR with $N = 10^3$ and $-0.025 \le x_1(t)$. Vertical lines indicate the interval in which the state constraint becomes active.
We examined the errors for the different values of γ. A specific value of γ that would provide the smallest errors was not obvious, since many values provided similar performance, but values closer to 1 appeared optimal; thus the choice of γ = 0.95 was made for these experiments.
Graphical Results. In Figures 1-2 we have plots for (PHO) and (PSM) using DR. We have generated similar figures using Ipopt as well, but since they mostly overlap Figures 1-2 we will not show them all here and will instead point out the differences. In Figure 3 we give the multiplier vector $\mu^2$ components for Case 2 (PHO) using DR and Ipopt.
Figure 3: $\mu^2_1$, $\mu^2_2$ for Case 2 (PHO) with $N = 10^3$ using DR (solid lines) and Ipopt (dotted lines). Note that $\mu^2_1(t)$ found by Ipopt attains a maximum value of 33, which is not shown in the graph.
As indicated by the black vertical lines in the bottom left plot of Figure 1, when using DR the constraint on $x_1$ is active over a time interval that aligns with the interval where $\mu^2_1$ is positive, as expected from Equations (3)-(4).
In the results from Ipopt we observe that the state constraint only became active at a single point in time, but Figure 3 shows that $\mu^2_1$ from Ipopt (yellow dotted line) is positive over a larger interval of time; thus Equation (4) is violated. This appears to be a numerical error that is not present in the DR results, since there is an interval of points around the spike where the state constraint $-0.025 \le x_1(t)$ becomes active. When $N = 10^4, 10^5$ for (PHO), we observe that when using Ipopt the lower bound on $x_1$ is never reached, though again we see the interval of points that are almost equal to the lower bound.
In Figure 3 we also see a discrepancy in $\mu^2_2$. Since we have not imposed a constraint on the variable $x_2$, Equation (4) implies that $\mu^2_2(t) = 0$ for all $t\in[t_0,t_f]$. We can see in Figure 3 that $\mu^2_2$ from Ipopt (purple dotted line) fails to satisfy this requirement, while $\mu^2_2$ from DR is, at least to the eye, equal to zero.
Another difference between the multipliers µ from DR and Ipopt is the maximum values reached by the functions. In Figure 3 we see that $\mu^2_1$ obtained via DR and Ipopt have similar shapes in their plots, but the maximum value reached using DR is approximately 0.7, while from Ipopt the maximum value is approximately 33. For (PSM) Case 2 with $N = 10^3$, the maximum value of $\mu^2_1$ obtained using DR was approximately 16, while Ipopt reached approximately 310. Along with the functions having very different maximum values, we noted that, when generating these plots for $N = 10^3, 10^4, 10^5$, the results from DR were clearly converging to a single function $\mu^2_1$, while this was not obvious from the Ipopt results. For (PSM), the approximate maximum values of $\mu^2_1$ obtained when using Ipopt were 310 for $N = 10^3$, 2500 for $N = 10^4$ and 3000 for $N = 10^5$. In the same example using DR, the maximum values of the function were 16 for $N = 10^3$, 14 for $N = 10^4$ and 14 for $N = 10^5$.
We also see some slight variation between the plots from DR and Ipopt at the junction times in the control variables (not shown in this paper, in order to avoid an excessive amount of visual material). The intervals where the control variables attain their bounds using DR always appear slightly larger than those from Ipopt. At the junctions where the control variables transition between active and inactive constraints, the plots appear more rounded at the corners when using Ipopt and show a sharper transition when using DR.
Errors and CPU Times. Table 2 contains the errors in the controls, states and costates for DR and Ipopt, while Table 3 contains the errors in the multipliers, the objective values and the CPU times. The CPU times were computed as averages over 200 runs (up to 1,000 runs in the faster examples). The values within boxes are the smaller errors and CPU times between DR and Ipopt. At a glance we can see that, more often than not, DR produced smaller errors and faster CPU times when compared with Ipopt. Upon closer inspection we see that in many of the Case 2 results the errors from DR and Ipopt are comparable. We note that the "true" solutions we use to calculate these errors are those explained earlier in this subsection, except for the multipliers $\mu^2$ in the Case 1 examples; in those examples we take the zero vector as our "true" solution.
We observe that most of the results in Table 2 show a smaller error in the control variable from DR, especially in the Case 1 examples where there are no state constraints. In the state variables we see that Ipopt has the smaller errors when $N = 10^3, 10^4$, though there is little difference compared with DR, and we see an improvement in DR when $N = 10^5$. As with the error in the control variables, we see that the error in the costates is smaller for DR in almost all examples.
From Table 3, the errors in the multipliers show an improvement in DR compared with Ipopt, though the "true" solution in the Case 2 examples was obtained using results from Ipopt which, as previously mentioned, did not appear to converge to a specific function as we increased N, so the quality of this "true" solution is not guaranteed. DR produced slightly smaller objective values that were closest to the "true" solution in almost all cases, though the difference compared with Ipopt is marginal. We see that the CPU times are faster for DR, especially in the examples where $N = 10^5$.
Conclusion
We have applied the Douglas-Rachford (DR) algorithm to find numerical solutions of LQ control problems with state and control constraints, after re-formulating these problems as the minimization of the sum of two convex functions and deriving expressions for the proximal mappings of these functions (Theorems 1-2). These proximal mappings were used in the DR iterations. Within the DR algorithm (Algorithm 1), we proposed a procedure (Algorithm 2) for numerically finding the projection onto the affine set defined by the ODEs. We carried out extensive numerical experiments on two nontrivial example problems and illustrated both the working of the algorithm and its efficiency (in both accuracy and speed) compared with the traditional approach of direct discretization. We observed that in general the DR algorithm produced smaller errors and faster run times for these problems, most notably in the examples with larger numbers of discretization points. On the basis of these numerical results, the DR algorithm can in general be recommended over Ipopt for generating high-quality solutions.
Based on further extensive experiments, we conjectured how the costate variables can be determined. We successfully used the costate variables constructed as in the conjecture, as well as the state constraint multipliers calculated using these costate variables, for the numerical verification of the optimality conditions.
We recall that Algorithm 2 involves the repeated numerical solution of the ODEs in (21) with various initial conditions. For solving (21) we implemented the (explicit) Euler method, which requires only a continuous right-hand side of the ODEs. Algorithm 2 appears to be successful for the worked examples partly owing to the fact that in these examples the optimal control is continuous. We tried to use our approach for the machine tool manipulator problem from [16], which has 7 state variables, one control variable, and upper and lower bounds imposed on the single control variable and one of the state variables. However, our approach did not (so far) seem to yield a solution for this problem, conceivably owing to the fact that the optimal control variable, as well as the optimal costate variable vector, is not continuous, in that these variables jump a number of times during the process; see [16, Figure 5]. Note that discontinuities in the control make the right-hand side of (21) discontinuous, rendering Euler's method ineffective. Therefore, problems of the kind in [16] require further investigation.
We believe that our approach can be extended to more general convex optimal control problems, for example those with a non-quadratic objective function or mixed state and control constraints, as long as the pertaining proximal operators are not too challenging to derive.
It would also be interesting to employ and test, in the future, other projection type methods, for example the Peaceman-Rachford algorithm [8, Section 26.4 and Proposition 28.8], which, to the knowledge of the authors, has not been applied to optimal control problems.
Table 2: Errors in controls u, states x and costates λ and CPU times for the DR algorithm and AMPL-Ipopt, with ε = 10^−8 and specifications from Table 1.
Table 3: Errors in multipliers µ, errors in objective values and CPU times for the DR algorithm and AMPL-Ipopt, with ε = 10^−8 and specifications from Table 1.
"Mathematics",
"Engineering"
] |
Stable Aqueous Colloidal Solutions of Nd3+: LaF3 Nanoparticles, Promising for Luminescent Bioimaging in the Near-Infrared Spectral Range
Two series of stable aqueous colloidal solutions of Nd3+: LaF3 single-phase, well-crystallized nanoparticles (NPs), possessing a fluocerite structure, with different activator concentrations in each series, were synthesized. A hydrothermal method involving microwave-assisted heating (HTMW) in two Berghof speedwave devices, equipped with one magnetron (type I) or two magnetrons (type II), was used. The average sizes of the NPs are 15.4 ± 6 nm (type I) and 21 ± 7 nm (type II). Both types of NPs have a size distribution that is well described by a double Gaussian function. The fluorescence kinetics of the 4F3/2 level of the Nd3+ ion for NPs of both types, in contrast to a similar bulk crystal, demonstrates luminescence quenching associated not only with Nd-Nd self-quenching but also with additional Nd-OH quenching. A method has been developed for determining the spontaneous radiative lifetime of the excited state of a dopant ion in the presence of a significant contribution of luminescence quenching caused by impurity OH− acceptors located in the bulk of the NPs. The relative quantum yield of fluorescence and the fluorescence brightness of an aqueous colloidal solution of type II NPs with the optimal concentration of Nd3+ are only 2.5 times lower than those of analogous Nd3+: LaF3 single crystals.
Introduction
Currently, an important scientific goal is to design aqueous colloidal solutions (ACS) of nanoscale fluorophores based on dielectric crystals doped with rare earth (RE) ions. These luminescent nanoparticles (NPs) combine a number of unique properties, including narrow spectral absorption and emission lines, long lifetimes of their excited states (allowing for time detuning from the autofluorescence of biological tissues), and high photo- and physicochemical stability [1-5]. All these advantages make it possible to use them in biology and medicine as in vitro and in vivo luminescent probes for visualization in the visible (VIS) and near-infrared (NIR) spectral ranges, in the first (0.75-0.95 µm) and second (1.0-1.2 µm) optical "windows" of transparency of biological tissues ("biological windows") [6-9]. Bioimaging with NIR radiation is advantageous compared to visible light because it can penetrate biological tissue to a great depth (about one centimeter) [10-15] without causing photoinduced cytotoxicity. In addition, NIR signals can be discriminated from the natural fluorescence of biological tissues (autofluorescence) due to the increased signal-to-noise ratio, which provides more specific and sensitive detection.
Due to the closeness of the radii of the lanthanum and neodymium ions, the crystal matrix of LaF 3 is convenient for doping, since it allows up to 100% replacement of lanthanum ions with neodymium ions [16,17]. In addition, the 4 F 3/2 → 4 I 9/2 , 4 I 11/2 transitions of the neodymium ion ensure luminescence in the first and second biological windows [18].
An additional advantage of the LaF 3 crystal matrix doped with Nd 3+ ions is the large value of the ratio of the Judd-Ofelt intensity parameters Ω 4 /Ω 6 [19]. It provides a relatively high luminescence branching ratio β at the 4 F 3/2 → 4 I 9/2 transition of the Nd 3+ ion, i.e. in the first biological window, almost equal to the luminescence branching ratio of the well-known 4 F 3/2 → 4 I 11/2 laser transition in the second biological window. The luminescence in the first biological window can be monitored with less expensive detectors than in the second. The relatively low value of the parameter Ω 6 in the LaF 3 crystal [19] also provides less effective self-quenching of the 4 F 3/2 level of Nd 3+ ions and weakened quenching of the luminescence caused by vibrations of the OH− acceptors remaining as defects in the crystal lattice of LaF 3 as a result of synthesis in an aqueous medium. Consequently, Nd 3+ : LaF 3 exhibits weaker luminescence self-quenching and quenching than many other crystals and NPs doped with Nd 3+ ions [19]. Although until recently there was no understanding of the physical reasons explaining the advantages of this crystal over other similar crystals, Nd 3+ : LaF 3 NPs and their aqueous solutions have nevertheless been developed for more than 15 years for optical bioimaging in the first [20,21] and second biological windows [16]. Such NPs can simultaneously serve as optical and magnetic nanoheaters [22-24], as well as spectral or kinetic nanothermometers [24-26]. In addition to ensuring successful medical use, these NPs should have optimal sizes, the ability to form stable aqueous colloidal solutions, chemical and biological inertness, and low toxicity [27].
To analyze the degree of luminescence quenching by unexcited impurity ions and uncontrolled acceptors, it is convenient to use the relative quantum yield of luminescence ϕ/ϕ 0 [28]. If the law of luminescence decay I(t) under delta-pulse excitation is known, then the relative quantum yield of luminescence is determined by the following expression:
$$\frac{\varphi}{\varphi_0} = \frac{1}{\tau_D}\int_0^\infty N(t)\,\mathrm{d}t. \qquad (1)$$
Here, ϕ 0 and τ D are the quantum yield and luminescence lifetime of the donor in the absence of quenching due to inter-center energy transfer, and N(t) and ϕ are the kinetics of the impurity quenching of luminescence (normalized so that N(0) = 1) and the quantum yield in the presence of inter-center energy transfer.
Usually, in the absence of energy transfer, the number of excited optical centers decreases exponentially with a characteristic decay rate equal to the sum of the radiative and nonradiative intra-center decay rates, 1/τ D = 1/τ R + 1/τ N. Therefore, ϕ 0 is equal to the product of the radiative decay rate 1/τ R and the luminescence lifetime τ D in the absence of energy transfer:
$$\varphi_0 = \frac{\tau_D}{\tau_R}. \qquad (2)$$
Similarly, the quantum yield of impurity quenching of luminescence ϕ is
$$\varphi = \frac{1}{\tau_R}\int_0^\infty N(t)\,\mathrm{d}t. \qquad (3)$$
Combining (2) and (3), we obtain Formula (1). For metastable levels τ D ≈ τ R, and therefore ϕ 0 ≈ 1. As a result, we obtain that Formulas (1) and (3) can be written in the form
$$\varphi \approx \frac{\varphi}{\varphi_0} = \frac{1}{\tau_R}\int_0^\infty N(t)\,\mathrm{d}t. \qquad (4)$$
The fluorescence brightness is determined by the product of the absolute concentration n D of Nd 3+ ions and the relative quantum yield of impurity quenching of luminescence, which equals ϕ for metastable levels:
$$B = n_D\,\varphi. \qquad (5)$$
This value is actually proportional to the radiation intensity of the donor at a concentration n D and is widely used in the literature [29,30].
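Formulas (4) and (5) are applied to measured decay curves by simple numerical quadrature. A minimal sketch, assuming the decay I(t) is recorded long enough that the tail contributes negligibly and that τ R is known (e.g. from the late stage of the kinetics, as discussed below):

```python
import numpy as np

def relative_quantum_yield(t, I, tau_R):
    """Formula (4): phi ~ (1/tau_R) * integral of N(t), with N(t) = I(t)/I(0)."""
    N = I / I[0]                 # normalized decay kinetics under delta-pulse excitation
    return np.trapz(N, t) / tau_R

def brightness(n_D, phi):
    """Formula (5): product of the donor concentration and the quantum yield."""
    return n_D * phi
```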
In the literature, we have not come across an accurate experimental determination of the relative quantum yield of the luminescence of ACS of Nd 3+ : LaF 3 NPs synthesized by various methods. A fairly high relative quantum yield of 48% [31] was determined in well-crystallized 3 at% Nd 3+ : LaF 3 NPs about 30 nm in size, synthesized by the hydrothermal method from an aqueous ethanol solution and redispersed in an anhydrous solvent. The quantum yield was determined from the ratio of the 1/e decay time of the luminescence (τ e = 369 µs) to the radiative lifetime of the 4 F 3/2 level (τ R = 753 µs), which was obtained from the absorption spectra of this colloid using the Judd-Ofelt theory. However, it is doubtful whether the refractive index of a colloidal solution, in which the NPs and the solvent have different refractive indices, was correctly taken into account when calculating the radiative lifetime. It seems to us that the latter is underestimated, since the refractive index of the proposed complex solvent is lower than that of Nd 3+ : LaF 3 NPs. Below, we will dwell on this issue in more detail. Similarly, [32] mentions the achievement of a relative quantum yield of NIR luminescence ϕ/ϕ 0 = 95%, measured in nanocomposites obtained by dispersing 0.5 at% Nd 3+ : LaF 3 NPs about 10 nm in size, synthesized by the solvothermal method, with the encapsulation of these NPs in a polymer shell. Such a high value of the quantum efficiency of luminescence in the NIR spectral range, calculated from the ratio of τ e = 800 µs at 0.5 at% Nd 3+ to the radiative lifetime τ R = 846 µs of the 4 F 3/2 level of the Nd 3+ ion (estimated by the Judd-Ofelt theory using absorption spectra), indicates the almost complete absence of luminescence quenching of the Nd 3+ ions by OH− acceptors in the bulk of NPs obtained by a non-aqueous method. However, again, when calculating the radiative lifetime, the refractive index of a nanocomposite consisting of NPs surrounded by a polymer with a different refractive index was not correctly taken into account. The value is again underestimated, since the refractive index of the polymer is lower than that of Nd 3+ : LaF 3 NPs. In [25], where the thermal lens method [33] was applied to LaF 3 : Nd 3+ NPs, it was found that at a low concentration of Nd 3+ ions the relative quantum yield of luminescence reaches 80%, while at a concentration of 20 at% it decreases to 20% [25]. In [34], the inapplicability of the thermal lens method for estimating the relative quantum yield of luminescence of aqueous colloidal solutions of NPs is discussed in detail: this method gives values of the quantum yield overestimated by almost an order of magnitude.
A method for determining the relative fluorescence quantum yield ϕ/ϕ 0 (Equation (4)) of the 4 F 3/2 excited state of the Nd 3+ ion, based on the ratio of the area under the measured luminescence decay curve I meas (t) to the lifetime τ R of the radiative spontaneous decay of this state, determined at the late stage of the luminescence kinetics, was applied to the powder and the ACS of 0.1 at% Nd 3+ : LaF 3 NPs in [34] and [19], respectively. Note that the intra-center quantum yield of luminescence for the 4 F 3/2 metastable state tends toward unity (ϕ 0 → 1) owing to the insignificant intra-center nonradiative relaxation caused by the large energy gap (∆E ≈ 5000 cm −1) [35] between the 4 F 3/2 level and the next lower energy level 4 I 15/2. As stated above, the fluorescence quenching of dopant ions in Nd 3+ : LaF 3 NPs depends on two independent donor-acceptor processes of transfer of electronic excitation energy from the 4 F 3/2 excited state of the Nd 3+ ion: namely, concentration self-quenching due to cross-relaxation through the 4 I 15/2 electronic state [16], and quenching caused by anharmonic vibrations of molecular groups. It is known that hydroxyl OH− groups are among the most effective quenchers of luminescence in the near-IR range [36].
At a very low concentration of Nd 3+ (0.1 at% Nd 3+), the efficiency of the self-quenching process of the luminescence of Nd 3+ ions in crystals is close to zero. Therefore, the maximum relative quantum yield of luminescence from the 4 F 3/2 level of the Nd 3+ ion, close to unity, is observed in a LaF 3 single crystal doped with a low concentration of Nd 3+ ions [17]. On the contrary, the contribution of luminescence quenching by OH− acceptors in NPs can be significant even at a low concentration of Nd 3+ ions, which leads to a much lower quantum yield in NPs. The value of ϕ determined in the dried powder of 0.1 at% Nd 3+ : LaF 3 NPs synthesized by the hydrothermal method with microwave treatment with Proxanol-268 is about 30% [34]. Consequently, the contribution of the Nd-OH donor-acceptor quenching to the overall relaxation rate is about 70%, which indicates its high efficiency in Nd 3+ : LaF 3 NPs synthesized from aqueous solutions. Thus, the problem of NIR luminescence quenching of Nd 3+ ions in NPs is fundamental, since most of the synthesis methods that can result in stable aqueous colloidal solutions are carried out either in water or in other media containing OH− groups. At the same time, the methods used for the synthesis of NPs doped with rare earth ions largely determine their physicochemical and luminescent properties. Currently, the simplest and most widespread methods of synthesis are co-precipitation [34,37,38], solvothermal [39-41], and hydrothermal [34,42,43]. Each of these methods, used to synthesize stable colloids of Nd 3+ : LaF 3 NPs with the aim of creating efficient nanosized phosphors on their basis, has its own advantages and disadvantages.
The co-precipitation method is the simplest and fastest way of obtaining NPs, since it does not require extreme synthesis conditions and/or expensive equipment and is carried out using organic or inorganic solvents. During the synthesis of Ln 3+ : LaF 3 NPs (Ln 3+ = Eu 3+, Er 3+, Nd 3+, Ho 3+) in an ethanol-aqueous medium at 60 °C [37], using a surface-modifying agent (ammonium di-n-octadecyl dithiophosphate), highly crystalline luminescent Ln 3+ : LaF 3 NPs with a size of 7-10 nm were obtained. However, in order to obtain a stable aqueous colloid, the hydrophobic surface of these NPs requires an additional, complex modification procedure to render them hydrophilic. High-temperature annealing (up to 500 °C for 90 min) of Nd 3+ : LaF 3 NPs synthesized by a conventional co-precipitation method in an aqueous medium [38] improved the luminescence properties and crystal structure of these NPs. However, after annealing they aggregated significantly, with an increase in average size from 12 nm to 40 nm, which is unlikely to facilitate their redispersion in water.
In the solvothermal method of synthesis from organic solvents, the initially co-precipitated gel is treated at high temperature and pressure to obtain non-aggregated NPs with a narrow size distribution. These NPs contain much fewer defects in the form of OH− groups and water molecules. Therefore, the quenching of the luminescence of these NPs is much weaker than for those obtained by aqueous synthesis methods. On the other hand, such NPs almost always have a hydrophobic surface, which necessitates the modification of their surface using surfactants, leading to an increase in their toxicity and a complication of the technology for the production of fluorescent probes based on them [39,40].
As a result of hydrothermal synthesis, a variation of solvothermal synthesis using water as the solvent, non-toxic, well-crystallized colloidal Nd 3+ : LaF 3 NPs with a hydrophilic surface are obtained, capable of forming stable aqueous colloidal solutions. However, at the same time, these NPs have a wider size distribution and contain an increased concentration of OH− groups in the crystalline matrix of LaF 3 and residual water in the mesopores, which leads to stronger quenching of the luminescence of Nd 3+ ions compared with NPs of solvothermal synthesis. The formation of NPs of poorly soluble compounds under hydrothermal conditions is usually described by the dissolution-crystallization mechanism [44,45]. When a freshly precipitated gel of a poorly soluble compound is exposed to hydrothermal conditions, it undergoes a collective recrystallization process, which largely determines the size distribution of the nanoparticles, the stability of the final colloid, the degree of crystallinity, and the defectiveness of the NPs [44,45]. Thus, the hydrothermal method makes it possible to obtain hydrophilic, partially agglomerated, highly crystalline, and compositionally homogeneous NPs of poorly water-soluble compounds. An increase in the morphological homogeneity of NPs, as well as a reduction in the duration of the hydrothermal treatment needed to obtain well-crystallized materials, is possible with the use of microwave heating [46,47]. At the same time, hydrothermal microwave treatment (HTMW) can be used both for direct synthesis from the solution and for the crystallization of pre-precipitated gels [48].
As shown in [34], hydrophilic Nd 3+ : LaF 3 NPs obtained by co-precipitation do not meet the necessary requirements, due to strong self-quenching of luminescence in Nd*-Nd pairs and Nd*-OH quenching when acceptors are located in the bulk of the NPs. Therefore, this method was supplemented by hydrothermal microwave treatment in an aqueous medium using the surfactant Proxanol-268. It was shown that the use of HTMW can significantly enhance the luminescence and improve the crystalline properties of Nd 3+ : LaF 3 NPs. Analysis of the luminescence properties of these NPs showed that they have a much lower degree of defectiveness and a much higher fluorescence brightness in the near-IR spectral range, due to weaker luminescence quenching. Thus, the co-precipitation method supplemented by HTMW treatment is one of the most convenient and promising methods for obtaining highly crystalline luminescent NPs for use in medicine [48].
In this work, we continued the study (begun in [19]) of the concentration dependence of the relative quantum yield and the fluorescence brightness of the impurity luminescence of Nd 3+ ions in long-term stable aqueous colloidal solutions of Nd 3+ : LaF 3 NPs synthesized without surfactants. Hydrothermal synthesis of the colloids was carried out in two different devices capable of microwave heating, differing in the number of magnetrons.
The aim of this work is to maximize the luminescence brightness, in the NIR spectral range, of stable aqueous colloidal solutions of Nd 3+ : LaF 3 NPs synthesized by the hydrothermal microwave method by reducing fluorescence quenching, which will make them applicable for luminescent imaging in the first transparency window of biological tissues.
Synthesis
The initial reagents, used in the synthesis without any further purification, include Nd(NO3)3 and La(NO3)3; the rare earth salts were dissolved in deionized water (15 mL). The solution of rare earth salts was added dropwise to the NH4F solution (5 mmol) in deionized water (25 mL) under vigorous stirring. The freshly precipitated gels were diluted with deionized water (10 mL) and left stirring for 15 min. The resulting solutions were transferred into a 100 mL Teflon autoclave and placed under microwave irradiation for 2 h at 200 °C using a speedwave Four or speedwave XPERT laboratory device. After they were cooled, they were centrifuged using a Thermo Scientific Heraeus Multifuge X1 or Hermle Z326 device, respectively, and washed several times with deionized water. The resulting precipitates were redispersed in deionized water using ultrasonication.
Nonoptical Characterization
X-ray diffraction (XRD) patterns of both powders were recorded using a Bruker D2 Phaser powder X-ray diffractometer with CuKα radiation. Processing of the results, the phase analysis of the powders, and the lattice parameter refinement were performed using the software package DIFFRACplus (TOPAS 4.2.0.2).
Samples of type I were studied at the Institute of Physics of the University of Tartu (Estonia) using transmission electron microscopy (TEM); these measurements were performed in the scanning mode (STEM) at 200 kV using a Cs-probe-corrected transmission electron microscope (FEI Titan Themis 200, ThermoFisher Scientific, Hillsboro, OR, USA). Powders in solutions were diluted in ethanol and ultrasonicated. The colloid was placed on a TEM copper grid with a carbon film and dried for several hours. The energy-dispersive X-ray spectroscopy (EDX) signal of the NPs was collected with SuperX silicon drift detectors (Bruker, Billerica, MA, USA) to measure element concentrations. Quantitative analysis was performed with the Cliff-Lorimer method, using the K-line for F and the L-lines for La and Nd, with the Bruker Esprit software. This provides sufficiently accurate elemental analysis with high spatial resolution.
Samples of type II were studied at the GPI RAS (Moscow). The transmission electron microscopy (TEM) and STEM images of the samples of type II were taken with a Zeiss Libra 200 FT HR microscope at an accelerating voltage of 200 kV. The colloid was placed on a TEM copper grid with a carbon film and dried for several hours. The EDX spectrometer was controlled by the EDS Aztec OXFORD software. Processing of the TEM images for the calculation of the size distribution was carried out using the ImageJ program. The statistics for each sample comprised about 1000 NPs.
Optical Characterization
The study of the spectral and kinetic characteristics of the NIR luminescence of aqueous colloidal solutions of Nd 3+ : LaF 3 NPs, depending on the concentration of Nd 3+ ions, was carried out on similar experimental setups at the Institute of Physics of the University of Tartu and at the GPI RAS in Moscow. In Tartu, the samples of type I were excited in the spectral range of 564-590 nm into the 4 G 5/2 level of the Nd 3+ ion by a tunable pulsed Rhodamine 6G dye laser DL-Compact (Estla Ltd., Tartu, Estonia) with a laser line width ∆λ = 0.0065 nm at full width at half maximum (FWHM), pumped by the second harmonic of a Nd:YAG laser (model LQ215, f = 20 Hz, pulse duration 5 ns, Solar Laser Systems, Minsk, Belarus), or by a Continuum Sunlite OPO system PL 9010, TRP with an EX OPO frequency extension module (signal 405-705 nm, idler 715-1750 nm, laser line width ∆λ = 0.003 nm at FWHM) pumped by the second harmonic of a Continuum YAG: Nd 3+ laser with a seeder (f = 20 Hz, pulse duration 7 ns). The excitation wavelength was controlled by the wavelength meter WS 5 (HighFinesse, Graefelfing/Munich, Germany/Ångstrom Ltd., Novosibirsk, Russia) with an accuracy of 0.001 nm. The near-infrared luminescence of the sample was focused by a condenser on the entrance slit of a Shamrock 303i spectrometer (Andor, Oxford Instruments, Abingdon-on-Thames, UK) with a 1200 grooves per mm grating with a linear inverse dispersion of 2.4 nm/mm. A BLP01-808R-25 edge filter (Shamrock) was placed at the front slit of the monochromator to limit the entrance of stray light caused by the laser radiation. The fluorescence was detected with a gated Andor Technology iCCD camera iStar DH320T-18H-13 with a pixel size of 26 µm and a Peltier cooling system. In Moscow, the fluorescence kinetics of the samples of type II was measured with excitation by a pulsed Al 2 O 3 -Ti laser LOTIS-TII LS-2134-LT40 (Lotis, Minsk, Belarus) (f = 10 Hz, t pulse = 8-30 ns) into the 4 F 5/2 level of the Nd 3+ ions. The fluorescence originating from the 4 F 3/2 level of the Nd 3+ ions was dispersed by an MDR 23 monochromator (LOMO, St. Petersburg, Russia) with 0.1 nm spectral resolution. The longpass filter FEL0850 (Thorlabs, Newton, NJ, USA) was attached to its front slit to block the laser radiation. The fluorescence kinetics of the samples of type I was detected by a Hamamatsu PMT 6240-02 (Hamamatsu Photonics, Naka Ward, Sunayamacho, Japan), and that of the samples of type II by a Hamamatsu PMT R13456P, in gated photon counting mode with a multi-channel analyzer Fast Comtec P7882 (FAST ComTec Communication Technology GmbH, Oberhaching, Germany) with a time resolution of 100 ns (samples of type I) and a multichannel scaler (MCS) Timeharp 260 (PicoQuant GmbH, Berlin, Germany) with subnanosecond time resolution (samples of type II). Constant fraction discriminators of the NIM standard (ORTEC/AMETEK, Oak Ridge, TN, USA) were used for accurate timing of the triggering and counting pulses of the fluorescence signal of the samples of type II. The fluorescence of both types of samples was detected at the 4 F 3/2 → 4 I 9/2 transition of the Nd 3+ ions. To obtain fluorescence kinetics undistorted by various nonlinear processes, usually defined as up-conversion [49], in the case of excitation into the 4 F 5/2 level in the study of the samples of type II, we decreased the energy of the laser excitation pulse to a value at which a further decrease in energy affected only the fluorescence intensity, but not the kinetics itself.
Since the NIR luminescence of the type I NPs was excited through the high-lying 4 G 5/2 level, the effect of up-conversion on the excitation of the 4 F 3/2 metastable level was insignificant, owing to the multiphonon relaxation times of the higher levels being long compared to the exciting laser pulse.
The excitation spectrum in the range of 300-925 nm was recorded at the Center for Molecular Biophysics, CNRS Orléans, France. A sample was placed into a quartz capillary with a 2 mm interior diameter (i.d.) and measured using a custom-designed Horiba Scientific Fluorolog 3-22 spectrofluorometer equipped with an integrating sphere, a visible photomultiplier tube (PMT) (220-950 nm, R13456, Hamamatsu Photonics, Naka Ward, Sunayamacho, Japan) and a NIR PMT (950-1650 nm, H10330-75, Hamamatsu), upon excitation with a continuous xenon lamp. The excitation spectrum was corrected for the instrumental functions.
Structure of the Nanoparticles
According to the XRD data (Figure S1), all synthesized samples are pure LaF 3 phase with a fluocerite structure (space group P-3c1, ICDD PDF 78-1864) with a centrosymmetric unit cell.
Analysis of the projections of NPs in the TEM images showed that, after drying on a carbon film, drops of the colloidal solution of Nd 3+ : LaF 3 NPs can partially agglomerate and partially remain isolated (Figure 1a,b). In general, the NPs are well crystallized and partially faceted (Figure 1a-d). In high-resolution HR STEM images (Figure 1c,d), the projections of NPs show bright areas of mesopores, which are located in the volume or on the surface of the NPs and are probably filled with the mother liquor.
Elemental analysis ( Figure S2) has shown that, in both types of NPs, the ratio of the main chemical elements (La/F) remains constant and very close to the stoichiometric ratio. Oxygen and dopant element (Nd) were also detected. However, their correct quantification in both types of NPs was impossible due to low content.
To carry out a statistical analysis of the size distribution of NPs, we assumed that the most suitable geometric figure for approximating the projection shape of most NPs is an ellipse (Figure 2). We obtained size distributions of both types of NPs ( Figure 3) in accordance with the parameters of large (D) and small (d) diameters of an ellipse (Figure 2), fitted around the projection of the nanoparticle.
All arrays of distributions of NPs of both samples (Figure 3) are poorly described by a single normal Gaussian distribution function, but at the same time they are well approximated by a sum of two normal distribution functions (parameter R² > 0.99) (Figure 3a,b). In this regard, we assume that both types of NPs contain two fractions: a finely dispersed fraction (Fine, index F), which corresponds to NPs with smaller sizes, and a second, coarse fraction (Coarse, index C), corresponding to larger NPs.
Figure 3: Size distributions of the NPs: samples of type I NPs (a) and samples of type II NPs (b).
The parameters D_F, D_C and d_F, d_C, respectively, in each distribution (Figure 3, Table S1) are the positions of the maxima of the two Gaussian functions whose sum describes the size distribution of the NPs in accordance with the large (D) and small (d) diameters of the approximating ellipse. The volume ratio parameter is the ratio of the specific contributions of these two Gaussian functions, determined by the ratio of the areas under them in the same distribution (Figure 3, Table S1). The fraction with smaller projection sizes of NPs, characterized by the parameters D_F and d_F, has a narrower size distribution than that with the larger sizes (Table S1). This fraction of NPs appears to be formed as a result of the primary crystallization of the gel from solution. The fraction of larger NPs, which is described by the D_C and d_C parameters, is apparently the result of recrystallization and aggregation of primary NPs during the HTMW treatment. Thus, in spite of the same general conditions of synthesis, the geometry of the speedwave XPERT setup leads to enhanced growth and recrystallization of the NPs of the sample of type II (Figure 3, Table S1), which leads to an increase in their crystallinity and, consequently, to a decrease in the quenching of the NIR luminescence of these NPs. The reason for the intensification of growth during HTMW treatment in a setup with two magnetrons requires additional research. Apparently, it is related to the intensity and distribution of the electromagnetic field during the treatment of the autoclaved sample. It should be noted that, in the different types of samples of aqueous colloidal solutions of Nd 3+ : LaF 3 NPs, the ratio of these two fractions changes only to a small extent. In the sample of type I, the contribution of the fraction of small NPs is slightly more than 50% (53-57%) (Figure 3a, Table S1), while in the sample of type II this contribution varies from 49 to 64% (Figure 3b, Table S1). In this case, the fraction of small NPs of the sample of type I is finer than the finely dispersed fraction of NPs of the sample of type II. The same applies to the coarse fraction of NPs of both types.
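The bi-modal fits reported in Table S1 can be reproduced by a standard least-squares fit of a sum of two Gaussians to the measured size histograms. The sketch below uses SciPy with synthetic stand-in data and illustrative initial guesses; `sizes` stands for the array of large diameters D extracted from the TEM images.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(D, a_F, D_F, s_F, a_C, D_C, s_C):
    """Sum of two normal distribution functions (fine and coarse fractions)."""
    return (a_F * np.exp(-(D - D_F)**2 / (2 * s_F**2))
            + a_C * np.exp(-(D - D_C)**2 / (2 * s_C**2)))

rng = np.random.default_rng(0)
sizes = np.concatenate([rng.normal(12, 3, 550),    # synthetic stand-in for TEM data
                        rng.normal(25, 8, 450)])
counts, edges = np.histogram(sizes, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(double_gaussian, centers, counts,
                    p0=[counts.max(), 12, 3, counts.max() / 2, 25, 8])
a_F, D_F, s_F, a_C, D_C, s_C = popt
volume_ratio = (a_F * s_F) / (a_C * s_C)           # ratio of the areas under the two Gaussians
```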
The <D_sphere> values are calculated considering the average specific contribution of each fraction of NPs, which is characterized by the average diameter <D^N_sphere> (N = F, C) (Table S1). In turn, the average diameter <D^N_sphere> is calculated from the condition of equality of the volumes of a sphere with such a diameter and the model ellipsoid obtained by rotating an ellipse with diameters D_N and d_N about the large diameter D_N. The estimate of <D>_v, the mean size of the coherent scattering domains (CSD) of the NPs obtained from the XRD patterns, is shown in Table S1. The values of <D>_v of both types of NPs agree with the average diameters <D_sphere> within the calculation error of the latter (Table S1), which confirms the high degree of crystallinity of the NPs. The CSD value is slightly higher than the average particle size calculated from the microscopy data, probably due to the peculiarities of the size distribution (a significant fraction of large particles and a bi-normal distribution). It is known [50] that a wide fraction of large particles increases the scatter of the calculated CSD values, which can reach several nanometers in the case of particles larger than 15 nm.
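The volume-equivalent diameter used above follows from equating the sphere volume with that of the rotation ellipsoid, (π/6)·D_sphere³ = (π/6)·D·d²; a one-line check (values in the example are illustrative only):

```python
import numpy as np

def d_sphere(D, d):
    """Diameter of the sphere with the same volume as the ellipsoid of rotation
    with large diameter D and small diameter d: (pi/6) D_s^3 = (pi/6) D d^2."""
    return np.cbrt(D * d**2)

# e.g. d_sphere(21.0, 15.0) -> ~16.8 (nm)
```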
The very close polydispersity values Π_D ≈ Π_d ≈ 0.7 (see Appendix A, Formula (A1)), calculated from the large (D) and small (d) diameters for both types of NPs, indicate that the size distributions of the NPs synthesized in the different HTMW devices obey the same law. The large deviation of Π from unity indicates a wide size distribution of NPs in the colloidal system, which is a frequent phenomenon in synthesis from an aqueous medium and is considered a disadvantage of the aqueous method of synthesis.
In addition to growing, NPs also aggregate and agglomerate with each other, which occurs as a result of their chaotic motion caused by temperature phenomena (Brownian motion, temperature gradients). Due to the processes of aggregation and agglomeration, both samples contain separate large NPs with a parameter D = 40-70 nm.
Excitation and Fluorescence Spectra
To select the excitation wavelength of luminescence when measuring the fluorescence kinetics, the fluorescence excitation spectrum of a 0.1 at% Nd 3+ : LaF 3 type I NPs sample was measured at a detection wavelength of 862.8 nm by scanning the laser within the 4 I 9/2 → 4 G 5/2 + 2 G 7/2 electronic transition (Figure 4a), and was also detected at 1064 nm in the range of 300-950 nm (Figure 4b). Two absorption maxima are observed, at 577.8 nm (Figure 4a) and 790 nm (Figure 4b, inset), respectively. The luminescence of the NPs samples of type I was excited at the 4 I 9/2 → 4 G 5/2 transition (λ exc = 577.8 nm) (Figure 5a, orange arrow), and that of the NPs samples of type II at the 4 I 9/2 → 4 F 5/2 transition (λ exc = 789 nm) (Figure 5a, red arrow). Note that for both types of NPs samples, the luminescence spectrum (Figure 6) and the luminescence decay kinetics from the 4 F 3/2 level do not depend on the above-mentioned excitation wavelengths. This is due to the fact that the excitation, both from the 4 G 5/2 level, which is approximately 4500 cm −1 above the 4 F 5/2 level, and from the 4 F 5/2 level (Figure 5a), almost instantly (on a nanosecond time scale) relaxes nonradiatively (Figure 5b, blue arrows) to the metastable state 4 F 3/2.
Figure 6. The luminescence excitation spectrum of a 0.1 at% Nd 3+ : LaF 3 NPs sample of type I (black spectrum), measured by scanning a laser at the 4 I 9/2 → 4 F 3/2 transition with a step of 0.12 nm. The detection was performed in the range of 850–873 nm, including the 4 F 3/2 (j') → 4 I 9/2 (j) transitions, where j = 1 and 2. The luminescence spectra of a 4 at% Nd 3+ : LaF 3 NPs sample of type I (red spectrum) and of a 0.45 at% Nd 3+ : LaF 3 single crystal (blue spectrum) upon excitation at a wavelength λ exc = 577.8 nm, measured at the 4 F 3/2 → 4 I 9/2 transition.
The form-factors of the luminescence spectra of a 4 at% Nd 3+ : LaF 3 type I NPs sample (Figure 6, red curve) and of a 0.45 at% Nd 3+ : LaF 3 single crystal (Figure 6, blue curve) are identical, which agrees with the HRTEM (Figure 1) and XRD (Figure S1) data and indicates good crystallization quality of the NPs in the obtained aqueous colloidal solutions. The positions of the intense spectral lines, with maxima at about 860 and 862.8 nm for the 4 F 3/2 (2') → 4 I 9/2 (1) and 4 F 3/2 (1') → 4 I 9/2 (1) transitions, coincide in the excitation (Figure 6, black curve) and luminescence spectra of the 0.1 and 4 at% Nd 3+ : LaF 3 type I NPs samples. This result indicates that at room temperature the Nd 3+ spectral lines in these NPs originate from a single type of optical center. At the same time, the spectral lines of the 4 F 3/2 (2') → 4 I 9/2 (1) and 4 F 3/2 (1') → 4 I 9/2 (1) transitions are better resolved for the NPs, which indicates an even smaller inhomogeneous broadening of the spectral lines compared to the single crystal.
Fluorescence Decay Kinetics, Relative Fluorescence Quantum Yield, and Brightness of Aqueous Colloidal Solutions
The fluorescence decay kinetics of the 4 F 3/2 metastable level of the Nd 3+ ion in aqueous colloidal solutions of x at% Nd 3+ : LaF 3 NPs was measured over four orders of magnitude in intensity for dopant concentrations x = 0.1, 1, 2, 3, 4, 6, 8, 10, 12. Fluorescence detection was performed at the 4 F 3/2 → 4 I 9/2 transition at a wavelength λ det = 862.8 nm upon laser excitation to the 4 G 5/2 + 2 G 7/2 level (λ exc = 577.8 nm) for NPs samples of type I (Figure 7, blue curves) and to the 4 F 5/2 + 2 H 9/2 level (λ exc = 789 nm) for NPs samples of type II (Figure 7, red curves). It was found that the measured luminescence kinetics of NPs samples of type II decays more slowly than that of NPs samples of type I at the same concentration x of Nd 3+ ions. To calculate the relative fluorescence quantum yield ϕ (Equation (4)), it is necessary to know the value of the spontaneous radiative lifetime τ_R of the 4 F 3/2 excited state of the Nd 3+ ion. Theoretically, the spontaneous radiative decay time of the RE ion excited state in spherical NPs can be estimated using the approximate formula from paper [51],

τ_R^nano = √ε ((ε + 2)/3)² τ_R^bulk, (6)

where τ_R^nano and τ_R^bulk are the spontaneous radiative lifetimes of the 4 F 3/2 level of Nd 3+ ions in NPs and in the bulk crystal, respectively, and ε = ε_cr/ε_med = n²_cr/n²_med is the relative dielectric constant. Here n_cr(LaF 3 ) = 1.593 [52] and n_med(H 2 O) = 1.33 [53] are the refractive indices of the LaF 3 bulk crystal and of the medium (H 2 O) containing the crystalline NPs, determined at a luminescence wavelength of 863 nm. Expression (6) is valid when the volume fraction of NPs in the solution (in the medium) tends to zero, c → 0. As follows from (6), in this limit the value of τ_R^nano of the 4 F 3/2 level of the Nd 3+ ion in an aqueous colloidal solution of LaF 3 NPs depends only on the value of τ_R^bulk in the LaF 3 single crystal and on the value of the parameter ε. If τ_R^bulk( 4 F 3/2 ) = 701 µs [17], then, in accordance with (6), the spontaneous radiative lifetime in a spherical nanoparticle placed in an aqueous solution can be estimated as τ_R^nano( 4 F 3/2 ) ≈ 1100 µs. Calculation of the spontaneous radiative lifetime for non-spherical NPs is a rather complicated theoretical problem that has not yet been solved. However, an analysis of some special cases carried out in [51] shows that the value of τ_R^nano may depend on the shape of the nanoparticle. Therefore, it should be expected that colloids with non-spherical NPs of the same crystal structure, but with different distribution functions with respect to the deviation from sphericity, can have different radiative lifetimes.
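The estimate above can be checked numerically. A short sketch, assuming the reconstructed form of Formula (6):

```python
# Numerical check of the local-field estimate (6), using the refractive
# indices quoted in the text.
n_cr, n_med = 1.593, 1.33              # LaF3 crystal and water near 863 nm
eps = (n_cr / n_med) ** 2              # relative dielectric constant
tau_bulk = 701.0                       # us, 4F3/2 lifetime in the bulk crystal [17]
tau_nano = eps ** 0.5 * ((eps + 2.0) / 3.0) ** 2 * tau_bulk
print(round(eps, 3), round(tau_nano))  # eps ~ 1.435, tau_nano ~ 1100 us
```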
The correct experimental determination of the τ_R value of the 4 F 3/2 level of the Nd 3+ ion in aqueous colloidal solutions of Nd 3+ : LaF 3 NPs is a separate problem. As is known, at a sufficiently low level of excitation there are two main decay channels for the excited electronic state of an RE ion introduced into the crystal matrix. One of them is due to intra-center processes: spontaneous emission and multiphonon relaxation. The other is associated with the transfer of the excitation energy to impurity optical centers, donors and acceptors of energy, which are randomly distributed in the system. While in the first channel (in the absence of inhomogeneous broadening of donor levels or of latent anisotropy [54] associated with the non-sphericity of the NPs shape) the decay occurs exponentially with a characteristic time τ_R, in the second channel the impurity quenching kinetics is substantially nonexponential.
Since the excitation relaxation channels are independent, the observed luminescence kinetics is the product of the kinetics of the excitation relaxation in each channel,

I_meas(t) = I_0 exp(−t/τ_R) N(t), (7)

where N(t) describes the impurity quenching channel.
The first determination method of τ R logically follows from Formula (7) for I meas (t). According to it, it is necessary to synthesize a sample with the lowest possible concentration of impurities in such a way that the quenching channel could be neglected in the observed time interval: N(t max ) ≈ 1 (τ R determination method 1). This method of τ R determination was successfully applied, for example, for a 0.1 at% Nd 3+ : LaF 3 single crystal [17], 0.05% Nd 3+ : YAlO 3 [55], and phosphate glasses doped with Nd 3+ [56].
However, when the system contains uncontrollable impurities (energy acceptors) in addition to the activator, the situation becomes more complicated. In this case, no matter how much we reduce the concentration of the activator, static quenching by uncontrolled acceptors remains. This situation is quite common in both nano- and bulk systems. In our case, these impurities are OH⁻ acceptors randomly distributed in the volume of the NPs. In a powder sample of Y 3+ chelated complexes co-doped with Tb 3+, the same OH⁻ acceptors were contained in the structure of the crystal matrix [57], while in a bulk crystal activated by Nd 3+ ions this role was played by Dy 3+ ions, which are effective quenchers of the 4 F 3/2 metastable state of Nd 3+ ions [58]. Thus, if the rate of nonradiative energy transfer at the far stage of the fluorescence impurity quenching kinetics is comparable to the rate of radiative relaxation, the determination scheme should be changed. As shown in [34,56], OH⁻ acceptors interact with excited Nd 3+ ions via the dipole-dipole mechanism. If, at the same time, the distribution of acceptors in the volume of the nanoparticle is disordered, then all together this should give the well-known Förster "square root" kinetics of luminescence quenching [59,60]. For further analysis of the kinetic data, we write down the theoretical formula in an explicit form,

I_meas(t) = I_0 exp(−t/τ_R − γ_OH √t), (8)

where

γ_OH = (4/3) π^(3/2) n_OH (C_DA^OH)^(1/2). (9)

Here, n_OH is the concentration of acceptors, and C_DA^OH is the microparameter of the dipole-dipole donor-acceptor interaction.
If the quenching on acceptors is strong enough, then the second term in the exponent will be comparable to the first one or will even dominate over the entire observation interval, leaving no room for the radiative decay exponent. In such a situation, we cannot determine τ_R according to method 1. It is necessary to analyze the more general Formula (8).
To determine τ_R, it is necessary to build the function

N(t) = I_meas(t) exp(t/τ_R) = exp(−γ_OH √t) (10)

as a function of √t. Then, for each selected value of τ_R, we analyze how well the experimental data correspond to a straight line. With the optimal selection of the value of τ_R, we should obtain a straight line over the entire measurement interval, from the slope of which it is possible to determine the value of the static quenching macroparameter γ_OH (τ_R determination method 2).
For the practical implementation of method 2, let us take the logarithm of kinetics (10) once again, presenting it in the coordinates lg[−ln N(t)] vs. lg t. In the general case, the slope of this dependence determines the power of time t, which in turn is determined by the multipolarity of the donor-acceptor interaction s and the dimension of the acceptor space D [61]. In our case, the interaction is dipole-dipole, i.e., s = 6, and the acceptors are located in the volume of the NPs, which corresponds to D = 3 [19]. As a result, the power of t should be D/s = 3/6 = 1/2. After proper selection of τ_R = 1400 µs for NPs of type I and τ_R = 1300 µs for NPs of type II (Figure 8), the impurity quenching kinetics for both types of NPs gives exactly a straight line with a slope of 1/2, which corresponds to the Förster static kinetics. As we can see, the values of τ_R obtained using this procedure are 20% (type I) and 15% (type II) higher than the theoretical estimates made with Formula (6). This may be due to the non-spherical shape of the NPs. The difference in τ_R between the two studied samples is also not surprising. As we indicated above, it may be due to the difference in the distribution functions of the NPs in the colloids with respect to the deviation from sphericity.
Note that an attempt to determine τ_R directly from the slope of the luminescence kinetics at the far stage in the coordinates lg N(t) vs. t leads to significantly underestimated values of τ_R = 805 µs for the NPs sample of type I (Figure 8a) and τ_R = 840 µs for the NPs sample of type II (Figure 8b). As a result, after dividing I_meas(t) by an exponential decay with such decay times τ_R, we obtain not a monotonically decaying kinetics but a curve growing at large times (blue curves in Figure 8a,b), which is unrelated to the disordered static stage of the impurity quenching kinetics.
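A sketch of the τ_R selection procedure (method 2) on synthetic data; the decay model and the slope criterion follow Formulas (8) and (10), while the noise level and grids are arbitrary choices:

```python
import numpy as np

# A sketch of tau_R determination method 2, assuming the measured decay follows
# the Forster form (8): I(t) = I0 * exp(-t/tau_R - gamma*sqrt(t)).
# Synthetic data stand in for the experiment; gamma and tau_R below are the
# type-I values quoted in the text.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 6000.0, 400)                 # us
I = np.exp(-t / 1400.0 - 0.055 * np.sqrt(t))
I *= 1.0 + 0.01 * rng.standard_normal(t.size)     # a little noise

def slope_in_forster_coords(tau_R):
    """Slope of lg[-ln N(t)] vs lg t after removing exp(-t/tau_R)."""
    N = I * np.exp(t / tau_R)
    x, y = np.log10(t), np.log10(-np.log(N))
    return np.polyfit(x, y, 1)[0]

# Scan tau_R and pick the value whose slope is closest to 1/2.
taus = np.arange(800.0, 2000.0, 10.0)
best = min(taus, key=lambda tr: abs(slope_in_forster_coords(tr) - 0.5))
print(best)   # -> close to the input 1400 us
```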
Further, representing the fluorescence impurity quenching kinetics in the special coordinates ln N(t) vs. √t, we observe linearization of the kinetics for both types of NPs over the entire time interval, except for the initial stage of ordered static decay. The slope of this stage in Figure 8 is close to unity. Having approximated the linear stage (Figure 9) for both types of samples, we obtained the following values for the macroparameters γ_OH in 0.1 at% Nd 3+ : LaF 3 NPs: γ_OH^I = 0.055 µs^(−1/2) for type I and γ_OH^II = 0.041 µs^(−1/2) for type II. Since the γ_OH macroparameter depends linearly on the concentration of OH acceptors (Formula (9)), and C_DA^OH is the same in both systems, it is possible to determine the ratio of their concentrations in the type I and II samples as the ratio of the macroparameters themselves. It turns out that the number of OH⁻ quenchers in the type II samples is 1.3 times less than in the type I samples.
In order to determine the absolute values of n_OH and C_DA^OH, in addition to the γ_OH value it is also necessary to know the value of the boundary time t_1 between the exponential initial stage and the "root" Förster kinetics. By definition, the exponents of the ordered and disordered stages are equal at this point, whence the following relation follows for t_1:

W_ord t_1 = γ_OH √t_1. (11)

In Formula (11), W_ord is the rate of excitation quenching at the ordered stage,
W_ord = n_OH Ω C_DA^OH P. (12)

Here, Ω is the volume per one site of the acceptor sublattice, and P is the lattice sum for the dipole-dipole interaction of the donor with acceptors uniformly distributed over the corresponding sublattice. Substituting expressions (9) and (12) into (11), we obtain

√t_1 = γ_OH / W_ord, (13)

and hence the following expression for the boundary time:

t_1 = 16π³ / (9 Ω² P² C_DA^OH). (14)

It is easily seen from this relationship that t_1 depends only on the structural constants of the acceptor sublattice and on the microparameter of the dipole-dipole donor-acceptor interaction C_DA^OH. Therefore, having determined the value of t_1, we can immediately calculate the value of C_DA^OH using the formula

C_DA^OH = 16π³ / (9 Ω² P² t_1). (15)

The boundary time t_1 is equal to the abscissa of the intersection of the asymptotes of the corresponding stages, with slopes of 1 and 1/2 (Figure 8). Accordingly, for both systems, type I and II, we obtain t_1 ≈ 15 µs. It is assumed that the hydroxyl OH⁻ group replaces the F⁻ ion in the crystal lattice of LaF 3. The corresponding lattice sum and the volume per fluorine site are equal to [62]

P_La−F = 44,793 nm⁻⁶, Ω_F = 1/n_F^max ≈ 1/55.05 nm³. (16)

Further, using the value of t_1 and calculating C_DA^OH according to Formula (15), we obtain

C_DA^OH ≈ 0.0056 nm⁶/ms. (17)

Now, knowing C_DA^OH and γ_OH and using (9), we can determine the absolute values of the acceptor concentrations: n_OH^I ≈ 3.15 nm⁻³ and n_OH^II ≈ 2.33 nm⁻³. Then, for the dimensionless relative concentration c_OH = n_OH Ω_F × 100%, we obtain c_OH^I = 5.7% and c_OH^II = 4.2%, respectively, which are rather large values. This shows that the diffusion of OH⁻ ions in the lattice, with the subsequent exchange for F⁻ ions during the hydrothermal crystallization of the precipitated gels, is difficult. Most likely, an increase in the duration and temperature of the hydrothermal treatment should lead to a decrease in the concentration of defects in the resulting NPs.
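The numerical chain from t_1 and γ_OH to C_DA^OH and n_OH can be reproduced in a few lines; the formula numbers refer to the reconstructed expressions above:

```python
import math

# Reproducing the microparameter and acceptor-concentration estimates from the
# lattice constants (16), the boundary time t1, and the macroparameters gamma_OH.
P = 44793.0            # nm^-6, lattice sum for the F sublattice of LaF3
Omega = 1.0 / 55.05    # nm^3, volume per fluorine site
t1 = 15.0e-3           # ms, boundary time between the ordered and Forster stages

C_DA = 16.0 * math.pi**3 / (9.0 * Omega**2 * P**2 * t1)   # formula (15)
print(C_DA)            # ~0.0056 nm^6/ms, cf. (17)

for gamma_us in (0.055, 0.041):                 # us^(-1/2): types I and II
    gamma_ms = gamma_us * math.sqrt(1000.0)     # convert to ms^(-1/2)
    n_OH = gamma_ms / ((4.0/3.0) * math.pi**1.5 * math.sqrt(C_DA))  # invert (9)
    print(n_OH, n_OH * Omega * 100.0)           # ~3.15/2.33 nm^-3 and ~5.7/4.2 %
```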
By integrating the luminescence decay kinetics and dividing it by τ_R = 1400 µs for the NPs of the type I series and by τ_R = 1300 µs for the NPs of the type II series, we can calculate the relative fluorescence quantum yield (Formula (4)). Figure 10a shows the concentration dependence of the relative fluorescence quantum yield for both series of NPs samples in comparison with a concentration series of single crystals. The difference in the relative fluorescence quantum yield of aqueous colloids of NPs is that it does not tend to 100% in the limit of low Nd 3+ concentrations, as it does in bulk crystals. The characteristic cutoff of the concentration dependence at ϕ = 26–33% indicates a high contribution of luminescence quenching on OH⁻ acceptors.
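A sketch of this integration for the Förster-type kinetics (8), using the 0.1 at% type II parameters; the resulting value is of the same scale as the measured ~30%:

```python
import numpy as np

# A sketch of the relative quantum yield as the time integral of the decay
# kinetics divided by tau_R (Equation (4) is assumed to have this form),
# using the Forster kinetics (8) with the 0.1 at% type-II parameters.
tau_R, gamma = 1300.0, 0.041                 # us, us^(-1/2)
t = np.linspace(0.0, 50000.0, 200001)        # us
N = np.exp(-t / tau_R - gamma * np.sqrt(t))
phi = np.trapz(N, t) / tau_R
print(phi)                                   # ~0.36, same scale as the ~30% reported
```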
The maximum relative quantum yield of luminescence of the aqueous colloidal solutions, associated with quenching on third-party acceptors only, is, as expected, observed at the minimum concentration of Nd 3+ ions (0.1 at%) and for type II samples exceeds 30%. This value is even larger than for the dried powder obtained by the HTMW treatment with the surfactant Proxanol-268 [34] and is undoubtedly a record result for stable aqueous colloids of NPs doped with Nd 3+ ions. The concentration dependences of the relative fluorescence quantum yield ϕ (Equation (4)) and the fluorescence brightness ν (Equation (5)) of the two types of NPs samples differ from each other over the entire range of Nd 3+ concentrations. Due to systematic and random errors in the preparation of the initial salt solutions during the synthesis of the aqueous colloidal solutions of NPs, the concentration of impurity Nd 3+ ions in them may slightly vary. Therefore, for better verification of the ϕ and ν values, we performed repeated syntheses of both samples of NPs solutions with the same nominal Nd 3+ concentration. The dispersion in the ϕ and ν values for both series of NPs samples with nominally the same Nd 3+ concentration is indicated by double and triple circles of the same color (Figure 10).
Our analysis showed that the lower value of ϕ in the sample of NPs of type I compared to NPs of type II is explained by the higher concentration of the uncontrolled impurity of OH⁻ groups in the NPs of type I. Effective quenching of the excitation of the 4 F 3/2 level of Nd 3+ ions in Nd 3+ : LaF 3 NPs is caused by the interaction with anharmonic vibrations of the OH⁻ groups [34]. During the hydrothermal synthesis, the hydroxyl OH⁻ group randomly replaces F⁻ ions in the entire volume of the nanoparticle, which is responsible for the Förster kinetics of luminescence quenching at low neodymium concentrations. With an increase in the concentration of Nd 3+ ions in the NPs, self-quenching appears, in which the role of acceptors is played by unexcited Nd 3+ ions; at the same time, the resonant migration of excitations over the impurity sites of the activator significantly accelerates both self-quenching and the quenching of excitations on additional acceptors.
When migration-accelerated self-quenching prevails over radiative decay (strong self-quenching: V_DA n_D, V_DD n_D ≫ 1) in the case of a dipole-dipole donor-acceptor interaction, the theory [17,63] gives the following expression for the brightness:

ν = n_D / (1 + (2/π) V_DA V_DD n_D²). (18)

Here, n_D is the donor concentration, and V_DA and V_DD are the effective volumes of the strong incoherent donor-acceptor and donor-donor interactions, which are determined through the corresponding Förster radii:

V_DA = (4π/3) R_DA³, V_DD = (4π/3) R_DD³. (19)

The Förster radii of the strong interaction are determined in the usual way from the condition of equality of the rates of radiative and nonradiative decay of the excitation at the critical distance,

C_DA / R_DA⁶ = 1/τ_R, C_DD / R_DD⁶ = 1/τ_R. (20)

As a rule, in systems doped with rare earth ions, a hopping mechanism is realized, for which the inequality R_DD ≫ R_DA holds.
Formula (18) gives the concentration dependence of the luminescence brightness during self-quenching, without any quenching on additional acceptors. However, if quenching occurs only on independent acceptors randomly distributed in the system with concentration n_A, and no self-quenching process takes place, then the theory of migration-accelerated quenching [64,65] gives a completely different result. In the same simple form that is valid for strong hopping quenching, the brightness for dipole-dipole interactions is equal to

ν = n_D / (1 + (2/π) V_DA^OH V_DD n_D n_A). (21)

It is easy to see from Formula (18) that in the case of pure self-quenching the concentration dependence of the brightness initially increases linearly, proportionally to n_D, and then, passing through a maximum, decreases at high concentrations proportionally to 1/n_D. In the case of pure quenching (21), the brightness at high concentrations reaches a plateau of ν ∼ π/(2 V_DA^OH V_DD n_A). In our experiments, the maximum fluorescence brightness ν in the series of single crystals is observed at a Nd 3+ ion concentration of about 1.3 at%, and for the NPs of type II at about 1.8 at%, while in the series of NPs of type I the brightness practically does not change at concentrations above 1.8 at% (Figure 10b). At the same concentrations of Nd 3+ ions, the relative fluorescence quantum yield and fluorescence brightness of the NPs sample of type II are only 2.5 times lower than in similar Nd 3+ : LaF 3 single crystals and 20% higher than in the samples of NPs of type I.
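The qualitative difference between the two regimes can be visualized directly from the reconstructed Formulas (18) and (21); V_DA, V_DD and n_A below are arbitrary illustrative values, not fitted parameters:

```python
import numpy as np

# Comparing the two limiting concentration dependences of the brightness:
# pure self-quenching (18) with a maximum and a 1/n_D tail, and pure acceptor
# quenching (21) saturating at a plateau.
V_DA, V_DD, n_A = 1.0, 5.0, 0.3
n_D = np.linspace(0.01, 10.0, 1000)

nu_self = n_D / (1.0 + (2.0/np.pi) * V_DA * V_DD * n_D**2)     # rises, peaks, falls
nu_acc  = n_D / (1.0 + (2.0/np.pi) * V_DA * V_DD * n_D * n_A)  # rises, saturates

print(n_D[np.argmax(nu_self)])                 # position of the self-quenching maximum
print(nu_acc[-1], np.pi/(2*V_DA*V_DD*n_A))     # approaches the analytic plateau value
```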
Thus, in agreement with Formula (18), in a bulk crystal pure self-quenching with a pronounced maximum is observed (Figure 10b, black dots). For the series of NPs of type I, on the contrary, a plateau is observed (Figure 10b, blue dots), which indicates the prevalence of quenching over self-quenching, in agreement with Formula (21). Finally, the concentration dependence of the brightness of the series of NPs of type II has a weakly pronounced maximum (Figure 10b, red dots), shifted relative to the single crystal towards higher concentrations. This means that an intermediate situation of two-channel quenching is realized in this system, when both self-quenching and quenching make comparable contributions.
Owing to the combination of the properties of our NPs: (1) "green" water synthesis, (2) short synthesis duration, (3) long-term stability of the aqueous colloid over several months without observable sedimentation of NPs [19], (4) no additional modification of the NPs surface, (5) high luminescence brightness in the first biological transparency window, and (6) low cytotoxicity [21,66], the aqueous colloidal solutions of Nd 3+ : LaF 3 NPs of type II surpass the similar aqueous colloidal solutions of Nd 3+ : LaF 3 NPs described in the literature.
Conclusions
In this work, two concentration series of long-term stable aqueous colloidal solutions of Nd 3+ : LaF 3 crystalline NPs possessing the fluocerite structure were synthesized by the HTMW method in experimental setups with either one or two magnetrons. The Nd 3+ : LaF 3 NPs resulting from these two synthetic setups are single-phase, well crystallized, morphologically different, and partially faceted, with average sizes of 15.4 ± 6 nm (NPs of type I) and 21 ± 7 nm (NPs of type II). Both types of NPs have a size distribution that can be described by a double Gaussian function. The finely dispersed fraction of NPs is apparently formed as a result of the primary crystallization of the gel from the aqueous solution. The fraction of larger NPs is probably the result of recrystallization and growth of the primary NPs during the HTMW treatment. In the setup with two magnetrons, a more uniform supply of microwave radiation to the autoclave containing the mother liquor appears to be realized. This increases the rate of growth and recrystallization of the type II NPs, leading to an increase in their average size and degree of crystallinity.
Differences in the morphologies and size distributions of the NPs affect their physical properties. The non-spherical shape of the NPs, the different size distribution functions, and the different concentrations of OH⁻ groups that randomly replace F⁻ ions in the entire NP volume primarily affect the physical parameters that determine the relative fluorescence quantum yield and the relative fluorescence brightness, namely the radiative lifetime τ_R and the luminescence quenching macroparameter γ_OH, which turned out to be different for the NPs of the two types.
This paper describes in detail the methodology for determining the radiative lifetime and the luminescence quenching macroparameter in NPs, which is more complicated than for the corresponding bulk crystals. The main reason for this complication is the high concentration of OH⁻ acceptors, which explains why, even at low concentrations of Nd 3+ ions (0.1 at%), when self-quenching is absent and the luminescence kinetics can be represented in the form (8), the fluorescence quenching on additional acceptors dominates over radiative decay in the observed time interval. To determine τ_R in such situations, it is necessary to plot the function N(t) = I_meas(t)/exp(−t/τ_R) in special coordinates and to select the value of τ_R in such a way that the function N(t) over the entire observation time interval fits a straight line corresponding to the Förster kinetics with a slope of 1/2 (Figure 8). Having checked the functional dependence on time in this way, it is then necessary to re-plot the kinetics in the coordinates of Figure 9, in which it should have the form of a straight line whose slope is equal to the macroparameter γ_OH.
With an increase in the concentration of Nd 3+ ions, the migration of excitations and self-quenching appear in the system of these impurity centers. Migration accelerates both excitation relaxation channels competing in the NPs: self-quenching (Nd* → Nd) and quenching (Nd* → OH). Self-quenching by itself, as occurs in single crystals, gives a distinct maximum in the dependence of the luminescence brightness on concentration, as described by Equation (18) (Figure 10b, black dots). On the contrary, simple quenching on additional acceptors leads the luminescence brightness curve to a plateau (Equation (21)). In NPs, these two channels compete with each other. Therefore, in NPs of type I, where quenching prevails over self-quenching, a plateau is observed (Figure 10b, blue dots). At the same time, in NPs of type II, where the concentration of OH⁻ acceptors is lower, both excitation relaxation processes make comparable contributions, and the brightness maximum smooths out and shifts towards higher concentrations of Nd 3+ ions (Figure 10b, red dots). The decrease in the relative quantum yield and fluorescence brightness of the 4 F 3/2 level of Nd 3+ ions in aqueous colloidal solutions of Nd 3+ : LaF 3 NPs, in comparison with similar single crystals containing the same concentrations of impurity Nd 3+ ions, is caused precisely by the interaction with anharmonic vibrations of the OH⁻ groups. Therefore, the conclusion of this work on the effect of the number of magnetrons on the concentration of OH⁻ groups in the volume of the NPs is important, since it provides a basis for optimizing the luminescent properties of aqueous colloidal solutions of NPs. The relative fluorescence quantum yield and fluorescence brightness of an aqueous colloidal solution of the NPs of type II are only 2.5 times lower than those of the analogous Nd 3+ : LaF 3 single crystals. This property offers promising prospects for the use of these colloidal solutions in bioimaging.
The results obtained on the fluorescence of our NPs show that aqueous colloidal solutions of 2 at% Nd 3+ : LaF 3 NPs of type II synthesized in the setup with two magnetrons are more promising for biological imaging, since their fluorescence brightness is about 25–30% higher than that of solutions of NPs of type I synthesized in the setup with a single magnetron. | 16,241.4 | 2021-10-26T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Flat space holography and complex SYK
Hamid Afshar,* Hernán A. González,† Daniel Grumiller,‡ and Dmitri Vassilevich§ Institute for Theoretical Physics, TU Wien, Wiedner Hauptstr. 8, A-1040 Vienna, Austria; School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran; Facultad de Artes Liberales, Universidad Adolfo Ibáñez, Diagonal Las Torres 2640, Peñalolén, Santiago, Chile; CMCC, Universidade Federal do ABC, Santo André, São Paulo, Brazil; Physics Department, Tomsk State University, Tomsk, Russia (Dated: November 15, 2019)
Introduction. The Sachdev-Ye-Kitaev (SYK) model [2-4] reinvigorated studies of Jackiw-Teitelboim (JT) gravity [5,6], since in a certain limit the latter is the gravity dual of the former [7]. This holographic relationship, dubbed nAdS 2 /nCFT 1, inspired numerous research activities in the past few years in the gravity and condensed-matter communities. One crucial piece of the puzzle is the Schwarzian action [2,7] that arises in the large N and strong coupling limit on the quantum mechanics side and, upon imposing suitable boundary conditions, also on the gravity side.
Given the impressive evidence for AdS/CFT realizations of holography, it is justified to ask how generally the holographic principle works, whether it works beyond the AdS/CFT correspondence, and in particular if and how it works in asymptotically flat spacetimes. See earlier results in flat space holography and refs. therein.
The main goal of our Letter is to find a model analogous to JT that leads to a holographic relationship involving flat space instead of AdS 2 , with some suitable replacement of the Schwarzian action.
The Callan-Giddings-Harvey-Strominger (CGHS) model [72] is a prime candidate for the gravity side of flat space holography since all solutions have asymptotically vanishing Ricci scalar. This opens up the prospect to construct a concrete holographic correspondence between flat space dilaton gravity in 1+1 dimensions and some cleverly designed quantum system of (complex) fermions in 0+1 dimensions.
The principal result of this Letter is that the flat space analogue of the Schwarzian action is given by the twisted warped action (1), where κ is a coupling constant, β is the inverse temperature, and prime denotes the derivative with respect to τ, the time direction along the boundary. The time-reparametrization field h(τ + β) = h(τ) + β is quasi-periodic and the phase field g(τ), in the absence of winding, is periodic. When the functions T and P are constant we refer to them as mass and charge, respectively. While the mass can be arbitrary, it will turn out that regularity demands a linear relationship between charge and temperature. The superscript tw stands for 'twisted warped' and stems from the symmetries (7) that govern our action. On the gravity side, κ is essentially the inverse Newton constant, as evident from our starting point (2). On the field theory side, κ is essentially the geometric mean of the specific heat at constant charge and the zero temperature compressibility, as evident from our final equation (32).
The remainder of our Letter is organized as follows. We start by gaining some intuition about our gravity model in the metric formulation and then switch to a gauge theoretic formulation. The latter is employed to derive our main result (1). Finally, we recover the boundary action (1) from a scaling limit of complex SYK.
Metric formulation of CGHS. Following Cangemi and Jackiw [73] we manipulate the CGHS action [72] in three ways: 1. for simplicity we set all matter fields to zero, 2. we perform a Weyl rescaling (depending on the dilaton X) of the metric g µν, and 3. we "integrate in" an abelian gauge field A µ and an auxiliary scalar field Y that is constant on-shell. The resulting action (2), in which ε µν is the ε-tensor, provides a reformulation of the CGHS model referred to as ĈGHS. We now solve the ĈGHS field equations with suitable boundary and gauge fixing conditions.
In the axial gauge for the U(1) connection, the field equation (4) is solved by the two-dimensional Coulomb connection A = r du. Its preservation under combined diffeomorphisms and gauge transformations, δ ξ,σ A ν = ξ µ ∂ µ A ν + A µ ∂ ν ξ µ + ∂ ν σ, relates the functions η and σ, η = σ, which can be interpreted in two different ways. Either one concludes that σ has to contain a ln u-term, since M 0 is allowed to be non-zero, or one concludes that M 0 is forbidden, since σ is assumed to have a Laurent series around u = 0. The first option leads to the BMS 2 symmetries discussed above. The second option leads to a slight modification of the transformation properties.
In the present Letter we focus on the second option, since it guarantees that the Wilson loop in the complex u-plane encircling the origin u = 0 is gauge invariant, δ σ ∮ A = 0; in other words, there are no winding modes. Defining, instead of M n, new Fourier modes J n := ξ(0, σ = u n ) yields the asymptotic symmetry algebra in terms of asymptotic Killing vector modes,

[L n , L m ] Lie = (n − m) L n+m , [L n , J m ] Lie = −m J n+m , [J n , J m ] Lie = 0,

which is known as the "warped Witt algebra", the centerless version of either the warped conformal algebra [74] or the twisted warped conformal algebra [75].
Finally, the rr-component of the field equation (5) is solved by dilaton fields linear in the radial coordinate. The remaining components of the field equation (5), which involve the functions x i (u), will be determined in the gauge theoretic formulation of ĈGHS. Gauge theory formulation of CGHS. For the gauge theoretic formulation as non-abelian BF-theory [73,76] we use conventions analogous to [77]. The first order form of the ĈGHS bulk action (2) is given by the BF-action (9), where κ is the coupling constant, B is a scalar and F = dA + A ∧ A is the non-abelian field strength. The connection contains the dualized spin-connection ω, the zweibein e a and the U(1) connection A. The generators obey the Maxwell algebra [95], whose non-zero commutators read [P a , P b ] = ε ab Z and [P a , J] = ε a b P b. The scalar B comprises the dilaton X, the Lagrange multipliers X a for the torsion constraints and the auxiliary field Y. Finally, ⟨·, ·⟩ denotes the bilinear form (12), whose non-vanishing entries pair P + with P − and J with Z. We use light-cone gauge for the Minkowski metric, η +− = 1, in terms of which the Levi-Civitá symbol is ε ± ± = ±1 and the gauge algebra reads [P + , P − ] = Z and [P ± , J] = ±P ±. Integrating out the Lagrange multipliers X a and solving the torsion constraints, the action (9) with (10)-(12) can be shown to be equivalent to (2).
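The light-cone commutation relations can be verified in an explicit nilpotent 3 × 3 representation; the particular matrices below are an assumption for illustration (not necessarily the representation referred to in the supplemental material, whose bilinear-form normalization differs):

```python
import sympy as sp

# A consistency check of the light-cone Maxwell algebra [P+,P-] = Z,
# [P±, J] = ±P±, Z central, in an explicit (assumed) 3x3 representation.
def E(i, j):
    m = sp.zeros(3, 3); m[i, j] = 1; return m

Pp, Pm, Z = E(0, 1), E(1, 2), E(0, 2)
J = sp.diag(-1, 0, -1)
comm = lambda a, b: a*b - b*a

assert comm(Pp, Pm) == Z
assert comm(Pp, J) == Pp
assert comm(Pm, J) == -Pm
assert comm(Z, Pp) == sp.zeros(3, 3) and comm(Z, Pm) == sp.zeros(3, 3)
assert comm(Z, J) == sp.zeros(3, 3)
print("Maxwell algebra relations verified")
```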
Boundary conditions compatible with the ones in the metric formulation are given by A = b −1 (d + a) b and B = b −1 x b (13), with b = exp(−r P + ) and

a = ( T(u) P + + P − + P(u) J ) du, (14)

where both functions in the connection are allowed to vary, δT ≠ 0 ≠ δP. The equations of motion reduce to the flatness of a and the stabilizer equation for x (15). The first one is obeyed automatically by our ansatz (14). The second one, which states that x is the stabilizer of a, holds provided Y = Λ = const. and the differential equations (16) are fulfilled. Using them, x is conveniently parametrized by two functions x 1 and x 0 (17).
Asymptotic symmetries. The boundary condition preserving gauge transformations, δ λ B = [B, λ] and δ λ a = dλ + [a, λ] (18), are generated by gauge parameters λ of the form (19). The absence of the P − - and Z-components on the right hand side of (18) yields consistency relations between the functions in the gauge parameter (19).
The boundary condition preserving gauge transformations (18)-(20) imply precisely the transformation laws (7), which is the twisted warped conformal transformation behavior introduced in [75]. Therefore, the analogue of the conformal symmetries that govern the Schwarzian action are the twisted warped conformal symmetries, which govern our boundary action (1).
Boundary action. Our derivation of the boundary action for ĈGHS follows closely the derivation of the Schwarzian action in section 3 of [77].
From now on we work in Euclidean signature with periodic boundary time, t ∼ t + β, where β is the inverse temperature. Mapping Lorentzian to Euclidean results requires the replacements u → iu = t and η ab → δ ab. The fields are given by (10) and (11), with all quantities replaced by their Euclidean counterparts. We use the shorthand ∮ dt := ∫ 0 β dt, and dot means d/dt.
The variation of the BF-action (9) apart from the bulk equations of motion yields a boundary term [77].
Our aim is to cancel this boundary term by the variation of a boundary action. To this end we use a convenient representation of the connection and impose as integrability condition that the function f t has a fixed zero mode (which we set to unity with no loss of generality). As shown below, this guarantees that the first variation of the full action vanishes for all variations preserving our boundary conditions.
The variation of the boundary action (21) expands as in (22), with the bilinear Casimir C defined in (23) [96]. The second term in (22) vanishes on-shell, since C is constant and f t has a fixed zero mode, while the terms in the second line vanish on-shell or because they integrate to zero; thus, only the first term remains, which is integrable in field space, leading to the boundary action (24). For future purposes we define f t = ḟ, where f = t + . . . is a quasi-periodic function that has arbitrary Fourier modes but a fixed linear term. The full action has a well-defined variational principle, i.e., its first variation vanishes for all variations that preserve our boundary and on-shell conditions (13)-(16).
Plugging the Euclidean version of the expression (17) for x into the bilinear Casimir (23) yields the expression (26) for the Casimir, where we used the relation 1/x 1 E = ḟ. Defining additionally ġ := i x 0 E ḟ, the boundary action (24) with the Casimir (26) is nearly our final result.
The boundary action (27) depends functionally on f and g, both of which are boundary scalars, as evident from their transformation behavior under asymptotic symmetries, δ λ f = εḟ and δ λ g = σ + εġ. The latter also shows that g is a phase under U(1) gauge transformations. As in the JT case [77], we reparametrize the time coordinate along the boundary by a diffeomorphism τ := f(t), where τ is our new (Euclidean) time coordinate with period β, and introduce the inverse of f as a new field, h(τ) := −f −1 (τ). The other field, g, now also depends on τ, and prime from now on means the derivative with respect to τ. Implementing this diffeomorphism in the boundary action (27) establishes the boundary action (1) announced in the introduction. The action (1) is our main result and constitutes the analogue of the Schwarzian. Since it has a geometric interpretation as the group action for twisted warped coadjoint orbits, governed by the symmetries (7), we refer to it as the "twisted warped action". This is analogous to the interpretation of the Schwarzian action as the group action for Virasoro coadjoint orbits [78,79]. We refer to [1] and refs. therein for more on these mathematical aspects.
Solutions to twisted warped theory. We now study classical solutions of the action (1) for constant representatives, T = T 0 and P = P 0. The Hamiltonian formulation involves three canonical pairs (q i , p i ), i = 1, 2, 3 (28). The relation to the original variables is q 3 = exp(iP 0 h), q 2 = g + ih T 0 /P 0, while all other canonical variables are of auxiliary nature, introduced to get rid of higher derivatives. The interaction term with the exponential in q 1 also appears in the Schwarzian theory, see Eq. (2.1) in [40]. The key difference is the kinetic term, p 1 ² for the Schwarzian and p 1 p 2 for the twisted warped Hamiltonian.
Solving the Hamiltonian equations of motion yields q 3 = h 0 + h 1 e^(iτ/τ 0 ) and q 2 = g 0 − ig 1 τ + g 2 e^(iτ/τ 0 ). These solutions depend on six integration constants, g 0 , g 1 , g 2 , h 0 , h 1 , τ 0, the latter playing the role of the periodicity, τ 0 = β/(2π). The integration constants h 0 and g 0 are constant shifts, while h 1 and g 2 are amplitudes in front of oscillating terms. The remaining constant, g 1, captures the non-periodicity of q 2 and is responsible for the on-shell action being non-zero, I tw [q i , p i ]| EOM = −2πκ g 1. Thermodynamics. Assuming that g 1 is independent of temperature allows one to deduce the entropy

S = −I tw [q i , p i ]| EOM = 2πκ g 1 (29)

from the on-shell action. Inserting all our definitions, we recover the well-known fact that the entropy is given by the dilaton at the horizon [80].
The result for the entropy (29) can be derived along the lines of [8,44]. One aspect of this derivation is worth highlighting: the holonomy of a along the thermal cycle must belong to the center of our gauge group for regularity (in order to have contractible thermal cycles). Assuming a single cover, we find that this regularity condition relates the temperature T = β −1 and the charge,

P 0 = 2πT, (30)

while the mass T 0 remains arbitrary. The label "charge" is justified for P 0, since the equations of motion imply P 0 = Y and Y is the U(1) charge. The label "mass" is justified for T 0, as it is the subleading term in the metric (6) and since the associated function T transforms like a stress tensor (7b) in a twisted warped field theory [75].
A peculiar aspect of CGHS black hole thermodynamics is that the inverse specific heat (at fixed charge) vanishes,

C −1 = (1/T) dT/dS |_δP 0 =0 = 0,

since the Hawking-Unruh temperature T trivially does not vary if the charge P 0 is kept fixed, due to the relation (30). This property is well-known [81], but will be crucial for the scaling limit from complex SYK.
Scaling limit from complex SYK. We turn now to the field theory side, starting with the complex SYK model [7,28,82-84]. The effective action governing the dynamics of the collective low temperature modes of complex SYK is given by (see [28] and Eq. (1.12) in [84])

I cSYK [h, g] = ∫ 0 β dτ [ (NK/2) (g′ + i(2πE/β) h′)² − (Nγ/(4π²)) Sch(tan(πh/β), τ) ], (31)

where Sch(f, τ) = f‴/f′ − (3/2)(f″/f′)² is the Schwarzian derivative, N is the (large) number of complex fermions, NK is the zero temperature compressibility, Nγ is the specific heat at fixed charge and E is a spectral asymmetry parameter. The time-reparametrization field h(τ + β) = h(τ) + β is quasi-periodic and the phase field g(τ), in the absence of winding, is periodic.
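The thermal form of the Schwarzian term in (31) relies on the composition identity Sch(tan(πh/β), τ) = Sch(h, τ) + (2π²/β²) h′², which can be checked symbolically:

```python
import sympy as sp

tau, beta = sp.symbols('tau beta', positive=True)
h = sp.Function('h')(tau)

def sch(f, x):
    # Schwarzian derivative {f, x} = f'''/f' - (3/2)*(f''/f')**2
    return sp.diff(f, x, 3)/sp.diff(f, x) - sp.Rational(3, 2)*(sp.diff(f, x, 2)/sp.diff(f, x))**2

lhs = sch(sp.tan(sp.pi*h/beta), tau)
rhs = sch(h, tau) + 2*sp.pi**2/beta**2 * sp.diff(h, tau)**2
print(sp.simplify(lhs - rhs))  # -> 0
```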
According to the thermodynamical discussion above we are interested in the limit N γ → ∞ in order to obtain our action (1) as limit from the complex SYK effective action (31). This is indeed possible by combining the actions (1) and (31) to the geometric action associated with the twisted warped Virasoro group, known as 'warped Schwarzian' [1], I wSch = I cSYK + I tw .
Starting from the effective action (31) and shifting g by [1]

g → g − (κ/(NK)) (ln h′ + (2πi/β) h)

yields the action I wSch with non-vanishing κ and shifted specific heat parameter γ̃ = γ + 36π²κ²/(N²K). Thus, our boundary action (1) emerges by sending both γ̃ and K to zero, while keeping κ fixed. At large N this is achieved by the family of scaling limits

γ̃ = γ 0 N^a , K = K 0 N^(−b) ⟹ κ = (N^(1+(a−b)/2) /(6π)) √(γ 0 K 0 ). (32)

The constants γ 0 and K 0 are independent of N and their product must be positive. The exponents a > −1, b > 1 lead to infinite specific heat and vanishing zero temperature compressibility, respectively, in the large N limit. Two simple choices are a = b = 2, leading to κ = (N/(6π)) √(γ 0 K 0 ), and a = 0, b = 2, leading to κ = (1/(6π)) √(γ 0 K 0 ). Conclusions. We derived on the gravity side the boundary action (1) as a first step towards a two-dimensional model for flat space holography. We showed that the field theory side of our proposal for flat space holography emerges as a triple scaling limit of complex SYK: large N, large coupling (or small temperature) and large specific heat, while keeping fixed (with an adjustable scaling in N) the geometric mean of specific heat and zero temperature compressibility. As evident from (32), this geometric mean (up to a factor N^(1+(a−b)/2) /(6π)) is the coupling constant κ in (1).
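A numeric sanity check of the reconstructed scaling family (32), here for the choice a = 0, b = 2:

```python
import numpy as np

# With gamma_t = gamma0*N**a and K = K0*N**(-b), the coupling
# kappa = N*sqrt(gamma_t*K)/(6*pi) stays finite while the specific heat
# N*gamma_t diverges and the compressibility N*K vanishes at large N.
gamma0, K0, a, b = 1.0, 1.0, 0, 2
for N in (10.0, 1e3, 1e5):
    gamma_t, K = gamma0 * N**a, K0 * N**(-b)
    kappa = N * np.sqrt(gamma_t * K) / (6 * np.pi)
    print(N, N * gamma_t, N * K, kappa)   # kappa -> sqrt(gamma0*K0)/(6*pi), constant
```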
Starting from our flat space holographic description, numerous further research avenues can now be pursued, inspired by corresponding SYK-related results or by generic aspects of two-dimensional dilaton gravity (see [85] for a review and [86] for a list of models). Not intending to do justice to the vast literature on these subjects, we highlight just one intriguing aspect, namely the role of chaos in flat space holography. By analogy to the AdS 2 case [12,87] we expect saturation of the chaos bound, i.e., a Lyapunov exponent given by λ L = 2πT. It should be rewarding to verify this through explicit calculations.
Acknowledgments. We thank … Nees, Stefan Prohazka, Jakob Salzer and Carlos Valcárcel for collaborations on related subjects and for discussions. DG is particularly grateful to Jakob Salzer for joint unpublished work on BMS 2 in two-dimensional dilaton gravity. This work was supported by the Austrian Science Fund (FWF), projects P 28751, P 30822 and P 32581.
Supplemental material. The Maxwell algebra (S1) is written with the convention ε^01 = 1 = −ε_01. Changing the basis as L 0 = J, L 1 = P 1 − P 0, J −1 = P 1 + P 0 and J 0 = −2Z yields the algebra (S2), which coincides with a maximal subalgebra of the warped Witt algebra in the main text, re-displayed below as (S3).
The relation to the light-cone generators in the main text is L 1 = P + , J −1 = P − , L 0 = J and J 0 = Z, in terms of which the algebra simplifies to (S4). Using light-cone generators, a simple matrix representation for the algebra (S1) in terms of 3 × 3 matrices exists. The bilinear form (12) cannot be represented by the simple matrix trace, since all traces of bilinear or quadratic expressions vanish, with the exception of tr J² = 1. However, we recover the bilinear form (12) by introducing adjoint matrices and then defining the pairing accordingly. The adjoint (S7) is involutive, i.e., (A†)† = A for all generators A = P ± , J, Z, but it does not act in the usual way on products: (AB)† ≠ B†A† in general.
Harmonic oscillator basis. The 1+1 dimensional Maxwell algebra (S1) [or (S2) or (S4)] is identical to the harmonic oscillator algebra. This is seen explicitly by introducing the annihilation operator a = L 1, the creation operator a† = J −1, the Hamiltonian H = a†a = L 0 and the central term 1l = J 0, with the usual commutation relations. Rescaling the generators of sl(2) ⊕ u(1) with a parameter ε and then taking the limit ε → 0 yields the same algebra as a contraction. Since sl(2) ⊕ u(1) is the gauge algebra in the gauge theoretic formulation of the charged JT model, the existence of this contraction shows that a scaling limit from charged JT to ĈGHS exists. This is the gravity version of the scaling limit of the complex SYK model studied in the main text.
Infinite lift and central extension. The Maxwell algebra (S2) has an infinite lift to the warped Witt algebra, see the Lie-bracket algebra (S3) of asymptotic Killing vectors. The warped Witt algebra has up to three nontrivial central extensions: the Virasoro central charge c, a twist term κ and a u(1) level K̂. The centrally extended version of (S3) reads

[L n , L m ] = (n − m) L n+m + (c/12) (n³ − n) δ n+m,0 (S12a)
[L n , J m ] = −m J n+m − iκ (n² − n) δ n+m,0 (S12b)
[J n , J m ] = (K̂/2) n δ n+m,0 . (S12c)

Note that the Maxwell algebra (S2) is a subalgebra of (S12) that is blind to all three central extensions.
The twisted warped Virasoro algebra can be mapped to the warped Virasoro algebra (with central charge c → c − 24κ²/K̂) by a change of basis, namely by first twisting the Virasoro generators, L n → L n + (2iκ/K̂) n J n, and then shifting both zero modes, L 0 → L 0 + ∆ 0 and J 0 → J 0 + Q 0, with suitably chosen constants ∆ 0 and Q 0. The twist of the Virasoro generators no longer works if the u(1) level vanishes, K̂ = 0, so there is no regular way to eliminate the twist term κ from the twisted warped Witt algebra. Singular limit. A singular limit maps the warped Virasoro algebra (after twisting with some κ, i.e., inverting the map above from warped Virasoro to twisted warped Virasoro) to the twisted warped Witt algebra, namely K̂ → 0, c → ∞ while keeping fixed the geometric mean of central charge and u(1) level,

κ = √(−cK̂/24). (S13)

This is the algebraic version of the scaling limit we performed in (32), with the central charge c = Nγ/(3π²) playing the role of the specific heat, the u(1) level K̂ = 2NK playing the role of the zero temperature compressibility, and the twist term κ identical to the coupling constant κ in the main text.
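The central-charge shift under the twist map can be verified symbolically on the central terms of (S12); the zero-mode shift ∆ 0 = κ²/K̂ used below is a value we infer, consistent with the statement that suitable constants exist:

```python
import sympy as sp

# Check of the twist map L_n -> L_n + (2*I*kappa/Khat)*n*J_n on the central
# terms of the twisted warped Virasoro algebra (S12): the central part of
# [L'_n, L'_{-n}] reproduces the Virasoro form with c -> c - 24*kappa**2/Khat
# after shifting L_0 -> L_0 + kappa**2/Khat.
n, c, kappa, Khat = sp.symbols('n c kappa Khat')
alpha = 2*sp.I*kappa/Khat

vir   = c/12*(n**3 - n)                                   # from [L_n, L_{-n}]
mixed = alpha*(-n)*(-sp.I*kappa)*(n**2 - n) \
      + alpha*n*( sp.I*kappa)*(n**2 + n)                  # from [L, J] cross terms
level = -alpha**2 * n**2 * (Khat/2) * n                   # from [J_n, J_{-n}]

total  = sp.expand(vir + mixed + level)
ctilde = c - 24*kappa**2/Khat
shift  = 2*n*(kappa**2/Khat)                              # zero-mode shift of L_0
print(sp.simplify(total + shift - ctilde/12*(n**3 - n)))  # -> 0
```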
The twisted warped Witt algebra case, c = 0, κ ≠ 0, K̂ = 0, is the one associated with our main result, the boundary action (1) with the symmetries (7). In a holographic context, the twisted warped Witt algebra was first discussed in [75], including a derivation of a Cardy-like entropy formula.
"Physics"
] |
Visible supercontinuum generation in photonic crystal fibers with a 400 W continuous wave fiber laser
We demonstrate continuous wave supercontinuum generation extending to the visible spectral region by pumping photonic crystal fibers at 1.07 μm with a 400 W single mode, continuous wave, ytterbium fiber laser. The continuum spans over 1300 nm with average powers up to 50 W and spectral power densities over 50 mW/nm. Numerical modelling and understanding of the physical mechanisms has led us to identify the dominant contribution to the short wavelength extension to be trapping and scattering of dispersive waves by high energy solitons. © 2008 Optical Society of America
OCIS codes: (060.4370) Nonlinear optics, fibers; (140.3510) Lasers, fiber.
References and links
1. A. V. Avdokhin, S. V. Popov, and J. R. Taylor, "Continuous-wave, high-power, Raman continuum generation in holey fibers," Opt. Lett. 28, 1353–1355 (2003).
2. J. W. Nicholson, A. K. Abeeluck, C. Headley, M. F. Yan, and C. G. Jorgensen, "Pulsed and continuous-wave supercontinuum generation in highly nonlinear, dispersion-shifted fibers," Appl. Phys. B 77, 211–218 (2003).
3. M. González-Herráez, S. Martín-López, P. Corredera, M. L. Hernanz, and P. R. Horche, "Supercontinuum generation using a continuous-wave Raman fiber laser," Opt. Commun. 226, 323–328 (2003).
4. C. J. S. de Matos, S. V. Popov, and J. R. Taylor, "Temporal and noise characteristics of continuous-wave-pumped continuum generation in holey fibers around 1300 nm," Appl. Phys. Lett. 85, 2706 (2004).
5. J. C. Travers, S. V. Popov, and J. R. Taylor, "Extended CW supercontinuum generation in a low water-loss holey fiber," Opt. Lett. 30, 3132 (2005).
6. A. B. Rulkov, A. A. Ferin, J. C. Travers, S. V. Popov, and J. R. Taylor, "Broadband, low intensity noise CW source for OCT at 1800 nm," Opt. Commun. 281, 154–156 (2008).
7. B. A. Cumberland, J. C. Travers, S. V. Popov, and J. R. Taylor, "29 W high power CW supercontinuum source," Opt. Express 16, 5954–5962 (2008).
8. A. Abeeluck and C. Headley, "Continuous-wave pumping in the anomalous- and normal-dispersion regimes of nonlinear fibers for supercontinuum generation," Opt. Lett. 30, 61–63 (2005).
9. F. Vanholsbeeck, S. Martin-Lopez, M. González-Herráez, and S. Coen, "The role of pump incoherence in continuous-wave supercontinuum generation," Opt. Express 13, 6615–6625 (2005).
10. P. Beaud, W. Hodel, B. Zysset, and H. Weber, "Ultrashort pulse propagation, pulse breakup, and fundamental soliton formation in a single-mode optical fiber," IEEE J. Quantum Electron. 23, 1938–1946 (1987).
11. A. V. Gorbach and D. V. Skryabin, "Theory of radiation trapping by the accelerating solitons in optical fibers," Phys. Rev. A 76, 053803 (2007).
12. P. Persephonis, S. V. Chernikov, and J. R. Taylor, "Cascaded CW fibre Raman laser source 1.6–1.9 μm," Electron. Lett. 32, 1486–1487 (1996).
13. M. Prabhu, N. S. Kim, and K. Ueda, "Ultra-broadband CW supercontinuum generation centered at 1483.4 nm from Brillouin/Raman fiber laser," Jpn. J. Appl. Phys. 39, 291–293 (2000).
14. S. V. Popov, J. R. Taylor, A. B. Rulkov, and V. P. Gapontsev, "Multi-watt, 1.48–2.05 μm range CW Raman-soliton continuum generation in highly-nonlinear fibres," in Conference on Lasers and Electro-Optics, Technical Digest (CD) (Optical Society of America, 2004), paper CThEE4.
15. B. A. Cumberland, J. C. Travers, S. V. Popov, and J. R. Taylor, "Towards visible CW pumped supercontinua," Opt. Lett. doc. ID 97526 (posted 14 August 2008, in press).
16. A. B. Rulkov, M. Y. Vyatkin, S. V. Popov, J. R. Taylor, and V. P. Gapontsev, "High brightness picosecond all-fiber generation in 525–1800 nm range with picosecond Yb pumping," Opt. Express 13, 377–381 (2005).
17. J. C. Travers, S. V. Popov, and J. R. Taylor, "Extended blue supercontinuum generation in cascaded holey fibers," Opt. Lett. 30, 3132–3134 (2005).
18. J. Lægsgaard, "Mode profile dispersion in the generalised nonlinear Schrödinger equation," Opt. Express 15, 16110–16123 (2007).
19. P. V. Mamyshev and S. V. Chernikov, "Ultrashort-pulse propagation in optical fibers," Opt. Lett. 15, 1076–1078 (1990).
20. D. Milam, "Review and assessment of measured values of the nonlinear refractive-index coefficient of fused silica," Appl. Opt. 37, 546–550 (1998).
21. D. Hollenbeck and C. D. Cantrell, "Multiple-vibrational-mode model for fiber-optic Raman gain spectrum and response function," J. Opt. Soc. Am. B 19, 2886–2892 (2002).
22. K. J. Blow and D. Wood, "Theoretical description of transient stimulated Raman-scattering in optical fibers," IEEE J. Quantum Electron. 25, 2665–2673 (1989).
23. J. Lægsgaard, DTU Fotonik, Department of Photonics Engineering, Technical University of Denmark, Ørsteds Plads 345V, DK-2800 Kgs. Lyngby, Denmark, "Raman term in the nonlinear Schrödinger equation" (personal communication, 2008).
24. J. Hult, "A fourth-order Runge-Kutta in the interaction picture method for simulating supercontinuum generation in optical fibers," J. Lightwave Technol. 25, 3770–3775 (2007).
25. O. V. Sinkin, R. Holzlohner, J. Zweck, and C. R. Menyuk, "Optimization of the split-step Fourier method in modeling optical-fiber communications systems," J. Lightwave Technol. 21, 61–68 (2003).
26. S. M. Kobtsev and S. V. Smirnov, "Modelling of high-power supercontinuum generation in highly nonlinear, dispersion shifted fibers at CW pump," Opt. Express 13, 6912–6918 (2005).
27. A. Mussot, M. Beaugeois, M. Bouazaoui, and T. Sylvestre, "Tailoring CW supercontinuum generation in microstructured fibers with two-zero dispersion wavelengths," Opt. Express 15, 11553–11563 (2007).
28. S. G. Johnson and J. D. Joannopoulos, "Block-iterative frequency-domain methods for Maxwell's equations in a planewave basis," Opt. Express 8, 173–190 (2001).
29. V. P. Tzolov, M. Fontaine, N. Godbout, and S. Lacroix, "Nonlinear self-phase-modulation effects: a vectorial first-order perturbation approach," Opt. Lett. 20, 456–458 (1995).
30. J. C. Travers, S. V. Popov, and J. R. Taylor, "A new model for CW supercontinuum generation," in Conference on Lasers and Electro-Optics, OSA Technical Digest (CD) (Optical Society of America, 2008), paper CMT3.
31. J. C. Travers, Femtosecond Optics Group, Physics Department, Prince Consort Road, Imperial College, London SW7 2AZ, UK, is preparing a manuscript to be called "Modelling the initial conditions of continuous wave supercontinuum generation."
32. A. Mussot, E. Lantz, H. Maillotte, T. Sylvestre, C. Finot, and S. Pitois, "Spectral broadening of a partially coherent CW laser beam in single-mode optical fibers," Opt. Express 12, 2838–2843 (2004).
33. M. H. Frosz, O. Bang, and A. Bjarklev, "Soliton collision and Raman gain regimes in continuous-wave pumped supercontinuum generation," Opt. Express 14, 9391–9407 (2006).
34. B. Barviau, S. Randoux, and P. Suret, "Spectral broadening of a multimode continuous-wave optical field propagating in the normal dispersion regime of a fiber," Opt. Lett. 31, 1696–1698 (2006).
35. A. Hasegawa and W. F. Brinkman, "Tunable coherent IR and FIR sources utilizing modulational instability," IEEE J. Quantum Electron. 16, 694–697 (1980).
36. K. Tai, A. Hasegawa, and A. Tomita, "Observation of modulational instability in optical fibers," Phys. Rev. Lett. 56, 135–138 (1986).
37. E. M. Dianov, A. Y. Karasik, P. V. Mamyshev, A. M. Prokhorov, V. N. Serkin, M. F. Stelmakh, and A. A. Fomichev, "Stimulated-Raman conversion of multisoliton pulses in quartz optical fibers," JETP Lett. 41, 294 (1985).
38. J. P. Gordon, "Theory of the soliton self-frequency shift," Opt. Lett. 11, 662–664 (1986).
39. A. S. Gouveia-Neto, A. S. L. Gomes, and J. R. Taylor, "Pulses of four optical cycles from an optimized optical fibre/grating pair/soliton pulse compressor at 1.32 μm," J. Mod. Opt. 35, 7–10 (1988).
40. M. N. Islam, G. Sucha, I. Bar-Joseph, M. Wegener, J. P. Gordon, and D. S. Chemla, "Femtosecond distributed soliton spectrum in fibers," J. Opt. Soc. Am. B 6, 1149–1158 (1989).
41. D. V. Skryabin, F. Luan, J. C. Knight, and P. S. J. Russell, "Soliton self-frequency shift cancellation in photonic crystal fibers," Science 301, 1705–1708 (2003).
42. P. Wai, H. Chen, and Y. Lee, "Radiations by solitons at the zero group-dispersion wavelength of single-mode optical fibers," Phys. Rev. A 41, 426–439 (1990).
43. N. Akhmediev and M. Karlsson, "Cherenkov radiation emitted by solitons in optical fibers," Phys. Rev. A 51, 2602–2607 (1995).
44. A. Kudlinski, A. K. George, J. C. Knight, J. C. Travers, A. B. Rulkov, S. V. Popov, and J. R. Taylor, "Zero-dispersion wavelength decreasing photonic crystal fibers for ultraviolet-extended supercontinuum generation," Opt. Express 14, 5715–5722 (2006).
Introduction
Continuous wave (CW) pumping of optical fibers has led to the highest spectral power, and some of the smoothest, supercontinua demonstrated to date [1–7]. The resulting sources are useful for a wide range of applications ranging from biomedical imaging to chemical sensing. The fundamental mechanisms, based on modulation instability (MI) leading to soliton formation and subsequent soliton dynamics, are also intrinsically interesting. A long standing issue is the lack of generation of blue-shifted spectra from the commonly used infrared CW lasers. In this paper we demonstrate the first CW pumped supercontinuum to reach the visible spectral region. We also scale the average power to an unprecedented level of 50 W, leading to spectral power densities of over 50 mW/nm. To achieve this we use a 400 W CW pump source, and therefore these experiments are in a novel pump regime compared to previous CW pumped supercontinua. The generation of frequency up-shifted components from a CW pump source has previously been observed for 1.55 μm pump wavelengths in conventional optical fibers [8], but some confusion has arisen as to the physical mechanism for this occurrence, with references being made to fission of high order soliton solutions [8,9], a process which appears to be in conflict with the basic MI dynamics. In this work we consider the fundamental processes involved in CW supercontinuum development and find that the blue-shifting spectral components are due to trapping of dispersive waves by high energy solitons, a process widely exploited in pulse pumped conditions [10,11].
Continuous wave supercontinua were first generated in the late 1990s [12,13], but significant development did not occur until high nonlinearity fibers and high power single mode CW lasers became more widely available. The first demonstration in photonic crystal fiber (PCF) [1] showed that multi-watt average powers are achievable in a simple experimental configuration. Subsequently a large body of work has been produced showing results pumped at 1.55 μm in conventional optical fibers [2,3,6,14] and at 1.06 μm in PCF [1,4,5,7]. In the former case, due to favorable dispersion curves, continua extending to wavelengths short of the pump wavelength have been observed [8], along with the usual Raman-soliton continuum extending to longer wavelengths. In PCFs pumped at 1.06 μm this has not been observed until now.
The reason for this is a pair of competing requirements on the dispersion curves: one for efficient Raman-soliton continuum generation to long wavelengths, and one for short wavelength extension. For the former, it is important that |β₂|/γ (where β₂ is the group velocity dispersion and γ the nonlinear coefficient) stays relatively constant with wavelength, to allow the solitons to shift as far as possible to longer wavelengths before broadening so much that Raman self-scattering no longer occurs. For the latter, we need to pump close to the zero dispersion wavelength, to allow MI or soliton coupling to dispersive waves to transfer power to the normal dispersion region. These two requirements conflict, as pumping close to the zero dispersion wavelength generally implies a steeper dispersion slope.
We have recently taken two approaches to achieving short wavelength generation for 1.07 μm pumping, with the aim of generating a continuous wave visible supercontinuum, which would have numerous applications. In one approach we have optimized the dispersion to precisely accommodate the relevant phase and group velocity matching conditions to reach the visible, sacrificing somewhat the Raman-soliton continuum [15]. As an alternative, in the present work, we show how by power scaling the pump, using an industrial class continuous wave fiber laser, we can also extend to the visible spectral region. This paper is organized as follows. In Section 2 we explain the experimental setup and the results we have achieved. In Section 3 we discuss our model of nonlinear propagation in optical fibers, including some details on the initial conditions. In Section 4 we discuss the physical mechanisms involved along with some simulation results, and note that further extension to the blue should be possible using cascaded or tapered fibers.
Setup
Figure 1 shows the experimental setup. The industrial class pump laser was supplied by IPG Photonics. It emitted up to 432 W of average power at 1.07 μm, with random polarization and a spectral linewidth of 3.6 nm. The single mode output of the laser was interfaced to a collimating unit producing a 7 mm (1/e²) collimated beam. Consequently, we were unable to splice the laser output directly to the setup, and therefore used a bulk lens to couple this beam into a series of mode matching single mode fibers to reduce the mode field diameter, before finally splicing to the PCF we were using. The free-space coupling was typically greater than 70% efficient, and the total free-space to PCF efficiency was between 30% and 50% depending on the splice loss to the PCF. The coupling at full power was stable, and did not require constant monitoring or adjustment, at least for time periods exceeding 1 hour; however, in initial experiments we reduced the thermal load on the coupling lens and input fiber by modulating the laser with a duty cycle of between 1 and 40. Final results were taken with no modulation in order to scale the average power. References to equivalent power refer to the peak or equivalent CW power with no modulation.
We pumped several PCFs to probe the effect of dispersion profile and nonlinearity on the resulting supercontinuum shape. The PCF parameters and the names we use to refer to them are given in Table 1. HF1050 is similar to PCFs commonly used for visible supercontinuum generation with picosecond and nanosecond lasers at 1.06 μm [16,17]. It has a zero dispersion wavelength near 1.05 μm and a low anomalous dispersion at the pump wavelength. The dispersion curves for this and the other PCFs we used are shown in Fig. 2.
HF840 has previously been used for continuous wave supercontinuum generation as it has a low water loss value [5]. It has a single zero dispersion wavelength around 0.84 μm. HFDBL has two zero dispersion wavelengths, at around 0.81 and 1.73 μm, and has also been used previously for high power supercontinuum generation [7].
We pumped a number of lengths of each fiber and show below results for the approximately optimal length, judged by the smoothness and extent of the continuum produced. The output power of each fiber varies considerably depending on its attenuation curve and the particular spectral extent of each continuum.
Results
Figure 3 shows the results of pumping 17 m of HFDBL with our setup. A maximum of 170 W was coupled into the PCF, forming a continuum spanning from 1.06 to 1.9 μm with ∼10 dB flatness. The role of the second zero dispersion wavelength is clearly evidenced by the dip in spectral power around 1.7 μm. The effect of water loss is also apparent as the spectral power reduces after 1.4 μm. The qualitative form of the spectra is very similar to those we previously achieved in [7] with 50 W pumping, but with the higher powers accessible in this work we have significantly increased the power transferred to the dispersive wave formed beyond the second zero dispersion wavelength. The spectral power densities between 1.06 and 1.4 μm are over 50 mW/nm, as the output powers were around 27 W.
The supercontinuum spectra obtained at the output of 20 m of HF840 are shown in Fig. 4 for a pump power of 170 W. The total average output power was over 50 W and the continuum extended from 1.06 to 2.2 μm. The 10 dB width of the spectrum was over 900 nm, between 1.14 and 2.05 μm. Across this range the spectral power exceeds 10 mW/nm, with half of the range between 50 and 100 mW/nm. Water loss causes a significant fall off in spectral power, although the power density at the long edge of the continuum around 2.1 μm is still over 5 mW/nm, which is sufficient for most applications.
These results clearly demonstrate the power scaling potential of using an industrial class laser, but pumping HFDBL and HF840 has not led to a visible supercontinuum spectrum. The reasons for this are analyzed in detail below, but stem from the fact that the zero dispersion wavelength is too far from the pump wavelength. To generate visible continua there must be a mechanism to transfer power to the normal dispersion region. This can be achieved either through the generation of dispersive waves from the solitons formed from the MI process which initiates the continuum, or directly from widely separated MI sidebands, where one MI sidelobe overlaps with the normal dispersion region. Both of these processes require the pump wavelength to be sufficiently close to the zero dispersion. HF1050 has a suitable dispersion curve. The result of pumping 50 m of this fiber with 230 W from our setup is shown in Fig. 5. The total average output power was 28 W. The supercontinuum clearly extends to the visible spectral region, down to 0.6 μm, and the continuum appeared bright red to the eye. The long wavelength edge of the continuum was 1.9 μm. The spectral powers were over 2 mW/nm on the short wavelength side, which is competitive with the highest power pulse pumped visible supercontinua, and over 10 mW/nm in the infrared region, with substantial spectral regions between 20 and 30 mW/nm. This result shows that visible supercontinua similar to those obtained with pulse pumped systems can be achieved under extremely high power CW pump conditions, with additional benefits of scalable spectral power and flatness. We should be able to extend the continuum to the blue, and scale the power further, by carefully designing the fiber dispersion and pump conditions, as discussed in the following sections.
Propagation equation
To gain insight into the supercontinuum dynamics we modelled the propagation of a model CW field through the PCFs. The complex field envelope E(ω, z) at angular frequency ω and axial fiber position z was calculated using the generalized nonlinear Schrödinger equation, modified to include the dispersion of the mode field profile [18,19]:

∂E(ω, z)/∂z = i[β(ω) − β(ω₀) − β₁(ω₀)Ω] E(ω, z) − (α(ω)/2) E(ω, z) + i (n₂ω₀/(c A_eff(ω))) F{ E(t, z) ∫₀^∞ R(t′) |E(t − t′, z)|² dt′ }   (1)

In Eq. (1) β(ω) is the mode propagation constant, Ω = ω − ω₀ is the frequency shift with respect to a chosen reference frequency ω₀, n₂ = 2.74 × 10⁻²⁰ m² W⁻¹ is the nonlinear refractive index [20], c the speed of light, and A_eff the effective mode area. The wavelength dependent loss is included in α(ω). The response function is

R(ω) = (1 − f_r) + f_r h_r(ω)   (3)

where the first term on the right hand side represents the Kerr effect and the second term represents the Raman effect; h_r is the Raman response function in the frequency domain [21] and the factor f_r = 0.19 determines the Raman contribution to n₂. There is some variation in the literature about whether a factor of 2/3 should be included as a prefix to the Raman part of R(ω₁ − ω). This was reported in [22], and arises from ignoring a cross term when deriving Eq. (1). It turns out, however, that a thorough analysis of the ways in which n₂ and g_r (the Raman gain coefficient) are experimentally measured, along with a self-consistent analysis of their relation, leads to Eq. (3) being the correct definition for use in our propagation equation [23].
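The nonlinear term of Eq. (1) lends itself to a compact numerical implementation. The Python sketch below uses the single damped-oscillator approximation to the silica Raman response (Blow and Wood [22]); the simulations in this paper use the multiple-vibrational-mode model of Hollenbeck and Cantrell [21], so this is a simplified stand-in, and the function names and grid handling are illustrative rather than taken from the authors' code.

```python
import numpy as np

def raman_response(t, tau1=12.2e-15, tau2=32.0e-15):
    """Single damped-oscillator h_r(t) for silica (Blow & Wood); a stand-in
    for the multi-mode model of Hollenbeck & Cantrell used in the paper."""
    h = (tau1**2 + tau2**2) / (tau1 * tau2**2) * np.exp(-t / tau2) * np.sin(t / tau1)
    h[t < 0] = 0.0  # enforce causality
    return h

def nonlinear_polarization(E_t, t, f_r=0.19):
    """E(t) * [(1 - f_r)|E|^2 + f_r (h_r * |E|^2)], the term inside F{...}
    in Eq. (1); the prefactor i*n2*w0/(c*A_eff) is applied by the caller."""
    dt = t[1] - t[0]
    hr = raman_response(t - t[0])
    # circular convolution by FFT is consistent with the periodic time window
    conv = np.fft.ifft(np.fft.fft(hr) * np.fft.fft(np.abs(E_t)**2)).real * dt
    return E_t * ((1.0 - f_r) * np.abs(E_t)**2 + f_r * conv)
```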
Equation (1) can be integrated directly after a change of variables, or more commonly it is solved using the split-step Fourier method. Here we use the Runge-Kutta in the interaction picture method [24]. The step size was chosen automatically based on the relative local error [25], which was held below 1 × 10⁻⁶ for the simulations discussed below.
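For reference, one RK4IP step following Hult [24] can be written as below. Here D_w is the linear operator i[β(ω) − β(ω₀) − β₁(ω₀)Ω] − α(ω)/2 sampled on the frequency grid, and N is a function returning the nonlinear term in the frequency domain; the adaptive step-size bookkeeping (comparing a full step against two half-steps to bound the relative local error) is omitted.

```python
import numpy as np

def rk4ip_step(A_w, h, D_w, N):
    """One Runge-Kutta-in-the-interaction-picture step (Hult, 2007).
    A_w: frequency-domain field; h: step length; D_w: linear operator;
    N(A_w): frequency-domain nonlinear term."""
    E = np.exp(0.5 * h * D_w)      # half-step linear propagator
    A_I = E * A_w                  # move into the interaction picture
    k1 = E * (h * N(A_w))
    k2 = h * N(A_I + 0.5 * k1)
    k3 = h * N(A_I + 0.5 * k2)
    k4 = h * N(E * (A_I + k3))
    return E * (A_I + (k1 + 2.0 * k2 + 2.0 * k3) / 6.0) + k4 / 6.0
```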
Modelling CW phenomena is made difficult by the time scales involved. To make the simulations tractable we can only simulate a snapshot of the field as it propagates. We therefore have to carefully choose a time window which contains sufficient information to accurately reproduce experimental observations. The consensus from the current literature is that a time window of several hundred picoseconds, with the periodic boundary conditions inherent to the split-step Fourier method, is sufficient [9,26,27]. In the simulations described below we used 2¹⁸ grid points over a time window of 256 ps. To visualize the results we use spectrograms or XFROG traces, computed with a windowed Fourier transform of the field envelope:

S(τ, ω) = |∫ E(t, z) E_ref(t − τ) exp(−iωt) dt|²

Here E_ref is the envelope of a reference pulse, in our case a 3 ps Gaussian.
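A direct implementation of the spectrogram is a short loop over gate delays; the Gaussian gate width matches the 3 ps reference quoted above, while the coarse delay stride is an illustrative choice of ours.

```python
import numpy as np

def xfrog(E_t, t, tau_ref=3e-12, delay_stride=256):
    """Spectrogram |∫ E(t) E_ref(t - τ) exp(-iωt) dt|² with a Gaussian gate."""
    delays = t[::delay_stride]
    S = np.empty((delays.size, t.size))
    for i, d in enumerate(delays):
        gate = np.exp(-((t - d) / tau_ref) ** 2)
        S[i] = np.abs(np.fft.fftshift(np.fft.fft(E_t * gate))) ** 2
    return delays, S  # rows: gate delay; columns: (shifted) frequency bins
```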
The propagation constants and effective areas of the PCF modes were computed from scanning electron micrographs of the fiber cross section using a free software package [28]. The effective areas were calculated from the modal fields using a vectorial method accurately accounting for the air holes [29]. For HFDBL the very high water loss at 1.38 μm was included via a spectrally dependent α(ω) term in Eq. (1), derived directly from the measured loss spectrum of the fiber.
Initial conditions
There is great difficulty in modelling the initial conditions of the pump CW fields for supercontinuum generation, as accurate single-shot diagnostics of CW lasers are difficult to obtain. This problem is compounded when considering CW fiber lasers, as the laser cavities are both nonlinear and highly dispersive, leading to more complicated field evolution than in their bulk laser counterparts. This problem is also of great importance, as modulation instability, the precursor of CW supercontinua, is highly dependent on the input noise conditions. We have performed a comparison of the various models previously used with careful experimental results, and designed our own model. These results are reported elsewhere [30,31]. Here we briefly summarize the main points.
One of the simplest models is that of a CW field with no temporal or spectral phase or amplitude fluctuations apart from quantum noise [26]. This leads to a spectrum with a very narrow spike at the central laser frequency and is approximately comparable to a single frequency laser. Such narrow pump spectra are not comparable to those observed from a high power fiber laser. To improve on this, a number of models have been based on the phase diffusion concept, where the temporal phase is modelled as a Gaussian noise process, which leads to Lorentzian shaped laser spectra [32,33]. This model neglects temporal amplitude fluctuations, which in a nonlinear dispersive cavity at high powers are certain to exist. Also, Lorentzian spectra with bandwidths equivalent to our pump lasers contain significant power outside the gain region of the fiber, which is not observed experimentally. An alternative model is to represent the laser field as a collection of longitudinal modes with no phase relationship [9,34]. This model starts from the measured spectral power of the pump laser, with a random spectral phase added to each frequency bin. This leads to very strong intensity fluctuations in the time domain, on the order of the inverse spectral width of the pump (the coherence time). However, the resulting fluctuations in this case are too strong, as dispersion and self-phase modulation are completely neglected, although they should be significant in a fiber laser cavity. A careful comparison between these models and real experimental results with CW fiber lasers has shown them to be somewhat limited, even for qualitative comparison in some cases [30,31].
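The "longitudinal modes with random phase" model is easy to state in code. The sketch below assumes the measured spectral power has already been interpolated onto the simulation frequency grid; both that assumption and the function name are ours.

```python
import numpy as np

def random_spectral_phase_field(spectral_power, seed=0):
    """Build a time-domain CW field from a measured spectrum by assigning a
    uniformly random phase to each frequency bin [9,34]. The result shows
    strong intensity spikes on the scale of the pump coherence time."""
    rng = np.random.default_rng(seed)
    phase = 2.0 * np.pi * rng.random(spectral_power.size)
    A_w = np.sqrt(spectral_power) * np.exp(1j * phase)
    return np.fft.ifft(A_w)
```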
Our approach was to model the effects of dispersion and nonlinearity in the fiber lasers by modelling the whole laser itself. We start with quantum noise represented as two photons per mode. We then amplify through a fiber with spectrally dependent gain, gain saturation, nonlinearity, and dispersion. Bragg gratings are modelled at the end of each amplification pass simply by a suitable spectral filter. We iterate the field through such a cavity until the average output power is at the desired level. The resulting field exhibits many of the expected characteristics: temporal and spectral amplitude and phase fluctuations, and a triangular shaped spectrum on a log scale. The power fluctuations are much weaker than with the random spectral phase model described above. Comparisons with experimental results have shown this model to be an improvement on the other proposals [30,31]. We modelled our 400 W laser in this way and used it to produce the simulated results reported below.
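In outline, the cavity model amounts to iterating a noise seed through gain and a grating filter until the target power is reached. The toy version below keeps only saturable gain and a Gaussian grating reflectivity; the cavity fiber dispersion and nonlinearity included in the full model are omitted, and all parameter values are placeholders, not the authors' settings.

```python
import numpy as np

def model_fiber_laser(n=2**14, dt=50e-15, p_target=400.0,
                      g0=5.0, p_sat=500.0, bw=2e12, max_passes=500):
    """Amplify a quantum-noise-like seed through a saturable-gain cavity with
    a Bragg-grating spectral filter until the average power reaches p_target."""
    rng = np.random.default_rng(0)
    f = np.fft.fftfreq(n, dt)
    grating = np.exp(-(f / bw) ** 2)          # spectral filter applied per pass
    E = 1e-6 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    for _ in range(max_passes):
        p_avg = np.mean(np.abs(E) ** 2)
        if p_avg >= p_target:
            break
        gain = np.exp(0.5 * g0 / (1.0 + p_avg / p_sat))  # saturated field gain
        E = np.fft.ifft(np.fft.fft(gain * E) * grating)
    return E
```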
Long wavelength generation: the Raman-soliton continuum
The initiation of a continuous wave supercontinuum in the low anomalous dispersion region is due to modulation instability [35,36]. Noise fluctuations of an otherwise CW or quasi-CW field become self-trapped due to the Kerr effect. This process is fundamentally linked to the existence of solitons, which are maintained by the same trapping effect. Fundamental solitons are a stable condition (adiabatic amplification simply leads to shorter fundamental solitons), and therefore the process of MI does not naturally lead to higher order soliton solutions.
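For a CW pump in the anomalous dispersion regime, the textbook scalar MI gain suffices to estimate where the sidebands appear; this is the standard result rather than code from the paper.

```python
import numpy as np

def mi_gain(omega, beta2, gamma, P0):
    """g(Ω) = |β₂ Ω| sqrt(Ω_c² − Ω²) with Ω_c² = 4γP₀/|β₂| (requires β₂ < 0).
    omega in rad/s, beta2 in s²/m, gamma in 1/(W·m), P0 in W."""
    oc2 = 4.0 * gamma * P0 / abs(beta2)
    return np.abs(beta2 * omega) * np.sqrt(np.clip(oc2 - omega**2, 0.0, None))

def mi_peak(beta2, gamma, P0):
    """Angular frequency of maximum gain, Ω_max = sqrt(2γP₀/|β₂|),
    where the gain reaches g_max = 2γP₀."""
    return np.sqrt(2.0 * gamma * P0 / abs(beta2))
```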
The early stages of the CW continuum formation are clearly identifiable in the simulated spectrograms of our pump laser in HFDBL, shown in Fig. 6. Figure 6(a) shows the pump field with the modelled initial conditions described in Section 3. In Fig. 6(b) we see the emergence of solitons, which we attribute to MI. Here we can clearly see that the solitons form earlier, and with higher peak power, at the peaks of the input power fluctuations. Note also that the solitons being formed are much shorter in duration than the input power fluctuations. The dependence of MI on the noisy initial conditions means that we create a train of solitons with a distribution of energies, which leads to a smooth continuum.
Once formed, the solitons may shift due to Raman self-scattering [37,38]. To do so they must have a short enough duration that their bandwidth is broad enough to self-amplify through the Raman effect. The frequency shift of a soliton is proportional to the fourth power of the soliton energy, and so the shape of the soliton energy distribution strongly affects the resulting spectrum, which has been called a Raman-soliton continuum [39]. In Fig. 6(c) we see how Raman self-scattering has started to shift the highest energy solitons to longer wavelengths, beginning the continuum formation, while new solitons are forming from the remaining pump field. We should note again that these are single shots of 200 ps duration from a continuous process. The summation of all of these solitons leads to a very smooth spectrum on average.
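The fourth-power dependence quoted above follows from Gordon's result for the self-frequency shift rate, dω₀/dz = −8|β₂|T_R/(15 T₀⁴) [38]: since a fundamental soliton has energy E = 2|β₂|/(γT₀), the 1/T₀⁴ scaling is an E⁴ scaling at fixed β₂ and γ. A one-line estimate (the value T_R ≈ 3 fs for silica is our assumption):

```python
def ssfs_rate(beta2, t0, t_r=3e-15):
    """Gordon's soliton self-frequency shift rate, dω0/dz = -8|β₂|T_R/(15 T0⁴),
    in rad/s per metre; beta2 in s²/m, t0 (soliton duration) in s."""
    return -8.0 * abs(beta2) * t_r / (15.0 * t0**4)
```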
In addition to the above process, inelastic collisions between solitons can transfer significant energy from higher frequency solitons to those with a lower frequency. Solitons collide when they overlap in time (i.e., align vertically in these spectrograms), as long as they are close enough spectrally for the Raman process to occur (up to about 40 THz). As solitons with lower frequency have clearly Raman shifted further, they tend to have higher energies, and so the highest energy solitons get further excited. This enhances the continuum bandwidth. In the context of CW supercontinua this was discussed in [33], and was originally identified in picosecond pumped supercontinua [40]. In Fig. 6(d) we see evidence of soliton collisions. The non-solitonic traces left behind by the solitons are the radiation shed by a pair of solitons after an inelastic collision mediated through the Raman process. Both pulses involved have energies mismatched from the required soliton energy, and therefore some energy is shed as each pulse adapts back to the soliton condition.
Limitations to the continuum bandwidth are caused by at least four mechanisms. Firstly, there is a finite length of fiber, and so the maximum shift is that achieved by the most energetic soliton through the necessarily limited nonlinear medium. Secondly, losses become significant, which reduces the soliton energy and can slow or stop the shift. Thirdly, the balance of nonlinearity and dispersion which maintains the soliton shape can change very significantly, and thus broaden the soliton temporally so that it no longer has the spectral bandwidth for Raman self-scattering [7]. Finally, the Raman-soliton continuum can be limited by a second zero dispersion wavelength. As we noted in [7], HFDBL has a second zero dispersion wavelength which prevents the continued shift of the solitons, as the anomalous dispersion region is limited [41]. This is seen clearly in Fig. 6(d), where the solitons gather around 1.6 to 1.7 μm, before the zero dispersion point, and phase-matched dispersive waves are generated beyond this point, in the normal dispersion region. The dispersive waves are clearly being chirped as they propagate through the fiber. These features were clearly identifiable in the experimental spectra shown in Fig. 3.
Short wavelength generation
There are two main mechanisms available for generating wavelengths short of the pump in the CW regime. Either solitons formed from MI have a spectral overlap with the normal dispersion region, and can therefore directly excite phase-matched dispersive waves [42,43], or the anti-Stokes MI sidelobe is itself in the normal dispersion region, causing a growth of power there. The excitation of phase-matched dispersive waves is proportional to the spectral amplitude of the soliton spectrum at the phase-matched wavelength, and as the soliton spectral power drops off exponentially, pumping close to the zero dispersion wavelength is required. Similarly, for MI extension to the normal dispersion region, a low anomalous dispersion is required for wide Stokes shifts, and closeness to the zero dispersion is required to achieve overlap into the normal dispersion region. In fact, these two processes are intimately linked. It is therefore clear why pumping close to the zero dispersion wavelength is essential, as we observed experimentally in Section 2.
After this initial stage, further extension to the short wavelength region can be gained either through four wave mixing between the soliton continuum and the dispersive waves, or through the soliton trapping of dispersive waves [10,11]. In both cases we must generate significant power in the normal dispersion region for further extension to occur. In the four wave mixing case this is essential to achieve phase-matching to yet shorter wavelengths. In the soliton trapping case it is inherent to the process that the trapped waves are group-velocity matched to the solitons, thus requiring that a dispersion zero lies between the soliton and dispersive wave frequencies. It should be noted that some authors have attributed short wavelength generation from 1.5 μm CW pumped continua to soliton fission processes [8,9]. However, soliton fission requires high order soliton solutions which, as noted above, are not naturally generated from the MI process; we therefore believe that it cannot play a role in the continuum mechanism.
To identify which mechanism (four wave mixing or soliton trapping) dominates in the case of HF1050, we can plot the phase and group velocity matching curves for the two processes and compare with our experimental results, as shown in Fig. 7. For the four wave mixing phase-matching curve it is assumed that the required pump wavelengths (in the normal dispersion regime) are made available in the initial supercontinuum stages; this was verified by experiment. At small Stokes/anti-Stokes shifts from the pump, the two matching curves are quite similar, preventing unambiguous differentiation between the processes from experimental data, but at large Stokes shifts the curves diverge considerably. Marked by a pair of horizontal and vertical lines on Fig. 7 are the longest Stokes and shortest anti-Stokes wavelengths generated in our experiments. It is clear that they cross almost precisely on the calculated group-velocity matching curve. This implies that the soliton trapping of dispersive waves is the dominant mechanism for our short wavelength extension.
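Both curves follow directly from the computed propagation constant. A sketch of how they can be evaluated on a wavelength grid is given below; the soliton nonlinear phase term γP/2 in the dispersive-wave condition is optional (zero by default), and the interpolation scheme is our choice, not necessarily the one used for Fig. 7.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def gvm_matched_wavelength(wl, beta1, wl_sol):
    """Short wavelength with the same group velocity (1/β₁) as a soliton
    at wl_sol; wl must be sorted ascending."""
    b1_s = np.interp(wl_sol, wl, beta1)
    short = wl < wl_sol
    return wl[short][np.argmin(np.abs(beta1[short] - b1_s))]

def dw_phase_matched_wavelength(wl, beta, beta1, wl_sol, gamma_P=0.0):
    """Dispersive-wave condition β(ω) − β(ω_s) − (ω − ω_s)β₁(ω_s) = γP/2."""
    w, w_s = 2 * np.pi * C / wl, 2 * np.pi * C / wl_sol
    b_s = np.interp(wl_sol, wl, beta)
    b1_s = np.interp(wl_sol, wl, beta1)
    mismatch = beta - b_s - (w - w_s) * b1_s - 0.5 * gamma_P
    short = wl < wl_sol
    return wl[short][np.argmin(np.abs(mismatch[short]))]
```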
To verify this we ran numerical simulations of our pump laser propagating through HF1050. The results are shown in Fig. 8. The spectrogram of the pump is shown in Fig. 8(a) and exhibits temporal power fluctuations as described in Section 3. After 3 m of propagation (Fig. 8(b)) MI has led to the formation of solitons, and dispersive waves have been formed at wavelengths short of the pump. We estimate that the MI sidelobe separation for our pump and fiber conditions is ∼20 nm, which is not large enough for significant generation of power in the normal dispersion region. Instead the dispersive waves are excited by the very short duration (and hence wide bandwidth) solitons formed through the MI process. Fig. 8(c) shows that as the solitons shift to longer wavelengths via the Raman effect, the dispersive waves maintain their relative positions to individual solitons. This is characteristic of the soliton trapping effect [10,11]. In addition, examination of the precise locations of individual soliton-dispersive wave pairs is in agreement with the group velocity matching curve shown in Fig. 7. The short wavelength extension through this process is understood as described in [10,11]. The soliton modulates the refractive index such that the dispersive wave cannot escape in one direction. The soliton then chirps the dispersive wave towards the blue, through cross-phase modulation, where the group velocity is lower. It then shifts to longer wavelengths through Raman; in doing so it is decelerated, as longer wavelengths have lower group velocity, and therefore falls back onto the dispersive wave, and thus the process can repeat. This trapping effect therefore leads to a cascade of scattering events for the dispersive wave, pushing it further and further towards the blue. In Fig. 8(d) we see that the solitons have pushed the dispersive waves as far short as 0.6 μm, in agreement with our experiments. This process will be limited either by the breaking of the group velocity matching or by the halting of the red-shifting solitons, which can occur for any of the reasons described in Section 4.1. Further extension to the blue should be possible if we use cascaded or tapered fibers to extend the group velocity matching of the anti-Stokes components, as previously demonstrated in the picosecond pump regime [17,44].
Fig. 7. Calculated phase (PM) and group velocity (GVM) matching curves for HF1050. The solid yellow line indicates the long wavelength edge in the supercontinuum output of HF1050 and the solid green line indicates the short wavelength edge.
Conclusion
We have demonstrated the extension of a continuous wave supercontinuum to the visible spectral region and analyzed the physical mechanisms enabling this process. We have identified the two requirements for generating a short wavelength continuum, pumping close to the zero dispersion wavelength and having sufficient power to generate dispersive waves, and met them by using a selected photonic crystal fiber and pumping with an industrial class 400 W, continuous wave, ytterbium fiber laser. The high power available with this laser enabled the scaling of the supercontinuum average power to 50 W and spectral power densities of over 50 mW/nm over wide supercontinuum bandwidths.
Fig. 5. Measured spectra out of 50 m of HF1050 for 230 W equivalent pump power, normalized to the total average output power of 28 W.
Fig. 6. Simulated spectrograms of the supercontinuum development through HFDBL for 170 W pump power. The spectrograms are calculated for fiber lengths of: (a) 0 m, (b) 1.5 m, (c) 3.0 m, (d) 7.0 m. The full color scales and axis scales change for each sub-figure and range over 40 dB.
Fig. 8. Simulated spectrograms of the supercontinuum development through HF1050 for 170 W pump power. The spectrograms are calculated for fiber lengths of: (a) 0 m, (b) 3 m, (c) 9 m, (d) 25 m. The full color scales and axis scales change for each sub-figure and range over 40 dB.
Table 1. Parameters of the photonic crystal fibers: pitch Λ, air hole diameter d, zero dispersion wavelength λ0, dispersion at the pump wavelength Dp, and nonlinear coefficient at the pump wavelength γp. Values marked b are estimated as described in [7] and are uncertain.
Plume–MOR decoupling and the timing of India–Eurasia collision
The debatable timing of India–Eurasia collision is based on geologic, stratigraphic, kinematic, and tectonic evidence. However, the collision event disturbed persistent processes, and the timing of disturbance in such processes could determine the onset of India–Eurasia collision precisely. We use the longevity of Southeast Indian Ridge (SEIR)–Kerguelen mantle plume (KMP) interaction cycles along the Ninetyeast Ridge (NER) as a proxy to determine the commencement of India–Eurasia collision. The geochemical signature of the KMP tail along the NER is predominantly that of long-term coupling cycles, which were perturbed once by a short-term decoupling cycle. The long-term coupling cycles are mainly of enriched mid-ocean ridge basalts (E-MORBs). The short-term decoupling cycle is mostly derived from two distinct sources, MOR and plume separately, while the KMP was still on-axis. The onset of India–Eurasia collision led to recycling of continental materials into the mantle; hence the abrupt enrichment in incompatible elements at ca. 55 Ma, the MOR–plume on-axis decoupling, and the abrupt slowdown in the northward drift of the Indian plate were induced by the onset of India–Eurasia collision, after which the MOR and plume recoupled.
The interaction between mantle plumes and the lithosphere is manifested in dynamic topography 24–26, large igneous provinces related to lithosphere breakup 19,27, and the inherited older crustal materials in younger ridges 28. The Comoros, Marion, Crozet, Kerguelen, and Reunion mantle plumes existed in the area once occupied by the amalgamated East Gondwana fragments 29,30. Accordingly, large igneous provinces and mantle plume tails related to mantle-plume activity are dominant in the Indian Ocean and within the continents surrounding it 31 (Fig. 1b). Furthermore, the KMP broke up East Antarctica and Australia obliquely to all orogenic structures 32,33, one of the most puzzling elements of Pangaea supercontinent fragmentation. This process of semi-active rifting 34 occurred in the presence of a small amount of syn-rift magma intrusion 35. Regions of Precambrian continental crust recorded underneath the Indian Ocean are interpreted to be related to interactions with mantle plumes. Proterozoic garnet granulite xenoliths exist in the basaltic basement of Elan Bank, related to the interaction with the KMP 36,37, and inherited Archaean zircon was found in Mauritius Island Miocene lava, related to the Reunion plume 38. Therefore, the kinematics of plates and the drift of the Indian plate depend on MOR-plume interaction, and in turn the MOR-plume interaction is very sensitive to major tectonic events. Meanwhile, the diversity in melts extruded as a result of the interaction between MORs and mantle plumes records variations in mantle sources for the melts and gives age constraints for the tectonic processes. We track the longevity of the plume-MOR interaction using the geochemical signature of the basaltic rocks along the NER to determine the sensitivity of plume-MOR coupling/decoupling cycles to the collision of India and Eurasia.
The ~55 Ma geochemical anomaly
Ocean island basalts (OIBs) are mainly derived from hot spots connected to a mantle plume 51, while normal mid-ocean ridge basalts (N-MORBs) are extracted from depleted mantle and emplaced at MORs 52. The interaction between a MOR and a plume rarely produces enriched mid-ocean ridge basalts (E-MORBs) 53. However, E-MORBs emplaced along the NER were extruded as a result of interaction between the SEIR and KMP 54–56, thus providing the geochemical signature that serves as a significant proxy in determining the longevity of the interaction between a MOR and mantle plume. Combinations of the little-mobile elements Th-Nb-Zr-Y-Yb define proxies that demonstrate the existence of different types of oceanic basalts, including N-MORB, E-MORB, and OIB, where the nature of these basalts is unrelated to subduction-related processes 57,58. Thus, Th/Yb-Nb/Yb 57, Nb/Y-Zr/Y 58, and Th/Yb vs. Zr/Y 59 diagrams are used to discriminate between 1164 (out of 6550) geochemical analyses of basaltic volcanism extruded along the NER in the Indian Ocean. Furthermore, (La/Sm)PM ratios normalized to primitive mantle are used to remove the effect of magma differentiation processes. Sr-Nd-Pb-Hf isotopes accompany the trace element proxies to estimate the source of melts, the extent of MOR-plume interaction, and the involvement of crustal materials. The analyses along the NER are well-represented, and all the analyses are assigned to their ages (Supplementary Table 1).
Trace elements composition. The Th-Nb-Yb proxy is sensitive to mantle enrichment along the MORB-OIB array, whereas higher Th/Yb values that displace samples from the non-subducting array toward the subducting (arc) array indicate addition of subduction components 57. The enrichment in Th for the MORB-OIB array could be related to deep crustal recycling; NER samples plot within the MORB-OIB array, with both depleted and enriched basalts, supporting non-subduction tectonic processes (Fig. 3a) and confirming plume-ridge related processes. The mantle was enriched abruptly in incompatible elements, such as Nb, Th, and Zr, once at ~55 Ma, whereas depleted basalt erupted contemporaneously, and this confirms the presence of two sources for the magmatism along the NER (Fig. 3a). The interaction between the SEIR and KMP lasted for the whole lifespan of the NER and produced depleted/enriched MORB, except perhaps for the decoupling event at ca. 55 Ma. Niobium (Nb) as a proxy is a key discriminator between different mantle reservoirs relative to other incompatible elements 64; hence the Nb-Zr-Y discrimination diagram compares non-subducting basalts to Icelandic plume volcanics 58,65. The NER basalts are similar to those of Icelandic magmatism, where both depleted and enriched basalt varieties exist 54. In this connection we apply delta Nb (ΔNb) as a parameter to discriminate between MORB and OIB, where ΔNb = 1.74 + log(Nb/Y) − 1.92 log(Zr/Y) 58,66. The ΔNb = 0 line divides the field into Nb depleted (ΔNb < 0), below that line, and Nb enriched (ΔNb > 0) (Fig. 3b); NER basalts have both enriched and depleted affinities. Enriched basalts in that diagram exhibit a positive ΔNb anomaly and can be interpreted to be associated with the mantle plume 65,67.
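The ΔNb classification above is a one-line computation; a Python sketch (the function name and argument handling are ours) is:

```python
import numpy as np

def delta_nb(nb, zr, y):
    """ΔNb = 1.74 + log10(Nb/Y) - 1.92*log10(Zr/Y) (refs 58,66).
    ΔNb > 0: plume-influenced (enriched) basalt; ΔNb < 0: depleted N-MORB."""
    nb, zr, y = map(np.asarray, (nb, zr, y))
    return 1.74 + np.log10(nb / y) - 1.92 * np.log10(zr / y)

# Illustrative values: Nb = 12 ppm, Zr = 110 ppm, Y = 28 ppm -> ΔNb ≈ +0.23
print(delta_nb(12.0, 110.0, 28.0))
```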
The enriched magmatism of the NER plots in three different groups: slightly enriched samples pertain to SEIR N-MORB, moderately enriched samples plot in the field of SEIR E-MORB, and highly enriched basalts have an affinity similar to the Kerguelen Archipelago (Fig. 3b). Extremely enriched basalts are related to the ca. 55 Ma group, whereas depleted to enriched varieties also erupted contemporaneously. However, NER samples plot mainly in the SEIR MORB field, with both enriched and depleted affinities, and in the Kerguelen Archipelago field; some samples are enriched in Lu (Fig. 3c), which is accompanied by enrichment in Y and Sc 55. Lu, Y, and Sc are compatible in garnet, and their enrichment at a given Zr/Nb could be generated by partial melting of either garnet- or spinel-bearing peridotites 55.
The discrimination between subalkaline magmas is based on combinations of the incompatible trace elements Th, Yb, Zr, and Y. Thus, in the Th/Yb vs. Zr/Y diagram 59, most samples are of tholeiitic character, whereas others plot in the transitional field between the tholeiitic and calc-alkaline fields. The ~55 Ma basalts are the only samples that exhibit calc-alkaline behavior, whereas the tholeiitic type also exists, but without samples of transitional character (Fig. 3d). The existence of bimodal (tholeiitic/calc-alkaline) basalts occurring simultaneously at 55 Ma confirms the decoupling between the sources of melts, and suggests that the enrichment related to deep crustal recycling was induced by abrupt recycling of continental crust linked to the initiation of India-Eurasia collision; thereafter basalts became transitional. La/Sm ratios normalized to primitive mantle 64 can effectively remove the effect of magma differentiation processes, where (La/Sm)PM sets SEIR E-MORB apart from SEIR N-MORB 54 at (Nb/La)PM = 1 and (Nb/Y)PM = 1. The NER magmatism has both depleted and enriched affinities, whereas the ca. 55 Ma magmatism shows abnormally high Nb enrichment compared with any other NER magmatism (Fig. 4a,b), and this positive anomaly in Nb is considered a key indicator of mantle enrichment 70. The isotopic structure of the mantle is controlled by the two large low-shear velocity provinces underneath the Pacific and Africa, a mantle sole domain with low S-wave velocity, and does not rely on the southern hemispheric basalt classification 71. The KMP is fed from the lower mantle 19,71 and pertains to the African mantle domain 71. The structure of the African and Pacific mantle domains is linked in a dynamic relationship with tectonics 71; therefore, the interaction between the SEIR and the KMP along the NER reflects the relationship between the deep mantle geochemical state and plate tectonics. The NER basalts have higher 207Pb/204Pb and 208Pb/204Pb isotope ratios (Fig. 5a,b), plot above the NHRL line that defines the DUPAL anomaly, and fall in the field of mixed PREMA W15 (prevalent mantle) + UCC (upper continental crust) 69, and this confirms their lineage to the African mantle domain (Fig. 5a,b). The NER magmatism has higher Sr isotope values than the Pacific MORB (Fig. 5c), where significant crustal materials, including sediments, were recycled back 69. The ca. 55 Ma anomaly plots in the EM1 field in the 176Hf/177Hf vs. 143Nd/144Nd isotope diagram (Fig. 5d), and this could be attributed to the involvement of crustal materials, including sediment recycling 69. Based on the isotopic composition of the NER magmatism, these melts were derived from enriched mantle as a result of recycling of crustal materials. The removal of the roots of the Indian plate during the breakup of the Gondwana supercontinent, as a result of warming of the lithosphere by the mantle plume 72, enriched the mantle underneath the Indian Ocean in incompatible elements. However, the abrupt enrichment at ~55 Ma was related to another deep crustal recycling event, and this event coincided with the abrupt slowdown in the velocity of the Indian plate, most probably related to the initiation of India-Eurasia collision.
MOR-plume interaction cycles
Long-term coupling cycles. The interaction between the SEIR and KMP continued for a long period 54. During the time period from 77 to 32.9 Ma, basaltic magma was generated from enriched mantle similar to the main type extruded along the NER, with E-MORB, N-MORB, and OIB affinities (Fig. 3). The diversity in basalt types confirms variations in mantle sources for melts 54, and most probably changes in the behavior of ridge-plume interaction related to tectonic events. Southward jumps of the SEIR toward the KMP occurred frequently beneath the NER 20, leaving fossil ridges behind 21. Large spreading jump events occurred at 65 Ma and 42 Ma, whereas smaller jump events happened repeatedly 20. The ridge jump mechanism of interaction between the SEIR and KMP increased the longevity of the interaction and the stability of the SEIR system. Therefore, the SEIR-KMP coupling produced both enriched and depleted melts 54, with the involvement of crustal materials (Figs. 3, 5). These long-term coupling cycles existed more frequently during the interaction between the SEIR and KMP (Fig. 3).
Short-term decoupling cycle. The asymmetrical spreading of the Indian Ocean 74,75, caused by the eastward flow of the asthenosphere 15, is driven by MOR-plume interactions 19. The plume-ridge interaction depends on the interaction distance and the spreading rate of the MORs 76–78. The interaction distance between a MOR and a mantle plume ranges from hundreds of kilometers 79 to greater than 1000 km 19,22,80. Meanwhile, perpendicular and radial structures related to the interaction between the MOR and mantle plume have been detected by seismic tomography 81, reproduced by numerical modelling 82,83, and reproduced in analogue experiments 84. Therefore, an off-axis mantle plume within the interaction field distance is still able to interact with the distal MORs 85. Meanwhile, the flow of melts from the off-axis plume toward the ridge produces elementally depleted, but isotopically enriched, N-MORB 77. The southward migration of the SEIR, its slow spreading rate, and the ridge jumps that frequently occurred underneath the NER enabled long-term interaction between the SEIR and KMP 86. The second enrichment event at ~55 Ma along the NER produced the OIB and E-MORB separately (Fig. 3a,b). Meanwhile, the Nb anomaly of these events assures deep crustal recycling related to plume-ridge interaction (Fig. 3a,b). Moreover, the discrimination of the ~55 Ma anomaly basalts into tholeiitic and calc-alkaline types using immobile trace elements confirms the geochemical decoupling, which differs from off-axis plume behavior (Fig. 3d). The existence of two contrasting rock types being formed, including OIB (Fig. 3), and the isotopic signature of the second enrichment event, with low 143Nd/144Nd ratios, confirm the interaction between the SEIR and the on-axis KMP at the ~55 Ma event. Therefore, the geochemical decoupling of the SEIR and the KMP occurred while the KMP was on-axis, and this reflects profound changes in the chemical properties of the mantle. Meanwhile, the abrupt slowdown in the Indian plate drift at 55 Ma coincides with the beginning of India-Eurasia collision 7,17.
Discussion
The interaction between mantle plumes and nearby MORs controls the production rate of magma fluxes 87, and the magnitude of plume-MOR interaction is a function of mantle temperature 88 and the spreading rate 89. Thus, in turn, any change in mantle geochemistry could affect plume-MOR interaction activities. Consequently, the diversity in melts extruded along MORs records variations in mantle sources for melts 54, and most probably changes in the behavior of ridge-plume interaction related to tectonic events. The basalts erupted along the Indian Ocean ridges are elementally and isotopically different in composition from both the Atlantic and Pacific MORBs 90 (Fig. 5), because of their interaction with plumes fed from the African large low-shear velocity province mantle domain 71. Although the SEIR is a fast-migrating MOR 19, the interaction of the KMP and the SEIR persisted for a long time 54–56, a process enabled by the tectonic ridge jumps 20,21. Meanwhile, the KMP is considered a deep plume, fed from the lower mantle 19, so the NER magmatism related to the KMP-SEIR conjunction is the product of interaction between shallow and deep mantle reservoirs. Therefore, the first enrichment event for the mantle underneath the Indian Ocean that produced enriched basalts was induced by the large low-shear velocity province of the African mantle domain 71, and this effect persisted for the lifespan of the NER. Simultaneously, the elimination of the Indian plate roots as a result of Gondwana supercontinent dispersal 72 enriched the mantle underneath the Indian Ocean. Therefore, Precambrian chunks of continental material were found in basaltic rocks in the Indian Ocean, such as the garnet granulite xenoliths found in the basaltic basement of Elan Bank 36,37 and the inherited Archaean zircon recorded from Mauritius Island Miocene lava 38.
The onset of India-Eurasia collision led to quiescence in the Neo-Tethys closure activity and low spreading rates along the Indian Ocean ridges 18, thus causing deceleration in the northward drift of the Indian plate 91,92. The subduction zones influenced the mantle by recycling crustal materials, including sediments 69,93, hence modifying the asthenospheric mantle 94, which was then recycled back into MORs 94,95 and into arc magmatism 93. A group of microcontinental blocks existed between India and Eurasia before the closure of the Neo-Tethys 96; this is a common phenomenon at many continental margins 97 and was induced by the mantle plume 98,99. The subduction of Neo-Tethys microcontinents before the onset of India-Eurasia collision affected the arc magmatism in Ladakh 100,101. Subduction zones act as a shield that divides the mantle tectonically and prevents the lateral convection of enriched mantle between different domains 94. Consequently, the northward Neo-Tethys double-subduction zones prevented the enriched mantle underneath the Indian Ocean from migrating northward below the Tibet-Himalaya orogen (Fig. 6).
The second enrichment event happened abruptly at ca. 55 Ma, whereas depleted and enriched basaltic rocks erupted contemporaneously, and this confirms the presence of distinct sources for the magmatism along the NER derived from different mantle reservoirs (Fig. 3). Although NER samples plot mainly in the SEIR MORB field, some samples are enriched in Lu at a given Zr/Nb (Fig. 3c). This is correlated with enrichment in Y and Sc 55 and could give evidence for the partial melting of either garnet- or spinel-bearing peridotites 55. However, the ca. 55 Ma basalts have both tholeiitic and calc-alkaline affinities without transitional samples (Fig. 3d), indicating the presence of immiscible distinct sources. These distinct sources exist along the NER, giving both enriched and relatively depleted rock types that mixed homogeneously, but the rapid enrichment related to the second phase boosted the decoupling into distinct rock types contemporaneously. Therefore, the second enrichment event was related to deep crustal recycling, induced by abrupt recycling of continental crust, and this is linked to the initiation of India-Eurasia collision. Most of the collisional age estimation methods are biased; the stratigraphic age constraints are mostly older, while tecto-magmatic ages are younger (Fig. 1a). The abrupt slowdown of the Indian plate sets the onset of India-Eurasia collision at 55 Ma 7,18,44, and this is consistent with the SEIR-KMP decoupling event related to the deep recycling of crustal materials underneath the Indian Ocean. However, the break-off of the Neo-Tethys subducted slab at ca. 53 Ma has been explained as the reason behind the slowdown in the drift of the Indian plate, rather than India-Eurasia collision 18. The onset of India-Eurasia collision preceded the break-off of the Neo-Tethys subducted slab; Zhu et al. 18 set ca. 55 Ma as the time of India-Eurasia collision initiation.
The NER magmatism has both depleted and enriched affinities, whereas the ca. 55 Ma magmatism shows abnormally high Nb enrichment compared with any other NER magmatism (Fig. 4a,b), and this positive anomaly in Nb is considered a key indicator of mantle enrichment 65 that could be related to deep recycling of crustal materials 69. The post-collision enrichment in La could be related to the transition to less compressional regimes, which decrease the pressure and favor the partial melting of spinel peridotite rather than garnet peridotite (Fig. 3c). The NER basalts have higher 208Pb/204Pb and 207Pb/204Pb isotope ratios (Fig. 5a,b), plot above the NHRL line that defines the DUPAL anomaly, and fall in the field of mixed PREMA W15 (prevalent mantle) + UCC (upper continental crust) 69, and this confirms their lineage to the African mantle domain (Fig. 5a,b). The NER magmatism has higher Sr isotope values than the Pacific MORB (Fig. 5c), where significant crustal materials, including sediments, were recycled back 69. The ca. 55 Ma anomaly plots in the EM1 field (Fig. 5d), and this could be attributed to the involvement of crustal materials, including sediment recycling 69. Based on the isotopic composition of the NER magmatism, these melts were derived from enriched mantle as a result of recycling of crustal materials. The earlier enrichment of the mantle underneath the Indian Ocean was triggered by the African large low-shear velocity province domain 71. Meanwhile, the Indian plate lost the lower part of its lithosphere during the breakup of Gondwana 72, with evidence for recycling of the Indian lithosphere in the basaltic basement of Elan Bank 36,37 and in Mauritius Island Miocene lava 38 (Fig. 6a). Although two-stage diachronous collisions happened 13, the disturbance in MOR-plume coupling happened once, at ca. 55 Ma, and this was coincident with the second enrichment event (Fig. 6b). The disturbance in MOR-plume interaction led to on-axis decoupling between the MOR and plume, as a result of deep recycling of lithospheric materials. Consequently, the geochemical composition of the mantle changed, and this is most probably related to the collision between India and Eurasia. E-MORBs were extruded predominantly after the collision of India and Eurasia, once the MOR and plume had recoupled (Fig. 6c). The interaction of MOR and plume is very sensitive to major geodynamic events, such as the India-Eurasia collision, and could be used to make the timing of geologic events more precise.
Methodology
A geochemical database of non-subduction-influenced basaltic rocks, consisting mainly of E-MORB and OIB in the Indian Ocean, especially along the NER, Chagos-Laccadive Ridge, and Mascarene Plateau, has been compiled (Supplementary Table 1) and applied in the present study. The integrated Sr, Nd, Pb, and Hf isotopes and trace elements (particularly actinide elements such as Th; transition elements such as Nb, Y, Lu, and Zr; and lanthanide elements such as La, Yb, and Sm) of basalts (6550 samples), with ages ranging from 77 to 32.9 Ma, were retrieved from the EarthChem repository. Data reduction was applied using the Pandas Python data analysis library in the Jupyter notebook IDE in order to exclude samples with abnormal values. After an automated check based on the Pandas library, a manual double-check was carried out. Out of 6550 samples, 1164 were used in this study and plotted on the map to cross-check the age of the samples relative to their location along large igneous province ridges (Fig. 1b), based on the results from Refs. 19,102. 5386 samples were excluded from this study because they are located outside the NER and/or are duplicates. The geochemical data of basaltic volcanism extruded along the NER are available from https://www.earthchem.org/, and were reduced using Jupyter (https://jupyter.org/) and plotted using the GeoChemical Data Toolkit software (GCDkit, http://www.gcdkit.org/).
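The data reduction step can be sketched as a short Pandas script; the file name and column headers below are hypothetical stand-ins for the actual EarthChem export, and the filters mirror the criteria described above (age window, positive concentrations, de-duplication).

```python
import pandas as pd

# Hypothetical file and column names; the real EarthChem export headers differ.
df = pd.read_csv("earthchem_ner_basalts.csv")

# Keep samples within the NER lifespan window (77 to 32.9 Ma).
df = df[df["age_ma"].between(32.9, 77.0)]

# Drop duplicated records and samples with abnormal (non-positive) values.
df = df.drop_duplicates(subset=["latitude", "longitude", "age_ma"])
for col in ["th_ppm", "nb_ppm", "zr_ppm", "y_ppm", "yb_ppm", "la_ppm", "sm_ppm", "lu_ppm"]:
    df = df[df[col] > 0]

df.to_csv("ner_reduced.csv", index=False)  # input for GCDkit plotting
```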
Data availability
All data generated or analysed during this study are included in this published article and its Supplementary Information files. Figures 1a and 6 were generated using CorelDRAW software, and Fig. 1b
Metabolic status differentiates Trp53inp2 function in pressure-overload induced heart failure
Cardiometabolic disorders encompass a broad range of cardiovascular complications associated with metabolic dysfunction. These conditions account for an increasing share of the health burden worldwide due to the worsening epidemics of hypertension, obesity, and diabetes. Previous studies identified Tumor Protein p53-inducible Nuclear Protein 2 (Trp53inp2) as a molecular link between hyperglycemia and cardiac hypertrophy. However, its role in cardiac pathology has never been determined in vivo. In this study, we generated a cardiac-specific knockout model of Trp53inp2 (Trp53inp2-cKO) and investigated the impact of Trp53inp2 inactivation on the pathogenesis of heart failure under mechanical and/or metabolic stresses. Based on echocardiographic assessment, inactivation of Trp53inp2 in the heart led to an accelerated onset of HFrEF in response to pressure overload, with significantly reduced ejection fraction and elevated heart failure marker genes compared with control mice. In contrast, inactivation of Trp53inp2 ameliorated cardiac dysfunction induced by the combined stresses of high fat diet and moderate pressure overload (cardiometabolic disorder model). Moreover, Trp53inp2 inactivation led to reduced expression of glucose metabolism genes in lean, pressure-overloaded hearts. However, the same set of genes was significantly induced in Trp53inp2-cKO hearts under combined mechanical and metabolic stresses. In summary, we have demonstrated for the first time that cardiomyocyte Trp53inp2 has diametrically opposed roles in the pathogenesis of heart failure and glucose regulation under mechanical vs. combined mechanical and metabolic stresses. This insight suggests that Trp53inp2 may exacerbate cardiac dysfunction during pressure overload injury but have a protective effect on cardiac diastolic function in cardiometabolic disease.
Introduction
The cardiometabolic syndrome represents a broad range of cardiovascular diseases associated with metabolic abnormalities, and has become a major public health problem in the United States as well as in many countries worldwide (1). The cardiac pathology associated with cardiometabolic syndrome can often be traced to functional and structural alterations in other organ systems, including liver, fat, and skeletal muscle (2). Consequently, metabolic disturbances at the systemic level, such as hyperglycemia, hyperlipidemia, and insulin resistance, have demonstrated causal effects on the pathogenesis of cardiometabolic syndrome based on numerous epidemiological studies (3,4), as well as extensive preclinical studies involving animal models under simultaneous application of mechanical and metabolic stresses (5,6). Despite the broad recognition of the importance of metabolic stress in heart failure development, the molecular network linking metabolic disturbance and pathological progression in the heart remains vastly underexplored.
The heart is one of the most metabolically demanding organs, with a high and constant demand for ATP production in order to maintain cardiac contractility. Under normal conditions, the adult heart prefers to utilize fatty acids as the main fuel source. However, in the failing heart, glucose becomes a more significant substrate when fatty acid oxidation is impaired (7). Glucose metabolism in the heart is tightly regulated by both transporters and metabolic enzymes, and impaired glucose metabolic activity can have a profound impact on cardiac contractile function and hypertrophic remodeling (8-10). In the past decade, targeting glucose metabolism has also been demonstrated to be an effective therapeutic strategy to treat heart failure (11,12).
Trp53inp2 (transformation related protein 53 inducible nuclear protein 2) was originally discovered as a modulator of the thyroid hormone receptor in skeletal muscle and as a candidate gene associated with obesity in adipose tissue (13,14). Based on its subcellular localization, Trp53inp2 is postulated to have dual roles in the nuclear and cytoplasmic compartments. In the nucleus, Trp53inp2 enhances the transcriptional activity of the thyroid hormone receptor, regulates promoter activities of ribosomal genes, and subsequently promotes ribosome biogenesis (14,15). In the cytosol, Trp53inp2 regulates autophagosome biogenesis by promoting the LC3B-ATG7 interaction and serves as a rate-limiting factor for autophagy initiation, thereby regulating physiological homeostasis in different organs (16). In liver, Trp53inp2 plays a role in liver injury associated with TGF-β and BMP mediated signaling (17). More relevant to this study, Trp53inp2 was identified through an unbiased systems genetics study using the Hybrid Mouse Diversity Panel (HMDP) as a candidate gene linking glucose utilization and cardiomyocyte hypertrophy (18). In cultured cardiomyocytes, inactivation of Trp53inp2 blunts the hypertrophic response induced by glucose and isoproterenol. However, the functional role of Trp53inp2 in the intact adult heart has never been explored.
In this study, we aimed to determine the physiological role of Trp53inp2 in cardiac pathological remodeling in the intact heart. We established a cardiomyocyte-specific Trp53inp2 knockout mouse model (Trp53inp2-cKO) and challenged the Trp53inp2-cKO mice with severe mechanical stress (transverse aortic constriction against a 28-gauge needle) or with moderate mechanical plus metabolic stresses (constriction against a 25-gauge needle plus high-fat diet). Using echocardiographic, molecular, and histological analyses, we uncovered that Trp53inp2 plays a protective role in heart failure induced by pressure overload but appears to be detrimental in heart failure triggered by the simultaneous application of mechanical and metabolic stresses. Further, we provide evidence that this dual function of Trp53inp2 in cardiac pathology is associated with its differential impact on glucose utilization.
Animals
The mTrp53inp2 floxed allele was generated at the UCSD Transgenic Mouse shared resource core (https://moorescancercenter.ucsd.edu/research/shared-resources/transgenic-mouse/index.html) via CRISPR-Cas9-mediated insertion of loxP sites flanking exon 2 of the mouse Trp53inp2 gene on chromosome 2. The Trp53inp2 flox/flox/cre mice (Trp53inp2-cKO) were generated by crossing Trp53inp2 flox/flox mice with αMHC-Mer-Cre-Mer mice (19), in which Trp53inp2 can be inactivated in cardiomyocytes upon tamoxifen treatment as previously described (Figure 1). The genotype was confirmed by PCR for the two loxP sites. Genotyping primers: 5LoxP F: TTATGATGAT GGTAATAAGA; 5LoxP R: GGATATGGAA GAAGTGATAT; 3LoxP F: TATTTGGATGTTGGAGTCC; 3LoxP R: CTAGGTGTCCCAGGTGTGTG. The genetic background of both strains is C57BL/6J. The mice were fed a tamoxifen diet (Envigo TD 130855) for 2 weeks followed by a 7-day washout period. All animals in this study were handled in accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health.
Surgery and echocardiography
Transverse aortic constriction (TAC): Mice were placed on a volume ventilator (80 breaths/min, 1.2 ml/g/min) and anesthetized with isoflurane. The chest was opened, and the aorta was identified at the T8 region. For TAC-only surgery, a suture was passed around the transverse aorta and tightened against a 28-gauge needle. For TAC + HFD surgery, the suture was passed around the transverse aorta and tightened against a 25-gauge needle.
Echocardiography: Mice were anesthetized and maintained with 2% isoflurane in 95% oxygen. A Vevo 3000 (VisualSonics) echocardiography system with a 30 MHz scan head was used to image the heart. Both long-axis and short-axis views were recorded. Ejection fraction and fractional shortening were derived from M-mode images.
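For reference, the sketch below illustrates the standard M-mode formulas behind these two read-outs. The Vevo analysis software computes them directly; the function names, the use of the Teichholz volume model, and the example dimensions are our own illustrative assumptions, not details taken from this study.

```python
def teichholz_volume(d_mm):
    """LV volume (uL) from a single internal diameter (mm), Teichholz formula."""
    d_cm = d_mm / 10.0
    return (7.0 / (2.4 + d_cm)) * d_cm**3 * 1000.0  # mL -> uL

def ef_fs(lvid_d, lvid_s):
    """Ejection fraction and fractional shortening (%) from M-mode diameters (mm)."""
    edv = teichholz_volume(lvid_d)              # end-diastolic volume
    esv = teichholz_volume(lvid_s)              # end-systolic volume
    ef = 100.0 * (edv - esv) / edv
    fs = 100.0 * (lvid_d - lvid_s) / lvid_d
    return ef, fs

# Plausible healthy-mouse values: LVIDd = 3.8 mm, LVIDs = 2.4 mm
print(ef_fs(3.8, 2.4))  # approximately (73.5, 36.8)
```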
Cardiac histological analysis
Mice were euthanized and cardiac tissues were fixed with 10% formalin before paraffin embedding. H&E staining was performed at the CCHMC pathology research core. For wheat germ agglutinin (WGA) staining, cardiac tissue was cryo-preserved. Tissue sections were incubated with pre-chilled methanol at −20 °C for 10 min before blocking with 10% goat serum in 1% BSA/PBS for 1 h. Alexa Fluor 488-conjugated WGA (5 mg/ml, Invitrogen) was diluted in 1% BSA/PBS and incubated with the sections for 1 h at room temperature. Samples were counterstained and mounted with SlowFade Gold Antifade reagent with DAPI (Invitrogen). Images were taken using a Leica Stellaris 8 confocal microscope, and cross-sectional areas were quantified using ImageJ.
Hexokinase activity assay
Hexokinase activity in cardiac tissues was measured using the Hexokinase Activity Assay Kit (Abcam ab211103).
RNA extraction and quantitative PCR
Total RNA was extracted from tissues using TRIzol Reagent (Thermo Fisher Scientific). RNA was used for first-strand complementary DNA synthesis with random primers (Thermo Fisher Scientific) and Maxima Reverse Transcriptase (Thermo Fisher Scientific) according to the manufacturer's instructions. Real-time polymerase chain reaction (PCR) was performed using iTaq SYBR Green Supermix (Bio-Rad Laboratories) with a CFX Opus 96 Real-Time PCR Detection System (Bio-Rad Laboratories). Values were normalized to GAPDH or ACTB. Primers are listed in Table 1.
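The relative expression values reported in the figures are consistent with the standard 2^−ΔΔCt (Livak) quantification; a minimal sketch assuming that method is given below. The Ct numbers are illustrative placeholders, not data from this study.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Livak 2^-ddCt fold change of a target gene vs. a control group,
    normalised to a reference gene (e.g. GAPDH or ACTB)."""
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_reference)
    d_ct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_reference_ctrl))
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Illustrative Ct values only (e.g. a marker gene in cKO hearts vs. controls):
fold = relative_expression([24.1, 23.8], [18.0, 17.9], [25.9, 26.1], [18.1, 18.0])
print(fold)  # per-sample fold changes relative to the control group
```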
Statistics
Values are expressed as mean ± SEM. Student's t-test and one- or two-way ANOVA were used to determine significant differences. p < 0.05 was considered statistically significant.
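As a minimal sketch of how these tests map onto standard library calls (all values below are placeholders, not study data):

```python
from scipy import stats

control = [58.2, 61.5, 55.9, 60.1]       # e.g. ejection fraction (%) per mouse
cko     = [41.3, 38.7, 44.9, 40.2, 39.8]

t, p = stats.ttest_ind(control, cko)      # two-group comparison (Student's t-test)
print(p < 0.05)

# One-way ANOVA across more than two groups; in practice a post hoc
# multiple-comparison test (Tukey, Sidak, or Fisher's LSD) would follow.
f, p_anova = stats.f_oneway(control, cko, [52.0, 49.5, 50.8])
```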
Generation of Trp53inp2-cKO mouse model
In order to understand the functional role of Trp53inp2 in intact heart physiology, we used CRISPR-Cas9-mediated gene editing to insert two loxP sites flanking exon 2 of the mouse Trp53inp2 gene on chromosome 2 (Trp53inp2 flox/flox) (Figure 1A). To inactivate Trp53inp2 in a cardiomyocyte-specific manner, we crossed the Trp53inp2 flox/flox mice with αMHC-Mer-Cre-Mer mice to generate Trp53inp2 flox/flox/cre mice as well as their littermate controls (Trp53inp2 flox/flox). Both cohorts were treated with a tamoxifen diet for 2 weeks starting at the age of 8 weeks, prior to experiments (Figure 1A). To confirm that Trp53inp2 was completely inactivated in cardiomyocytes, we isolated cardiomyocytes from Trp53inp2 flox/flox/cre mouse hearts and compared them with those from control (Trp53inp2 flox/flox) mice for Trp53inp2 expression. Real-time PCR measurements confirmed that a knockout efficiency of more than 90% was achieved in Trp53inp2-cKO cardiomyocytes, while the expression of Trp53inp2 in liver and skeletal muscle was not affected (Figures 1B-D). Inactivation of Trp53inp2 in the intact heart did not change cardiac morphology compared with control littermates, as reflected by H&E staining of cardiac sections (Figure 1E).
Inactivation of Trp53inp2 in intact heart accelerated pressure overload-induced heart failure with reduced ejection fraction (HFrEF)
Although Trp53inp2 has been suggested to play a role in glucose-mediated regulation of cardiomyocyte hypertrophy in vitro, the in vivo function of Trp53inp2 had yet to be explored. We first investigated the functional role of Trp53inp2 in a severe pressure overload-induced heart failure model by performing transverse aortic constriction (TAC) surgery with a 28-gauge needle (Figure 2A). We measured cardiac function in control and Trp53inp2-cKO mice at baseline (one week post tamoxifen washout) and observed no differences in echocardiographic parameters between the two groups (Figures 2B-E). However, at 4 weeks post TAC, the Trp53inp2-cKO mice demonstrated marked heart failure with significantly reduced ejection fraction (Figure 2B) and fractional shortening (Figure 2C), while the control mice showed only a trend toward reduced cardiac function at a more modest scale. Although no significant differences in ventricular wall thickness were observed between the two genotypes by echocardiography (LVPW;s and LVPW;d), the Trp53inp2-cKO hearts demonstrated a trend toward wall thickening associated with cardiac hypertrophy (Figures 2D,E). This was supported by histological analysis at the tissue level; the corresponding lung weight and left atrial weight indices normalized to tibia length are shown in Figures 2H,I. At the molecular level, we observed higher expression of heart failure marker genes, including ANF and BNP, in the Trp53inp2-cKO hearts compared with the controls following TAC (Figures 2J,K). Lastly, enhanced cardiomyocyte hypertrophy was observed in the Trp53inp2-cKO hearts following pressure overload, based on enlarged cardiomyocyte cross-sectional areas (Figures 2L,M). This evidence supports that cardiomyocyte Trp53inp2 expression is protective and that its inactivation promotes pressure overload-induced pathological hypertrophy and cardiac dysfunction in the intact heart.
Inactivation of Trp53inp2 in cardiomyocytes alleviated cardiac diastolic dysfunction in response to simultaneous mechanical and metabolic stresses (cardiometabolic disorder)
Next, we explored the role of Trp53inp2 in the pathogenesis of heart failure induced by the combined stresses of moderate mechanical load and metabolic overload. We challenged the Trp53inp2-cKO mice and their age-matched littermate controls with a continuous high-fat diet (HFD). At 4 weeks post HFD, additional mechanical stress was introduced by applying a moderate pressure overload using TAC with a 25-gauge needle (Figure 3A). The combination of moderate pressure overload and metabolic stress leads to a cardiometabolic disorder characterized by both systolic and diastolic dysfunction. Cardiac function was monitored for an additional 8 weeks following TAC. In contrast with the results observed in the pressure-overloaded lean mice, pressure overload in the HFD-treated control mice led to significantly reduced ejection fraction (Figure 3B) and fractional shortening (Figure 3C). In the Trp53inp2-cKO mice, on the other hand, the statistical significance of the reductions in ejection fraction and fractional shortening was diminished (Figures 3B,C). Importantly, the diastolic dysfunction observed in the control mice was significantly attenuated in the Trp53inp2-cKO mice (Figures 3F,G). These changes in cardiac function appeared to be independent of hypertrophic status, since cardiac hypertrophy remained significantly induced in the Trp53inp2-cKO hearts at a level comparable to the controls, based on echocardiographic parameters such as LVPW;s (Figure 3D) and LVPW;d (Figure 3E), tissue weights, marker gene expression, and cross-sectional area measurements (Figures 3H-O). Remarkably, left atrial size was significantly increased in control mice but not in Trp53inp2-cKO mice, further supporting our conclusion that Trp53inp2 plays a role in cardiac diastolic function in cardiometabolic disease (Figure 3K). In summary, combined cardiac and metabolic stresses led to both diastolic and systolic dysfunction in control mice; however, inactivation of Trp53inp2 in cardiomyocytes protected the heart from both without an impact on cardiac hypertrophic remodeling.
Trp53inp2 serves as a molecular switch for glucose utilization under different cardiac stresses
Previously, Trp53inp2 had been suggested to play an important role in cardiomyocyte glucose utilization linked with hypertrophy, based on in vitro analysis (18). The different outcomes of Trp53inp2 inactivation under different stresses raised the question of its impact on glucose metabolism. Therefore, we performed real-time PCR to determine the expression of selected glucose utilization genes in the same mouse hearts from this study. Strikingly, a diametrically different expression pattern of glucose utilization genes was observed under the two stress conditions. Under pressure overload alone, inactivation of Trp53inp2 in cardiomyocytes led to a significant reduction of glucose utilization genes including HK-2 and Pfkm (Figures 4A-D). In sharp contrast, the Trp53inp2-cKO mouse hearts showed a significant induction of these glucose utilization genes, including HK-1 and HK-2, in the cardiometabolic disorder model (Figures 4E-H). Therefore, the different functional impacts of Trp53inp2 inactivation on cardiac contractile function may be highly correlated with its impact on glucose utilization. However, the impact of Trp53inp2 knockout on cardiac hypertrophy does not appear to be related to glucose utilization as originally predicted. To exclude cardiac amyloidosis as a potential cause of the cardiac hypertrophy observed in our Trp53inp2-cKO mice, we further evaluated BDNF, a marker for cardiac amyloidosis (20). Real-time PCR analysis showed no significant changes in BDNF in either the pressure overload model or the cardiometabolic disorder model, supporting our conclusion that Trp53inp2 impacts cardiac function through glucose utilization and that the hypertrophic response observed in our study was bona fide pathological remodeling in response to pressure overload (Figures 4I,J). Lastly, to understand whether inactivation of Trp53inp2 in the intact heart directly impacts hexokinase enzymatic activity, we performed a hexokinase activity assay using cardiac tissue from the two disease models. Interestingly, inactivation of Trp53inp2 in the heart led to a significant decrease in hexokinase activity in response to severe pressure overload (Figure 4K), whereas hexokinase activity in Trp53inp2-cKO hearts showed a trend similar to control littermates in the cardiometabolic disorder model (Figure 4L). These data suggest that the changes in hexokinase activity could partially explain the differential cardiac phenotypes observed in the Trp53inp2-cKO mouse. Additional analysis is necessary to understand the protection of diastolic function in Trp53inp2-cKO mice following metabolic challenge.
In summary, we have uncovered for the first time the in vivo function of Trp53inp2 in cardiac hypertrophy and dysfunction through a cardiomyocyte-specific Trp53inp2 knockout mouse model. We find that the impact of Trp53inp2 on cardiac contractility is stress-dependent and highly correlated with its impact on glucose utilization-related genes.
Discussion
Our study demonstrated a functional role of Trp53inp2 in cardiac pathological remodeling in the intact heart. Inactivation of Trp53inp2 in cardiomyocytes led to distinct outcomes in different cardiac disease models. In severe pressure overload-induced heart failure, inactivation of Trp53inp2 had a detrimental effect, accelerating heart failure progression characterized by significantly reduced ejection fraction. In response to combined metabolic and moderate cardiac stresses, however, inactivation of Trp53inp2 protected the heart from systolic and diastolic dysfunction post injury.
The function of Trp53inp2 has been characterized in liver as well as skeletal muscle (14). In cultured cardiomyocytes, inactivation of Trp53inp2 suppresses the expression of key glycolytic enzymes and blunts cardiac hypertrophy induced by glucose and isoproterenol treatment in vitro (18). Our in vivo model provides strong evidence that inactivation of Trp53inp2 protects the heart from combined metabolic and cardiac stresses, based on improved cardiac systolic and diastolic function (a decrease in ejection fraction and fractional shortening in control but not Trp53inp2-cKO mice post TAC + HFD, and an increase in E/e′ in control but not Trp53inp2-cKO mice post TAC + HFD). However, our in vivo mouse model showed a distinct outcome when the Trp53inp2-cKO mice were challenged with transverse aortic constriction surgery alone, which led to accelerated heart failure. One possible explanation is the complexity of the in vivo system, including organ-organ crosstalk as well as interactions among the different cell types within the myocardium. As the cardiometabolic disorder is essentially a multi-organ dysfunction model, crosstalk of the adipose tissue and liver with the heart after combined cardiac and metabolic stresses could explain the different roles of Trp53inp2 in different cardiac diseases. A global change in glucose homeostasis and insulin resistance is possible in the Trp53inp2-cKO mice after high-fat diet and cardiac injury, and complete metabolic profiling, together with analysis of the involvement of the mTOR signaling pathway across tissues, would help define the underlying mechanism of the physiological role of Trp53inp2 in the heart.
Trp53inp2 was originally discovered as a transcription factor with the capacity of binding to the promoter of thyroid hormone receptor-α. Further studies have also identified its regulatory role in autophagy in both flies and mammals (21)(22)(23). Although the changes we observed in glucose metabolic gene expression in Trp53inp2-cKO mouse ventricular tissues could partially explain the physiological responses in these animals, several important questions remain to be addressed. First, the underlying mechanism of Trp53inp2-mediated glucose metabolic gene expression is unknown. As Trp53inp2 was first identified as a transcriptional activator, it is possible that Trp53inp2 directly binds the promoter regions of these glucose metabolic genes; however, this is unlikely, as we observed opposite directions of change in the Trp53inp2-cKO mouse hearts in response to different cardiac stresses. Second, the changes in glucose metabolic genes could contribute directly to the physiological impact of Trp53inp2 in the heart, or they could be merely a secondary effect of the preserved cardiac function. The same concern applies to the hexokinase activity analysis after the different stresses. Combined gene manipulation could help resolve this issue, and an unbiased gene discovery approach would also yield further insights into the cardiac remodeling in these mice.
Our study provides the interesting concept that one protein regulator can play dual roles in response to different cardiac stresses. With the cardiometabolic syndrome remaining one of the largest unmet medical needs of the past decade, additional research into such dual-role regulators may yield promising therapeutic strategies for this disease.
FIGURE 1
FIGURE 1 Generation of the Trp53inp2-cKO mouse model. (A) Schematic view of Trp53inp2-cKO mouse line generation. (B-D) Real-time PCR analysis of Trp53inp2 expression in cardiomyocytes (B), liver (C), and skeletal muscle (D) from control and Trp53inp2-cKO mice. n = 3 per group; ***, p < 0.005; Student's t-test was used for statistical analysis. (E) H&E staining of control and Trp53inp2-cKO mouse hearts at baseline. Scale bar represents 1 mm.
FIGURE 2
FIGURE 2 Inactivation of Trp53inp2 in the heart accelerated pressure overload-induced heart failure. (A) Schematic view of the experimental design. Control and Trp53inp2-cKO mice were subjected to sham or transverse aortic constriction surgery using a 28-gauge needle. (B-E) Echocardiography analysis including ejection fraction (B), fractional shortening (C), LVPW;s (D), and LVPW;d (E) for control and Trp53inp2-cKO mice at baseline and 4 weeks post TAC surgery. *, p < 0.05; two-way ANOVA followed by Tukey's multiple comparisons test was used for statistical analysis. (F,G) Cardiac hypertrophy status reflected by heart weight/tibia ratio (F) and left ventricle weight/tibia ratio (G) in control and Trp53inp2-cKO mice post TAC surgery. *, p < 0.05; ***, p < 0.005; one-way ANOVA followed by Sidak's multiple comparisons test was used for statistical analysis. (H) Lung weight/tibia ratio in control and Trp53inp2-cKO mice post TAC surgery. One-way ANOVA followed by Sidak's multiple comparisons test was used for statistical analysis. (I) Left atrial weight/tibia ratio in control and Trp53inp2-cKO mice post TAC surgery. One-way ANOVA followed by Sidak's multiple comparisons test was used for statistical analysis. (J,K) Real-time PCR analysis of ANF (J) and BNP (K) in control and Trp53inp2-cKO mouse left ventricle tissue post surgery. n = 3-5 per group; **, p < 0.01; ****, p < 0.001; one-way ANOVA followed by Fisher's least significant difference multiple comparisons test was used for statistical analysis. (L) WGA staining of cardiac sections from control and Trp53inp2-cKO mice post sham or TAC operation. (M) Cardiomyocyte cross-sectional area quantification based on WGA staining. ****, p < 0.001; one-way ANOVA followed by Tukey's multiple comparisons test was used for statistical analysis.
FIGURE 4
FIGURE 4 Trp53inp2 plays an important role in glucose utilization under different cardiac stresses. (A-D) Real-time PCR analysis of HK-1 (A), HK-2 (B), Cpt1a (C), and Pfkm (D) in Trp53inp2-cKO mouse left ventricle tissue post sham or TAC surgery. n = 3-5 per group; *, p < 0.05; Student's t-test was used for statistical analysis. (E-H) Real-time PCR analysis of HK-1 (E), HK-2 (F), Cpt1a (G), and Pfkm (H) in Trp53inp2-cKO mouse left ventricle tissue post sham or TAC surgery with HFD. n = 4-5 per group; *, p < 0.05; Student's t-test was used for statistical analysis. (I,J) Real-time PCR analysis of BDNF in control and Trp53inp2-cKO mouse left ventricle tissue post pressure overload injury (I) or post cardiometabolic stress (J). n = 4-5 per group; two-way ANOVA followed by Fisher's test was used. (K,L) Hexokinase activity in control TAC vs. Trp53inp2-cKO TAC hearts (K) and in control HFD + TAC vs. Trp53inp2-cKO HFD + TAC hearts (L). Student's t-test was used for statistical analysis.
TABLE 1
List of primers. Trp53inp2 RT-F: ATGAAGTGGATGGCTGGCTC.
"Medicine",
"Biology",
"Environmental Science"
] |
Signal power asymmetry optimisation for optical phase conjugation using Raman amplification
We numerically optimise the in-span signal power asymmetry in different advanced Raman amplification schemes, achieving a 3% asymmetry over 62 km of SMF using a random DFB Raman laser amplifier. We then evaluate the impact of such asymmetry on the performance of systems using mid-link OPC by simulating the transmission of 7 × 15 Gbaud 16QAM Nyquist-spaced WDM-PDM signals.
Introduction
The nonlinear Shannon limit sets a cap on the maximum capacity of single-mode optical fibres [1]. To combat fibre nonlinear effects, mid-link [2] or transmitter-based [3] optical phase conjugation (OPC) enables real-time compensation of all deterministic (signal × signal) nonlinear impairments. However, the degree of nonlinear compensation achieved using mid-link OPC depends on how well the power evolutions of the conjugated and transmitted signals mirror each other in the fibre. Meaningful performance improvement has only been demonstrated in Raman-amplified optical links [4], thanks to the better control over signal asymmetry provided by distributed amplification, as well as its improved noise performance. The key to maximising performance in OPC-assisted systems lies in reducing the signal power asymmetry within the periodic spans while ensuring a low impact of noise and non-deterministic nonlinear impairments over the whole transmission link. In this letter, we demonstrate, using proven numerical models, that an almost ideally symmetrical signal power evolution can be achieved in advanced distributed amplification schemes, with the best results obtained for a half-open-cavity random distributed feedback (DFB) Raman laser amplifier with bidirectional 2nd-order pumping [5][6][7]. This setup can potentially reduce the signal power evolution asymmetry within the span, measured with respect to its middle point, to a mere 3% over a realistic span length of 62 km of SMF, which constitutes the lowest asymmetry level achieved to date over such a long span [8]. Furthermore, in order to investigate the best practical Raman-based link design and the potential impact of the reduced signal power asymmetry, we simulate 7 × 15 Gbaud 16QAM Nyquist-spaced WDM transmission with mid-link OPC using the random DFB Raman laser amplifier, and numerically investigate the dependence of system performance on the power asymmetry level. The optimal transmission performance, with a forward-to-backward pump power ratio close to 1, is obtained for the setup that combines the lowest level of asymmetry with low non-deterministic impairments. In our search for an optimal setup for OPC we consider three different bi-directional distributed Raman amplification schemes. In each configuration, the signal power excursion for different pump power ratios and span lengths was simulated using the experimentally verified [5] model, with appropriate boundary conditions, that is fully described in [7,9]. Simulations were performed at room temperature under the assumption that the Raman pumps at 1366 nm are fully depolarised. The noise was calculated in a bandwidth of 0.1 nm. The Raman gain and attenuation coefficients at the laser wavelength were obtained from gain and attenuation curves measured for standard SMF silica fibre [7]. The Rayleigh backscattering coefficients at the pump wavelength of 1366 nm, at the lasing wavelength of 1455 nm, and at the signal frequency are assumed to be 1.0×10⁻⁴, 6.5×10⁻⁵, and 4.5×10⁻⁵ km⁻¹, respectively.
1st-order Raman amplifier
The conventional 1st-order Raman amplifier [Fig. 1(a)] is bi-directionally pumped from both ends of the transmission span at 1455 nm, with the signal being amplified via the first Stokes shift.
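As a rough illustration of the kind of model being solved, the sketch below integrates a minimal average-power system of coupled equations for a single signal with bidirectional 1st-order pumps. This is only a simplified stand-in for the full model of [7,9], which additionally includes the lasing field, Rayleigh backscattering and noise terms; all coefficient values here are typical SMF numbers assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_bvp

L   = 62.0               # span length (km)
a_s = 0.046              # signal attenuation, ~0.2 dB/km (1/km)
a_p = 0.058              # pump attenuation, ~0.25 dB/km (1/km)
gR  = 0.4                # Raman gain efficiency (1/(W km))
r   = 1545.0 / 1455.0    # photon-energy scaling of the pump depletion term

def rhs(z, y):
    ps, pf, pb = y                              # signal, forward pump, backward pump (W)
    dps = -a_s * ps + gR * (pf + pb) * ps       # signal gains from both pumps
    dpf = -a_p * pf - r * gR * pf * ps          # forward pump attenuates and depletes
    dpb = +a_p * pb + r * gR * pb * ps          # backward pump travels in -z: signs flip
    return np.vstack([dps, dpf, dpb])

def bc(y0, yL):
    # 0 dBm signal and 300 mW forward pump launched at z = 0; 500 mW backward pump at z = L
    return np.array([y0[0] - 1e-3, y0[1] - 0.3, yL[2] - 0.5])

z = np.linspace(0.0, L, 400)
y_guess = np.vstack([np.full_like(z, 1e-3), np.full_like(z, 0.3), np.full_like(z, 0.5)])
sol = solve_bvp(rhs, bc, z, y_guess)
p_signal = sol.sol(z)[0]     # signal power evolution P(z) along the span
```

In practice the pump powers would then be iterated until the span reaches 0 dB net gain, as is done for every configuration considered in this letter.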
2nd-order ultra-long Raman fibre laser amplifier
The configuration of an ultra-long Raman fibre laser (URFL) amplifier [Fig. 1(b)] allows 2nd-order pumping to be achieved with a single-wavelength pump [9]. To form a distributed 2nd-order URFL amplifier, the Raman fibre laser pumps operate two Stokes shifts away from the signal. High-reflectivity (99%) FBGs centred at 1455 nm with a 200 GHz bandwidth were deployed at the beginning and the end of the transmission line to reflect the Stokes-shifted light from the pumps at 1366 nm and, once the threshold of about 0.8 W is reached, form stable ultra-long lasing that acts as a 1st-order pump amplifying the signal. The advantage of this scheme is that the gain bandwidth and profile can be modified by selecting appropriate FBGs rather than deploying an active seed at a different wavelength. In this case the reflectivity of the FBGs was chosen to be high to provide better pump-to-signal power conversion efficiency.
2nd-order random DFB Raman laser amplifier
The schematic design of the random DFB Raman laser amplifier [Fig. 1(c)] is similar to that of the URFL, with the difference that instead of using a closed cavity with a pair of FBGs, a single high-reflectivity FBG at 1455 nm (we also simulated FBG reflectivities of 50% and 70% but found no significant improvement in signal symmetry) is deployed at the end of the transmission span to reflect the Rayleigh-backscattered Stokes-shifted light from the backward pump at 1366 nm and form a random DFB laser [10] at the frequency specified by the wavelength of the FBG. The lack of an FBG on the side of the forward pump reduces the RIN transfer [11] from the forward pump to the Stokes-shifted light at 1455 nm, at the cost of a reduction in power conversion efficiency compared with the 1st-order Raman and URFL amplification schemes. This is particularly important, as forward-pumping RIN transfer from inherently noisy high-power pumps can seriously hinder data transmission [12,13].
Signal power asymmetry in distributed Raman amplifiers
To compare the signal power asymmetry in the proposed configurations, we simulated a single channel in the middle of the C-band at 1545 nm with a fixed launch power (0 dBm) into the transmission span. For each forward pump power (FPP) (in 100 mW steps), the backward pump power was adjusted in simulation to give 0 dB net gain for span lengths from 10 to 100 km. The signal power asymmetry within the span was quantified following [14] as the normalised mismatch between the signal power evolution P(z) and its mirror image about the mid-span point, where L is the span length and P represents the average signal power evolution. Figure 2 summarises some of the most relevant span optimisation results. The lowest asymmetry values and highest signal OSNRs for all span lengths above 58 km were achieved with random DFB Raman laser amplification. Note that the optimal asymmetry for 1st-order Raman amplification is found with backward pumping only. For the URFL, the optimal forward/backward power ratios are very close to 1 for spans of up to 50 km, but the optimal contribution of backward pumping grows for longer span lengths (forward/backward ratio of 0.27 at 100 km), whereas the random DFB configuration favours backward pumping at short lengths up to 30 km, but ratios close to 1 for longer spans. Figure 2(b) shows the accumulated residual phase shift (the product of the optimal asymmetry at a given distance and the corresponding nonlinear phase shift). The asymmetry (Fig. 2), OSNR [Fig. 2(a)] and residual phase shift [Fig. 2(b)] results, together with its better resilience to forward-pumping RIN in coherent transmission applications [5], show that the bi-directionally pumped random DFB laser with a single grating is, performance-wise, the best option for amplification in long spans with OPC. Considering these results, the random DFB Raman laser amplifier was chosen for the further characterisation study.
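Since the defining equation of [14] is not reproduced above, the sketch below assumes one commonly used normalised form, the integrated mismatch between P(z) and its mirror image P(L − z) divided by twice the integrated power; treat both the formula and the helper name as illustrative assumptions rather than the paper's exact metric.

```python
import numpy as np

def asymmetry_percent(z, p):
    """Signal power asymmetry (%) of a profile p(z) about the mid-span point,
    assuming the normalised form  int |P(z) - P(L-z)| dz / (2 int P(z) dz)."""
    p_mirror = p[::-1]                      # P(L - z) on a uniform z grid
    return 100.0 * np.trapz(np.abs(p - p_mirror), z) / (2.0 * np.trapz(p, z))

# e.g. applied to the P(z) profile produced by the boundary-value sketch above:
# print(asymmetry_percent(z, p_signal))
```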
Characterisation of the random DFB Raman laser amplifier for transmission with OPC
The asymmetry of the signal power evolution in the transmission fibre using the random DFB Raman laser amplifier, for span lengths up to 120 km, is shown in Fig. 3(a) as a function of the FPP with optimal backward pumping. The "sweet spot" is found at 62 km, with the signal power asymmetry just below 3% for a symmetrical forward/backward pump power split. In this scheme, the same asymmetry level can be achieved using two different values of the FPP, which allows us to further study the design principle considering both ASE noise and nonlinearity compensation. The optimal forward/total pump power (FPP/TPP) ratio in each case as a function of the forward pump power is shown in Fig. 3(b). To visualise the signal power distribution at different lengths, example power evolution profiles for 62 km and 100 km spans are shown in Fig. 4. The signal power asymmetry as a function of the launch power of a single channel is shown in Fig. 5 (left). The asymmetry remains nearly constant for launch powers up to 5 dBm and increases steadily thereafter. To simulate the impact of pump depletion on the signal asymmetry in dense WDM (DWDM) transmission, the pump powers were optimised for a central channel at 1545 nm to give 0 dB net gain, and the number of 25 GHz-spaced WDM channels (0 dBm per channel) was incremented. The DWDM channel provisioning started in the centre of the C-band at 1545 nm, with subsequent channels added on either side of the band centre, building out towards both ends of the band. The results for the asymmetry in DWDM transmission with up to 42 channels assisted by the random DFB fibre laser amplifier are shown in Fig. 5 (right). The results in Fig. 5 show a strong tolerance of the asymmetry to increased launch power and pump depletion when using the random DFB Raman laser amplifier in OPC-assisted DWDM transmission.

Modeling the 7×15 Gbaud 16QAM transmission with an OPC

Fig. 6. Schematic design of an OPC system.
To investigate the impact of signal power asymmetry on the performance of a system employing mid-link OPC (Fig. 6), we simulated the transmission of 7×15 Gbaud 16QAM Nyquist-spaced WDM PDM signals. For each channel and polarisation, a random binary sequence of length 2^18 was first mapped onto the complex plane using 16QAM, oversampled by a factor of 20, and then passed through a Nyquist filter to generate a Nyquist-shaped signal. The filter length was 128 and the baud rate was 15 Gbaud. After polarisation combining, the WDM channels were multiplexed with a channel spacing equal to the baud rate. The transmission link consisted of 40 Raman loops with an OPC placed in the middle, after the 20th loop. The propagation of the signal in the fibre was simulated using the well-known split-step Fourier method, with a step size of ∼1 km, considering the simulated gain and noise profiles. At the receiver, the channel under test (the central one) was coherently detected, the received signal was resampled, and the Q² factor was then estimated through the EVM.
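A minimal sketch of that propagation core is given below, assuming a scalar (single-polarisation) field, illustrative SMF parameters, and the common Gaussian-noise approximation Q² = 1/EVM²; mid-link conjugation then amounts to taking the complex conjugate of the field after the 20th loop. None of the parameter values are those of the actual simulations.

```python
import numpy as np

def ssfm_span(a, dt_ps, length_km, dz=1.0, beta2=-21.7, gamma=1.3, alpha_db_km=0.0):
    """Symmetric split-step Fourier propagation of a complex envelope a(t).
    beta2 [ps^2/km], gamma [1/(W km)], alpha_db_km [dB/km]; alpha ~ 0 mimics
    ideal distributed amplification with 0 dB net span gain."""
    alpha = alpha_db_km * np.log(10.0) / 10.0
    w = 2.0 * np.pi * np.fft.fftfreq(a.size, d=dt_ps)          # rad/ps grid
    half_lin = np.exp((0.5j * beta2 * w**2 - 0.5 * alpha) * dz / 2.0)
    for _ in range(int(round(length_km / dz))):
        a = np.fft.ifft(np.fft.fft(a) * half_lin)              # linear half-step
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)         # Kerr nonlinear step
        a = np.fft.ifft(np.fft.fft(a) * half_lin)              # linear half-step
    return a

def q2_db_from_evm(tx, rx):
    """Q^2 factor (dB) from the error vector magnitude of recovered symbols."""
    evm = np.sqrt(np.mean(np.abs(rx - tx)**2) / np.mean(np.abs(tx)**2))
    return -20.0 * np.log10(evm)

# Link skeleton: 40 spans with ideal mid-link conjugation after span 20
# for span in range(40):
#     a = ssfm_span(a, dt_ps, 62.0)
#     if span == 19:
#         a = np.conj(a)
```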
Simulation results and discussion
We simulated the performance of the OPC-assisted system (Fig. 6) with the random DFB amplifier for all pump power split ratios at 62 km. To show the true impact of the asymmetry on the OPC system, we considered both the case with a fixed noise power (the worst-OSNR case, i.e. backward pumping only, Fig. 7(a)) and the case with the actual noise power of each configuration [Fig. 7(b)]. The pump power ratio required for the optimum signal power asymmetry in the 62 km link (Fig. 3) perfectly matches that giving the best Q-factor performance of the investigated OPC-assisted system, namely 1.2 W for both the forward and the backward pump. The optimum Q-factor as a function of the FPP (with the BPP adjusted in simulation to give 0 dB net gain) is shown in Fig. 8. When the noise is fixed, the optimum Q-factor varies by 5 dB, showing clearly that the asymmetry of the signal power evolution has a significant impact on the performance of an OPC-assisted system. In the case of the actual noise power, the optimum asymmetry level offers an additional 3 dB performance gain in comparison with the backward-pumping-only case, indicating the importance of the optimisation task performed in this work.
Conclusion
We have evaluated the impact of signal power asymmetry on transmission performance in Raman-amplified systems with mid-link OPC. We have shown that the random DFB Raman laser amplifier is the most suitable solution for OPC-assisted WDM systems using span lengths between 60 and 100 km. Through simulations of 7×15 Gbaud 16QAM Nyquist-spaced WDM PDM signals, we have verified that minimising the asymmetry down to 3% over a 62 km span leads to greatly improved transmission performance, improving the Q-factor by 5 dB.
Fig. 2.
Fig. 2. Lowest signal power asymmetry for a given length and amplification setup. Insets show the corresponding best OSNR (a) and the accumulated residual phase shift (b).
Fig. 3.
Fig. 3. Signal power asymmetry as a function of FPP for different span lengths (a) and the optimal forward/total pump power (FPP/TPP) ratio as a function of FPP (b).
Fig. 4.
Fig. 4. Power evolution profiles for configurations with minimal power asymmetry corresponding to 62 km (left) and 100 km (right) periodic spans.
Fig. 5.
Fig. 5. Signal power asymmetry as a function of the launch power of a single channel (left) and of the number of WDM channels in a 62 km span (right).
Fig. 7.
Fig. 7. Q-factor vs. launch power with the fixed noise based on the backward-pumping-only configuration (left) and with the actual noise (right). The backward pump power was adjusted in simulation to give 0 dB net gain.
Fig. 8.
Fig. 8. Optimum Q-factor at different forward pump powers in the 62 km link.
"Engineering",
"Physics"
] |
The Credibility of the Catholic Church as Public Actor
This article assumes that there is a profound crisis of credibility in the Catholic Church today. This is distinct from the issue of the credibility of Christian faith or the credibility of theism, for many who believe, indeed many Catholics, are affected by this sense that the Church, as a public actor, lacks credibility. Moreover, while it would be a mistake to seek the roots of this lack of credibility within general appeals to "modern unbelief," so it would also be a mistake to imagine that it is purely a matter of "image" or a direct result of the revelations about clerical child-abuse and its cover-up. It argues that modern society has evolved, through painful experience, a healthy scepticism about large organisations and with this has developed a set of social values (e.g. mutual responsibility and transparency) that are at odds with many of the values (e.g. hierarchy) that the Church has inherited from its past. Far from seeing these developments as part of a pathology of modernity, they can be seen as the work of the Spirit and a challenge to the Church to embrace new ways of being a witness to the truth and new ways of embodying the Christ in its living.
The slaughter, then unparalleled, of the First World War introduced into our consciousness a scepticism of all large claims made by big organisations calling for our loyalty. The parents who erected a modest plaque to their son's memory "who died on the field of honour in France" are, to us, speaking in a foreign, ancient and discredited language. The legacy of that war is a healthy suspicion not just of nationalism and other cults of the state, but of large organisations that claim to work for our good, because we know that "he did for them both by his plan of attack." Any international organisation, proud of its high level of internal cohesion and communications, such as the Catholic Church, cannot hope to be immune from this suspicion.
The twentieth century was also the century of the great "-isms", each promising its own salvation and leaving in its wake a trail of destruction. Large-scale rhetoric, all-embracing claims, and groups with vast networks to communicate their visions were matched by minimal delivery and deliberately inflicted misery. Once again, it was those who had not committed themselves to these great systems who eventually picked up the pieces. In the aftermath, we have a hard-won awareness that the big organisations that we do need should be built from the bottom up, by consensus, and in answer to specific needs. The U.N. is not the Congress of Vienna, nor is the W.T.O. a reformulation of imperially driven "Free Trade". Moreover, amid the misery of those systems have emerged new visions of humanity - almost uniquely in history without explicit appeal to religious authority - such as the Universal Declaration of Human Rights, which mark a real achievement in creating a vision, focussed on the small and vulnerable, in the aftermath of the megalith-scale approach to humanity that can be found in the actions of Hitler, Stalin, Mao Tse Tung, Pol Pot, or any number of others who have sought to lead their people from the front. Styles of leadership, monarchical in shape, that were unquestioned for centuries are now seen to be abusive of power even when there is no specific event of abuse. Appeals to divine approbation for any such system, as the Church does claim, merely remind us that it was not so long ago that every potentate claimed such mystical approval; but when we see D.G. on our coinage, do we still think that a king or queen rules by God's favour?
Religion, also, has produced some shocks. It is a matter of debate whether the First World War created a crisis of faith or merely accelerated a trend whose roots lay in the breakdown of rural society and in industrialisation. But clearly the contempt that resulted for all the churches, in every country, from blessing the colours - or, as happened in France, where the Church saw supporting the Third Republic as a way of gaining post-war credit - has created for many a suspicion at both the large and the small scale. It seemed as if all the "big boys" wanted to play together at the expense of actual people. Thrones, states, rulers, army officers or police will never reject support from any source, and religion was fulfilling its natural civic function when it supported them; but as they became discredited, it seemed ever more incongruous that bishops and church institutions had been among their most ardent and vocal supporters. One can always point to the exceptions; but we have a taste deep in our mouths that power supports power at our expense. In the light of bitter experience, we can be thankful for a little scepticism before any religion's demands on our loyalty.
One religious shock is unique in that it did not happen in the exotic east, but in one of the homelands of our culture: the Holocaust. We do not simply do theology in the aftermath of Auschwitz, but if we engage in any way with religion in its aftermath without reflecting upon its implications, we are not functioning as learning, conscious animals. Old loyalties and enmities had often been allowed to fester for centuries without question. Now one of the oldest of those enmities took on such a new and virulent form that it became destructive as never before. Defenders of the churches can raise a mighty defensive artillery at this point to exonerate Christianity in the face of Nazism, but a more widespread and diffuse truth remains: instances of religious hatred abounded and can still abound. Indeed, the new directions taken at Vatican II towards other churches, towards Judaism, and other religions are inverse testimony to that dark inheritance. That legacy is also part of our historical context: religious structures that are perceived to be monolithic are suspect; and absolutist claims for particular religions or systems are now held at arm's length. Such claims have led us astray before, and we do not want to repeat mistakes.
Closer to us in time are all the "-gates". Historians have always taken it for granted that organisations operate to protect themselves, preserve and extend their power in decision-making, and seek to cover their tracks while burying their mistakes. Young historians are trained to sift archives, to deal with the complexities of diplomatics, and to employ that ancient search-engine: cui bono fuisset? ("who would have benefited?"). But these skills of suspecting the motives of the mighty, which have so often brought opprobrium on historians from churchmen and theologians, are now far more diffuse in society. When Mandy Rice-Davies said in exasperation "He would say that, wouldn't he?" she made a point that few at that time had seemingly thought about. Today, we simply assume that when a banker or someone similar is being questioned, many of his answers will fall within the category of self- or group-protection. With the Watergate cover-up, whatever was left of the sense of the awe of majesty - that "they" had a better understanding and better standards - was shattered. The Watergate break-in was not as shocking as the covering-up after the fact. A new ethical dimension entered our general consciousness: transparency. For anyone in any form of authority or leadership, this is as much a fundamental ethical demand as honesty with the accounts or care of life and limb. With increasing force, the need for transparency has moved through our culture; but for those who reject it there is the horrid fact that almost every week we have another scandal where the lack of transparency is seen as part of the problem.
If transparency is a virtue called into being by our cultural matrix, surely people need not extend it into the sphere of religion. Surely they, as religious people, could stand apart from power as the guardians of an ancient or spiritual dimension where such demands are unnecessary? But drip by drip, with the church fighting a fierce political and legal rearguard action, the church has been found to be involved in cover-up after cover-up, in country after country, and this has been found to be happening as far back as the investigators have chosen or been able to go. The child sex-abuse scandals have not merely shattered trust, but their cover-ups, combined with ingrained habits of formal secrecy and clerical esprit de corps, have produced a situation where the church is seen to be in as great a need of an infusion of transparency as any other power-structure. Claims that the church is not a power structure but a "mystery" are not only a failure to understand the problem, but are seen as proof of the problem and of there being much to hide.
But the danger in any study of the question of credibility today is that it focuses simply on child abuse and its cover-ups. Such a concentration, explicable in the moment in which we live, fails to see that every organisation that claims to teach the truth, show people how to live, and be a political actor in society faces challenges to its credibility that have been building up over the past century. While there have been challenges to credibility before, notably of the western church in the aftermath of the Black Death, and of the Roman Church in the sixteenth century, this is something new. It is a part of modernity. The crisis of credibility should be embraced as an invitation to grow in our awareness of who we are and what we have to offer as the People of God. But alas, I suspect that for many Catholics it is an unwelcome intrusion - modernity is itself suspect - or it is simply ignored as one of those questions that seem just "too big"! I want to try to tie the question down using four overlapping frames that may help us to see the way the credibility gap enters into our discourse and action at every level.
Frame I: Dissonance
There is no better introduction to dissonance than George Orwell's novel Nineteen Eighty-Four, where institutions and even language have become so corrupted that they cannot be trusted. The "Ministry of Truth" deals in propaganda and censorship, and "peace" means war; "newspeak", the language of that unpleasant land, is a series of misdirections. The link between what is signed - but still understood - and what is meant is broken; communication is no longer credible or a vehicle of trust. One can only preserve one's integrity - or seek a human integrity - by subversion, because there is no credibility within the whole system of language and the interrelationships it describes. "Newspeak" has become a synonym for the language used by power-organisations when presenting themselves in a good light. So we have "Whitehall-speak" when "restructuring" means redundancies; "army-speak" when we hear of "collateral damage"; and there is also "Vatican-speak" when "authentic" tradition means the Vatican's own understanding of tradition. Modernity has become suspicious of such misuse of language - and rightly so - and so it is incumbent on religious leaders (as a religious value) to communicate in a way that promotes credible communication as a value in itself.
However, "dissonance" is also a key to other aspects of the crisis of credibility. If a lie is a breakdown between the linguistic and the experiential worlds that is made deliberately, then incredibility occurs when the linguistic and experiential levels so diverge or come into conflict that I can no longer know whether or not to take the statements seriously -and so I must withdraw my trust. Sadly, it is easy to find dissonance both in language and in practice in the church. In language we use warm and woolly words to create a sacral veneer when we are simply keeping an organisation running -and which as a human organisation is subject to the same stresses and problems as any other. One example: parish clergy do not like seeing their role defined as functionally equivalent in terms of duties or skills to the manager of small branch franchise selling fuel, groceries, or fast-food -and indeed there is far more to being a presbyter than being a manager. But failure to acknowledge that, within our present system, many of the demands made upon a cleric are those of a small-scale manager has been the source of confusion and misery. It often means that management is very badly done by people who resent doing it and who are inadequately trained and supported. We provide a seminary education that is not suited to the tasks that will be encountered, and we create expectations that cannot be met. Often the result is that not only is the job not done, but there is a personal cost to the men involved that has created a failure rate that would be not be tolerated in any other organisation. Yet the language used is that of "evangelisation", "service", "ministry", and (at the personal level) "vocation", while the reality is that of devoting time to peoplemanagement, small-scale human-resources dispute-resolution, plant maintenance, and oversight of a budget, with the "sector distinctive" skills of leitourgos, didaskalos being just a part of the mix. "But surely", comes the reply, "one must see the larger picture". That, however, is the nub of the problem of credibility: why must one see beyond the language used, why not seek to match the language to the reality at the outset?
Long-lived institutions also suffer from factual dissonance. My favourite example is that august institution the Board of Trade, whose President was a member of the Cabinet and which has had such worthy incumbents as Sir Winston Churchill. To celebrate its bicentenary in 1986 there was, as one would expect, a special meeting of the Board. What you might not know is that that meeting was the first to have taken place since the nineteenth century! Situations change, growth and decay occur, and often we are left with systems which may once have been useful, but which over time have so embedded themselves in our practice that they are now quite dissonant with what we realise we should be doing and proclaiming as a church. Another example: I hope it is agreed that it is an oxymoron to talk about "buying grace" and, equally, that a do ut des ("I give so that you might give") vision of religion, however deeply it might be embedded in human religious consciousness, is unworthy of both God's transcendence and our understanding of covenant. Equally, the notions that the Eucharist is a spiritual commodity, a quantum of force, or the specific work of the presbyter rather than of the assembly are accepted as perverse teaching. Now look at what is directly implied in the practice of Mass stipends. Note: I did not ask you to look at the "theological" explanation or defence of this practice, but at what people are doing and saying they are doing in carrying on the practice, and at what signals that practice is sending out to others as the practice of the church. The fact that the Code of Canon Law warns about the danger that it could be seen as a commercial operation is proof that that is exactly how it is seen.
It is indeed hard to eradicate such a practice - and there is the ever-present prescription for inactivity in the form "do not disturb the simple faith of the people" - but the accumulation of such anomalies between what-is-formally-taught and what-is-perceived-from-what-is-done has the effect of undermining the whole. This phenomenon affects all organisations with a history, hence my reference to the Board of Trade, but in the church's case the loss of credibility touches not just the organisation but faith itself.
If this last point seems extreme and factual dissonance between what we consciously proclaim and the signals we transmit is seen as just a matter of some peripheral practices, then consider this example which suggests we have deeper problems with our historical inheritance. Consider this sentence:
Illi viri sacerdotes novi testamenti sunt, et in mysterio eucharistico officiant. ("Those men are priests of the new testament, and they officiate at the eucharistic mystery.")
Is there anything wrong with this statement? Surely one needs priestly orders to officiate at Mass? But now consider this argument from a Lutheran exegete: the papal injunction against the ordination of women is in keeping with the deepest beliefs of the Catholic Church on ministry, because Catholics do not model their priesthood on the New Testament models of ministry but on a fulfilled variant of the Levitical priesthood; and since only intact males could enter the sanctuary of the temple, so too Catholics cannot ordain women! Most Catholic theologians would reject this, and not a few would see it as a parody. We do profess that there is only one sacerdos/hiereus in the new covenant, and link that to a different early theology that sees the whole people as genus electum, regale sacerdotium ("a chosen race, a royal priesthood", 1 Peter 2:9), in which "priests" are presbuteroi. But my Lutheran friend is a modern scholar studying what religious language means by looking, not to a dictionary or to what its users claim it means, but to how it expresses meaning in usage and action. He looks at the structures surrounding ministry and finds (1) a cult of sacred persons: intermediaries who enter the temple where ordinary believers cannot; (2) a cult of purity, where marriage is seen as making someone incapable of ordination; and (3) a cult of deference that assumes a contact with the holy not granted to others. All these aspects of the "priesthood" (we might note that while etymologically "priest" comes from presbuteros, functionally it is equivalent to hiereus/sacerdos) can be traced to the use of the Old Testament to create a social character for the ordo (in the Roman imperial sense, a mix of our categories of "class" and social "rank") in the fourth century (whose taxing units, paruchiae, we still use to organise our organisation and to pay those in the sacred ordo who live there). Hence Christian ministers became the equivalent of the flamines and pontifices of the Roman civic cult (this was explicitly stated at the Council of Elvira), and Levitical trappings filled out the picture. The cost was that baptism was no longer seen as making us all one in Christ, because the Christian body now had to be layered in parallel with late Roman society. Moreover, this is still the case: clergy still dress distinctively and use titles such as "father" and "reverend" that imply the subordination of others within Christ. The bishop of Rome still uses the highest of the pagan sacerdotal titles, "pontifex maximus", and my Lutheran friend recently heard a choir in Germany practising the "Ecce sacerdos magnus" to welcome a new bishop. So whatever we say about ourselves and our ministry, there is a massive dissonance with what is seen, perceived, and lived. We believe one thing in our textbooks and another in our practice.
Tackling dissonance on this scale, which would merely bring our practice into line with our teaching, would entail a level of disruption few existing presbuteroi would contemplate with relish, while it would be most off-putting to the clerical tastes of many contemporary seminarians. When we think of the issue of credibility and dissonance, it is disruption on this scale that we must face. However, one might reply: we have done all these things for so long, we are so comfortable in our old shoes, and this is a tradition sanctioned by holy wisdom and saintly practice! Such arguments for the defence would be telling if it were not for the fact that we do not relate in faith to the tradition, but to the encounter with the living God within a tradition.
Even when we recognise the complex nature of what we mean by "tradition" and our relationship to it, most of us slip, de facto, into a simpler view of tradition as being what I saw when I was a child in my home parish - this is, after all, the definition used in advertising. But this notion of the traditional is a false friend, for what we remember from our childhood is not what actually happened in our home parish, but the salient past that is present and comforting to us now. Memory is an activity serving a vision of the present moving into the future, not a recording (as on videotape) of the past. It is this present-dimension of memory recalling the salient past that is the epistemological flaw in the so-called "hermeneutic of continuity", which is actually only a recipe for further dissonance. The better course is to think of tradition in the words of Pablo Picasso: "tradition is having a baby, not wearing your grandfather's hat".
Frame II: Mythic Fracture
Every religion, every great event in human culture, lives within a myth, a story that is itself larger and more all-embracing than all the empirical details of the system, and within that myth all the details are simply "facts". The Church of the west, and its later form as Catholicism, is one such myth, and much of its mythic structure has been in place since the Carolingians. This myth is how we represent the church to ourselves. A variant of it is the face we present to others. The myth is the matrix within which all the other deeds, decisions, statements, and attitudes cohere and make sense, with the effect that it sometimes makes death appear preferable to its abandonment. A myth is a far more all-embracing phenomenon than a paradigm (in the sense of a scientific paradigm) in that a myth is far less open to empirical description and is usually seen by those who live within it as coterminous with reality itself.
Here I want to look at just a couple of aspects of this myth, and to note that while many still behave as though all were living within the myth, for others the myth has ceased to have any value. One of the core elements of our myth is that we are a whole and that, as the Catholic Church, we embrace everything, so that those who do not accept this identity are "our separated brethren" or simply "schismatics" or worse. Those not with us lack something; and if they were to return to us they would somehow find themselves fulfilled. Within this myth, other Christians who are not linked to the Church of Rome are not "different Christians" or "other Christians", but can be defined as "non-Catholics", which relates to their ontological status within the People of God. This means that today when we seek ecumenical relations, particularly with the ancient churches of the east, we often end up in a corner: lovely words, yet nothing happens. The problem is that we want both to speak to those who think of themselves as belonging to distinct "churches" and to preserve our myth with its notion of our entirety. So while we desire to engage in dialogue, which assumes equality in the modern understanding of the term, we also want to make use of scholastic distinctions about the Church "subsisting" in the Catholic Church. What rarely dawns on Catholics, but is immediately obvious to a "non-Catholic", is that once one makes any such distinction, one has unwittingly reinstalled the distinction between church and schism, only in a more user-friendly form.
This notion of our integrity, so much part of our Catholic myth, has other dimensions. It means that we think of ourselves and preach that we are connected almost uniquely and wholly securely to our origins; we have not failed in anything! To quote a strap-line I saw on caféchurch in Nottingham recently: "what Jesus wanted is what we offer [!]" and we Catholics might add "and we have done so continuously in perfect continuity with him." Moreover, it is part of our faith that our encounter with the living Anointed One is identical with that of his first followers, but that does not mean that historically we form a simple chain. When this theological dimension of the myth encounters the "facts" within the myth -an encounter that takes place on the level of historical investigation -the whole myth is found wanting; and the Church's claims, in turn, lack credibility. This lack of credibility has been growing since the arrival in the mid-nineteenth century of critical scholarship on the scriptures and the history of beliefs, and its effects have been rippling out ever since undermining the myth of Catholic totality. Grudgingly the biblical dimension was accepted (though its implications are often simply ignored in preaching and non-academic teaching), but the theological dimensions have had hardly any impact within formal structures, and still face determined opposition. By contrast, those who are not Catholics look on our mythic claims with the curious look of amazement that would come over a Latin cleric visiting certain Greek monasteries and telling them he is allowed to receive communion from them, only to be told that he would be welcome to become a catechumen and begin his preparation for Holy Baptism.
This mythic fracture is not just a phenomenon among theologians. We live in a society that rejects sectarianism, officially endorses ecumenical endeavour, and in which most Catholics are married to those who do not share their faith. Ecumenism is not worked out in scholarly colloquia or elaborate meetings of church leaders, but at a million family breakfast tables with questions like "which church are you going to today?" or "where are the kids going?" or "why is it such a big deal anyway?" Such contact has often led to people wondering what the fuss is all about, and to practical solutions such as going to one church one week and to the other on the following week. These endlessly varied solutions, when brought before the official notice of the Catholic Church, are often looked upon as evidence of poor catechesis or a symptom of a great malaise such as "religious indifference". They are usually nothing of the sort. They are pragmatic and, we hope, successful solutions, entered into by sincere disciples of the Lord, in a world that is known to be more complex than that envisioned in the myth of Catholic totality. In the face of that more complex world the myth's hold over particular decisions is broken.
If a significant mythic fracture has occurred for the Catholic Church, how does that relate to the issue of its credibility? Once mythic fracture has occurred, anyone who continues acting within the myth gives a surreal quality to all their statements and actions. The action takes on the character of a curiosity, but in terms of actual living it is deemed irrelevant, the pursuit of a world that is past. In such a situation everything that is valued by "tradition", consciously invoked, becomes tinged with the label "old-fashioned" and the related concerns are thought to be academic, otiose, or downright silly. In such a climate, the church acting "in accord with its tradition" is discredited in the face of those who are just as serious in their discipleship. The church seems ensconced in a frilly fussiness about antiquated details, rather than seeing the new big picture.
There have been a succession of mythic fractures over the last three centuries - "the discarded image", as C.S. Lewis called it - and very often the church has seen itself as simply part of the ancien régime valiantly struggling to catch up. This phenomenon of being on the losing side in the succession of changes - all the various revolutions - which have transformed human societies over that period is probably due to a basic aspect of human religious structures. We tend to ritualise patterns of behaviour and then perpetuate them, giving them new meanings, long after the specific situations that generated those patterns have passed away. This means that as society changes, the basic religious response is a ritual pattern which perpetuates responses to past needs. Evolution is always taking place, but conscious demands for change, even ones focused on our basic insight, the Christ-event, go against the religious grain. One could illustrate this phenomenon from virtually any religion in any period, but a Catholic example is apposite. When I was a small boy, I stood one step below the praedella waiting to serve "last cruets" at Mass, and watched the priest methodically sweep the corporal for crumbs; the more scrupulous did this with great care in the manner prescribed by the rubricians. I recall that as I stood there I often wondered why he did not just take more care not to "make crumbs" (as my mother often instructed me at home) and save himself this bother. At this point you need to note (1) that the bread was broken over the paten in the 1570 rite, so no crumbs should have fallen beyond the paten, and (2) that one of the great boons of "altar bread" was that it was designed to be crumb free (so there should have been no crumbs to sweep). It took me many years to get to the bottom of this ritual: until the tenth century, when, for economic reasons, unleavened bread appeared in the west, a large loaf (much like that used in the Orthodox churches today) was used, and its breaking produced any amount of crumbs (hence the corporal to collect them), and these had to be swept up. But we had grown so used to sweeping up the crumbs that we continued to do it for almost a thousand years before we realised there were no longer any crumbs to sweep! This tendency to perpetuate the past within religion has two additional effects on the issue of credibility. First, we become immune to our own quaintness and to people's reactions to our behaviour; incomprehension becomes their fault and the result of their unwillingness to understand. This reaction is the common human response to difference, but it is incompatible with a commitment to evangelization, which is about making oneself understood. Secondly, having retreated more or less continuously since the Enlightenment, a certain complex has evolved within church circles that if we are out of step with modernity, then we are "counter-cultural" - and most probably right! This attitude is a variant of the myth of a past Golden Age, and it fails theologically on three basic issues. First, Christian perfection is not to be identified with any historical moment in the Church's life but is to be seen as our straining towards the eschatological. Second, our belief in the Holy Spirit means that truth is to be found in every situation, and our challenge is to integrate it into our belief. We are, as part of our vocation as a sacerdotal people, to search out the blessings of modernity and relate them to their origin and end.
Thirdly, over that period there have been many great opportunities to be counter-cultural such as opposing tyranny (a very mixed record once the rights of the church were not infringed), opposing slavery (we did not consider it wrong until it had all but disappeared in Christian states), opposing abusive societies (the church has a bad habit of siding with generals and landowners), opposing warfare as a way of solving political disputes (again a patchy record), opposing the oppression of women (patchy to say the least). Holding up the gospel to criticise society is a far more dangerous business than defending the "values" that a previous generation accepted without question.
Some might ask: can we not re-invent the myth? There have been valiant, indeed heroic, efforts in the past: the attempt to recreate a medieval landscape in the church in the nineteenth century, or the Thomist revival called for by Leo XIII, whose effects dominated Catholic theology for much of the twentieth century. But the religious myth in which modern people live is not something that can simply be created or re-created. Indeed, the myth in which we operate today is almost invisible to us, but we must be credible within that myth. The very fact that we can describe in this way the fractured myths that still animate much official Catholic thinking is evidence that they embody a way of viewing our situation that no longer evokes credibility.
We should not imagine that it would be easy to move out of the myth of being the integral and direct successor to the perfect apostolic church. Let us consider another myth that has arisen since the sixteenth century: the myth of the perfect all-embracing book. In that myth, what is Christian is what-is-in-the-book-as-I-read-it, accessible to me directly, and that which is not-to-be-found-there can be dismissed as non-Christian or unimportant. Both this myth and the Catholic myth rest on a belief that all that is needed is to be found in a perfect origin, and that a perfect continuity with that origin is possible. We recognise the myth of the perfect book as a private world: the world of biblical fundamentalism; and we find it so incredible that it is difficult to enter into dialogue with it intellectually. It is not in our world. If we were asked to demonstrate what we meant by this, we would make an appeal to historical scholarship. "Their" view of what the book is, what it says, how it says it, what we say about the same situations, and how we look upon reading ancient religious texts, all indicate that "their" approach does not do justice to the facts as we now study them and now relate them to other fields of human knowledge. Now look at the scholarship on the early church of the last hundred and forty years - and note that this is an appeal to empirical history - and ask whether it supports the classic image of the unified apostolic church. That image is as threadbare historically as the myth of the perfect book.
Frame III: Vectoral Accountability
One major obstacle to the church's credibility is the lack of responsibility and accountability that has emerged in the investigations carried out over recent years once the abuse of children became known. Judges have expressed shock that such complex organisations were put into such incompetent managerial hands, and at how patterns of secrecy had become so in-built that those who operated the systems did not even see how they transgressed the rules of natural justice.
What is all the more surprising is that the Catholic Church would seem to have a thoroughgoing internal system of accountability. The cleric promises obedience at ordination to his ordinary. The bishop is appointed after a careful scrutiny by Rome and is held to account. The system is regulated by law. Rome carries out investigations of theologians and institutions, and is not slow to claim powers over speech and writing that would be found abusive of human rights in most developed countries today. Most clerics live with one eye on their bishop -and bury their frustrations and resentments; while virtually all bishops in the modern world work with the awareness of constant surveillance from Rome, and act so that it can be clearly seen by that surveillance that they are not "doing their own thing". So how did it happen in such an environment that child abuse could grow to be such a problem without it even being mentioned?
Functionally, most clergy think of the Church as hierarchical, in the strict sense. To imagine the church as hierarchical is not simply a matter of a pyramid-shaped flowchart or a chain-of-command structure. It is to think of it in the precise terms of Pseudo-Denis, who introduced the myth of hierarchy into Christianity as he saw it manifested in the social structures of those who formed its ruling class (ordo). Hierarchy is the notion that holiness, grace in western terms, power, authority, authorization, and authentication flow from "the higher" to "the lower". So, how do I know that my little gathering in a village is part of the divine redemptive plan? Because it has been authenticated from above, and that authentication can be followed upwards, by stages, to heaven itself. In western terms this means that divine authentication comes through Christ's vicar: the Bishop of Rome. But while each use of power or authority can have its lineage traced back to a higher source, the authority and power flow in just one direction: downwards. This belief, rarely stated in its full form, manifests itself in virtually every aspect of Church life. Let us imagine a large gathering of Catholics who cannot find a priest yet want to fulfil their Sunday obligation: can they authorise one of the men in their group to preside over their Eucharistic meal? No - absolutely not - for that authority must come from one who has it, and it cannot arise from below no matter how great the need (otherwise, in Catholic eyes, many non-Catholic ministers would have valid orders). So, while we may talk about ministry as "a response to need", no need, however great, can generate ministry. Let us imagine a group of Catholic students in a university wanting to set up a discussion group. Unlike their fellow Christians, who will just do it, they will seek approbation from the Catholic chaplain, who, in turn, will invariably see a need for an oversight role such that it ceases to be a genuinely student-led group! Imagine someone asking prayers for the sick: will they imagine their prayer is more likely to be heard if they mention it themselves when they stand making priestly intercession at the Prayer of the Faithful, or if an ordained minister "offers Mass" for the intention? Right through Catholic thinking and behaviour, the notion of hierarchy is part of our mythic world.
Because authority flows in just one direction in that world, responsibility also flows in one direction. The priest or the bishop is responsible to higher authority for their people; they are not responsible to their people. If it were otherwise, there would have to be structures such as one finds in other Christian congregations to hold ministers to account. Catholics can and do complain about their priests to their bishops, but this is noise in the system. It is not part of a minister's role to be responsible to those to whom he ministers - he need only answer to the bishop for how he has discharged the duty entrusted to him by the bishop. In such a world, so long as "duties-upward" were performed, men could rest content that they had fulfilled their duty, but they failed to see that they had other duties to those who looked up to them as leaders. This may seem a hard judgement, but as I read the various Irish reports recently I was struck by the frequency with which contrite clergy noted that they had carried out what was required in Canon Law while not seeming to notice more basic, uncodified, demands. The pervasiveness of hierarchia in Catholic thinking has produced a culture of vectoral accountability: one is answerable and responsible in one direction only, namely towards the perceived source of one's authority.
It should not be imagined that this notion of hierarchy, with its concomitant view of a one-way responsibility to the authorising source, is somehow a preserve of the Catholic Church. Its functional origins lie deep in the mythic past of sacral kingship (there is a curious counterblast to the notion in Judges and Samuel). It was given its legal form in Roman jurisprudence. It is reflected in Acts when Paul's appeal to the imperial court superseded the authority of the lower court; and it was given its intellectual form by the Neo-Platonists, particularly Iamblichus and Plotinus, whence it entered theology, both east and west, through Pseudo-Denis and the Liber de causis. It fitted the world of the emperor in Constantinople and of the Holy Roman Emperor in the west, and reached its apogee in Boniface VIII's Unam sanctam. But, some churches aside, recent centuries have not been kind to the notion of hierarchy. The modern view of authority, responsibility, and two-way accountability can be seen in the opening words of the U.S. Constitution: the people "ordain and establish" their own government. This classic piece of eighteenth-century Enlightenment thought echoes through every modern piece of constitutional thinking. It forms part of the myth of every statement of rights, and it is fundamental to the use of power in our societies. We can even see a religious formulation of it in the preamble to the Irish constitution: "all authority" comes from God, but it is the Irish people who "adopt, enact, and give to ourselves this constitution." In a world where the hierarchical model has been both abandoned and formally rejected, a group of people within it who structure their lives hierarchically is going to cause problems. That group's decision-making is going to seem incredible in terms of the interpersonal ethics upon which modern societies are based. The reaction of the Irish tribunals to the values used in church decision-making in several Irish dioceses is an explicit instance of this. But there are detailed lessons to be learned here.
First, the distance is not one of faith or even of ethics (otherwise church leaders would not have felt embarrassed by the events or made attempts to express sympathy with victims) but one of social culture. As all missionaries down the centuries have known: the church must be credible in whatever social culture it finds itself, or its message will fall by the wayside.
Second, there is a rogue intellectual passion that can often be found in religiously fervent people, which is attracted to being seen as incredible, alternative, and following its own drumbeat. Such people have their own peculiar needs met by the church's faith being sidelined, and they will be the first to speak, write, and blog in favour of the institution, and to encourage the leaders to be even more strident in "not being conformed to the world". But the calls of these vociferous, articulate, and very engaged combatants should be heard as those of the sirens! We must never forget the basic impulse of all our proclamation: that there is something in the message of the Christ that fulfils human longing, Augustine's cor inquietum, and that we find the Spirit's positive stirrings in every human culture. While one cannot say that every value in a human society is a stirring of the Spirit, it is certainly false to assume that a clash of social cultures is an indicator that whatever is at odds with the church is wrong.
Third, the longer any pattern of behaviour has been operative in the church, the more it has embedded itself in law, community rituals, devotions, liturgy, models of behaviour, and formal theology, often to the extent that it appears an essential part of "the deposit of faith". It is the duty of theologians to help disentangle this complex and propose better forms of interaction with the societies in which we live, and it is the duty of historical theologians to remind the church that most of this complex arose in very specific cultural circumstances and that there was a time when these ideas were not known. While it is easy to discover alternative models of responsibility in Christian experience, we should not imagine that introducing a notion of responsibility to the church (i.e. the people in real, local communities) is going to be easy! Consider this case. It is the duty of the bishop to regulate the celebration of the Eucharist in his diocese. Any male Christian with the ability to preside can be ordained. Whereas in the past the level of general education and the legal tasks imposed on clergy within late medieval and renaissance societies meant they needed a long training time, today most men could be given the key presiding skills through part-time and distance-learning techniques, as is the case with deacons. So, any bishop who sees himself as serving the needs of the communities of his diocese will have no shortage of presbyters, as he will, like Paul on his way from Lystra to Antioch appointing presbyters in every church (Acts 14:21-3), have ordained at least two (so that one can be on holiday) in each community! Ah! But it's not so simple! Such an action would not be authorized! The actual needs of Christians are an alternative source of authorization that is often simply invisible to us.
If in actual practice changing to a different model of responsibility is too difficult for most bishops, that does not mean that other consequences of the existing model of authority are not undermining the credibility of the church's structures. Many pastors, out of a genuine sense of concern, use the language of "being there to serve their people", but these statements are usually so hedged around with qualifications that they appear risible. Rather than live with this dissonance between language and agency, it would be better if all concerned noted the distinction between "responsibility for" and "responsibility towards" on the one hand and "accountability to" on the other. Modern ideals of society assume that there is an identity of object between these; paternalistic societies assume the distinction. While there is no absolute need for the Catholic Church to adhere to the paternalistic model (historians, after all, can trace its introduction and growth), most hierarchs - bishops, presbyters, and deacons - see any move from that model from a perspective within the model's own myth and, as such, see any departure from it as a betrayal of the very authority from which they see themselves as having authorisation. That being so, they should abandon the use of any language that, given today's cultural assumptions, presents them as "servants of" or "seeking to fulfil the needs of" people. They may believe this (just as many kings, prelates, and generals have firmly believed that they were acting for the good of "their" people), but they should not state it. Stating it appears to be little more than repeating the familiar utterances of totalitarian regimes, utterances which have made modern people suspicious. When church leaders are suspected of the pious equivalent of "newspeak", their credibility is affected both within and without the church.
Frame IV: Transparency
The other major implication for church leaders who use a vectoral model of responsibility is that they are not likely to give sufficient attention to one of the most important developments in our political culture in the last half century: transparency. "Transparency" is a concept that arose in East Germany during the Soviet era as a means of attacking the Federal Republic, but it rapidly spread throughout western culture within societies that were throwing off the vestiges of the paternalistic state. It is best to see it as the counterpart of a model of society where authority resides with those being governed. If we are the source of authority, we should be able to see how that authority is being exercised by those to whom it is delegated, and in that act of observation we have a check against the abuse of authority which bitter experience has told us can all too often occur. Indeed, today most of us give this dictum the status of an axiom: "power tends to corrupt, and absolute power corrupts absolutely." As such, transparency in our world is a corporate virtue, and to act in a non-transparent manner is to act unethically.
If one conceives of church leadership as a network of interlocking ministries which differ in function within the community of the baptised, then transparency is not a problem. Indeed, the structures within the Didache for identifying corrupt ministers are one of the earliest examples within the historical record for the use of transparency within a society. But transparency is not a notion with a long history within western societies or law, and in a paternalistic or hierarchical society it can seem little more than a needless addition, or a useful add-on for those who think the crisis of credibility is simply a crisis of the church's public image. However, we must not be lulled into such obscurantism. Transparency is now a demand in every area of social interaction and consequently will also be an expectation within the church. Moreover, so long as transparency is expected, any lack of transparency in the church will result in a lack of credibility.
Would transparency be all that difficult? Could not the proceedings of meetings of Episcopal Conferences be published? The American bishops, for example, publish far more than happens elsewhere. Transparency is more than a freedom of information initiative: it is a cultural value. It would mean that there were structures in place for the selection of deacons and presbyters within a community. It would mean that the selection of bishops would take place openly among those among whom the bishop will minister. It would mean that specialist ministers such as hospital chaplains, seminary teachers, and counsellors would have to show they have the same personal skills and formal education that are normally expected within society at large. It would require that ministry was seen to be a community responsibility and not an individual's belief in his or her own vocation.
What if we genuinely believed that transparency is a truly human value? Perhaps the church's wholehearted embrace of it would itself be a beacon to governments, banking, and all those other areas of human endeavour where we seek it. The church's deep embrace of transparency would certainly enhance its credibility when it seeks to preach to money and raw power; indeed, it would become an aspect of the testimony that we are called to bear towards the truth (see Jn 18:37). It would also help us create an ecclesial practice that reflects far more clearly the ecclesiology that we increasingly embrace theoretically. In a world where people expect that those who serve them (medics, politicians, police) are accountable to them, we need to confront this choice. On the one hand, we are confronted with an ecclesiology and praxis that sees the actual community in which people live, interact, and worship as akin to the local outlet of a multinational; on the other, a real community that believes it has gathered in Christ and finds from among itself those who will preside, those who can serve its various needs, help it with its outreach to the needy and the world, and give it the ability to reflect the Christ in itself and its collective ministry, and whose networking with other communities forms the catholica.
Conclusion
Our four elements of incredibility (dissonance, mythic crisis, vectoral accountability, and transparency deficit) are interrelated. None of them is new as a problem for the church, but it is their combination, in a culture that is suspicious of large organisations claiming authority, that makes our situation unique in Christian history. Hence, attending in depth to our incredibility - which is distinct from the credibility of faith - is a matter of the greatest urgency. Let me conclude with two items of "thick description" that illustrate the profundity of the crisis. On a cold Saturday in February this year I found myself called to the bedside of an old Irish gentleman who had suffered a stroke. In every way he was typical: a great-grandfather full of honour and hard work, coming from stock that has held the faith since the fifth century "in spite of dungeon, fire and sword". We spent a long time in conversation and agreed to meet again.
"Philosophy"
] |
Using Crowdsourcing Based Health-map System in Bangladesh: Prospects, Challenges, and Development of an Effective Model
Despite a countrywide mobile network and a massive number of subscribers, Bangladesh has so far failed to exploit the full potential of e-health services. With the aim of analyzing the prospects and challenges of using a crowdsourcing-based health-map system in Bangladesh, this study adopts a mixed-method approach and collects data using a structured questionnaire, key informant interviews, in-depth interviews, informal interviews, focus group discussions, case studies, etc. The results of this research show that, except for a small proportion (11%), the majority (89%) of the respondents do not go to doctors for minor health-related problems. The major causes of not going to the doctor include overcrowding in hospitals and the high visiting charges of doctors. Television, the mobile phone, and the internet are the three main sources of health-related information. The lion's share (76%) of the respondents are interested in e-health services, but only a small section (22%) of the respondents have ever used an e-health service. Present mobile-phone-based e-health services have several problems, such as no preservation of patients' records, no scope for continuous contact with the same doctor, and a hectic procedure for reaching a doctor. This study proposes a crowdsourcing-based e-health service model, which will help to create a massive data set and assist both in treating patients and in predicting future outbreaks of health crises.
Introduction
Bangladesh is a developing country where over one third of the population lives in poverty and another one third lives just above the poverty level (The World Bank, 2010). Despite this backdrop, Bangladesh has achieved substantial progress in some sectors of health, such as the reduction of infant mortality, under-5 mortality, and maternal mortality (World Health Organization & Ministry of Health and Family Welfare, 2015). Trends in mortality and health-related indicators over the period 1990 to 2014, displayed in Table 1, also show sizeable progress in the health sector of the country. However, despite substantial progress in reducing child and infant mortality, Bangladesh still has poor prenatal and postpartum care, nutritional deficiencies, a high incidence of non-skilled birth attendant utilization, and the second highest maternal mortality and morbidity rates next to sub-Saharan Africa (WHO, 2011 in Walton and Schbley, 2013; Unicef, 2009). The country is also suffering from a striking shortage of physicians and skilled health attendants. For every 1000 people, there are only 0.5 doctors and 0.2 nurses (Ministry of Health and Family Welfare, 2015). There are approximately 0.58 health workers per 1000 residents, against the WHO-suggested level of 2.28 workers per 1000 persons (The World Health Report, 2006). The distribution of doctors is also geographically uneven and skewed away from rural areas, as doctors tend to live in metropolitan areas (WHO, 2011). Bangladesh has experienced a rapid growth in the number of mobile subscribers since the service's inception in 1993. The country has comprehensive coverage of its mobile network, and according to the World Bank collection of development indicators, the bulk of the population is covered by the mobile cellular network (Trading Economics, 2006). The mobile phone also serves as the major device for internet access. According to BTRC, at the end of the year 2015, the total number of internet connections in the country stood at 54.1 million, of which 51.5 million were on mobile phones (Islam, 2016). This well-structured mobile network and huge number of subscribers put Bangladesh in a favorable position for utilizing mobile-based health services. Although Bangladesh is favorably positioned, the country still lags far behind in utilizing this potential. Based on the administrative structure of the country, health service delivery is divided into six hierarchical segments: the ward at the bottom level, followed by the union, upazila (sub-district), district, divisional, and national levels. Details of each stratum are presented in Table 2.
Despite an extensive infrastructure, the capacity of the public health sector to provide quality health services for citizens is still feeble in Bangladesh (Table 3). Many sick people fail to obtain even the minimum treatment needed for their survival. Public health services in Bangladesh face a chronic shortage of manpower, resources, and technologies. Government financing of the health sector is still not satisfactory. In terms of health spending as a share of GDP, Bangladesh has remained one of the lowest countries in the WHO South-East Asian region, placed above only Myanmar and Indonesia (Ahmed et al., 2015). The number of hospital beds per patient indicates the availability of a hospital's resources to serve a patient. In Bangladesh, fewer than one hospital bed is available per 1000 population, a clear indication of resource constraints in delivering hospital services (WHO, 2012). The massive shortage of skilled human resources in the health sector of Bangladesh is also an acute problem. As identified by WHO, Bangladesh is one of the 57 countries with a critical shortage of health workforce (WHO, 2006). The country is running with an immense shortage of 60,000 doctors and 140,000 nurses (Mahmood, 2012). The ratio of the necessary health workforce for quality service is also sharply unbalanced. For instance, the ratio of physicians, nurses, and health technicians is 1:0.4:0.24, which represents a colossal disparity with the FAO-suggested ratio of 1:3:5 (Ahmed et al., 2015). Not only the number but also the distribution of the workforce is uneven and skewed towards urban areas, leaving rural facilities overburdened, understaffed, and insufficiently equipped (Ahmed et al., 2015). According to the World Bank (2012), public sector health facilities in Bangladesh are poorly equipped with medical devices, instruments, and supplies. More than half of the surveyed Maternal and Child Welfare Centers did not have child height measurement scales, many of the surveyed district hospitals lacked minor surgical tools, and about half of the surveyed community clinics did not have a blood pressure measuring tool or thermometers (World Bank, 2012).
Crowdsourcing in Risk Management: Evidence around the Globe
With tremendous attention from scholars and the recent progress of Web 2.0 technologies, crowdsourcing has become one of the emerging phenomena of the last decade. The idea of crowdsourcing was first introduced by Howe in a Wired Magazine article in June 2006 (Zhao and Zhu, 2014; Brabham et al., 2014). Crowdsourcing can be described as a distributed problem-solving model in which a crowd of no exact size can be involved in finding the solution to a complex problem (Chatzimilioudis et al., 2012). In a typical crowdsourcing process, an organization first identifies the tasks needed to solve a problem, and then the tasks are released to the crowd - an online community - for their contribution. Any member who is interested in the incentives or facilities offered by the organization can contribute on its behalf. It is a simple method with powerful impact, as it mobilizes expertise and competence, so a vast number of individuals can work individually or collaboratively to accomplish the tasks. After completion, the tasks are submitted to the organization, which can further assess the quality of the work done. Wikipedia is a classic example of the crowdsourcing process (Zhao and Zhu, 2014; Chatzimilioudis et al., 2012). In Bangladesh, there is no health-map platform for monitoring and reporting public health situations in different regions. For instance, when a massive number of people are affected by a contagious disease, people can hardly know how the disease is spreading, as there is no designated platform to gather adequate information on contagious diseases continuously and disclose it to the public in time. If individuals had access to information about an imminent disaster, they could take the necessary initiatives and preparations to minimize losses during the event. Such advance preparation can also safeguard a huge number of lives from a looming disaster and make it easier to handle a health crisis with the least effort. Adequate information can also assist health officials in developing mathematical models to predict the possibility of a future occurrence of a disaster. For example, prediction can estimate the quantity of supplies, succor, medicine, or vaccine that will be required in the near future to handle a crisis. Suppose the country needs a massive supply of medicine for an imminent crisis: the government could take the initiative to stock and distribute the medicine in advance and save the precious lives of its citizens.
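Purely as an illustration of the generic lifecycle described above (task release, crowd contribution, quality assessment), the following minimal Python sketch models the process; all class and function names are illustrative assumptions and do not refer to any existing platform.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A unit of work released by the organization to the crowd."""
    task_id: int
    description: str
    submissions: list = field(default_factory=list)

def contribute(task, member, answer):
    # Any interested crowd member may submit a solution.
    task.submissions.append({"member": member, "answer": answer})

def assess(task, accept):
    # The organization assesses quality and keeps only accepted work.
    return [s for s in task.submissions if accept(s["answer"])]

# Example: release a task, gather contributions, filter by quality.
task = Task(1, "Report the number of fever cases observed in your ward")
contribute(task, "volunteer_a", 12)
contribute(task, "volunteer_b", -3)  # an invalid report
accepted = assess(task, lambda a: isinstance(a, int) and a >= 0)
print(accepted)  # only volunteer_a's submission survives assessment
```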
Proper distribution of aid is another important challenge during an epidemic, once the aid is available. Sahana software was originally developed by members of the Sri Lankan IT community who wanted to find a way to apply their talents towards helping their country recover from the immediate aftermath of the 2004 Indian Ocean earthquake and tsunami. During a health crisis, time should be utilized in an effective and intelligent way, as delivering medicine or succor at the right time can save many lives. As a health map shows the real-time public health status in different areas and can also visualize predictions, it will have a great impact on the timely realization of the severity of situations. Nonetheless, if the public has reporting opportunities and the government has direct access to these reports, the government can also determine its efficiency and shortcomings in crisis management, so the authorities may bring necessary changes to crisis-management approaches in time. The widespread use of smartphones allows people to post geo-location-specific reports and pictures with ease. Disaster Crows was developed by the Pittsburgh Independent Media Center for coverage of the G20 summit and subsequent protests in Pittsburgh in September 2009. Crows has not become popular as a crowdsourcing platform, possibly because of its limited functionality. Using Crows, one can submit reports and view them on a Google map. It also includes widgets for Twitter, Flickr, YouTube, and Podcast integration, as well as a smartphone-compatible mobile version of the website. The Ushahidi and Sahana systems were too large for managing limited resources, but Crows has made smaller-scale management more feasible; the lack of certain features made it easy to customize the platform to our own needs (Weaver and Boyle, 2012). In the event of a contagious disease outbreak or a natural disaster, the government in many developing countries, including Bangladesh, is not sufficiently prepared or equipped to gather and disseminate information to the public quickly at the right moment. Experience around the globe confirms that developing a health-map platform using an effective e-health system as a source of information would be a decidedly valuable real-time tool for managing and understanding situations, rather than relying heavily on retrospective analysis.
Methodology
This study was conducted in selected villages and district headquarters of Barisal and Jhalokathi in Bangladesh. To achieve the objectives, this research adopted a mixed-method approach, i.e. it blended quantitative and qualitative methods for a better understanding of the phenomenon under study. The population considered in this study encompasses general people, patients, and physicians. Using the purposive sampling method, a total of 200 questionnaires were supplied to respondents, of which 108 were returned with complete feedback. Of the total, 20 questionnaires were filled in by physicians serving in the Sher-E-Bangla Medical College Hospital, Barisal. The structured questionnaire sought information on the general characteristics of the respondents, health-related information-seeking behavior, sources used for receiving health-related information, and awareness of, use of, and interest in e-health services. The questionnaire was developed based on brainstorming and expert opinion.
This research also conducted 10 in-depth interviews with physicians; 10 informal interviews and 2 case studies with patients; and 4 focus group discussions (FGDs), one each with doctors, patients, intern doctors, and medical students. The questionnaire was distributed directly to the people, but in cases where the patient was unable to fill in the questionnaire, it was completed by the interviewer according to the answers given by the patient. For analyzing the data, this study used descriptive statistics such as the mean, median, mode, frequency, and observed ratio, computed with SPSS 16.0 software.
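The study used SPSS 16.0; purely as an illustration, the equivalent descriptive statistics (mean, median, mode, frequency) could be computed in Python as sketched below. The column names and sample values are hypothetical, not taken from the study's data set.

```python
import pandas as pd

# Hypothetical survey extract: age and smartphone ownership of respondents.
df = pd.DataFrame({
    "age": [24, 31, 28, 45, 31, 37, 22, 31],
    "uses_smartphone": ["no", "yes", "no", "no", "yes", "no", "no", "no"],
})

print(df["age"].mean())          # mean age
print(df["age"].median())        # median age
print(df["age"].mode().iloc[0])  # modal age
# Frequency table, e.g. the share of smartphone users among respondents.
print(df["uses_smartphone"].value_counts(normalize=True) * 100)
```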
Health Related Information Seeking Behavior
Data on health-related information-seeking behavior in Table 2 illustrate that, apart from a small proportion (11%), the lion's share of the respondents did not go to the doctor for minor health-related problems. The reasons for not visiting a physician include the overcrowding of hospitals, the high visiting charge of physicians, the long distance of the hospital from the patient's residence, and the patients' preoccupation with daily chores. In an in-depth interview, an individual called Mohammad Sorwar from Barisal city said: 'People living in the areas adjacent to Sher-E-Bangla Medical College Hospital, Barisal usually don't prefer to go to the hospital for treatment; rather, they frequently take treatment from drug dispensaries. The reasons for not going to the hospital include the huge patient rush at the hospital and the physicians' limited inclination to provide quality treatment.' Table 5 shows that the respondents used television as the major source of health-related information, followed by cell-phone-based sources and the internet. A notable section of the respondents (29%) also obtained health-related information from newspapers. Telecommunication technology in Bangladesh is proliferating rapidly, and the mobile phone is consequently gaining vast popularity as a communication medium. The government of Bangladesh has also recognized the mobile phone as a very important medium for disseminating crucial information; for instance, at present, the government sends SMS messages to mobile subscribers to inform them of any imminent crisis. The mobile phone also acts as an important device for internet access throughout the country, so with the boom in the number of mobile phone subscribers, the number of internet users is rising simultaneously.
Social media like Facebook, Twitter, etc. have nowadays evolved into a prominent source of information. However, these social media are internet-based, so their reach is limited to internet users.

Interest in e-health service
At present, in Bangladesh, there is no system in place to detect, monitor, or predict contagious diseases. So, if a sudden health crisis occurs, it is not possible to assess the situation immediately. This time lag has a profound negative effect on the management and organization of activities for successful crisis management. Figure 1 shows that 76% of the people surveyed in this study did not receive any information in advance about contagious diseases in the study area, whereas almost all the respondents posited that timely and sufficient information regarding contagious disease outbreaks can be very useful for public health management in Bangladesh. Available information will assist citizens in estimating the degree of risk of a particular disease in advance, and hence in taking the necessary precautions before the disease breaks out. Advance information can also assist the government in the effective distribution of medicine and the efficient management of the workforce.
A health map is a dynamic system that visualizes the public health situation, disease outbreaks, and other crucial information in real time. Deplorably, at present, there is no health-map platform in Bangladesh. However, the respondents consulted in this study were very enthusiastic about a health map. About 91.49% of the respondents residing in cities thought that a health map could be very useful for them, while 62.26% of rural residents had similar opinions. Three-quarters (75%) of the physician respondents agreed that a health map is necessary for public health management. Respondents felt that communicable diseases like tuberculosis, viral fever, etc. should be visualized on the map for better control and management. If people can estimate the risk of imminent diseases, they will be able to take early precautions, which in turn can contribute to the effective control and management of those diseases.
Table 7: Perceived necessity of health map in Bangladesh by different categories of the respondents (n=100)
Challenges of existing e-health services
Along with numerous convincing benefits, e-health services also face a number of challenges. For example, internet-based e-health services require a smartphone. Regrettably, only 24% of the respondents (Table 4) possessed a smartphone. In Bangladesh, the major section of the population falls within the lower-middle- to low-income strata, so smartphone-based e-health services may not be accessible to a large section of the population. The high cost of internet service is another obstacle that limits the use of e-health services. Due to high call charges, the respondents also considered phone-call-based e-health services expensive. In an in-depth interview, a shopkeeper named Khairul Hassan from Barisal city said: 'I have received quality treatment via a mobile phone health line for my eye problem. However, the cost of the call seems expensive, as I had to talk with the doctor for several minutes.' A minimum level of literacy is mandatory for receiving health advice via the internet or a cell phone. Many medical terms do not have a Bengali name, and the names of drugs are also in English, so it is difficult to make use of health-related information if the service receiver is illiterate. E-health services are only suitable for the treatment of minor diseases and can provide support at the primary stage of critical diseases; they are not suitable for critically ill patients needing intensive care for a long period of time under the supervision of physicians.
Present Status of e-health Services in Bangladesh
Despite the huge potential of IT deployment in the healthcare sector of the country, the use of IT systems for healthcare has not been developed so far. The use of IT systems in healthcare is limited to e-prescribing services run by mobile phone operators. Among these services, Grameenphone's doctor helpline has been the most popular. Despite several problems, this service is experiencing an upward trend in popularity for a few reasons. In general, when a patient meets a doctor face to face, many doctors show disinclination towards the patient; by contrast, when a patient contacts a doctor via a phone call, the doctor seems more attentive than usual. Phone-call-based health services can also save patients money and valuable time. However, among the respondents using e-health services, a major section (62%) had complaints about the existing mobile-based e-health services, while the remaining 38% were satisfied.
A Proposed Crowdsourcing-Based E-health Service Model
Considering the existing bottlenecks of e-health services and the health-related information dissemination process, as well as the opinions of the patients and physicians, this research proposes a model for a crowdsourcing-based health map in Bangladesh.
First, a smartphone application is used for collecting and reporting health data from mobile subscribers, and these data are processed in a web-based system. When a user experiences a health problem, he or she reports it via the smartphone application by filling in a report form (see Figure 2 for a sample form), with images or videos attached if needed. The report then proceeds to the doctor assigned to give medical advice in the area where the user resides. When a doctor receives a report from a patient, all previous health records as well as the present health status of that patient appear in the web app. After analyzing this pool of data, the doctor provides suggestions or prescribes the necessary medicine.
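Purely as an illustration of how such a report and its routing to an area-assigned doctor could be represented in software, a minimal Python sketch follows; the field names, area names, and doctor identifiers are hypothetical, not taken from the proposed system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class HealthReport:
    """One report submitted through the smartphone application."""
    user_id: str
    area: str                              # administrative unit of the reporter
    symptoms: str
    reported_at: datetime
    attachment_url: Optional[str] = None   # optional image/video evidence

# Hypothetical assignment of doctors to areas.
DOCTORS_BY_AREA = {"Barisal Sadar": "dr_rahman", "Jhalokathi Sadar": "dr_akter"}

def route_report(report: HealthReport) -> str:
    """Forward the report to the doctor assigned to the reporter's area."""
    return DOCTORS_BY_AREA.get(report.area, "central_helpdesk")

report = HealthReport("user42", "Barisal Sadar", "fever, headache", datetime.now())
print(route_report(report))  # -> dr_rahman
```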
If the doctor needs supplementary information to identify the health problem correctly, the doctor can directly make a phone call to the patient or send the patient a query message. A central database will store all of the patients' records. This big data will help the concerned entities analyze the health and disease status of different geo-locations. If the same infectious disease is reported frequently by numerous users within a particular time period in a specific location, then, with the help of proper mathematical models, the system will be able to identify the possibility of a contagious disease outbreak. By analyzing the cause, rate, and direction of spreading, the system will be able to predict possible future outbreaks. The health map will show every improvement and deterioration of the public health status in the affected area on a continuous basis. The government, as well as other humanitarian health organizations, will be incorporated into the system for spatial and temporal monitoring of the health situation, so that timely steps can be taken in building awareness, supplying medicine and other resources, and managing human resources better.
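A minimal sketch of the outbreak-flagging idea described above: count reports of the same disease per location within a time window and raise a flag when a threshold is exceeded. The threshold and window values are illustrative assumptions; a production system would use proper epidemiological models, as the text notes.

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_outbreaks(reports, window_days=7, threshold=20):
    """reports: iterable of (disease, location, timestamp) tuples.
    Returns the (disease, location) pairs whose report count within
    the recent time window reaches the threshold."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = Counter(
        (disease, location)
        for disease, location, ts in reports
        if ts >= cutoff
    )
    return [key for key, count in recent.items() if count >= threshold]

# Example with synthetic reports: 25 recent diarrhoea reports in one upazila.
now = datetime.now()
reports = [("diarrhoea", "Nalchity", now - timedelta(days=i % 5)) for i in range(25)]
print(flag_outbreaks(reports))  # -> [('diarrhoea', 'Nalchity')]
```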
Conclusion
This research implies that a crowdsourcing-based e-health map can improve efficiency in public health crisis management in Bangladesh. Due to its widespread cell phone network coverage, the country is in a conducive position to use a crowdsourcing-based health map to monitor and manage public health crises. Despite having a promising future, at present the e-health service in Bangladesh is feeble and needs extensive improvement. This research suggests a web-based system and a smartphone-based mobile application which will assist in accumulating and analyzing health data. The analysis of the data will help to predict the geographical-location-specific future occurrence of health crises, and hence assist both the patients and the concerned health organizations in building awareness and in the timely, efficient handling of crisis situations to save precious lives and valuable resources. The model suggested in this study also implies that, as in other crowdsourcing systems, management teams would benefit more from the reports of more observers.
Table 6: Respondents' awareness of and use of e-health services

Fig. 3: A proposed model of crowdsourcing-based health map

Table 2: Health care service delivery structure in Bangladesh

Table 3: Public health service capacity in Bangladesh

As shown in Table 4, the mean age of the respondents was 31.3 years, with a standard deviation of 8.46. A little more than two-thirds (72%) of the respondents were male, while the remaining 28% were female. Slightly more than three-quarters (76%) of the respondents did not use a smartphone, while the remaining 24% did.

Table 4: General characteristics of the respondents (Note: OR = observed range)
"Medicine",
"Computer Science"
] |
Cross-Linking, Morphology, and Physico-Mechanical Properties of GTR/SBS Blends: Dicumyl Peroxide vs. Sulfur System
In this work, ground tire rubber and styrene–butadiene block copolymer (GTR/SBS) blends at the ratio of 50/50 wt%, with the application of four different SBS copolymer grades (linear and radial) and two types of cross-linking agent (a sulfur-based system and dicumyl peroxide), were prepared by melt compounding. The rheological and cross-linking behavior, physico-mechanical parameters (i.e., tensile properties, abrasion resistance, hardness, swelling degree, and density), thermal stability, and morphology of the prepared materials were characterized. The results showed that the selected SBS copolymers improved the processability of the GTR/SBS blends without any noticeable effects on their cross-linking behavior—which, in turn, was influenced by the type of cross-linking agent used. On the other hand, it was observed that the tensile strength, elongation at break, and abrasion resistance of the GTR/SBS blends cured with the sulfur system (6.1–8.4 MPa, 184–283%, and 235–303 mm3, respectively) were better than those cross-linked by dicumyl peroxide (4.0–7.8 MPa, 80–165%, and 351–414 mm3, respectively). Furthermore, it was found that the SBS copolymers improved the thermal stability of GTR, while the increasing viscosity of the used SBS copolymer also enhanced the interfacial adhesion between the GTR and SBS copolymers, as confirmed by microstructure evaluation.
Introduction
The increasing interaction and interdependence between countries (i.e., globalization processes) are the main causes of the growing consumerism [1], which reflects the unjustified acquisition of goods that do not meet needs resulting from social, individual, or environmental aspects. As one can imagine, this trend has critical implications for environmental protection, as it affects the increasing amounts of waste and influences the water and carbon footprint (i.e., an increase in demand corresponds to an increase in production). According to the World Bank, the main waste groups are food and green waste, glass, metals, waste paper, wood, plastic, rubber, and leather [2]. Although the first group is the largest (up to 44% of global waste), the challenge of polymer waste (i.e., plastic and rubber)-which has been on the agenda for many years-is no less significant. According to Geyer et al. [3], who conducted a detailed analysis of the production, use, and disposal of plastics, about 8300 million tons of plastics had been synthesized and released into the world by the time of publication. So far, five main streams of plastic waste management have been presented and used in practice: (i) landfilling, (ii) incineration and energy recovery, (iii) reduction, (iv) reuse, and (v) recycling.
Stelescu et al. [41] investigated the structure and properties of materials based on vulcanized rubber waste and styrene-butadiene-styrene thermoplastic elastomer. The results showed that the addition of vulcanized rubber powder led to improvements in the physico-mechanical properties of SBS-based materials, attributed to the good compatibility between the two polymer phases, which was explained by the similar structures of the SBS and the styrene-butadiene rubber present in the vulcanized rubber powder. It is worth mentioning that styrene-butadiene rubber is also a main component of GTR.
Moreover, data from the literature show that the use of thermoplastic modifiers might stabilize the thermo-mechanical treatment of cross-linked rubbers and facilitate the further processing of waste-tire-rubber-based materials characterized by high GTR contents [27], representing a crucial step forward for the implementation of circular economy strategies.
The presented trends confirm the necessity of the development of GTR/SBS blends with well-defined composition and fully characterized processing and physico-mechanical properties. This approach allows for the tailoring of the performance properties of modified and highly modified asphalts, as well as for a better understanding of the interactions between GTR/SBS-based systems and asphalts.
Therefore, in this work, the processing, physico-mechanical, thermal, and morphological properties of 50/50 wt% GTR/SBS blends were characterized as a function of the SBS copolymer type (linear/radial) and cross-linking system (sulfur-based or peroxide-based). The experimental data presented in this study provide useful information about the interfacial interactions between the GTR and SBS phases, allowing for tailoring of the final processing and performance properties of GTR/SBS blends.
Materials
Ground tire rubber (GTR) obtained from passenger car and truck tires, with a particle size of up to 0.6 mm, was supplied by Grupa Recykl S.A. (Śrem, Poland). The basic components of GTR are natural rubber (NR), styrene-butadiene rubber (SBR), butadiene rubber (BR), additives (curing system, activators, plasticizers, etc.), carbon black, silica, and ash. The composition of GTR, determined by thermogravimetric analysis, comprised rubbers and additives (63.1 wt%) and carbon black plus ash (36.9 wt%).
Four types of styrene-butadiene-styrene copolymers were selected and obtained from the Sibur Company (Moscow, Russia). The physico-mechanical properties of the selected components are presented in Table 1.
Sample Preparation
GTR was melt-blended with four different, commercially available types of SBS at a ratio of 50/50. The GTR/SBS blends were prepared in a Brabender® internal mixer (type GMF 106/2 from Brabender GmbH & Co. KG, Duisburg, Germany). The mixing temperature and time were 200 °C and 8 min, respectively, while the mixing speed was set to 60 rpm. After cooling the material for at least 24 h, the GTR/SBS blends were mixed with one of two curing systems: a sulfur-based system (composition in phr: stearic acid, 0.3; zinc oxide, 2.5; 2-mercaptobenzothiazole (MBT), 0.9; N-tert-butyl-benzothiazole sulfonamide (TBBS), 0.9; sulfur, 1.5) or dicumyl peroxide (DCP, 2 phr), using laboratory two-roll mills with a working space of 200 × 400 mm manufactured by Buzuluk Komarov (Komárov, Czech Republic). The two systems were used to evaluate the influence of the cross-linking agent on the behavior of the GTR/SBS blends. The blends were pressed at 170 °C into 2-mm-thick tiles using a PH-90 hydraulic press manufactured by ZUP Nysa (Nysa, Poland), with the optimal curing time determined according to the ISO 6502 standard.
Methodology
The Mooney viscosity of the rubber compounds was measured at 100 °C using an MV2000 Mooney Viscometer (Alpha Technologies, Akron, OH, USA) according to ISO 289-1.
The vulcanization process was studied and recorded using an Alpha Technologies Premier RPA (Hudson, OH, USA) according to the ISO 6502 standard. The cure rate index (CRI) was then calculated from the characteristic points of the curing curve. This parameter reflects the cross-linking rate, giving insight into the differences between samples. The parameter was calculated on the basis of equations published in previous works [42,43].
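The CRI equation itself is only cited here ([42,43]); the commonly used definition is CRI = 100/(t90 − ts2), with t90 the optimal cure time and ts2 the scorch time, both in minutes. A minimal sketch under that assumption:

```python
def cure_rate_index(t90_min: float, ts2_min: float) -> float:
    """Cure rate index, CRI = 100 / (t90 - ts2), in min^-1 (x100).

    t90_min -- optimal cure time (min); ts2_min -- scorch time (min).
    """
    if t90_min <= ts2_min:
        raise ValueError("t90 must exceed ts2")
    return 100.0 / (t90_min - ts2_min)

# Illustrative values taken from the ranges reported later in the text:
# sulfur system: ts2 ~ 0.5 min, t90 ~ 1.5 min -> CRI = 100
# DCP:           ts2 ~ 0.5 min, t90 ~ 9.7 min -> CRI ~ 10.9
print(cure_rate_index(1.5, 0.5), cure_rate_index(9.7, 0.5))
```

The large gap between the two CRI values is what the text later describes as the sulfur system being much faster at 170 °C.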
FTIR analysis was performed in the range of 4000-650 cm⁻¹ using a Momentum microscope attached to a Nicolet iS50 FTIR spectrometer (Waltham, MA, USA) equipped with the Specac Quest single-reflection diamond attenuated total reflectance accessory.
The tensile strength and elongation at break were measured in accordance with the ISO 37 standard. Tensile tests were carried out on a Zwick Z020 machine (Ulm, Germany) at a constant speed of 200 mm/min. The results reported are an average of five measurements for each sample. Shore A hardness was assessed using a Zwick 3130 durometer (Ulm, Germany) in accordance with ISO 7619-1.
The swelling degree of the blends (approx. 0.2 g per sample) as a function of time was evaluated using equilibrium swelling in toluene (at room temperature). The swelling degree was calculated according to Equation (1):

Q = (m_t − m_0)/m_0 × 100  (1)

where Q is the swelling degree (%), m_t is the mass of the swollen sample after time t (g), and m_0 is the initial mass of the sample (g). The sol fraction was determined based on the difference in mass between the initial sample and the dried sample after extraction, according to Formula (2):

F_sol = (m_0 − m_k)/m_0 × 100 = 100 − F_gel  (2)

where F_sol is the content of the sol fraction (%), F_gel is the content of the gel fraction (%), m_0 is the initial mass of the sample (g), and m_k is the mass of the dried sample after extraction (g). The density was determined by the Archimedes method as described in ISO 1183. All measurements were performed at room temperature in a methanol medium.
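Equations (1) and (2) translate directly into code; a minimal sketch with a hypothetical 0.2 g specimen (the masses below are invented for illustration):

```python
def swelling_degree(m_t: float, m_0: float) -> float:
    """Equation (1): Q = (m_t - m_0) / m_0 * 100  [%]."""
    return (m_t - m_0) / m_0 * 100.0

def sol_fraction(m_0: float, m_k: float) -> float:
    """Equation (2): F_sol = (m_0 - m_k) / m_0 * 100  [%]; F_gel = 100 - F_sol."""
    return (m_0 - m_k) / m_0 * 100.0

# Hypothetical sample: 0.20 g initial, 0.55 g swollen, 0.185 g dried after extraction
print(swelling_degree(0.55, 0.20))   # 175.0 %
print(sol_fraction(0.20, 0.185))     # 7.5 %
```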
Abrasion resistance (∆V_rel) was measured according to the ISO 4649 standard using a rotating cylindrical drum device from Gibitre Instruments (Bergamo, Italy). Before the measurement, the sample was weighed and then abraded over an abrasive test paper of grade 60 at a constant force of 10 N. After a distance of 40 m, the loss of the sample's weight was determined. The abrasion resistance for the studied GTR/SBS blends was calculated using Equation (3):

∆V_rel = (∆m_t × ∆m_const)/(ρ_t × ∆m_r)  (3)

where ∆V_rel is the abrasion resistance (mm³), ∆m_t is the mass loss of the GTR/SBS blend (mg), ∆m_const is the defined value of the mass loss for the reference compound (No. 1 was used) (mg), ρ_t is the density of the GTR/SBS blend (g/cm³), and ∆m_r is the mass loss of the reference compound (mg). The morphology of the GTR/SBS blends was characterized using a JEOL 5610 scanning electron microscope (Tokyo, Japan). Prior to analysis, the samples were coated with a thin layer of gold.
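Equation (3) likewise; note that the units are consistent, since 1 mg divided by 1 g/cm³ equals 1 mm³. A minimal sketch with hypothetical masses:

```python
def abrasion_resistance(dm_t_mg, dm_const_mg, rho_g_cm3, dm_r_mg):
    """Equation (3): dV_rel = (dm_t * dm_const) / (rho_t * dm_r)  [mm^3].

    mg * mg / ((g/cm^3) * mg) = mg / (g/cm^3) = mm^3.
    """
    return dm_t_mg * dm_const_mg / (rho_g_cm3 * dm_r_mg)

# Hypothetical run: 310 mg sample loss, 200 mg nominal reference loss,
# blend density 1.06 g/cm^3, 210 mg measured reference loss -> ~278 mm^3
print(abrasion_resistance(310, 200, 1.06, 210))
```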
Thermogravimetric analysis (TGA) was performed using a Netzsch TG 209 apparatus (Selb, Germany). The mass of the samples was in the range of 10-12 mg, to ensure that the thermal treatment was performed homogeneously. The samples were tested in the temperature range of 35-800 °C under a nitrogen atmosphere, at a heating rate of 10 °C/min.
Mooney Viscosity and Curing Characteristics
In order to investigate the processing of the studied GTR/SBS blends, Mooney viscosity measurements were performed at 100 °C, and the obtained results are presented in Figure 1. It was found that the Mooney viscosity could be determined only for GTR with SBS L1 and SBS L3, while for SBS L2 and SBS R the maximum initial value was reached. This is related to the melt flow behavior of the SBS grades used; for SBS L2 and SBS R, the melt flow index was not specified by the manufacturer. As expected, GTR blended with SBS L3, with MFI (190 °C/5 kg) = 16-25 g/10 min, was characterized by a lower Mooney viscosity than the GTR/SBS L1 blend (SBS L1: MFI (190 °C/5 kg) = 3-9 g/10 min). It was also observed that, regardless of the SBS grade, samples with DCP showed lower Mooney viscosity compared to samples with the sulfur-based system. This was due to the melting temperature of DCP, which is ~40 °C [44,45]. As a result, melted DCP can act like a plasticizer during Mooney viscosity measurements at 100 °C, because it does not react at this temperature. As presented in Table 2, the Mooney viscosity ML (1+4) 100 °C of the GTR/SBS L1 and L3 blends was in the range of 78.9-127.4 MU, while the Mooney viscosity ML (1+8) 100 °C was slightly lower, at 73.0-116.5 MU, which was related to the prolonged heating of the material in the measurement chamber. For comparison, the Mooney viscosity ML (1+4) 100 °C for commercial reclaimed rubbers can range from 65 to about 90 MU [46], which indicates that the selected studied materials fit within this range. Prior to forming the studied materials into the desired shapes, the curing characteristics of the GTR/SBS samples were determined, and the results obtained are shown in Figure 1 and summarized in Table 2. As can be seen, changing the curing system had a significant effect on the cross-linking curves measured at 170 °C.
The minimum torque (M_min) is a parameter that provides the first information about sample processing: the lower the M_min, the better the processing characteristics. In this case, the only factor affecting this value was the grade of the SBS copolymer used, and the effect of the curing system was negligible. For the GTR/SBS L1 and GTR/SBS L3 samples, the value of the M_min parameter was in the range of 0.5-0.7 dNm, while for the GTR/SBS L2 and GTR/SBS R samples it was in the range of 1.4-1.6 dNm. The differences between the samples were related to the melt flow index of the SBS copolymers used, and the trends observed for M_min corresponded to the Mooney viscosity of the studied GTR/SBS blends (see Figure 1).
Maximum torque (M max ) and extent of cure (∆M) are parameters related to the stiffness and cross-link density of a material [47,48], which vary considerably between different types of curing additives. Comparing the results, the GTR/SBS blends cured with the sulfur-based system were characterized by M max in the range of 4.9-6.2 dNm and ∆M = 4.4-4.7 dNm, while the GTR/SBS blends cured with DCP showed M max in the range of 16.9-18.1 dNm and ∆M = 15.5-16.8 dNm. These results indicate that under the conditions studied, the SBS grade itself does not influence the course of the cross-linking process of GTR/SBS blends.
It is obvious that the efficiency of the curing system is strongly correlated with the cross-linking temperature. It was found that, regardless of the type of curing system, the GTR/SBS blends showed very short scorch times, in the range of 0.4-0.7 min. Moreover, for the GTR/SBS blends cross-linked with the sulfur-based system, the optimal cure time was in the range of 1.4-1.7 min, while for the GTR/SBS blends cured with DCP the optimal cure time was much longer, in the range of 9.5-9.9 min. These results confirm that under the studied conditions, the impact of the SBS grade on the cross-linking behavior of the GTR/SBS blends is negligible. Moreover, the cure rate index (CRI) values clearly show that the sulfur system cured faster than DCP at the test temperature (170 °C), which was related to the residence time of the material at elevated temperatures and the decomposition kinetics of the DCP free-radical initiator.
FTIR Analysis
The FTIR spectra of the investigated samples are shown in Figure 2. The analysis of the obtained results shows that there were no significant differences between the samples. The bands of the C-H bonds of the CH2 groups present in the aliphatic chains of the elastomers are located at 2915 cm⁻¹ and 2850 cm⁻¹. The peak at about 1437 cm⁻¹ is associated with C-H bonds of -C=CH2 groups, while the band at about 1367 cm⁻¹ can be associated with C-H bonds of -CH3 groups. The band at 807 cm⁻¹ corresponds to the skeletal vibration of the C-C bonds. In the range from 1100 cm⁻¹ to 880 cm⁻¹, C-O-C bonds as well as S=O, C-C, and C-O bonds can be found, which can be attributed to the structure of the components used and their transformation (oxidation of GTR, revulcanization, etc.). The only obvious difference between the presented spectra is the presence of an additional peak at 1540 cm⁻¹ in samples with the sulfur system. This peak is related to zinc stearate, formed during the reaction between ZnO and stearic acid [49,50], which is not present in GTR/SBS cured with DCP. As presented above, FTIR analysis indicated similar chemical structures of the prepared GTR/SBS blends, which simultaneously showed significant differences in rheological behavior and curing characteristics (see Table 3) or, as described in the next section, tensile properties (see Figure 3). This observation clearly shows that the viscosity of the SBS copolymers (i.e., their molecular weight and polydispersity) is a very important parameter affecting the interfacial interactions between the GTR and SBS phases.
Physico-Mechanical Properties
The tensile properties of the studied GTR/SBS blends are presented in Figure 3. It was found that samples with DCP had lower tensile strength and elongation at break (by about 30% and 55%, respectively) compared to the specimens with sulfur. The tensile strength and elongation at break results obtained for the GTR/SBS blends were in the ranges of 6.1-8.4 MPa and 184-283% for samples cured with the sulfur-based system, and 4.0-7.8 MPa and 80-165% for samples cross-linked with DCP. This indicates that the sulfur-based system has a higher affinity for cross-linking of GTR/SBS blends than DCP. Poorer tensile properties compared to sulfur-cured products represent a common disadvantage of peroxide-cured vulcanizates [51]. Figure 3. Tensile properties of GTR/SBS blends as a function of SBS grade and curing system (data for GTR adapted from [52]; data for pure SBS copolymers were provided by the manufacturer).
Furthermore, it was observed that regardless of the cross-linking additives used, the highest tensile properties were determined for the GTR/SBS L3 sample, while the lowest tensile properties were determined for the GTR/SBS R sample. This is related to the flowability of the SBS copolymer (SBS L3 showed the highest melt flow index among the SBS copolymers used), which translates into better mixing efficiency between GTR and SBS.
To better understand the effects of the cross-linking system on the tensile properties of the investigated systems, uncured GTR/SBS blends were also investigated. Uncured GTR/SBS blends were characterized by tensile strength in the range of 2.0-3.7 MPa and elongation at break in the range of 91-336%, which, as expected, differed significantly from the cross-linked GTR/SBS blends. For comparison, the tensile strength and elongation at break of pure GTR are 2.6 MPa and 79%, respectively [52]. In addition, as predicted, most of the GTR/SBS blends were characterized by lower tensile strength than the pure SBS copolymers, with the exception of SBS L3, which, as mentioned above, may have been due to its having the highest melt flow index. More simply, the highest melt flow index indicates the lowest molecular weight of SBS L3 among the SBS copolymers used. Table 3 shows a summary with descriptions of the processing methods and tensile properties of thermoplastic composites modified with waste rubber and SBS copolymers prepared by different research groups. The data from the literature show that the mechanical properties of polymer/GTR blends modified with SBS copolymers are usually improved. This is due to the compatibilizing effect of SBS caused by proper encapsulation of cross-linked GTR [15] and possible co-cross-linking between the phases at the interfacial region [53].
The elongation at break of the GTR/SBS blends at a ratio of 50/50 wt% was in the range of 80-283%, while for the thermoplastic composites modified with GTR and SBS copolymers described in the literature, the value of this parameter was in the range of ~50-260%. For SBS + 20 wt% waste rubber composites, the elongation at break was in the range of ~175-460%, which was related to the tensile parameters of the thermoplastic elastomer used and the composition of the waste rubber (footwear waste).
Abrasion resistance is a very important parameter that determines the potential application of polymeric materials modified by waste rubbers in the production of footwear [41] or floor tiles [54]. The results of abrasion resistance for the GTR/SBS blends are presented in Figure 4. It was observed that regardless of the SBS grade, GTR/SBS blends cured using the sulfur-based system were characterized by higher abrasion resistance (∆V_rel = 235-303 mm³) compared to the samples cured with DCP (∆V_rel = 351-414 mm³). It was found that the GTR/SBS R blend was characterized by the best wear resistance, due to the radial structure of SBS R and its having the highest viscosity among the SBS grades studied. The literature data showed that the abrasion resistance of the prepared GTR/SBS blends at a ratio of 50/50 wt% was worse than for SBS + 20 wt% waste rubber (footwear waste), which was below 215 mm³ [41]. On the other hand, the studied 50/50 wt% GTR/SBS blends were characterized by abrasion resistance comparable to that of SBR/carbon black + 30-50 phr GTR/devulcanized GTR (~250-325 mm³) [55], NR/carbon black + 50 phr devulcanized GTR (~240 mm³) [56], NR + 30 phr GTR (266 mm³) [57], and GTR + 25-75 wt% SBR (219-470 mm³) [58]. As can be seen, in selected cases, the prepared GTR/SBS blends showed even better abrasion resistance considering the 50 wt% waste rubber content in the studied materials. Table 4 presents a summary of the results for hardness, density, swelling degree, and sol fraction determined for the GTR/SBS blends. As can be observed, the hardness of the samples GTR/SBS L2 and GTR/SBS R was higher for samples cross-linked with dicumyl peroxide than when using the sulfur-based system, while in the case of the GTR/SBS L1 and GTR/SBS L3 samples the effect of the curing system on the hardness was negligible. This may be related to the more efficient mixing of the cross-linking system in GTR modified by SBS copolymers with higher flowability. It should be mentioned that the Mooney viscosity could be measured only for samples GTR/SBS L1 and GTR/SBS L3 (see Figure 1). The hardness of the studied materials was in the range of 66-78 Shore A and varied as a function of the SBS grade. Moreover, the hardness of the obtained materials was higher than that of pure GTR (57 Shore A). The highest hardness was determined for the GTR/SBS L3 sample, which also showed the highest tensile properties among the studied systems.
The results showed that the density of the GTR/SBS blends is lower when the samples are cross-linked with DCP compared to the samples cured using the sulfur-based system, which is related to the density of the components used in the sulfur system (e.g., zinc oxide). However, this decrease in density for samples with DCP is about 3%, so it is not a significant change. Moreover, the density of the GTR/SBS blends (in the range of 1.043-1.086 g/cm³) was lower than that of pure GTR (1.149 g/cm³).
The effects of the curing system and SBS grade on the swelling degree and the content of the sol fraction in the GTR/SBS blends were also analyzed. It was found that regardless of the grade of SBS copolymer used, the values of the swelling degree and sol fraction were lower for GTR/SBS blends cross-linked with DCP, indicating more efficient cross-linking by this initiator. Figure 5 shows the effects of the cross-linking system and the SBS copolymer grade used on the morphology of the GTR/SBS blends. The SEM images show the surface perpendicular to the direction of loading, which was obtained by breaking the samples subjected to a static tensile test (the speed of the crosshead was 200 mm/min). As can be observed, regardless of the type of SBS copolymer, samples cross-linked with DCP were characterized by a smoother surface compared to the samples cured with the sulfur-based system. Moreover, the ∆M values were higher and the swelling degree lower for GTR/SBS blends cured with DCP compared to GTR/SBS blends cross-linked with the sulfur system (see Tables 2 and 4). This indicates more efficient cross-linking of the GTR/SBS blends by DCP, which could improve the compatibility between the GTR and thermoplastic phases [59]. Figure 5 shows that only for sample GTR/SBS R does the effect of the cross-linking agent on the surface of the samples after breaking seem to be negligible. This indicates that the radial structure of the SBS copolymer improves the interfacial adhesion between GTR and the SBS copolymer [15], which is also related to the high viscosity of the GTR/SBS R sample.
SEM Analysis
Considering data provided by the manufacturer of the used SBS copolymers, their viscosity increased in the order of SBS L3 < SBS L1 < SBS L2 < SBS R. Therefore, to better understand the effect of the SBS copolymers' viscosity on the breaking mechanism, the surfaces of GTR/SBS R (SBS R with the highest viscosity) and GTR/SBS L3 (SBS L3 with the lowest viscosity) were investigated at ×1500 magnification, and the obtained SEM images are presented in Figure 6. The presented results clearly show that regardless of the cross-linking agent, the surface of the GTR/SBS R blend (with SBS R, characterized by high viscosity) is smoother and more homogeneous compared to that of the GTR/SBS L3 sample (with low-viscosity SBS L3).
Thermogravimetric Analysis
The results of the thermogravimetric analysis of the studied GTR/SBS blends are shown in Figure 7 and summarized in Table 5. As can be seen in Figure 7, regardless of the curing agent used, the thermogravimetric (TGA) and derivative thermogravimetric (DTG) curves of GTR/SBS showed a similar trend, which fits between the curves of pure GTR and SBS. This indicates that blending GTR with SBS improves its thermal stability, which is related to the higher thermal stability of SBS compared to GTR. Figure 7 shows the results only for the GTR/SBS L1 blends, but similar behavior was also observed for the other studied GTR/SBS blends, as can be estimated from Table 5, which indicates that the effect of the SBS grade on the thermal stability of the GTR/SBS blends was negligible. Moreover, it can be observed that pure GTR is marked by two characteristic peaks on the DTG curve. T_max1, at around 385 °C, corresponds to the presence of natural rubber and butadiene rubber, while T_max2, at ~460 °C, corresponds to the presence of styrene-butadiene rubber and butadiene rubber [60,61], which are the main rubber matrices used in the tire industry. The maximum thermal decomposition of SBS occurs at ~450 °C, close to T_max2 of GTR, which is due to the similar chemical structures of SBS and SBR.
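The DTG curves discussed here are simply the temperature derivative of the TGA signal; a minimal sketch of how such a curve and its peak temperature (T_max) can be computed from a recorded trace, using a toy single-step mass-loss curve in place of real data:

```python
import numpy as np

def dtg(temperature_C, mass_pct):
    """Derivative thermogravimetric (DTG) curve: d(mass)/dT in %/°C."""
    return np.gradient(mass_pct, temperature_C)

# Toy single-step decomposition standing in for a measured TGA trace (35-800 °C):
T = np.linspace(35, 800, 1531)
mass = 100 - 60 / (1 + np.exp(-(T - 430) / 25))  # sigmoidal mass-loss step
rate = dtg(T, mass)
print("T_max = %.0f °C" % T[np.argmin(rate)])    # steepest mass loss, ~430 °C here
```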
The residual mass for GTR was 36.9%, which corresponds to the carbon black and ash content [62]. The presented results confirm that the waste rubber powder was produced from scrap tires. It was observed that in the case of the GTR/SBS blends, the residual mass at 800 °C was slightly higher for GTR/SBS cross-linked with the sulfur-based system compared to the samples cured with DCP, which was related to the presence of ZnO used as an activator [63].
Conclusions
The aim of this study was to investigate the effects of the SBS copolymer grade and cross-linking agent on the Mooney viscosity, cross-linking characteristics, physico-mechanical properties, thermal stability, and microstructure of GTR/SBS blends, providing new insights into the interfacial interactions between GTR and SBS. The viscosity of the SBS copolymers increased in the following order: SBS L3 < SBS L1 < SBS L2 < SBS R. It was found that the GTR/SBS L1 and GTR/SBS L3 samples had the best processing parameters, and only for these materials could the Mooney viscosity be determined. The studied samples had Mooney viscosity ML (1+4) 100 °C in the range of 78.9-127.4 MU, similar to the values determined for commercial reclaimed rubbers. The highest tensile strength was obtained for the GTR/SBS L3 sample, indicating that the application of the SBS L3 copolymer, which had the lowest viscosity among the SBS grades studied, allows for the formation of materials with the best processing and tensile properties, due to the higher mixing efficiency between GTR and SBS. On the other hand, considering the abrasion resistance of the GTR/SBS blends, the best results were obtained with GTR/SBS R, due to the radial structure of SBS R and its high viscosity.
Studies on the effects of the cross-linking agent used, i.e., a sulfur-based system vs. dicumyl peroxide (DCP), showed that samples with DCP had lower Mooney viscosity values compared to GTR/SBS blends with the sulfur-based system. The results also confirmed that GTR/SBS blends cross-linked with the sulfur-based system had higher cure rates compared to samples cross-linked with DCP, which was related to the significant difference in the efficiency of the two curing systems during cross-linking at 170 °C. The SEM images showed that the GTR/SBS cross-linked with DCP exhibited a smoother fracture surface compared to GTR/SBS cured with the sulfur-based system.
In conclusion, the investigated GTR/SBS blends showed relatively good tensile strength (4.1-8.4 MPa), elongation at break (80-283%), and hardness (66-78 Shore A), as well as abrasion resistance for selected blends (below 250 mm³). Considering the high GTR content (50 wt%) in the studied GTR/SBS blends, along with the abovementioned mechanical properties, the prepared materials have great potential for the production of technical rubber goods, footwear, or floor tiles. | 8,429.4 | 2023-03-31T00:00:00.000 | [
"Materials Science"
] |
Application of Experimental Design in Preparation of Nanoliposomes Containing Hyaluronidase
Hyaluronidase is an enzyme that catalyzes the breakdown of hyaluronic acid. This property is utilized for hypodermoclysis and for treating extravasation injury. Hyaluronidase is further studied for possible application as an adjuvant for increasing the efficacy of other drugs. Development of a suitable carrier system for hyaluronidase would help in the coadministration of other drugs. In the present study, hyaluronidase was encapsulated in liposomes. The effect of the variables, namely, phosphatidylcholine (PC), cholesterol, temperature during film formation (T1), and speed of rotation of the flask during film formation (SPR), on the percentage of protein encapsulation was first analyzed using factorial design. The study showed that the level of phosphatidylcholine had the maximum effect on the outcome. The effect of the interaction of PC and SPR required for the preparation of nanoliposomes was identified by central composite design (CCD). The dependent variables were percentage protein encapsulation, particle size, and zeta potential. The study showed that the ideal conditions for the production of hyaluronidase-loaded nanoliposomes are PC of 140 mg and cholesterol at 1/5th of PC, with the SPR at 150 rpm and T1 at 50 °C.
Introduction
Hyaluronic acid (HA) is a polysaccharide containing alternating units of glucuronic acid and glucosamine [1]. HA is distributed in tissues and is particularly abundant in extracellular matrices. Apart from its cellular and molecular functions, HA protects local tissues and cells against compression. HA, because of its high swelling property and viscous nature, restricts the movement of molecules, including pharmacological agents, across the tissues [2]. Hyaluronidase is an enzyme that catalyzes the breakdown of HA. Approved label uses of hyaluronidase include treatment of extravasation injury, hypodermoclysis, and urography. Of late, hyaluronidase has been tested for management of secondary complications associated with plastic surgery [3] and as an adjunct in improving the efficacy of pharmacological agents [4].
Use of hyaluronidase as an adjuvant therapy for improving the pharmacokinetic properties of coadministered drug is of particular interest as many of the regulatory bodies including US FDA have approved its use in humans [5]. The potential use of hyaluronidase as an adjunct could be exploited if a suitable carrier based delivery system for hyaluronidase is developed. This would allow coadministration of a second drug by directly incorporating them in the same delivery system and administering both drugs as a single dosage form. Few studies demonstrated that the efficacy of anticancer drug [6] and local anesthetics [4,7] could be improved by pretreating the target site with hyaluronidase.
Understanding the effect of variables on the outcome is important for developing a product with all the predetermined requirements [8]. This will also ensure that such products do not show any batch to batch variation [9]. The effect of process variables can be studied either by checking the effect of one independent variable on the dependent variable at a given time or by using statistical tools, namely, experimental design. In experimental design, all the independent variables are varied together such that the effect of their interaction on the dependent variable is also analyzed [10]. The recent regulatory requirement suggests that quality by design approach should be adopted to build "quality into the product" [11,12]. This can be achieved by defining and analyzing the effect of variables on the outcome using experimental design. The product developed using these statistical tools shows more consistency in terms of quality and reproducibility [13]. Experimental design using statistical tools such as factorial design and response surface methodology allows the researchers to rapidly study and optimize the conditions to achieve products with the best possible quality and attributes using the limited available resources and without losing any essential information that may affect the outcome [10,14].
Liposomal delivery of proteins is one of the widely explored methods for delivery of proteins. In the present study, hyaluronidase was encapsulated in liposomes. The effect of process variables on size, percentage drug encapsulation, and zeta potential was studied using factorial design and CCD.
Preparation of Liposomes.
Hyaluronidase-loaded liposomes were prepared using the thin film hydration method. Lipids (Table 1 gives the amount of phosphatidylcholine (PC) added during each experiment) were dissolved in 20 mL of chloroform taken in a 250 mL round-bottom flask. The flask was rotated continuously (rpm as per Table 1) as the solvent was removed by evaporation under vacuum at the temperature (T1) indicated in Table 1 (Buchi Rotavapor R-215), and the lipids were allowed to deposit as a thin film within the flask. Traces of solvent (if any) were removed through desiccation. The lipid layer was hydrated with phosphate-buffered saline (10 mL, pH 7.4) containing hyaluronidase (protein content equivalent to 0.5 mg) in an orbital shaker (Remi Laboratory Instruments, India, Model CIS-24 BL) at 150 rpm and 28 °C overnight.
Size Measurement.
Liposomes formed after overnight hydration were sonicated at 60% amplitude control (probe sonicator, Sonics and Materials Inc., USA) for a total duration of 30 s, consisting of three cycles. The duration of each cycle was 10 s, with a 10 s interval. The formulation was maintained on an ice bath during sonication. The liposomes were then transferred into polypropylene tubes and centrifuged at 450 × g for 3 min to remove the coarse particles. The supernatant was then centrifuged at 34,600 × g for 60 min (Sigma Laborzentrifugen 3K30, Germany) to collect the nanoparticles. The pellet obtained was used for determining the particle size and percentage drug encapsulation. For size measurement, the pellet was dispersed in ultrapure water (Milli-Q, Millipore) and analyzed using photon correlation spectroscopy (Malvern Zetasizer). Nanoliposomes were made to release the entrapped protein by treating them with 1% Triton X-100 [15], and the amount of protein was estimated using the Lowry method [16]. The amount of protein was calculated by comparing the absorbance (Biospec-1601, Shimadzu, Japan) with a standard curve plotted using bovine serum albumin.
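The protein assay step (absorbance read against a BSA standard curve) amounts to a linear calibration; a minimal sketch with hypothetical standard-curve values, since the actual calibration data are not given in the paper:

```python
import numpy as np

# Hypothetical BSA standard curve: concentration (mg/mL) vs. absorbance
conc_std = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])
abs_std  = np.array([0.00, 0.09, 0.19, 0.37, 0.55, 0.72])

slope, intercept = np.polyfit(conc_std, abs_std, 1)  # linear least-squares fit

def protein_conc(absorbance: float) -> float:
    """Back-calculate protein concentration (mg/mL) from the standard curve."""
    return (absorbance - intercept) / slope

# Hypothetical reading for protein released by 1% Triton X-100 treatment:
print(round(protein_conc(0.31), 3), "mg/mL")
```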
Experimental Design.
The effect of the variables involved in the preparation of hyaluronidase-loaded nanoliposomes was studied in two stages. First, the effect of PC, cholesterol, temperature during film formation (solvent evaporation) (T1), and speed of rotation of the flask (rpm) during film formation (SPR) on the percentage of drug (hyaluronidase) encapsulation was analyzed using a fractional factorial design. The design had eight runs (a 2^(4−1) half-fraction with zero centre points) for the selected four factors. The effect of the interaction of the two variables showing the maximum effect on protein encapsulation was further studied for its effect on protein encapsulation, size, and zeta potential using CCD. The levels of the other variables were fixed based on the results of the fractional factorial design. The CCD consisted of thirteen runs for the two selected factors. The alpha level was maintained at 1.414.
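The paper does not print the design matrices, but the stated run counts pin down the standard constructions: a 2^(4−1) fractional factorial (8 runs) and a two-factor rotatable CCD (4 factorial + 4 axial + 5 centre points = 13 runs, alpha = 1.414). A sketch in coded units; the D = ABC generator for the half-fraction is an assumption:

```python
import numpy as np
from itertools import product

# 2^(4-1) fractional factorial (8 runs) for PC, cholesterol, T1, SPR,
# built from a full 2^3 design with defining relation D = ABC (coded -1/+1):
base = np.array(list(product([-1, 1], repeat=3)))           # full 2^3 design
frac = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

# 13-run rotatable CCD for the two retained factors (PC, SPR), alpha = 1.414:
a = 1.414
ccd = np.vstack([
    list(product([-1, 1], repeat=2)),                       # 4 factorial points
    [[-a, 0], [a, 0], [0, -a], [0, a]],                     # 4 axial points
    np.zeros((5, 2)),                                       # 5 centre points
])
print(frac.shape, ccd.shape)                                # (8, 4) (13, 2)
```

Coded levels of -1/+1 map onto the low/high factor settings in Table 1; the axial points extend each factor to ±1.414 in coded units.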
Results and Discussion
Factorial studies showed that the outcome, that is, the percentage of protein encapsulated, was significantly affected by the amount of PC present in the system. One-way ANOVA showed that the effect of PC on the percentage of drug encapsulation was highly significant, with a p value less than 0.003. More than three-fourths of the outcome was contingent on the level of PC. The amount of protein that can be entrapped in a liposome depends on the size of the protein and the amount of free aqueous phase within the vesicle. The portion of the aqueous phase interacting with the lipid layer will be unavailable for the protein to occupy [17]. In the present study, the other independent variables did not have any significant effect on the outcome (p value > 0.05).
Although cholesterol as an independent variable did not have a significant effect on the outcome, two-way ANOVA showed that the interaction of PC with cholesterol has a significant influence on the outcome. This is due to the steric stability that cholesterol provides by controlling the fluidity of the lipid layer [18]. Inclusion of cholesterol is mandatory, particularly in the case of hydrophilic drugs, as cholesterol reduces the leakage of the entrapped drug besides improving the stability of the liposomes [19,20]. Interactions among the other independent variables did not have a significant effect on the outcome (Table 1). The outcome had a very high dependence on the level of PC, to the extent that it was not affected by the levels of the other independent variables, namely, T1 and SPR. However, when the levels of PC and cholesterol were maintained at their lowest levels, a higher amount of protein could be encapsulated if the SPR and T1 were maintained at a lower level (Figures 1(a) and 1(b)). When the level of lipids was high, the outcome, despite showing a small dependence on T1, was unaffected by the SPR. However, the true effect of T1 was dependent on its interaction with SPR (Figures 1(c) and 1(d)). While the p values of temperature and rpm were 0.933 and 0.886, their interaction had a p value of 0.536. This shows that, although the individual effects of T1 and SPR and their interaction on the outcome are insignificant, the effect of each on the outcome depends on the level of interaction between them. The effect of the two-way interactions of the independent variables on percentage drug encapsulation is given in Figures 2(a) and 2(b).
The CCD study showed that PC had a significant effect (p value < 0.05) on all the dependent variables, namely, percentage of protein encapsulation, particle size, and zeta potential (Table 2). However, the SPR had a significant influence only on size, and the other two dependent variables were not affected much by the SPR. The interaction between PC and SPR also had a significant effect on size (Table 2).
Although both independent variables (PC and SPR) could affect the outcome, SPR is a physical parameter whose effect will always depend on the level of PC. SPR affects the distribution and homogeneity of the film within the flask [21]. As the solvent is allowed to evaporate under vacuum, the lipids are deposited as a film within the flask. However, the distribution of the film will be affected by the SPR [22]. As a result, the size of the liposomes will be affected when the lipid layer is hydrated, since a uniform and thin film helps in producing uniform, small-sized particles. However, entrapment was mainly dependent on the amount of PC. In w/o liposomes, the amount of protein entrapped will depend on the amount of the aqueous layer. Although the aqueous phase will be sterically affected by the amount of lipid present in the lipid layer, an optimum level of PC (with cholesterol) is required to obtain stable liposomes without drug leakage.
The percentage of protein encapsulated in all the trials during CCD was between 8 and 10% (Table 2). The levels of the independent variables were decided based on the outcome of the factorial design (Table 1 and Figure 1). This could have contributed to the closeness in the values of the outcome. The surface plot (Figure 3(a)) showed that the percentage of protein encapsulated could be improved by maintaining the level of PC at its lowest level and the SPR either at its lowest or highest level. However, a particle size of less than 500 nm would be obtained only when the amount of PC is between 120 and 140 mg with SPR less than 150 rpm (Figure 3(b)). At these conditions, the zeta potential will be less than −50 mV (Figure 4). The effect of both independent variables (PC and SPR) on all three dependent variables can be clearly observed from the overlaid contour plot (Figure 4), which shows that a percentage protein encapsulation of above 8% could be obtained if the level of PC is maintained above 100 mg and the SPR below 150 rpm.
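Surface and contour plots like Figures 3 and 4 come from a second-order polynomial fitted to the CCD responses; a minimal sketch of such a fit by ordinary least squares, with hypothetical response values chosen only to lie in the reported 8-10% encapsulation range:

```python
import numpy as np

def fit_quadratic_rsm(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least squares."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Coded CCD settings (see the design sketch above) with hypothetical responses:
x1 = np.array([-1, -1, 1, 1, -1.414, 1.414, 0, 0, 0, 0, 0, 0, 0])   # PC
x2 = np.array([-1, 1, -1, 1, 0, 0, -1.414, 1.414, 0, 0, 0, 0, 0])   # SPR
y  = np.array([9.8, 9.5, 8.3, 8.6, 9.9, 8.1, 9.2, 9.0, 8.9, 9.0, 8.8, 9.1, 9.0])
print(fit_quadratic_rsm(x1, x2, y))   # b0, b1, b2, b12, b11, b22
```

Evaluating the fitted polynomial on a grid of coded (PC, SPR) values is all that is needed to reproduce a response surface or overlaid contour plot.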
The study on the effect of the independent variables on zeta potential showed that PC had a significant influence on the outcome (Table 2). SPR had more influence on the outcome (zeta potential) when the level of PC was maintained low (<45 mg). When the level of PC was maintained high, SPR had less influence on the outcome. The study showed that good stable liposomes would be obtained when the level of PC is either between 40 and 80 mg or between 110 and 140 mg with SPR as predicted in Figure 4, and excellent stability between 90 and 110 mg with SPR as per Figure 4. The size and zeta potential of the hyaluronidase-loaded nanoliposomes are shown in Figure 5.
Conclusion
Experimental design was used to identify the main variables with a significant effect, and the effect of interactions among the individual variables, on the outcome in terms of drug encapsulation efficiency, mean particle size, and zeta potential of nanoliposomes containing hyaluronidase. The fractional factorial study showed that phosphatidylcholine was the critical component whose level determines the percentage of protein encapsulation. The ideal conditions for the production of particles with the smallest possible size, maximum encapsulation efficiency, and good zeta potential were studied and identified using CCD. Under the optimized conditions (phosphatidylcholine, 140 mg; cholesterol, 1/5th of phosphatidylcholine; temperature during film formation, 50 °C; and speed of rotation of the flask during film formation, 150 rpm), the mean particle size was found to be 245 ± 9 nm, with a percentage protein encapsulation of 10 ± 2% and a zeta potential of −53.7 ± 3.5 mV.
Conflict of Interests
All the authors of this paper declare that there is no conflict of interests with any financial organization regarding the material discussed in this paper. | 3,057.6 | 2014-09-09T00:00:00.000 | [
"Biology"
] |
Thermally and Electrochemically Induced Electrode/Electrolyte Interfaces in Solid Oxide Fuel Cells: An AFM and EIS Study
In high-temperature solid oxide fuel cells (SOFCs), electrode/electrolyte interfaces play a critical role in the electrocatalytic activity and durability of the cells. In this study, thermally and electrochemically induced electrode/electrolyte interfaces were investigated on pre-sintered and in situ assembled (La0.8Sr0.2)0.90MnO3 (LSM) and La0.6Sr0.4Co0.2Fe0.8O3−δ (LSCF) electrodes on Y2O3–ZrO2 (YSZ) and Gd0.2Ce0.8O2 (GDC) electrolytes, using atomic force microscopy (AFM) and electrochemical impedance spectroscopy (EIS). The results indicate that the thermally induced interface is characterized by convex contact rings with a depth of 100–400 nm and a diameter in agreement with the particle size of the pre-sintered LSM and LSCF electrodes, while the electrochemically induced interfaces formed under cathodic polarization conditions on in situ assembled electrodes are characterized by particle-shaped contact marks or clusters (50–100 nm in diameter). The number and distribution of the contact clusters depend on the cathodic current density as well as on the electrode and electrolyte materials. The contact clusters on the in situ assembled LSCF/GDC interface are substantially smaller than those on the in situ assembled LSM/GDC interface, likely due to the high mixed ionic and electronic conductivities of LSCF materials. The results show that the electrochemically induced interface most likely results from the incorporation of oxygen species and cation interdiffusion under cathodic polarization conditions. However, the electrocatalytic activity of the electrochemically induced electrode/electrolyte interfaces is comparable to that of the thermally induced interfaces for the O2 reduction reaction under SOFC operation conditions.
The solid oxide fuel cell (SOFC) is one of the most efficient technologies for the conversion of the chemical energy of fuels such as hydrogen and natural gas directly into electrical power. It employs a solid oxide electrolyte such as yttria-stabilized zirconia (YSZ) that serves as an ionic conductor in the temperature range between 600 °C and 1000 °C. The electrolyte separates the cathode from the anode and conducts oxygen ions from the cathode to the anode, where they react electrochemically with the fuel. Electrons are then released to an external circuit, which provides a useful source of electrical power [6-10]. The cell performance and durability also depend strongly on the microstructural change at the interface under SOFC operation conditions [11,12]. The O2 reduction reaction on LSM cathodes is one of the most important and fundamental electrochemical processes in SOFCs [13-22]. Early studies show that polarization, either anodic or cathodic, can significantly alter/modify the interface, i.e., coarsening of the contact rings on the YSZ electrolyte surface [23]. Matsui et al. quantitatively studied the microstructural change at the LSM/YSZ interface under polarization by a focused ion beam-scanning electron microscopy technique and observed increased roughness of the YSZ surface and the formation of closed pores at the LSM/YSZ interface after cathodic current treatment at 300 mA cm−2 [24]. Further study showed that polarization at a much higher current density (e.g., 1.2 A cm−2) can cause densification of LSM and formation of nanopores at the interface region [25-27]. Chen et al. studied the thermal stability and reactivity between LSM and YSZ in LSM/YSZ composites using an in situ neutron diffraction technique [28]. The results indicate that the reaction between LSM and YSZ starts at 1100 °C, forming La2Zr2O7, SrZrO3, and MnO at the interface. In the case of LSM cathodes, electrochemical polarization also has a significant effect on the surface chemistry and segregation of LSM, in addition to the microstructure and interface between LSM and YSZ [23,25,29]. Based on the electrochemical behavior of LSM electrodes under polarization conditions, we proposed that the removal/incorporation of surface passive species such as SrO under cathodic polarization is responsible for the activation behavior of LSM for the O2 reduction reaction, while anodic polarization accelerates Sr surface segregation, resulting in the deactivation process [30,31]. The extraordinary effect of polarization on the surface composition and chemistry has been confirmed by Mutoro et al. on (La,Sr)CoO3 thin film electrodes by in situ synchrotron-based X-ray photoelectron spectroscopy (XPS) [32], and by Huber et al. on LSM by in situ XPS and secondary ion mass spectroscopy (SIMS) [33]. LSCF is a mixed ionic and electronic conducting (MIEC) perovskite [34-37], and its excellent MIEC properties imply that the O2 reduction reaction would occur in the electrode bulk, away from the three-phase boundary (TPB) [1]. However, LSCF is highly reactive toward the YSZ electrolyte, forming La2Zr2O7 and SrZrO3 [38]. To mitigate the reactivity problem, gadolinia-doped ceria (GDC) is used as the electrolyte or as a diffusion barrier in YSZ electrolyte systems [39]. Wang et al. studied the Sr and Zr diffusion in such an LSCF/GDC/YSZ system and observed enhanced Sr diffusion under polarization conditions [40]. One study
added 1% Bi2O3 into LSCF to form a dense LSCF film on the YSZ electrolyte and observed a significant increase in the interface contact area and a reduction in the interface ion transfer resistance between the cathode and electrolyte [41]. In the case of MIEC-type perovskite cathode materials like LSCF, studies on the electrode/electrolyte interface are relatively rare.
In this study, the formation/evolution of interfaces between the most common cathodes, such as LSM and LSCF perovskites, and electrolytes, such as YSZ and GDC, is investigated on both pre-sintered and in situ assembled electrodes. In the case of in situ assembled electrodes, electrode coatings were directly assembled at the test temperature of 800 °C without the conventional high-temperature sintering process. The results clearly demonstrate that electrochemical polarization can induce the electrode/electrolyte interfaces on LSM/YSZ and LSM/GDC and, to a lesser extent, on LSCF/GDC. The electrocatalytic activities of the electrochemically induced interfaces are comparable to or better than those of the thermally induced interfaces for the O2 reduction reaction.
Experimental
Materials and electrode preparation.-Zirconia electrolyte discs were prepared from 8 mol% Y2O3-ZrO2 (YSZ, Tosoh, Japan) by die-pressing, followed by sintering at 1550 °C in air for 4 h. Gadolinia-doped ceria (Gd0.2Ce0.8O2, GDC) was synthesized by solid-state reaction from a mixture of CeO2 (99.9%, Sigma-Aldrich) and Gd2O3 (99.9%, Sigma-Aldrich). GDC powders were ball-milled and die-pressed into discs, followed by sintering at 1600 °C for 4 h in air. The electrolyte discs were ∼1 mm thick and ∼20 mm in diameter. A-site-deficient (La0.8Sr0.2)0.90MnO3 (LSM) powder was prepared by a coprecipitation technique and sintered at 900 °C. XRD showed the formation of a single perovskite phase in the powder. LSM electrode ink was prepared and applied at the center of the YSZ electrolyte discs by slurry painting and sintered at 1150 °C for 2 h in air. The as-sintered LSM electrodes were denoted as pre-sintered LSM. The area of the LSM electrode was ∼0.5 cm² and the thickness of the coating was ∼15 μm. Platinum paste was symmetrically painted on the center of the opposite side of the YSZ discs to form the counter electrode, and the reference electrode was painted as a ring around the counter electrode. The gap between the counter and reference electrodes was ∼4 mm. The platinum electrodes were sintered at 900 °C for 2 h.
In situ assembled LSM electrodes without pre-sintering were also prepared. In this case, LSM electrode ink was applied to a YSZ electrolyte disc with pre-sintered Pt counter and reference electrodes. After drying, the LSM electrodes were assembled in the test rig for electrochemical testing. LSM electrodes without pre-sintering were denoted as in situ assembled LSM. Pre-sintered and in situ assembled LSM electrodes were also prepared on GDC electrolytes, in a similar way to the pre-sintered and in situ assembled LSM/YSZ cells. La0.6Sr0.4Co0.2Fe0.8O3−δ (LSCF, Fuel Cell Materials) electrode ink was applied to GDC electrolyte discs by slurry painting. Pre-sintered LSCF electrodes were prepared by sintering the as-painted LSCF electrodes at 1100 °C in air for 2 h. In situ assembled LSCF electrodes were obtained by direct assembly of the as-painted LSCF electrodes on GDC electrolyte substrates with pre-sintered Pt counter and reference electrodes in the fuel cell test rig, without pre-sintering at 1100 °C in air. Pt mesh was used as the current collector for the working and counter electrodes.
Characterization.-Electrochemical polarization and impedance measurements were conducted in a three-electrode setup. Early studies show that cell configurations with symmetric electrode geometry and the reference electrode positioned at the side of the working electrode, away from the exits of the fuel and oxidant gases, are suitable for the accurate performance evaluation of planar SOFCs, and that the cathodic and anodic polarizations can be accurately separated if the thickness of the electrolyte is ≥ ∼250 μm [42]. The LSM or LSCF cathodes were polarized at current densities of 100, 200, 500, or 1000 mA cm−2 in air for 3 h at 800 °C. The cathodic potential was measured against a Pt air reference electrode. The polarization was interrupted to make the electrochemical impedance spectroscopy (EIS) measurements. EIS spectra were obtained using a Solartron 1260 frequency response analyzer in conjunction with a Solartron 1287 electrochemical interface, from 100 kHz to 0.1 Hz at a signal amplitude of 10 mV. The electrode polarization resistance, R_E, was measured from the difference between the high- and low-frequency intercepts. The electrode ohmic resistance, RΩ, was obtained from the high-frequency intercepts on the impedance curves.
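Reading RΩ and R_E off the real-axis intercepts can be mimicked numerically; a crude sketch on a synthetic single-arc spectrum (an ohmic resistance in series with a parallel R-C element), not the authors' actual analysis procedure:

```python
import numpy as np

def intercept_resistances(freq_Hz, Z):
    """Estimate ohmic and polarization resistances from a single impedance arc.

    R_ohm is taken as the high-frequency real-axis intercept and R_E as the
    difference between the low- and high-frequency intercepts, as in the text.
    Real spectra are normally fitted to an equivalent circuit instead.
    """
    order = np.argsort(freq_Hz)[::-1]          # sort from high to low frequency
    Z = np.asarray(Z, dtype=complex)[order]
    r_ohm = Z.real[0]                          # high-frequency intercept
    r_total = Z.real[-1]                       # low-frequency intercept
    return r_ohm, r_total - r_ohm

# Synthetic R_ohm + (R_E || C) arc sampled from 100 kHz to 0.1 Hz:
f = np.logspace(5, -1, 61)
w = 2 * np.pi * f
Z = 2.0 + 5.0 / (1 + 1j * w * 5.0 * 1e-3)      # R_ohm = 2, R_E = 5, C = 1 mF
print(intercept_resistances(f, Z))             # ~ (2.0, 5.0) in ohm (or ohm cm^2)
```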
The topography and microstructural changes at the electrode/electrolyte interfaces before and after the polarization treatments were investigated using tapping-mode atomic force microscopy (AFM). The examination was done under ambient conditions using a Digital Instruments Nanoscope IIIA microscope. To examine the electrode/electrolyte interface, the electrode coatings were removed by immersion in 1 M HCl solution for 30 min in an ultrasonic bath at room temperature. Prior to AFM examination, samples were treated with ethanol in an ultrasonic bath at room temperature for ∼5 minutes and then cleaned thoroughly with distilled water. This was done to remove any residues or impurities that may have stained the electrode/electrolyte interface. Care was exercised when handling the samples to avoid any accidental contamination of the electrode/electrolyte interface.
Results and Discussion

Figure 1 shows the impedance responses of the pre-sintered LSM cathodes on YSZ electrolyte as a function of polarization time, measured under different current densities at 800 °C. R_E decreases significantly with cathodic current passage. For example, after polarization at 500 mA cm−2 for 5 and 180 min, R_E is 4.3 and 0.8 Ω cm², respectively [44,45]. The reduction in R_E also depends on the current density. However, as shown in Fig. 1b, further increase of the current above 200 mA cm−2 has a negligible effect on the reduction of R_E for the reaction at the pre-sintered LSM/YSZ interface. As expected, RΩ is more or less constant and does not change with the cathodic polarization current passage.
Figure 2 shows the impedance responses of the in situ assembled LSM cathodes on YSZ electrolyte as a function of polarization time, measured under different current densities at 800 °C. The O2 reduction reaction on the in situ assembled LSM/YSZ interface is characterized by a very large and depressed impedance arc with no clear separation of impedance arcs at low and high frequencies. The initial R_E for the in situ assembled LSM cathode without high-temperature pre-sintering is 36-42 Ω cm², which is substantially higher than the 14-17 Ω cm² obtained on pre-sintered LSM electrodes measured under identical conditions. However, similar to that observed for the reaction on pre-sintered LSM, the size of the impedance arc decreases significantly with cathodic polarization. After applying a cathodic current of 500 mA cm−2 for 5 min, R_E of the in situ assembled LSM decreased from 37.1 Ω cm² to 5.6 Ω cm², an 85% reduction in R_E. At the end of the polarization treatment (500 mA cm−2 for 180 min), R_E is reduced to 0.4 Ω cm², substantially smaller than the initial R_E before the cathodic polarization (Fig. 2b). This value is also smaller than the 0.8 Ω cm² for the O2 reduction reaction on the pre-sintered LSM/YSZ interface polarized under the same current density for 3 h (Fig. 1a). As shown in Fig. 2d, further increase of the cathodic current density to 1000 mA cm−2 does not bring significant benefits to the reduction in R_E. The impedance behavior under cathodic polarization shows the typical activation process of LSM-based cathodes under cathodic polarization (Fig. 2d) [30], similar to that observed on pre-sintered LSM electrodes (Fig. 1).
Figure 3 compares the polarization behavior of the pre-sintered and in situ assembled LSM cathodes on YSZ electrolyte for the O2 reduction reaction, measured at 500 mA cm−2 and 800 °C. In the case of the reaction on the pre-sintered LSM electrode, the cathodic polarization potential (ECathode) is characterized by a rapid decrease at the initial stage of cathodic current passage, followed by a region where the decrease in ECathode is much slower. RΩ is essentially constant and does not change with the cathodic polarization (Fig. 3a). The polarization potential for O2 reduction on the in situ assembled LSM electrode behaves similarly to that on the pre-sintered LSM electrodes. The only visible difference from the reaction on pre-sintered LSM is an initial decrease of RΩ for the reaction on the in situ assembled LSM electrode (Fig. 3b). The initial RΩ is 3.33 Ω cm2 and decreases to 2.1 Ω cm2 after polarization at 500 mA cm−2 for 15 min, becoming more or less stable with further current passage. This indicates that the cathodic polarization current improves the intimate contact between the assembled LSM particles and the YSZ electrolyte. The significant decrease of ECathode and RE under cathodic current passage is consistent with the activation behavior of LSM cathodes30,33,[43][44][45] and confirms the activation effect of the cathodic polarization on the electrochemical activity of both in situ assembled and pre-sintered LSM electrodes.
Figure 4 shows the AFM micrographs of the YSZ electrolyte surface in contact with the pre-sintered LSM electrodes after cathodic polarization at different current densities and 800 °C. Before the electrochemical polarization, the AFM micrographs clearly show the formation of contact convex rings with sharp edges on the surface of the YSZ electrolyte grains (Figs. 4d and 4e). The depth of the rings is 100-400 nm, with widths in the range of 70 to 100 nm. The diameter of the majority of the convex rings is in the range of 0.6-1.0 μm, in agreement with the particle size of pre-sintered LSM. The convex contact rings are formed during the sintering stage of the LSM electrodes, as these electrodes had not yet been subjected to the polarization treatment. The formation of contact convex rings at the LSM/YSZ interface is most likely due to cation interdiffusion between YSZ and LSM during the high-temperature sintering.25 As an A-site-deficient composition is used in the present study, the formation of lanthanum zirconate is expected to be suppressed compared with stoichiometric LSM.46 After the cathodic polarization treatment for 3 h, the sharp edges of the contact rings disappear, and the rings are flattened and grow outward. The growth and widening of the convex rings increase with increasing cathodic polarization current density (see Fig. 4c), consistent with previous results.23
Figure 5 shows the AFM micrographs of the in situ assembled LSM/YSZ interface before and after the cathodic polarization treatment at 800 °C. Before the polarization, the YSZ electrolyte surface is clean, with smooth YSZ grains and grain boundaries (Fig. 5e). After polarization in air for 3 h, there is a clear formation of nano-sized contact marks or islands on the surface of the YSZ electrolyte, as shown by the circles in the figure. With increasing cathodic polarization current density, the number of contact marks or islands grows. In the case of in situ assembled LSM on YSZ electrolyte polarized at 1000 mA cm−2 for 3 h, the YSZ electrolyte surface is covered by contact clusters (Fig. 5c). The clusters consist of numerous particle-shaped contact marks with sizes of 50-100 nm (Fig. 5d). The YSZ electrolyte surface between the contact clusters is clean, indicating that the contact clusters are the contact points between the in situ assembled LSM particles and the YSZ electrolyte. This clearly indicates that the contact marks or clusters formed on the surface of the YSZ electrolyte are induced by the electrochemical polarization current generated by the oxygen reduction reaction.
LSM/GDC interface.-Figure 6 shows the impedance curves for the O2 reduction reaction on the pre-sintered LSM cathode on GDC electrolyte under different current densities at 800 °C. The impedance responses for the O2 reduction reaction on the pre-sintered LSM/GDC interface are characterized by two separated impedance arcs at low and high frequencies. Similar to that observed on the pre-sintered LSM/YSZ interface, RE decreases rapidly upon the application of a cathodic polarization, while RΩ is independent of the cathodic polarization current treatment. The effect of the cathodic polarization is primarily a reduction of the electrode polarization resistance associated with the low-frequency arc.
The impedance responses for the O2 reduction reaction on the in situ assembled LSM cathode on GDC electrolyte appear to be different from those on the pre-sintered LSM/GDC interface, as shown in Fig. 7. The size of the impedance arcs decreases with the cathodic polarization current passage time, but the reduction in RE is gradual and substantially slower than that observed on the pre-sintered LSM/GDC interface. For example, the initial RE for the reaction on an in situ assembled LSM/GDC interface is 3 Ω cm2 and decreases to 2.3 Ω cm2 after polarization at 100 mA cm−2 for 5 min. The final RE is 1.3 Ω cm2 after polarization for 3 h, a 43% reduction in RE. In the case of the pre-sintered LSM/GDC interface, the initial RE for the reaction is 20 Ω cm2 and decreases to 5.3 and 2.5 Ω cm2 after polarization at 100 mA cm−2 for 5 and 180 min, an 88% reduction in RE. Similar to that observed on the in situ assembled LSM/YSZ interface, RΩ also decreases with cathodic current passage. For example, in the case of an in situ assembled LSM/GDC interface, the initial RΩ is 1.9 Ω cm2 and decreases to 1.05 Ω cm2 after polarization at 1000 mA cm−2 for 3 h.
Figure 8 shows the AFM micrographs of the GDC electrolyte surface in contact with the pre-sintered LSM electrodes after cathodic polarization at different current densities and 800 °C. There is a significant change of the contact convex rings on the GDC electrolyte surface, similar to that observed on the pre-sintered LSM/YSZ interface. After the polarization treatment, the convex rings are roughened, with widened and broken edges. The significant change in the contact convex rings indicates that the O2 reduction reaction primarily occurs at the TPB, consistent with the similar polarization and impedance behavior of the reaction observed on pre-sintered LSM electrodes on both GDC and YSZ electrolytes.
Figure 9 shows the AFM micrographs of the GDC electrolyte surface in contact with the in situ assembled LSM electrodes after cathodic polarization at different current densities and 800 °C for 3 h. Similar to that observed on the in situ assembled LSM/YSZ interface, contact marks or clusters are also formed on the GDC electrolyte surface, and the number of contact clusters increases with the polarization current density. However, in addition to the formation of the contact marks, there is also formation of nano-indents on the GDC surface (Fig. 9d). Contact clusters and nano-indents appear to spread uniformly over the whole GDC electrolyte surface. This can be seen more clearly for the reaction on the in situ sintered LSM/GDC interface after cathodic current passage at 1000 mA cm−2 (Figs. 9c and 9d). The more uniformly formed nano-sized contacts at the LSM/GDC interface may explain the much slower and more gradual reduction of RE as a function of the cathodic polarization time, as compared with that on the in situ assembled LSM/YSZ interface.
LSCF/GDC interface.-Figure 10 shows the impedance responses for the O2 reduction reaction on pre-sintered LSCF cathodes on GDC electrolyte under cathodic polarization at 800 °C. The impedance responses of pre-sintered LSCF electrodes are characterized by a depressed arc and an inductance loop at high frequencies, which is mainly induced by the contact Pt wires and the high-temperature furnace wires.47,48 The change in the size of the impedance arcs with the cathodic polarization time at 800 °C is very small, very different from that observed on the pre-sintered LSM/GDC interface (see Fig. 6). As shown in Fig. 10a, RE changes slightly from 0.10 to 0.12 Ω cm2, while RΩ decreases from 1.54 to 1.20 Ω cm2 after polarization at 100 mA cm−2 for 3 h. The variation of the RΩ value with the cathodic polarization conditions has been commonly observed for pre-sintered LSCF electrodes.49
Similar to that observed for the pre-sintered LSM/GDC interface, contact convex rings were also observed on the GDC electrolyte surface after removal of the pre-sintered LSCF electrodes (Figs. 11d and 11e). The size of the convex contact rings is in the range of 0.2-0.5 μm, smaller than that obtained on the pre-sintered LSM/GDC interface. The smaller contact rings are due to the finer LSCF electrode powders. There are some changes in the morphologies of the convex contact rings on the GDC electrolyte surface after the cathodic polarization treatment; however, the change in morphology and topography is minor, and the convex rings remain essentially intact, very different from the broken rings observed on the pre-sintered LSM/GDC interface under identical polarization conditions. LSCF is a typical mixed ionic and electronic conductor (MIEC).[51][52][53][54][55][56] This in turn indicates that the O2 reduction reaction would occur mainly in the bulk of the electrodes, away from the electrode/electrolyte interface.
Figure 12a shows the impedance responses of the O2 reduction reaction on the in situ assembled LSCF electrode on GDC electrolyte under a cathodic current of 1000 mA cm−2 at 800 °C. Despite the high polarization current, the electrode impedance of the in situ assembled LSCF behaves very similarly to that of the pre-sintered LSCF. RE remains more or less the same, while RΩ is reduced from 1.48 to 1.07 Ω cm2 after polarization for 3 h. The initial RE is 0.13 Ω cm2 and decreases slightly to 0.10 Ω cm2 after polarization at 1000 mA cm−2 for 3 h. This is very close to the 0.12 Ω cm2 observed on the pre-sintered LSCF/GDC interface (Fig. 10), indicating that the electrochemical activity of the in situ assembled LSCF/GDC interface is almost the same as that of the pre-sintered LSCF/GDC interface. Contact marks or clusters are also formed on the GDC electrolyte surface after polarization at 1000 mA cm−2 (Figs. 12b and 12c). The contact marks are ∼0.6 μm in size and are distributed over the GDC electrolyte surface. However, the number and density of the contact marks appear to be much lower than those observed on the in situ assembled LSM/GDC interface under similar polarization conditions (see Figs. 9c and 9d).
Thermally vs electrochemically induced interfaces.-The observation of convex contact rings for the pre-sintered LSM/YSZ, LSM/GDC and LSCF/GDC electrode/electrolyte systems in this study indicates the formation of an electrode/electrolyte interface induced by the high-temperature sintering process. A study by Horita et al. showed that convex rings on the YSZ electrolyte surface have significantly higher concentrations of manganese compared with the flat boundary between LaMnO3 and YSZ, and that an amorphous layer exists at the LaMnO3 film/YSZ interface due to cation interdiffusion of La, Y, Zr and Mn.57 Tan et al. prepared a La0.9Ba0.1MnO3 film on a single-crystal YSZ substrate by magnetron sputtering and also observed the formation of an amorphous intermediate layer about 3 nm thick between the film and YSZ.58 Thus, the formation of convex contact rings at the pre-sintered interfaces can be attributed to the cation interdiffusion between the LSM and LSCF perovskite electrodes and the YSZ and GDC electrolytes, thermally induced at high temperatures. The size of the contact convex rings is in agreement with the particle size of the pre-sintered LSM and LSCF electrodes after the high-temperature sintering process.
The significant reduction in RE for the reaction on the pre-sintered LSM/YSZ and LSM/GDC interfaces and the corresponding changes in the morphology and topography of the convex contact rings are consistent with earlier results showing that the convex contact rings are the TPB for the O2 reduction reaction.23 The morphology change of the convex rings at the pre-sintered LSM/YSZ and LSM/GDC interfaces is most likely due to the incorporation of oxygen and/or cation interdiffusion, such as of manganese, between LSM and YSZ or GDC, as the convex rings at the interface provide short diffusion paths, as shown by isotopic oxygen exchange and SIMS studies.17,59,60 On the other hand, the negligible changes in the convex contact rings at the pre-sintered LSCF/GDC interface indicate that the O2 reduction reaction occurs predominantly in the electrode bulk away from the TPB, owing to the high MIEC properties of LSCF.50,61 In contrast with the substantial activation effect of the cathodic polarization on the electrochemical activities of pre-sintered LSM electrodes, the changes in RE for the O2 reduction reaction on pre-sintered LSCF electrodes under cathodic polarization current passage are negligible (Fig. 10). This is in excellent agreement with the negligible effect of the polarization on the morphology of the convex contact rings at the pre-sintered LSCF/GDC interface.
Different from the situation at the pre-sintered electrode/electrolyte interfaces, there is no pre-formed interface between the electrode and electrolyte before the cathodic current passage in the case of the in situ assembled electrodes. The interfaces could not be thermally induced or formed at the test temperature used in this study, i.e., 800 °C. This is supported by the clean and smooth surfaces of the YSZ and GDC electrolytes prior to the polarization treatment (Figs. 5e and 9e). The gradual increase of the contact marks and clusters on the YSZ and GDC electrolyte surfaces of the in situ assembled LSM also indicates that the interfaces between in situ assembled electrodes such as LSM and LSCF and electrolytes such as YSZ and GDC can only be formed or induced under the cathodic polarization conditions, i.e., by the electrochemical processes associated with the O2 reduction reaction.
A recent in operando study by Fang et al. using synchrotron-based ambient-pressure XPS showed that the oxygen-ion incorporation at the surface of ceria for the water-splitting and hydrogen evolution reactions is fast.62 Such formation of the interface is clearly affected by the oxygen activity at the contact points between the electrode and electrolyte, as shown by the increase in contact clusters with increasing cathodic polarization current (Figs. 5 and 9). The emergence of the contact clusters at the in situ assembled LSM/YSZ, LSM/GDC and LSCF/GDC interfaces can be explained by the incorporation of oxygen and the cation interdiffusion between electrode and electrolyte, similar to that of the thermally induced interfaces in the pre-sintered electrode/electrolyte systems. Electrochemical polarization can promote the interaction between the electrode and electrolyte in SOFCs and has a significant effect on the surface segregation and surface composition of perovskite oxide electrodes. Mutoro et al. reported not only depletion of the LSM surface in both Sr and Mn under cathodic polarization, but also found that cathodic polarization promotes the spreading or diffusion of segregated Sr and Mn onto the electrolyte.33 The diffusion of manganese oxide out of the LSM electrode onto the YSZ electrolyte surface under the influence of cathodic polarization has also been reported by Backhaus-Ricoult et al.63 We studied the effect of polarization on the chemical reactivity between Ba0.5Sr0.5Co0.8Fe0.2O3-δ (BSCF) and GDC and observed that under cathodic polarization conditions, the reaction between BSCF and GDC occurs at temperatures as low as 700 °C, forming Ba-containing particles at the interface.64
In the case of in situ assembled LSM electrodes, the O2 reduction reaction can only take place at the TPB, similar to pre-sintered LSM electrodes. Thus, the incorporation of oxygen and the cation interdiffusion between La, Y, Zr, Mn and Ce would occur at the contact points between the in situ assembled fine LSM particles and the YSZ or GDC electrolyte, which in turn induces the morphology change at the interface, similar to the formation of convex contact rings in the case of the pre-sintered LSM/YSZ and LSM/GDC interfaces. The formation of contact clusters is clearly due to the much smaller particle size of the in situ assembled LSM electrode coatings, compared with the agglomerated LSM grains of the pre-sintered ones. Very different from the in situ assembled LSM electrodes, the number and density of the contact clusters for the in situ assembled LSCF electrode on GDC electrolyte are much smaller (Fig. 12). This is most likely due to the fact that the O2 reduction reaction on LSCF perovskites is not limited to the TPB. Consequently, the driving force for the oxygen incorporation, and in particular the cation interdiffusion between La, Sr, Co and Ce at the interface contact points, is much smaller under the cathodic polarization conditions, even though it is well known that both Sr and Co segregate to the LSCF surface.65 The substantial differences in the formation of the contact clusters on the GDC electrolyte for the reaction on the in situ assembled LSM and LSCF electrodes demonstrate that electrochemically induced interface formation depends strongly on the nature of the electrode materials.
The most important observation from the present study is that, despite the significant differences in the microstructure of the interfaces, the electrochemically induced interfaces behave very much like the thermally induced interfaces for the O2 reduction reaction. This is evidently supported by the similar polarization behavior between the pre-sintered LSM/YSZ, LSM/GDC and LSCF/GDC interfaces and their in situ assembled counterparts. For example, in the case of the in situ assembled LSM/YSZ electrodes, the cathodic polarization current shows a significant activation effect on the electrocatalytic activity for the O2 reduction reaction (Fig. 2), identical to the reaction on pre-sintered LSM/YSZ electrodes (Fig. 1). RE of the in situ assembled LSM/YSZ interface after polarization at 500 mA cm−2 for 3 h is 0.4 Ω cm2, which is actually lower than the 0.8 Ω cm2 for the O2 reduction reaction on the pre-sintered LSM/YSZ interface polarized under the same conditions. Similar electrochemical performance was also observed between the pre-sintered and in situ assembled LSM/GDC and LSCF/GDC interfaces. This in turn implies that electrochemically induced interfaces, and their profound effect on the electrocatalytic activities, are important considerations in the fundamental understanding of the microstructure and electrochemical performance durability of SOFC cathodes.
Conclusions
In this study, thermally and electrochemically induced electrode/electrolyte interfaces were investigated on pre-sintered and in situ assembled LSM and LSCF electrodes on YSZ and GDC electrolytes. The results indicate that the thermally induced interface is characterized by convex contact rings with a depth of 100-400 nm and diameters in agreement with the particle size of the pre-sintered LSM and LSCF electrodes. On the other hand, under cathodic polarization conditions, electrode/electrolyte interfaces can also be formed on the in situ assembled electrodes. Different from the pre-sintered electrode/electrolyte interfaces, the electrochemically induced interfaces are characterized by particle-shaped contact marks or clusters, and the number and distribution of the contact clusters depend on the cathodic current density as well as on the electrode and electrolyte materials. For example, the number of nanoparticle contact clusters on the in situ assembled LSCF/GDC interface is substantially smaller than that on the in situ assembled LSM/GDC interface, due to the high MIEC properties of the LSCF material. The results indicate that the electrochemically induced interface is most likely caused by the incorporation of oxygen species and cation interdiffusion between electrode and electrolyte under the influence of cathodic polarization. The electrocatalytic activities of the electrochemically and thermally induced electrode/electrolyte interfaces for the O2 reduction reaction are comparable, despite the substantial differences in the topography and microstructure of the interfaces.
Figure 1. (a) Impedance curves of the O2 reduction reaction on pre-sintered LSM cathodes on YSZ electrolyte under a cathodic polarization current of 500 mA cm−2 at 800 °C, and (b) the change of RE as a function of polarization time, measured at different current densities and 800 °C.
Figure 2. Impedance curves of the O2 reduction reaction on in situ sintered LSM cathodes on YSZ electrolyte as a function of cathodic polarization current: (a) 100 mA cm−2, (b) 500 mA cm−2, and (c) 1000 mA cm−2 at 800 °C. The change of RE as a function of polarization time is shown in (d).
Figure 3. Polarization curves of the O2 reduction reaction on (a) pre-sintered and (b) in situ sintered LSM cathodes on YSZ electrolyte, measured at 500 mA cm−2 and 800 °C in air.
Figure 4. AFM micrographs of the YSZ electrolyte surface in contact with pre-sintered LSM electrodes after polarization at (a) 100 mA cm−2, (b) 500 mA cm−2, and (c) 1000 mA cm−2 at 800 °C for 3 h. The LSM electrode was pre-sintered at 1150 °C in air before the test and was removed by HCl treatment. The YSZ surface in contact with a pre-sintered LSM electrode prior to the polarization treatment is given in (d, e).
Figure 6. Impedance curves of oxygen reduction on pre-sintered LSM cathodes on GDC electrolyte as a function of cathodic polarization current: (a) 100 mA cm−2, (b) 200 mA cm−2, and (c) 500 mA cm−2 at 800 °C. The change of RE as a function of polarization time is shown in (d).
Figure 7. Impedance curves of oxygen reduction on in situ sintered LSM cathodes on GDC electrolyte as a function of cathodic polarization current: (a) 100 mA cm−2, (b) 500 mA cm−2, and (c) 1000 mA cm−2 at 800 °C. The change of RE as a function of polarization time is shown in (d).
Figure 8. AFM micrographs of the GDC electrolyte surface in contact with pre-sintered LSM electrodes after cathodic polarization at different current densities and 800 °C for 3 h.
Figure 11. AFM micrographs of the pre-sintered LSCF/GDC interfaces after polarization at (a) 100 mA cm−2, (b) 200 mA cm−2, and (c) 500 mA cm−2 at 800 °C for 3 h. The LSCF electrode was sintered at high temperature and was removed by HCl treatment after the polarization treatment. GDC in contact with the pre-sintered LSCF electrode before the polarization treatment is given in (d, e).
Figure 12. (a) Impedance curves of oxygen reduction on the in situ sintered LSCF cathode on GDC electrolyte under a cathodic polarization current of 1000 mA cm−2 at 800 °C, and (b, c) corresponding AFM micrographs after the polarization treatment. The LSCF electrode was removed by HCl treatment.
"Materials Science"
] |
Calpain-5 gene expression in the mouse eye and brain
Objective Our objective was to characterize CAPN5 gene expression in the mouse central nervous system. Mouse brain and eye sections were probed with two high-affinity RNA oligonucleotide analogs designed to bind CAPN5 RNA and one scramble, control oligonucleotide. Images were captured in brightfield. Results CAPN5 RNA probes were validated on mouse breast cancer tumor tissue. In the eye, CAPN5 was expressed in the ganglion cell, inner nuclear and outer nuclear layers of the retina. Signal could not be detected in the ciliary body or the iris because of the high density of melanin. In the brain, CAPN5 was expressed in the granule cell layers of the hippocampus and cerebellum. There was scattered expression in pons. The visual cortex showed faint signal. Most signal in the brain was in a punctate pattern.
An important question for understanding how CAPN5 leads to disease is identifying which tissues CAPN5 is expressed in and at what levels. Previous studies have used RT-PCR to detect relative levels of CAPN5, showing a wide expression profile in both rat and human brains [23,24]. Others have used immunohistochemistry (IHC) to detect CAPN5 in the retina of mice, where the detected expression varied with the antibody used [20,22,25]. Although these were important initial studies, a more complete picture is needed to understand CAPN5 expression in the central nervous system. In situ hybridization (ISH) offers some advantages over RT-PCR and IHC. ISH allows mRNA levels to be detected in an intact tissue, something RT-PCR does not. It is more specific because it produces an image that differentiates between specific cell types within a tissue, and even between cellular compartments within a cell type. ISH can also complement IHC data by detecting mRNA expression, whereas IHC detects protein expression. Additionally, while multiple antibodies are available for CAPN5, they have previously been shown to give different expression patterns [22,25].
For this reason, mRNA in situ hybridization experiments were performed to identify the expression of CAPN5 in the mouse eye and brain.
Animal Care and euthanasia
Ten-week-old C57BL/6 mice were procured from the Toronto Centre for Phenogenomics. Mice were housed on a standard 12 h light/dark cycle. Healthy mice were euthanized by lethal intraperitoneal injection of sodium pentobarbital. Once the animals were deeply anesthetized, the thoracic cavity was opened by a ventral midline incision and a small cut was made in the right atrium for blood outflow. 10 mL of PBS was perfused through the left ventricle via a 25 G needle, followed by perfusion with 10% buffered formalin. Organs were harvested and fixed overnight at 4 °C.
Tissue description and treatment
Tissue sections were cut on a Microm HM 355S microtome (ThermoFisher, Waltham, MA) to a thickness of 10 microns. Formalin-fixed paraffin-embedded sections from C57Bl/6 mice were deparaffinized for 5 min in xylene, immersed in 100% ethanol for 5 min then air-dried. Treatment was with Bond Epitope Retrieval Solution 2 (AR9640, Leica Biosystems, Buffalo Grove, IL) for 30 min.
In situ hybridization with LNA probes
High-affinity RNA oligonucleotide analogs (Locked Nucleic Acid, LNA™, Exiqon, Denmark) were designed to bind CAPN5 RNA. The proprietary Exiqon LNA™ probe designer software was used to design custom probes targeting CAPN5 while limiting non-specific binding. A probe cocktail of calpain-5 probe-1 (5DigN/TGATACACAGCGGAAGTGGT) and calpain-5 probe-2 (5DigN/ACCAGAGGCAGAGTGTAACAGT), and a scramble-miR control (5DigN/GTGTAACACGTCTATACGCCCA), were prepared according to the manufacturer's recommended conditions (Exiqon, Denmark), and each was labeled at the 5′ end with digoxigenin [26]. All experimental tissue sections were probed with a cocktail of both probes 1 and 2.
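As a side note on probe design, simple sequence properties such as length and GC fraction are among the quantities a designer tool balances; the snippet below computes them for the three sequences listed above (the actual Exiqon design software is proprietary, so this is only an illustrative check):

```python
# Probe sequences as given above (DNA alphabet of the LNA oligos).
probes = {
    "capn5_probe_1": "TGATACACAGCGGAAGTGGT",
    "capn5_probe_2": "ACCAGAGGCAGAGTGTAACAGT",
    "scramble_miR":  "GTGTAACACGTCTATACGCCCA",
}
for name, seq in probes.items():
    gc = (seq.count("G") + seq.count("C")) / len(seq)  # GC fraction
    print(f"{name}: {len(seq)} nt, GC = {gc:.0%}")
```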
Hybridization and washing procedures
Probes were resuspended in 10 μl then diluted 1:25 in Enzo hybridization buffer (ENZ-33808, Enzo Life Sciences, Farmingdale, NY), placed on tissue sections, covered with polypropylene coverslips and heated to 60 °C for 5 min, followed by hybridization at 37 °C overnight. Sections were washed in intermediate stringency solution (0.2× SSC with 2% bovine serum albumin) at 55 °C for 10 min.
Immunohistochemistry and color development
Sections were treated with anti-digoxigenin-alkaline phosphatase conjugate (1:150 dilution in pH 7 Tris buffer; Roche, Switzerland) at 37 °C for 30 min. Development was carried out with NBT/BCIP (34042, ThermoFisher, Waltham, MA), closely monitored and stopped when the control sections appeared light blue. Development time with the chromogen was between 15 and 30 min. Sections were counterstained with nuclear fast red (N3020, Sigma-Aldrich, St. Louis, MO) for 3-5 min, rinsed and mounted with coverslips.
Imaging
Brightfield images for Figs. 1 and 2a, b, f-i were extracted from Leica ScanScope XT slide scans (Leica Biosystems, Buffalo Grove, IL) or captured on a Zeiss Axio A1 (Zeiss Microscopy, Germany). Images for Fig. 2c-e, j-m were taken on a brightfield microscope (Zeiss Microscopy, Germany). All images were saved in jpeg format.
Results
To better understand CAPN5 expression in the central nervous system, an in situ hybridization assay was developed. Two CAPN5 oligonucleotide probes were designed, along with a negative control scramble oligonucleotide probe (Fig. 3a). To validate the assay, a cocktail of CAPN5 probes 1 and 2 was first applied to mouse breast cancer sections (determined histologically from a tumor), since calpain expression is linked to a variety of cancers [27][28][29]. CAPN5 expression was detected in the cancerous tissue but not in the normal tissue (Fig. 3b). No signal was observed using the scramble probe (data not shown).
Next, we studied CAPN5 expression in the retina (Fig. 1). Previously, we reported that the detection of protein expression varied depending on which antibody was used [20,22,25]. CAPN5 mRNA signal was detected in the ganglion cell, inner nuclear and outer nuclear layers of the retina. This corroborates our most recent IHC data indicating that CAPN5 is expressed in all layers of the retina, even though the phenotype is largely restricted to the photoreceptors [25]. Signal was not detected in the lens or cornea (data not shown). Because of the high density of melanin in the ciliary body, iris, and RPE, the in situ probe signal could not be ascertained in these structures, but previous IHC studies did not identify CAPN5 protein there.
Punctate signal was seen in the hippocampus and cerebellum. In each of these sites, the signal was localized to the granule cell layers. Specifically, CAPN5 signal was detected in CA3 and the dentate gyrus of the hippocampus. Scattered large neurons in the pons also showed punctate signal. There appeared to be some faint signal in the visual cortex, and no signal was detected in the auditory cortex, hypothalamus or striatum (data not shown).
Discussion
Identifying target cells and cellular localization is important for gaining better insight into protein function when considering therapeutic intervention. Previous studies have reported a number of tissue and cell types in the central nervous system to contain CAPN5, depending on the method used to detect it. RT-PCR showed that CAPN5 is present at relatively high levels in the rat and human brain; it is the second most highly expressed calpain in the rat brain, after CAPN2. Additionally, CAPN5 mRNA was detected ubiquitously in the rat brain, but was only found in the frontal lobe, cerebellum, medulla, hypothalamus and thalamus in the human brain. IHC studies revealed CAPN5 in the inner and outer segments (IS and OS), the inner and outer plexiform layers (IPL and OPL), the inner nuclear layer (INL) and retinal ganglion cells (RGC), depending on the antibody used [20,22,25].
Studies in the EMBL-EBI gene expression database report relative tissue expression data collected from RNA-seq and microarray experiments [30,31]. In mice, twelve studies investigating CAPN5 tissue expression were found on the EMBL-EBI database. In contrast to other data, these studies found a stark predominance of CAPN5 expression in brain tissue. The EMBL-EBI database included eight experimental datasets for human CAPN5 expression. Brain CAPN5 expression was also found in human datasets on EMBL-EBI, though at levels much more modest than in mice relative to other tissues investigated.
The results obtained in the current study are in agreement with previous IHC experiments. Both methods detected CAPN5 in retinal ganglion cells and the INL. However, our study also detected CAPN5 in the ONL, but not in the IS and OS or the IPL and OPL. Consistent with RT-PCR experiments, CAPN5 was detected in the cerebellum, specifically in the granular cell layer. We also detected expression in the hippocampus and pons, but did not detect significant expression in any other regions. Our results support our previous finding that CAPN5 may play a role in phototransduction. CAPN5 was detected in the INL, ONL and retinal ganglion cells. The INL is made up of the cell bodies of horizontal cells, amacrine cells and bipolar cells. The ONL contains the cell bodies of the two types of photoreceptors, rods and cones. Although we previously reported CAPN5 at the photoreceptor synapses, because in situ hybridization detects the mRNA rather than the protein, it is possible that once CAPN5 is translated in the cell bodies, it is transported to the synapses. This hypothesis could also hold for CAPN5 expression in retinal ganglion cells, the cell type that forms the optic nerve.
The role of CAPN5 in the brain is less clear. The pons serves as a bridge between the cerebellum and forebrain; it controls many basic functions of the body, including the basic senses, sleep, respiration and posture. The hippocampus is involved in memory, specifically long-term memory, while the cerebellum regulates motor movements, balance and speech. One hypothesis, based on the apparent role of CAPN5 in the eye, is that CAPN5 may play a role in signal transduction in the brain as well. This is supported by the fact that CAPN5 is seen at very high levels in the granule cells of the cerebellum, the most numerous type of neuron. Additionally, granule cells are believed to fine-tune inputs from the brain by allowing for combinatorial coding [32]. Further support for this hypothesis is the presence of CAPN5 in the dentate gyrus and the CA3 subfield, the most interconnected region of the hippocampus. Nonetheless, patients with CAPN5 mutations to date do not display brain-related phenotypes, suggesting that the effect of hyperactive CAPN5 is not detrimental in the brain.
Although this study reported CAPN5 expression in the cerebellum, pons and hippocampus, ADNIV patient phenotypes are restricted to the eye. There may be excess calpain activity in the retina, due to the high level of calcium required for phototransduction, and the brain may be more resistant to CAPN5 damage since there are comparatively fewer cells in the granule cell layer expressing CAPN5. Another possibility is that CAPN5 has different substrates in the brain and eye, and one or more of the eye substrates may have a more detrimental effect when misregulated by a hyperactive CAPN5. Additional studies will need to be performed to determine the substrates of CAPN5.
Figure 2 caption (partial): Scramble-miR (control) in a section adjacent to a. c Higher magnification of a. d Higher magnification of c. Note the punctate signal (arrow). e Scramble-miR (control) corresponding to c. f-k CAPN5 mRNA expression in the hippocampus. f CAPN5 signal is concentrated in the granule cell layer of the hippocampus. g Scramble-miR (control) in a section adjacent to f. h Higher magnification of f. i Scramble-miR (control) corresponding to h. j Calpain-5 signal seen in the granule cell layer at high magnification. Note the punctate signal (arrows). k Scramble-miR (control) in a section adjacent to j. l and m CAPN5 mRNA expression in the pons. l CAPN5 signal seen in larger neurons at high magnification. Note the punctate signal (arrows). m Scramble-miR (control) in a section adjacent to l.
Although some studies suggest that ISH may give results similar to IHC, this seems to depend on the antibody used, and the comparison is mainly analyzed in a present-or-absent manner rather than quantitatively [33][34][35]. Additionally, it is important to note that mRNA expression may differ from protein expression depending on the tissue. Therefore, in analyzing expression throughout a whole tissue, ISH in conjunction with IHC may give the most complete picture.
Conclusions
In the eye, CAPN5 mRNA was seen in the ganglion cell, inner nuclear and outer nuclear layers of the retina. In the brain, signal was evident in the granule layers of the cerebellum and hippocampus and in scattered large neurons in the pons. Our findings support the concept that CAPN5 may play a role in signal transduction in both the brain and the eye. With this expression data in mind, future studies may begin to narrow down potential substrates of CAPN5 and better understand the underlying mechanism of its pathology. Ultimately, a better understanding of CAPN5 may provide useful drug targets for treating the many pathologies related to CAPN5.
Limitations
This study is limited to the expression of CAPN5 in mice; there may be minor differences in human expression. These data and their relationship to CAPN5-associated diseases may be further interpreted once the substrates of CAPN5 have been identified. Future studies can examine other calpains in regions of the brain, including a more detailed look at the cerebral cortex.
"Biology"
] |
Chemical “Butterfly Effect” Explaining the Coordination Chemistry and Antimicrobial Properties of Clavanin Complexes
Can a minor difference in the nonmetal binding sequence of antimicrobial clavanins explain the drastic change in the coordination environment and antimicrobial efficiency? This study answers the question with a definite “yes”, showing the details of the bioinorganic chemistry of Zn(II) and Cu(II) complexes with clavanins, histidine-rich, antimicrobial peptides from hemocytes of the tunicate Styela clava.
The Zn(II)-clavanin C complex, although its coordination sites are similar to those of the other clavanins, has the longest metal−ligand interactions, caused by the presence of the peptide O=C−N−H fragment, which pushes the Zn(II) ion out of its binding pocket. Presumably, this difference is due to a prefolding of the peptide that takes place before Zn(II) binding, and such a structural rearrangement of the metal binding site leads to a remarkable enhancement of the microbiological properties of the Zn(II)-clavanin C complex against a variety of pathogens.
Antimicrobial peptides (AMPs) have recently become a scientifically "hot" topic, appearing as a natural part of the innate immune system to which, with few exceptions, pathogens have developed little resistance compared to traditional antibiotics 1−7 and often showing synergistic properties to other drugs. 8 They occur in a variety of organisms, and also the nonmammalian ones show very low toxicity toward mammalian cells. 9,10 Clavanins, one of the families of AMPs, are 23-amino acid, histidine-rich, cationic peptides 11 that have a random-coiled conformation in water but show an α-helical structure in membrane-mimicking environments. 12 There are six types of clavanins (clavanin A−E and clavaspirin), which occur naturally in hemocytes of the tunicate Styela clava. 13,14 At pH 5.5, they inhibit the growth of Gram-positive (Listeria monocytogenes, MRSA), Gram-negative bacteria (Escherichia coli and Klebsiella pneumoniae), and fungi (Candida albicans), 11,15 triggering ongoing studies focused on their use as biofilm-preventing agents 16 or in bacterial biosensors. 17 During the over 20-year study on clavanins (mainly clavanin A), various modes of action were proposed. 15,18,19 Currently, these doubts have been dispelled by Juliano et al., 20 who showed three different, pH-dependent mechanisms for clavanin A. The first one is a nonspecific membrane disruption that occurs at neutral pH (7.4). 20 The second is observed at acidic pH (5.5) when clavanin A binds to DNA and disrupts DNA synthesis, similarly to indolicidin. 20 The third mechanism, which also occurs at pH 5.5, is assigned not to the "single" clavanin A but to its Zn(II) complex, which cleaves DNA. 20 Moreover, in the experiment with E. coli at pH 5.5, the addition of Zn(II) ions improved clavanin A minimum inhibitory concentration (MIC) from 64 to 4 μM. 15 It was also emphasized that in the case of clavanin A at pH 5.5, His17 is crucial for both the peptide's antimicrobial activity and its zinc(II) binding ability; in the α-helical conformation, His17 and His21 are expected to be present on the same side of the helix (i and i + 4 sites), and this HXXXH motif was suggested to be the primary Zn(II) anchoring site. 15 Such a motif is also typical for Zn(II)-based nucleases, 21 and among the six different clavanin sequences (Figure 1), only four (clavanins A, B, and E and clavaspirin) contain the HXXXH pattern.
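The HXXXH pattern mentioned above is easy to scan for programmatically. The sketch below does so for clavanin A, whose 23-residue sequence (VFQFLGKIIHHVGNFVHGFSHVF, as reported in the literature; treat it as illustrative) places histidines at positions 10, 11, 17 and 21; the scan recovers the His17/His21 (i, i + 4) motif discussed above:

```python
import re

# Clavanin A sequence as reported in the literature (illustrative).
clavanin_a = "VFQFLGKIIHHVGNFVHGFSHVF"   # His at positions 10, 11, 17, 21

# HXXXH: histidines at helix positions i and i+4 (same face of the helix).
# The lookahead allows overlapping matches.
for m in re.finditer(r"(?=(H...H))", clavanin_a):
    i = m.start() + 1                     # 1-based residue numbering
    print(f"HXXXH motif: His{i}...His{i + 4}")   # -> His17...His21
```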
It is quite well established that, for some AMPs, metal ions act as activity boosters, affecting their charge and/or structure. 22−24 Interestingly, quite large amounts of metal ions are found in the hemocytes of some aquatic invertebrates: up to 400 mM Cu(II) and up to 1.2 M Zn(II), which lets us hypothesize that hemocytes of S. clava can reach similar metal concentrations. 25,26 On the basis of the clavanin sequence differences, we aimed to establish the coordination pattern necessary for the biological action of Zn(II)- and Cu(II)-clavanin complexes and to point out the relationship between their metal coordination ability, structure, and antimicrobial mode of action.
Because the available literature data describe C-amidated clavanins at pH 5.5, we decided to focus on the influence of C-terminal deamidation on the antimicrobial activity of clavanins and their metal complexes [the free carboxylate group could be an additional Zn(II) binding site, and studies also find that the presence of a C-terminal amide group in an AMP can sometimes reduce its antimicrobial properties 27] and to perform the studies at physiological pH (7.4), which may be most interesting for possible future applications.
The coordination chemistry of the Zn(II) and Cu(II) complexes of clavanins A−E was studied by mass spectrometry (which confirmed the 1:1 stoichiometry of all of the formed complexes; Figures S2A−J and S3A−J), potentiometry, UV−vis, circular dichroism, and NMR spectroscopies and verified by density functional theory (DFT) calculations. Antimicrobial assays showed the effect of the addition of Zn(II) and Cu(II) on the activity of clavanins, and liposome leakage experiments allowed us to assess whether their mode of action is membrane-disrupting.
The species distributions, as well as the pKa values and overall stabilities, of the Zn(II)-clavanin complexes are very similar (Table S1 and Figure S4A−E); for this reason, we discuss only the Zn(II)-clavanin A complex as a representative example.
One, two, and three imidazoles are involved in the binding of Zn(II) in the ZnH4L, ZnH3L, and ZnH2L forms, respectively. The ZnHL species most probably arises from deprotonation of the N-terminal amine, which does not take part in the coordination (directly confirmed for the Zn(II)-clavanin D complex, where the signals from the N-terminal alanine were unaltered in the complex spectra at pH 7.4 with respect to those of the free ligand; Figure S5A). In the ZnL complex, a lysine side chain deprotonates without taking part in the coordination (Figure S4A).
DFT calculations further confirm the 3N-type interactions with imidazole rings for all five Zn(II) complexes at pH 7.4 (Table S2). The complexes of clavanins A, B, and E engage the His10, His11, and His17 imidazoles in Zn(II) coordination, while clavanins C and D build complexes using the His10, His11, and His21 imidazole rings. This binding mode differs from that previously found for clavanin A at pH 5.5, which engages His17 and His21 in binding. 15 A change of the coordinating donors with a change of the pH is quite possible and often occurs in so-called polymorphic binding sites, in which metal ions "move along" the chain of imidazoles involved in binding. 28 Most interestingly, the Zn(II)-clavanin C complex has the longest metal−ligand bond set, which suggests a rather weak metal−ligand interaction in the series. Such bond elongation can be caused by the unique structure of the binding site; directly below the Zn(II) ion, the O=C−N−H fragment of the peptide backbone aims its H atom almost directly at the metal cation; the H···Zn(II) distance is 2.608 Å, and the N−H···Zn(II) angle is close to linear (160.8°; Figure 2). Such an arrangement of the O=C−N−H fragment close to the positively charged metal results in the longest and weakest metal−ligand bonds and can make metal dissociation easier in comparison to the rest of the complexes in the series. It is noteworthy that this kind of interaction, which "pushes" Zn(II) out of its coordination environment, is not observed for the Zn(II)-clavanin D complex, which has the same binding donors as Zn(II)-clavanin C (His10, His11, and His21). The different organization of the binding pocket is most likely due to the prefolding of the clavanin C peptide before the addition of Zn(II) ions. We suggest that these kinds of interactions are also responsible for the (later discussed) impressive microbiological properties of the Zn(II)-clavanin C complex (Figure 3B).
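For readers who wish to reproduce the geometric criterion, the angle between the N−H bond and the H···Zn(II) vector can be computed from Cartesian coordinates as below; the coordinates are hypothetical, placed only to reproduce the 2.608 Å distance and ∼160.8° angle quoted above, and are not taken from the DFT structures:

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle a-b-c in degrees, from three Cartesian coordinates."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical coordinates (angstroms): H...Zn = 2.608 A, with the
# N-H...Zn arrangement deviating 19.2 deg from linearity.
N = (0.00, 0.00, 0.00)
H = (1.01, 0.00, 0.00)                    # N-H bond length ~1.01 A
theta = np.radians(19.2)
Zn = (1.01 + 2.608 * np.cos(theta), 2.608 * np.sin(theta), 0.00)
print(f"N-H...Zn angle = {angle_deg(N, H, Zn):.1f} deg")   # -> 160.8
```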
Detailed descriptions of the pH-dependent species distributions of the Cu(II) complexes are given in Figure S6A−E, supported by Table S1, and the spectroscopic data are given in the supporting figures (Table S3). Clavanin D binds Cu(II) via His10, His11 and His21 and the His11 amide nitrogen, and clavanin C (the only clavanin with a histidine in the third position of the peptide sequence) forms a typical albumin-like complex, in which the NH2-Xaa-Yaa-His pattern (the ATCUN motif) allows very stable, square-planar coordination.
To compare the clavanins' affinity toward Zn(II) and Cu(II) ions, competition diagrams were prepared (based on the binding constants from Table S1). In the case of the Zn(II) complexes, all show similar binding affinities (Figure S7A). At pH 5.5, the stabilities of all Cu(II)-clavanin complexes are comparable; the situation changes dramatically above pH 6, when Cu(II)-clavanin C (with the previously described albumin-like binding mode) becomes the most stable complex (Figure S7B).
Antimicrobial susceptibility testing was performed on two Gram-negative (Escherichia coli ATCC 25922 and Pseudomonas aeruginosa ATCC 27853), two Gram-positive (MRSA Staphylococcus aureus ATCC 43300 and Enterococcus faecalis ATCC 29212), and one fungal strain (Candida albicans ATCC 10231). Substantial differences in the MIC values between specific clavanins exist (Table S4). Strikingly, the coordination of Zn(II) strongly enhances the antimicrobial properties of clavanin C against the studied microbes. In the case of the other clavanins, the presence of metal ions most often enhances their antimicrobial properties, although this is not a general dependence. The antimicrobial efficiency of Zn(II)-clavanin C is both unexpected and impressive; the MIC obtained against E. coli (MIC = 16 μg/mL; Table S4) was lower than the EUCAST breakpoints for amoxicillin−clavulanic acid (penicillins), fosfomycin p.o. and i.v., and nitrofurantoin used in standard treatment and equal to those of cefadroxil, cephalexin, and nitroxoline (Table S5). The complex is also active against several clinical E. coli strains (Table S6), presents satisfactory results against E. faecalis, with a MIC equal to that of nitrofurantoin, and is also more potent than fosfomycin i.v. and nitrofurantoin against S. aureus (MIC = 16 μg/mL; Tables S4 and S5). 30 A selective metal-enhanced trend is also observed for the activity of both the Zn(II)- and Cu(II)-clavanin B complexes against E. faecalis, with a MIC = 8 μg/mL, which is only twice those of ampicillin and amoxicillin and 8 times lower than that of nitrofurantoin. This result is remarkable for at least two reasons: (i) its selectivity toward this pathogen only and (ii) the role of Arg7 in the biological activity of the complex. Although the sequential difference between clavanins A and B is truly minor (a K7R substitution), it leads to changes in the coordination environment (most likely due to prefolding of the peptide before the addition of metal ions). The remarkable antimicrobial effect of the presence of Arg has also previously been described in the literature. 31 The Zn(II)- and Cu(II)-clavanin B complexes were also active against several clinical strains, but no considerably impressive MIC values were obtained (Table S6).
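Because MICs are quoted in μM in some of the cited work and in μg/mL here, a unit conversion is sometimes needed; the sketch below shows the arithmetic, using an assumed approximate molecular weight of ∼2.7 kDa for a 23-residue clavanin:

```python
def mic_ug_ml_to_uM(mic_ug_ml, mw_g_mol):
    """Convert a MIC from ug/mL to uM: (ug/mL) / (g/mol) * 1000."""
    return mic_ug_ml / mw_g_mol * 1000.0

MW_CLAVANIN = 2700.0                 # assumed approximate mass (g/mol)
for mic in (8, 16):                  # MICs quoted above in ug/mL
    print(f"{mic:>2} ug/mL ~ {mic_ug_ml_to_uM(mic, MW_CLAVANIN):.1f} uM")
```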
It is noteworthy that no significant cytotoxicity was found against an RPTEC cell line from the ECACC collection for any of the studied ligands and their complexes (Table S7).
In conclusion, at physiological pH, clavanins coordinate Zn(II) by three imidazole groups ( Figure 3A), always involving His10 and His11 in binding (Table S2). His10 and His11 also participate in the coordination of Cu(II), with the exception of clavanin C, which uses its so-called ATCUN motif ( Figure 3C). In the rest of the studied clavanins, at pH 7.4, three imidazoles and one amide group take part in Cu(II) binding.
All Zn(II)-clavanin complexes show similar stabilities, while in the case of the Cu(II) ones, clavanin C is the most potent binding agent. Establishing a direct connection between the bioinorganic chemistry of the metal-clavanin complexes and their antimicrobial activity is far from trivial, and the activity certainly does not depend on the thermodynamic stability of the complexes (and therefore is not based on nutritional immunity). Liposome experiments confirmed that both the free peptides and their complexes show only moderate membrane-disrupting ability, further suggesting that their mode of action is intracellular (Figure S10A−F).
Good or moderate antifungal activity is observed for the whole clavanin family and is often metal-enhanced. The most spectacular metal enhancement of the antimicrobial properties is seen for the Zn(II)-clavanin C complex, which is quite surprising. On the basis of the literature data, 32,33 we would have expected such an effect for the ATCUN-bound Cu(II) complexes but not the Zn(II) ones. DFT calculations provided a possible explanation of this phenomenon: in the case of the Zn(II)-clavanin C complex, a unique structure of the binding site is observed in which the O=C−N−H fragment of the peptide backbone sits directly below the Zn(II) ion, with the H atom "pushing" the Zn(II) ion out of its binding pocket, resulting in long metal−ligand bonds that can make Zn(II) dissociation from the complex easier than for the rest of the complexes. The different organization of the binding pocket is most likely due to the prefolding of clavanin C, which takes place before the addition of the Zn(II) ions, acting as a "butterfly effect" for the later Zn(II) complex structure and its surprisingly enhanced antimicrobial properties.
Author Contributions
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
Notes
The authors declare no competing financial interest.
"Biology",
"Chemistry"
] |
A Hybrid Tabu Search and 2-opt Path Programming for Mission Route Planning of Multiple Robots under Range Limitations
: The application of an unmanned vehicle system allows for accelerating the performance of various tasks. Due to limited capacities, such as battery power, it is almost impossible for a single unmanned vehicle to complete a mission over a large-scale area. An unmanned vehicle swarm has the potential to distribute tasks and coordinate the operations of many robots/drones with very little operator intervention. Therefore, multiple unmanned vehicles are required to execute a set of well-planned mission routes, in order to minimize time and energy consumption. A two-phase heuristic algorithm was used to pursue this goal. In the first phase, a tabu search and the 2-opt node exchange method were used to generate a single optimal path through all target nodes; the solution was then split into multiple clusters according to the number of vehicles, as an initial solution for each vehicle. In the second phase, a tabu algorithm combined with a 2-opt path exchange was used to further improve the in-route and cross-route solutions for each route. This diversification strategy allowed the search to approach the global optimal solution, rather than a regional one, with less CPU time. After these algorithms were coded, a group of three robot cars was used to validate this hybrid path programming algorithm.
Introduction
Mainstream applications are currently focused on unmanned vehicle robots used in manufacturing; unmanned air vehicles (UAVs) for monitoring the earth's surface, emergency aid, disaster control and prevention, commercial aerial photography, and logistics; and unmanned combat air vehicle (UCAV) operations [1,2]. When the scope of the tasks and the areas involved expands, a system consisting of multiple unmanned vehicle agents is required to complete a mission over a very wide area. To complete the tasks more efficiently, well-planned path programming is a must, so as to minimize time and energy consumption by shortening the overall distances of the routes.
Facing this problem of a large-area, multi-waypoint mission handled by a multi-agent system, we designed a hybrid dynamic path programming algorithm to achieve the goals of saving time and energy, with shorter and more efficient routes so that the robot cars do not run redundant paths. The maximum travelable distance (limited by the battery energy capacity) was used as one of the constraints during the algorithm iterations.
Unlike other path-programming works, this study contributes to the field a hybrid path programming algorithm that combines a tabu search with a 2-opt swap under a maximum travelable distance constraint.
In fact, the tabu search algorithm has been adapted not only to path programming but also to many other fields, such as the economic dispatch of electric generators [16].
Sariel-Talay et al. [6] studied MTRP. They mentioned that their system was able to obtain real-time, optimal paths for traveling among multiple target points on their own platform with a robot swarm, and the assignment was completed successfully. However, a single robot car's maximum traveling capacity was not considered.
There are many published articles on multi-vehicle path programming, but only a few involve on-site experiments with real cars to verify the effectiveness of the programming. In mTSP, the maximum and minimum numbers of cities that each salesman should visit are defined and limited, but the tasks may still not be completed because of the limited energy capacity of a single car. Therefore, we focused on the "maximum travelable distance" in this study and adopted a two-phase solving module. In the first phase, a tabu search combined with a 2-opt swap method was used to program a single optimal path, and the initial solution was then obtained by splitting that path into multiple sub-paths. In the second phase, Huang and Liao's solving module was adopted: a tabu search combined with a 2-opt swap method to improve the initial paths, checking whether the result exceeded the maximum distance limit.
Additionally, a diversification strategy similar to that of GA was implemented for the in-route path improvements. This strategy records and selects the optimal or second-best solution as the initial solution for the later in-route path improvement, so that an unwanted regional optimal solution can be avoided. In this research, the 2-opt swap method was adopted to improve the in-route paths; once this finished, the calculation continued to improve the cross-route paths until the termination conditions were reached. In the last stage, the solution was verified by our experiments with real robot cars running the programmed paths. From the experiment, we examined the distances designed by the path programming and the trajectory differences between the actual paths and the theoretical ones.
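The in-route improvement step mentioned above is a standard 2-opt swap. A minimal Python sketch (our own naming, not the authors' code) illustrates it: two edges of a single closed route are removed and the intermediate segment is reversed whenever that shortens the tour.

```python
import itertools

def route_length(route, dist):
    """Length of a closed tour: route[0] -> ... -> route[-1] -> route[0]."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def two_opt(route, dist):
    """In-route improvement: reverse the segment between two edges whenever
    the reversal shortens the tour, until no improving swap remains."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(route)), 2):
            candidate = route[:i] + route[i:j][::-1] + route[j:]
            if route_length(candidate, dist) < route_length(route, dist):
                route, improved = candidate, True
    return route

# Toy check on 5 nodes (node 0 is the original point / depot).
dist = [[0, 2, 9, 10, 7], [2, 0, 6, 4, 3], [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 1], [7, 3, 5, 1, 0]]
best = two_opt([0, 2, 1, 4, 3], dist)
print(best, route_length(best, dist))
```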
Research Highlights
The objective of the path programming problem in this research was to assign several series of target points to multiple robots so as to maximally reduce the total energy consumption. The robots have limited onboard (fuel or battery) energy, so a maximum travelable distance constraint is expected to be satisfied. The highlights of this research speak to these issues:
1. We developed an optimal route planning algorithm for multi-robot applications by using an innovative, hybrid, two-phase tabu and 2-opt search.
2. In previous research on salesman problems, the constraints in VRP, mTSP and MTRP were mainly placed on the maximum reachable target points for single-vehicle problems, or on the maximum cargo capacity for multiple-vehicle problems. In our study, we brought this problem closer to an unmanned system's reality by building a new model with a maximum range constraint.
3. Unlike most previous research in VRP and mTSP, in which the problem ends with a computational result, outdoor field tests were conducted to validate the algorithm developed in this project.
Design of the Two-Phase Path Programming
This research focused on the dynamic path programming of multiple robots for achieving missions while lowering the energy cost of each group. The problem defined in this research is similar to the mTSP problem of visiting n targets with the shortest total route distance using m vehicles, with every target being visited only once.
However, the service range of unmanned robot vehicle systems is strictly limited by the onboard energy capacity (such as fuel or battery); hence the major difference in this research, compared with traditional mTSP projects, is the consideration of the range limitation as a constraint during the programming loops. Accordingly, the problem definition for this research was modified from a standard SD-mTSP model [17]. The objective and constraints are as follows:

$$x_{ij}^{k}=\begin{cases}1, & \text{if the }k\text{th path goes from city } i \text{ to city } j\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$

$$A=\{(i,j): i,j\in\{1,\dots,n\},\ i\neq j\}\qquad(2)$$

Objective:

$$\min \sum_{k=1}^{m}\sum_{(i,j)\in A} c_{ij}\, x_{ij}^{k}\qquad(3)$$

Subject to:

$$\sum_{(i,j)\in A} c_{ij}\, x_{ij}^{k}\le D_{lmt},\qquad k=1,\dots,m\qquad(4)$$

$$\sum_{j=2}^{n} x_{1j}^{k}=1,\qquad k=1,\dots,m\qquad(5)$$

$$\sum_{i=2}^{n} x_{i1}^{k}=1,\qquad k=1,\dots,m\qquad(6)$$

$$\sum_{k=1}^{m}\sum_{i=1,\, i\neq j}^{n} x_{ij}^{k}=1,\qquad j=2,\dots,n\qquad(7)$$

$$\sum_{k=1}^{m}\sum_{j=1,\, j\neq i}^{n} x_{ij}^{k}=1,\qquad i=2,\dots,n\qquad(8)$$

$$\sum_{j=2}^{n} x_{1j}^{k}\ge 1,\qquad k=1,\dots,m\qquad(9)$$

$$u_{i}-u_{j}+n\sum_{k=1}^{m} x_{ij}^{k}\le n-1,\qquad 2\le i\neq j\le n\qquad(10)$$

where node 1 denotes the original (starting) point; $c_{ij}$ is the distance array over $A$; $m$ is the number of robot cars; $D_{lmt}$ is the maximum distance limit; $n$ is the number of target points; and $u_{i}$ is the number of target points visited on a car's path from the original point to target $i$. Equation (1) defines the integer variable determining whether the arc from target $i$ to target $j$ is traveled by car $k$; in Equation (2), $A$ is the set of all specified paths; Equation (3) is the objective function of this problem; Equation (4) requires that no car exceed the limit $D_{lmt}$; Equations (5) and (6) ensure that cars start at the original point and come back to the same point; Equations (7) and (8) ensure that every target is entered and exited exactly once; Equation (9) indicates that each car must visit at least one target; and Equation (10) prevents sub-paths that do not include the original point.
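To make Eqs. (3) and (4) concrete, the following sketch (our own naming; a toy Euclidean instance, not the paper's test field) evaluates the objective of a candidate route set and checks the range constraint:

```python
import numpy as np

def route_length(route, c):
    """One closed route's contribution to the objective in Eq. (3)."""
    return sum(c[a, b] for a, b in zip(route, route[1:] + route[:1]))

def feasible(routes, c, d_lmt):
    """Range constraint of Eq. (4): every car must stay within D_lmt."""
    return all(route_length(r, c) <= d_lmt for r in routes)

# Toy instance: node 0 is the original point, nodes 1-7 are targets.
pts = np.random.default_rng(1).uniform(0.0, 100.0, size=(8, 2))
c = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
routes = [[0, 1, 2, 3], [0, 4, 5], [0, 6, 7]]          # m = 3 cars
total = sum(route_length(r, c) for r in routes)        # Eq. (3) value
print(f"total distance = {total:.1f}, feasible: {feasible(routes, c, 400.0)}")
```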
There are many kinds of heuristic algorithms for solving this problem, including ACO, Particle Swarm Optimization (PSO), GA and tabu search. In this research we mainly adopted tabu search, a heuristic algorithm that imitates human memory: it keeps past experience in a tabu list to prevent roundabout searches, and it can learn from past solutions to avoid a local optimal solution being mistaken for the global optimal solution. Tabu search is known for its ability to converge quickly over iterations. In addition, its optimal solution and its number of iterations to convergence are more stable than what GA can achieve.
As GA randomly generates paths by mating, its solution can turn out to be suboptimal rather than optimal, so the result must be verified repeatedly to reach the ultimate optimal solution. We knew that as long as the initial solution and the tabu list were well set, tabu search could quickly complete a convergence and deliver a more stable optimal solution; that was the main reason we adopted it for this study [2].
With tabu search as the core technology and involving the 2-opt swap method, a two-phase path programming algorithm model was established for this research. As shown in Figure 1, the initial solution established in the first phase was processed in two steps.
In the second phase, we needed to obtain the global optimal solution by improving the initial one obtained in the first phase. As in the first phase, three steps were used. The first was cross-route improvement by tabu search combined with 2-opt, exchanging target points across different paths. Then, a diversification strategy that recorded the optimal and suboptimal solutions from the last computation was implemented to avoid the result falling into a local optimal solution.
Step 3 was to improve each in-route path with tabu search combined with 2-opt. In other words, it refined the solution exchanged in the first step, and the process from step 2 was repeated until a termination condition was reached.
Tabu Search
Tabu search is a global search method. First, it establishes an initial solution, and then it finds the neighborhood optimal solution, or accepts a solution meeting the aspiration criterion, as the base for moving; that is, it searches for solutions in the neighborhood domain of the current solution. The tabu list memory mechanism is noticeably important here: it records the solutions that have already been searched to prevent useless or redundant searching. Once the search on all neighborhood domains is completed, the optimal solution is selected. If a selected solution is found to be better than the current optimal solution, the optimal solution is updated; this continues until the termination condition is reached [18].
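As an illustration of the loop just described, here is a minimal, generic tabu search skeleton in Python. The `neighbors` and `cost` callables are placeholders for the problem-specific move generator and distance function, and the aspiration criterion is omitted for brevity; this is a sketch, not the paper's Algorithm 1.

```python
from collections import deque

def tabu_search(x0, neighbors, cost, tabu_len=30, patience=50):
    """Keep a FIFO tabu list of recently visited solutions, always move to
    the best non-tabu neighbor, and stop once the best-known solution has
    not improved for `patience` consecutive generations."""
    best = current = x0
    tabu = deque([x0], maxlen=tabu_len)   # the tabu list memory
    stall = 0
    while stall < patience:
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best neighborhood solution
        tabu.append(current)                  # oldest entry evicted first
        if cost(current) < cost(best):
            best, stall = current, 0
        else:
            stall += 1
    return best
```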
2-opt Swap Method
The tabu search combined with the 2-opt nodal line swap method proposed by Lin (1965) [19] was adopted in this research. It is a method for changing the order of a path to expand the current solution. Initially designed for the TSP, it is now widely used in solving path problems (TSP, VRP, VRPTW, and so on). Its swap concept is shown in Figure 2: if the nodal lines (1, 3) and (2, 4) are removed, (1, 2) and (3, 4) can be connected to change the path.
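A sketch of the in-route move: reversing the segment between the two cut points is the standard way to realize the swap of Figure 2 (the node numbering below simply mirrors that example).

```python
def two_opt_swap(route, i, j):
    """Replace edges (route[i-1], route[i]) and (route[j], route[j+1]) with
    (route[i-1], route[j]) and (route[i], route[j+1]) by reversing the
    middle segment of the route."""
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

# Path 1-3-2-4: removing nodal lines (1,3) and (2,4) connects (1,2) and (3,4).
print(two_opt_swap([1, 3, 2, 4], 1, 2))   # -> [1, 2, 3, 4]
```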
The method for the cross-route path swap is different from the one used for a single path (see Figure 3 for the swap concept). If the nodal lines (5, 6) and (1, 2) are exchanged, the pairs (5, 2)/(1, 6) and (5, 1)/(2, 6) are the two possible reconnections. Compared to the original path, the direction of the latter reconnection is reversed; thus 2-opt can also be used to reverse the directions of paths.
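The cross-route variant can be sketched in the same spirit: cutting each route once yields the two reconnections described above, the second of which splices in reversed segments. The indices and route contents below are illustrative assumptions, not values from the paper.

```python
def cross_route_swap(route_a, route_b, i, j):
    """Cut route_a after index i and route_b after index j, then reconnect
    the four pieces in the two possible ways; the second variant uses
    reversed segments, so the traversal direction of those parts flips."""
    straight = (route_a[:i + 1] + route_b[j + 1:],         # e.g. new line (5,2)
                route_b[:j + 1] + route_a[i + 1:])         # and new line (1,6)
    reversed_ = (route_a[:i + 1] + route_b[:j + 1][::-1],  # e.g. new line (5,1)
                 route_a[i + 1:][::-1] + route_b[j + 1:])  # and new line (2,6)
    return straight, reversed_

# Example with edge (5,6) in one route and (1,2) in the other:
print(cross_route_swap([4, 5, 6, 7], [0, 1, 2, 3], 1, 1))
```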
Establishment of the Initial Solution
This phase was carried out in three steps, as shown in Figure 4. First, the Nearest Distance Method established a rough single path; in step 2, tabu search and 2-opt were used to optimize that single path; and in step 3, the single path was divided into multiple sub-paths to serve as the initial solution for our second phase.
Use of Tabu Search to Program a Single Path
We used the Nearest Distance Method to establish a rough single path as an initial solution for the tabu search. First, we established a distance array c_ij and let the zeroth row of c_ij = ∞. Next, we took the point at which the row of c_ij had its minimum value as the first point, and set the value of that row of c_ij to infinity. We then repeated the same procedure to search and set the second point, the third point, and so on; the initial solution was generated once all of the points had been searched. Following this, tabu search and 2-opt were used to optimize the single path, with 2-opt as the move method for the single optimal path problem. We also set the length of the tabu list and the stopping condition, using the Nearest Distance Method solution as the initial solution x_0 at that time. We then executed a tabu search to get the optimal single path.
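The row-masking procedure above amounts to a nearest-neighbor construction. The sketch below is an equivalent Python formulation (using a visited set rather than overwriting rows of c_ij with infinity), offered only as an illustration.

```python
def nearest_distance_path(dist):
    """Greedy initial single path: start at the original point (index 0)
    and repeatedly step to the nearest not-yet-visited point."""
    n = len(dist)
    visited, path, cur = {0}, [], 0
    while len(visited) < n:
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[cur][j])
        path.append(nxt)
        visited.add(nxt)
        cur = nxt
    return path   # visiting order of targets; this is the initial solution x_0
```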
Path Split
We roughly split the single path program into multiple sub-paths, dividing all the target points into m sub-paths. We also made each closed sub-path link back to the original point while staying, as much as possible, under the maximum distance limit. At worst, no more than one route exceeded the maximum distance limit. The split path was not necessarily the optimal path, but it was good enough to be the initial solution for the next phase.
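One plausible greedy realization of this split is sketched below, assuming the optimized single path and the distance matrix from before. The exact splitting rule is not spelled out in the text, so this is only one way to honor the "at worst one route over the limit" property.

```python
def split_path(path, dist, m, d_lmt):
    """Cut the single path into at most m sub-paths, closing each one
    through the original point 0 and starting a new sub-path whenever the
    closed-tour length would exceed d_lmt (while cuts remain available)."""
    subs, cur = [], []
    for p in path:
        trial = [0] + cur + [p, 0]
        length = sum(dist[a][b] for a, b in zip(trial, trial[1:]))
        if length > d_lmt and cur and len(subs) < m - 1:
            subs.append(cur)      # close the current sub-path
            cur = [p]
        else:
            cur = cur + [p]
    subs.append(cur)
    return subs   # the last sub-path may still exceed the limit
```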
Improvement of Path to Obtain an Optimal Solution
In this phase, the rules of 2-opt were followed to swap the nodal lines in in-route and cross-route paths as moves, and the solutions gradually converged to the optimal one. The aim was to minimize the summed distance of all paths while making sure the assigned distance of each robot car did not exceed the maximum distance limit. The flow chart is shown in Figure 5.
Modified Tabu Search for Improving Cross-Route Switch
Compared to a general tabu search, the major difference of this improved search is that S_lmt, the set of solutions exceeding the maximum distance limit, is removed from the Candidate List to ensure that all path solution values stay within range. However, this restriction could leave the Candidate List empty, blocking all solutions from entering the next generation. When this occurred, the original tabu list was adopted with S_lmt retained, i.e., the solution set kept all solutions, even those with values exceeding the maximum distance limit. Under these loosened criteria, the search instead selected the solution minimizing "the distance of the longest traveling path".
A tabu search combined with 2-opt was used in this step. Moving was based on 2-opt: swapping the nodal lines among different paths improved each of the current paths, so 2-opt was applied to improve cross-route paths. We set the length of the tabu list, the stopping condition, and the initial solution x_0, and then executed the flow of the improved tabu search to get the optimal paths as follows (its pseudo code is shown in Algorithm 1):

Step 1. The initial solution x_0 was set as the current optimal solution x* and the suboptimal solution x+, with the current optimal and suboptimal distance function values d(x*) and d(x+); the initial generation was i = 0, the count of generations in which the optimal solution was not improved was k = 0, and the tabu list T was the empty set (T = ∅).

Step 2. According to the characteristics of 2-opt, all feasible neighborhood solutions of x_i were extended as a neighborhood solution set N(x_i). The Candidate List comprised N(x_i) after deducting T and the solutions violating the maximum distance limit, S_lmt, i.e., [N(x_i) − T − S_lmt]. We selected the optimal neighborhood solution x_{i+1}, the one with the optimal total distance function value d(x_{i+1}).

Step 3. We determined whether d(x_{i+1}) was less than d(x*). If it was, x* was replaced by x_{i+1} (x* = x_{i+1}) and k = 0; otherwise, the current optimal solution was preserved and k = k + 1. Then we determined whether d(x_{i+1}) was less than d(x+); if it was, x+ was replaced by x_{i+1} (x+ = x_{i+1}). Otherwise, we proceeded to the next step.

Step 4. We determined whether the stopping condition was reached (the search stopped and the current optimal solution was set as the global optimal solution once k reached the preset value). Otherwise, we updated the tabu list by adding x_{i+1} to T; if T was full, the earliest solution to enter T was evicted according to the first-in, first-out rule. We then let i = i + 1 and returned to Step 2 to continue the operation.
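Steps 1-4 can be condensed into the following sketch. The real Algorithm 1 is not reproduced in the text, so the callables for the neighborhood, total distance, and longest-route length are assumed; the returned second-best solution is what the diversification strategy of the next subsection falls back on.

```python
from collections import deque

def modified_tabu(x0, neighbors, total_dist, longest_route, d_lmt,
                  tabu_len=50, patience=10):
    """Cross-route improvement: candidates are N(x_i) - T - S_lmt; when that
    set is empty, loosen to N(x_i) - T and take the neighbor whose longest
    path is shortest, steering the search back toward feasibility."""
    best = second = current = x0
    tabu = deque([x0], maxlen=tabu_len)   # tabu list T, first-in first-out
    k = 0                                 # generations without improvement
    while k < patience:
        nbrs = [n for n in neighbors(current) if n not in tabu]
        feasible = [n for n in nbrs if longest_route(n) <= d_lmt]
        if feasible:                       # Candidate List = N(x_i) - T - S_lmt
            current = min(feasible, key=total_dist)
        elif nbrs:                         # loosened criteria: N(x_i) - T
            current = min(nbrs, key=longest_route)
        else:
            break
        tabu.append(current)
        if total_dist(current) < total_dist(best):
            best, k = current, 0           # Step 3: update x*
        else:
            if total_dist(current) < total_dist(second):
                second = current           # Step 3: update x+
            k += 1
    return best, second                    # x+ feeds the diversification step
```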
Diversification Strategy
When this phase proceeded to the second generation, some of the cross-route improvements produced solutions that had not improved; such a solution could be identical to the one obtained in the first generation. If it was sent to the calculation in the next step, it would still come out as a useless local optimal solution without any further improvement. To avoid this situation, we determined whether the solution had improved before going to the second generation. If not, the suboptimal solution from the cross-route path exchange was implemented for improvement, to break out of the circle of the local optimal solution and to obtain the global optimal solution.
Improvements for Each Single Path
In this part, the path solution given by the diversification strategy was adopted as the initial solution. We improved the single path of each group by using tabu search combined with 2-opt, the same as in the first stage: each single path served as an initial solution, and 2-opt was used as the move method to make improvements within each path. Additionally, we set the length of the tabu list and the stopping condition, and then executed the same tabu search as in the first stage, until every group had been improved at the end of the sequence.
System Architecture
As we aimed to complete the task with multiple target points by multiple vehicles in the shortest possible time, a multi-agent system (MAS) under central control was set up for this research. The so-called "central control" was enlisted to allocate assignments to each individual robot, so that each of them could complete its tasks independently, rather than controlling the cars throughout the procedure. Through cooperation among multiple agents, the system was able to complete a task of a larger scale. From the MAS viewpoint on task allocation, the system was categorized as Single-Task robots, Single-Robot tasks, with Instantaneous Assignment (ST-SR-IA) [20].
Various assignments at different levels of difficulty were randomly allocated by the system, meaning that each robot car may have received any assignment. Thus, each car in the robot swarm was equipped with identical specifications to ensure their performance and capacity to cope with various assignments.
From the viewpoint of MAS heterogeneity, the degree of similarity among individual robots within a collection, Het(R), can be expressed as

Het(R) = −Σ_{i=1}^{Caste} p_i log2(p_i), (11)

where Caste represents the number of robot types and p_i is the decimal percent of robots belonging to caste i. Since all specifications in this system are the same (Caste = 1, p_i = 1), Het(R) = 0 by Equation (11) [21], which indicates a homogeneous robot swarm.

Figure 6 illustrates the system architecture. The system included a Remote Monitoring Station, a Multi-Vehicle Paths Programming System, and Robot Cars. The Remote Monitoring Station acted as a central controller capable of programming the paths and distributing them to the robot swarm, which was designed to complete the work together. From the viewpoint of network topology, the system was a star: each robot car was able to communicate with the Remote Monitoring Station, but the cars were not able to communicate with each other.

The system mainly used LabVIEW to develop the Dynamic Remote Monitoring Station interface. A Google map web page, obtained by sending a URL, served as the display for the man-machine interface (MMI), in which each vehicle's location feedback and initial path programming could be displayed and managed. By clicking the target points displayed on the MMI, their locations were sent to the Remote Monitoring Station, where the algorithm was run for the path programming. The planned paths were then passed to each car via XBee. Once the cars received the data, they cross-compared their locations against the target points designated by the central monitoring station, computed the angle between the car and the target point using the electronic compass heading, and with these results drove the DC motor forward and controlled the servo motor differential to successfully complete the assignment.
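For completeness, a tiny numeric check of the heterogeneity measure, assuming the social-entropy form of Equation (11); the caste fractions in the second call are made up for illustration.

```python
import math

def het(caste_fractions):
    """Het(R) = -sum_i p_i * log2(p_i) over the caste fractions p_i."""
    return -sum(p * math.log2(p) for p in caste_fractions if p > 0)

print(het([1.0]))        # one caste, p_i = 1  ->  Het(R) = 0 (homogeneous swarm)
print(het([0.5, 0.5]))   # two equal castes    ->  Het(R) = 1
```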
Experimental Vehicle
The robot car was used as an experimental vehicle (see Figure 7). Its function was to receive the data of target points from the Remote Monitoring Station and to establish a database for traveling. Once the car arrived at a designated target point, its real-time location was immediately sent to the Remote Monitoring Station.
This experimental vehicle was modified from a 1/10 scale Short Course Truck remote control car. We removed the car's shell and related remote control devices, then mounted additional off-the-shelf electronic components, including an Arduino Mega 2560 to act as an onboard computer; a U-blox NEO-7M Global Positioning System (GPS) module to provide position information for navigation; an HMC5883L Electronic Compass (E-Compass) to indicate heading; a 915 MHz XBee PRO S3B wireless module for duplex communication with the Remote Monitoring Station; a Liquid Crystal Display (LCD) to act as a health information indicator for field debugging; and a memory card (SD Module) to function as an integral data logger recording trajectory history. All the embedded automotive electronics are shown in Figure 8.
The robot car control principle was constructed from the latitude and longitude data of each target point stored in the database. Once the tasks assigned by the Remote Monitoring Station were received by the robot car, the geographical data of each target point were extracted from the database for the cars to travel among the points. A set of embedded steering strategies was also required for the car to move automatically along the planned paths and directions. To calculate the car's relative angle to the target point, the heading H from the electronic compass was used, and the steering angle of the servo motor was denoted θ_s. When θ_s = 0, the robot car went straight; when θ_s was positive, the robot car turned right; and when θ_s was negative, the robot car turned left. The maximum range of the angle was set at ±180°.
The input latitude and longitude data of the robot car (lat_r, lng_r) and the target point (lat_g, lng_g), received from the GPS, were used in Equation (12) to get ∅, the angle of the target point against true north. We then deducted the heading direction H (Equation (13)) to get the direction error θ_t, the angle between the current direction and the target point. We then multiplied θ_t by K_P (the gain used for servo proportional control; K_P = 0.161) and added θ_c (the center angle of the servo motor for the steering) to obtain θ_s, the command sent to the servo motor for moving. In addition, the maximum range for the motor to move, ±15°, was set to prevent the cars from rolling over under high-speed turning; K_P θ_t was clamped so as not to exceed ±15° (Equation (14)).
In addition to steering, determining whether the target point had been reached was equally important: only after reaching the current target point was the car allowed to move to the next one. The distance between the car and the target point was the key. We set a target point radius R; if the robot car entered the range of R (the distance between the robot car and the target point < R), the target point was counted as hit. R had to be set to an appropriate value, since it would be difficult for the car to hit the target points with too small or too large an R value; the R value therefore affected the accuracy of the experiment [22].
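The steering and target-hit logic can be sketched as follows. The bearing formula standing in for Equation (12) is a flat-earth approximation assumed by us (the exact expression is not given in the text), and the servo center angle and hit radius values are illustrative, not the paper's.

```python
import math

K_P = 0.161      # servo proportional gain from the text
THETA_C = 90.0   # assumed servo center angle in degrees (illustrative)

def bearing(lat_r, lng_r, lat_g, lng_g):
    """Stand-in for Eq. (12): angle of the target against true north, via a
    flat-earth approximation with longitude scaled by cos(latitude)."""
    dy = lat_g - lat_r
    dx = (lng_g - lng_r) * math.cos(math.radians(lat_r))
    return math.degrees(math.atan2(dx, dy)) % 360.0

def servo_command(lat_r, lng_r, lat_g, lng_g, heading_h):
    """Eqs. (13)-(14): direction error theta_t wrapped into (-180, 180],
    then a proportional correction K_P * theta_t clamped to +/-15 degrees
    and added to the servo center angle theta_c."""
    theta_t = (bearing(lat_r, lng_r, lat_g, lng_g) - heading_h + 180.0) % 360.0 - 180.0
    correction = max(-15.0, min(15.0, K_P * theta_t))
    return THETA_C + correction

def target_hit(distance_to_target, r=3.0):
    """Hit test: inside radius R counts as visited (R value illustrative)."""
    return distance_to_target < r
```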
Tests and Results
This section describes the tests for the proposed hybrid path programming method with maximum range constraint for the mission planning of multi-robot cars. Three types of tests were performed:
1. Convergence Test: This was conducted to confirm the convergence of the hybrid programming algorithm under range limitation.
2. Bench Test: This was performed to verify competitiveness by comparing with existing public TSPLIB instances.
3. Field Test: This was used to validate the practicality of the proposed algorithm by a group of three robot cars deployed on a field test.
In all tests, the proposed hybrid path programming computer code was processed on the remote control station based on a laptop PC with Intel Core i7 2.4 GHz CPU and 8 GB RAM.
Convergence Test
To see whether this hybrid path programming algorithm could successfully converge a solution set to optimize the route of each robot car group under the maximum travelable distance limitation, a series of tests was conducted. First, we checked the algorithm's convergence status with a simple 22-point test. As shown in Figure 9, a solution converged at the 63rd generation. However, the solution was not immediately improved at the first generation: the solution exceeded the maximum distance limit and the Candidate List = N(x_i) − T − S_lmt = ∅, so we followed the improved tabu search and loosened the Candidate List to N(x_i) − T. We compared the maximum path distance of each candidate and selected the solution with the minimum value in the Candidate List. Because of the maximum distance limit, the maximum distance of a feasible solution was not allowed to exceed this threshold; by repeatedly picking the neighborhood solution with the smallest maximum distance, the current solution was pulled gradually toward the threshold (the maximum distance limit) until it met it. Finally, the current solution became eligible (see the green line in Figure 9, the distance variation of the current solution). In this way, the current solution was improved and gradually approached a feasible solution.
Bench Test
The problem definition of this research is similar to mTSP. The difference, in comparison to mTSP, is that this research limits the travel distance of each robot car. Since mTSP instances are easy to obtain, we used the innovative algorithm developed in this research to solve the same mTSP problems for comparison.
Most scholars have modified TSPLIB instances to test mTSP, because mTSP has no public instances of its own. In this research, Pr76, Pr152, Pr226, Pr299 and Pr439 were tested, with the mTSP rule that each salesman must visit more than two targets. We then compared with MGA [23], MACO [24], NMACO [11] and SA+EAS [25]. For establishing the initial solution, the tabu list length was set to 30, and the stopping condition was that the optimal solution had not been updated or improved in 50 generations. The tabu list length for the Improvement of Each Single Path was set to 50, and the tabu list length for the Improvement between Different Paths was 50 as well. The stopping condition for programming both paths was that the solution had been neither updated nor improved in the most recent 10 generations, and the stopping condition for the path improvement part was that the optimal solution had not been updated or improved in the second iteration. Table 1 compares our algorithm against the others on these TSPLIB instances. 2TS+2OPT is the hybrid algorithm developed in this research; the number of targets is expressed as n, the number of salesmen as m, and the maximum number of waypoints (cities) each vehicle (salesman) could visit as u. As a result, the distance values of 2TS+2OPT are better than those of the other algorithms, and most of its CPU times are lower as well.
Field Test with Multiple Robot Cars
In this study we set out to solve the path programming of a multi-target wide area. Subject to vehicle ability constraints, a single car was not able to travel to all target points; therefore, we sent multiple vehicles to travel to the target points and complete the tasks. We selected a site in Yunlin, Taiwan, and set up several target points on it. With a maximum distance limit in place, the shortest-total-distance paths were programmed so that no robot car exceeded this limit. A hybrid tabu search combined with the 2-opt swap method was run on the Remote Monitoring Station, a laptop with an Intel Core i7, 2.4 GHz, and 8 GB RAM, to program the optimal paths. For establishing the initial solution, the tabu list length was set to 30, and the stopping condition was that the optimal solution had not been updated or improved in 50 generations. The tabu list length for the Improvement of Each Single Path was set to 30, and the tabu list length for the Improvement between Different Paths was 30. Additionally, the stopping condition for programming both paths was that the solution had not been updated or improved in the most recent 50 generations, and the stopping condition for the path improvement part was that the optimal solution had not been updated or improved in the second iteration. After all of these settings were in place, the paths were assigned to the robot cars, which ran them on the designated site.
Maximum Distance Limit = 170 m
The test started by randomly setting up 15 target points on the e-map of the Remote Monitoring Station. We assumed the maximum travelable distance limit was 170 m. If a single robot car were sent to travel to all target points, the optimal (shortest) path was 270.6 m (as shown in Figure 10). This path obviously exceeded the maximum distance limit a single car could handle to complete the task; hence three cars were sent for this test. The system soon provided new paths (as shown in Figure 11) with the shortest distance, using the algorithm developed in this research. In this test, the CPU time of the Remote Monitoring Station computer was only 0.79 s. With this solution, the A path was 81.3 m; the B path was 164.7 m; and the C path was 139.9 m. None of the paths exceeded the maximum distance limit. The shortest total distance was 385.9 m. The system then automatically dispatched those three routes to the robot cars to visit their assigned target points. We recorded the path trajectory of each robot car, as shown in Figure 12. The actual total distance was 401.1 m. The actual distance of the A path was 86.9 m; for the B path it was 167.9 m; and for the C path it was 146.3 m. The maximum error among the robot cars was 6.8%, for car A. The error of total distance was 3.9% (Table 2). This error was mainly caused by GPS drifting, a bumpy surface, and steering center offset.
Maximum Distance Limit = 164 m
With the same target points set up, we tuned the maximum distance limit down to 164 m. The algorithm converged to a new solution set (as shown in Figure 13) in a very short CPU time (0.75 s). For this solution, the A path was 105.2 m; the B path was 159.9 m; and the C path was 139.9 m. None of the paths exceeded the maximum distance limit. The total distance, however, increased to 405 m, because the maximum travelable distance of each car was compressed, which made the solution more "load-balanced"; the completion time was correspondingly shorter. After the solution converged, the system immediately dispatched those three routes to the robot cars to visit their assigned target points. We recorded the path trajectory of each robot car, as shown in Figure 14. The actual total distance was 412.5 m. The actual distance of the A path was 106.8 m; for the B path it was 160.9 m; and for the C path it was 144.8 m. The maximum error among the robot cars was 3.5%, for car C. The error of total distance was 1.8% (see Table 3). It is worth mentioning that none of the robots exceeded the range limitation.
Discussion
From the bench test, compared to other algorithms, the 2TS+2OPT hybrid algorithm proposed in this research offers clear advantages in both path optimization and CPU time, which are crucial for practical applications of unmanned systems. Moreover, when examining the results generated by this hybrid path programming algorithm, we found that the total distance was inversely related to the maximum distance limit: the lower the maximum distance limit, the longer the total distance. Relatively speaking, the lower the maximum distance limit, the shorter the mission time. In general, a tighter margin in onboard energy capacity yielded a "load-balanced" situation for each robot in the group, meaning every robot had an equal load.
From the field experiments, we obtained a maximum total distance error of 3.9% and a maximum per-car error of 6.8%. These are both minor and represent a rather satisfying result, as we expected. The few minor errors we did find were due to the following three factors:
1. GPS drifting: The GPS device adopted in this research was designed for general commercial purposes, with a measuring error of a couple of meters. This GPS drifting could easily be remedied with higher-level equipment, such as a real-time kinematic GPS.
2. Pavement condition: The paths programmed by the system were generated assuming a smooth ground surface for car operation. In fact, there were some unexpected surface conditions, like tiny pebbles on the pavement, that may have caused an offset to the route.
3. Offset of steering center: The steering mounted on the robot car consisted of a servo motor and a steering mechanism; this unit may have drifted off center while traveling among target points during a long run. This offset might have led the car off the path and thus resulted in a few minor errors at the end. In this research, we tried to minimize the effect of this factor by regularly correcting the steering.
In the case with the maximum distance limit set at 170 m, the longest path was 164.7 m, very close to the limit, so the robot car could exceed the limit whenever any error occurred; with the maximum observed error of 6.8%, for example, that path obviously could not have been completed within the limit. Thus, reserving 10% as an allowance margin when setting the maximum distance limit is recommended (i.e., planning against the maximum distance limit minus 10%).
Conclusions
In this research, a hybrid 2TS+2OPT algorithm with a limited-range constraint for the path programming of multiple robot vehicles was successfully developed. This innovative algorithm achieves both better path optimization and shorter CPU time than other mTSP algorithm solutions. In the last stage of this research, a general scenario was presented to show the whole process of multi-robot mission planning, in which three robots were deployed in a field to complete a series of wide-area multi-waypoint tasks far beyond a single car's endurance capability. These tests validated that the algorithm can successfully optimize robots' routes to visit assigned target points within their range limitations.
In real-world unmanned-vehicle practice, optimized paths based on onboard energy capacity (fuel or battery) constraints are critical to multiple-agent system applications of this type, including large-area multi-point surveillance exercises, robot swarm deliveries, multi-drone attacks, and so on. This research can contribute to those types of applications.
"Engineering",
"Computer Science"
] |
Nonlinear GLR-MQ evolution equation and Q^2-evolution of gluon distribution function
In this paper we have solved the nonlinear Gribov-Levin-Ryskin-Mueller-Qiu (GLR-MQ) evolution equation for the gluon distribution function G(x,Q^2) and studied the effects of the nonlinear GLR-MQ corrections to the Leading Order (LO) Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations. Here we incorporate a Regge-like behaviour of the gluon distribution function to obtain the solution of the GLR-MQ evolution equation. We have also investigated the Q^2-dependence of the gluon distribution function from the solution of the GLR-MQ evolution equation. Moreover, it is interesting to observe from our results that nonlinearities increase with decreasing correlation radius (R) between two interacting gluons. The results also confirm that a steep behaviour of the gluon distribution function is observed at R=5 GeV^{-1}, whereas it is lowered at R=2 GeV^{-1} with decreasing x as Q^2 increases. In this work we have also checked the sensitivity of \lambda_G in our calculations. Our computed results are compared with those obtained by the global DGLAP fits to the parton distribution functions, viz. GRV, MRST, MSTW, and with the EHKQS model.
Introduction
The small-x, where x is the Bjorken scaling variable, behavior of quark and gluon densities is one of the challenging problems of quantum chromodynamics (QCD). The most important phenomena in the region of small-x, which determine the physical picture of the parton (quark and gluon) evolution or cascade, are the increase of the parton density at x → 0, the growth of the mean transverse momentum of a parton inside the parton cascade at small-x, and the saturation of the parton density [1]. The parton distributions in hadrons play a key role in understanding the standard model processes and in the predictions for such processes at accelerators. Therefore the determination of parton densities, or more importantly the gluon density, in the small-x region is particularly interesting because here gluons are expected to dominate the proton structure function. The study of the gluon distribution function is also very important because it is the basic ingredient in the calculations of different high-energy hadronic processes like mini-jet production, the growth of total hadronic cross sections, etc. Moreover, precise knowledge of the gluon distribution at small-x is essential for reliable predictions of important p-p, p-A and A-A processes studied at the relativistic heavy-ion collider (RHIC) [2] and at CERN's large hadron collider (LHC) [3]. Knowledge of the gluon density is also important for the computation of inclusive cross-sections of hard, collinearly factorizable processes in hadronic collisions.
The most precise determinations of the gluon momentum distribution in the proton can be obtained from a measurement of the deep inelastic scattering (DIS) proton structure function F 2 (x, Q 2 ) and its scaling violation.
The measurement of the proton structure function F_2(x, Q^2) by H1 [4] and ZEUS [5] at HERA over a broad kinematic region has made it possible to learn about the gluon in the formerly unexplored region of x and Q^2, where Q^2 is the virtuality of the exchanged virtual photon. This method is however indirect, because F_2(x, Q^2) at low values of x actually probes the sea quark distributions, which are related via the QCD evolution equations to the gluon distribution. More direct determinations of the gluon distribution can be obtained by reconstruction of the kinematics of the interacting partons from the measurement of the hadronic final state in gluon-induced processes. These are subject to different systematic effects and provide a substantive test of perturbative QCD. Direct gluon density determinations have been carried out using events with J/ψ mesons in the final state [6] and dijet events [7].
In perturbative QCD, the high-Q^2 behavior of DIS is given by the linear Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations [8]. The number density of gluons, G(x, Q^2), and quarks, q(x, Q^2), in a hadron can be evaluated at large Q^2 by solving the linear DGLAP equations to calculate the emission of additional quarks and gluons compared to some given initial distributions. The results are adjusted to fit the experimental data (mainly at small-x) for the proton structure function F_2(x, Q^2) measured in DIS, over a large domain of values of x and Q^2, by tuning the parameters of the initial parton distributions. Consequently, approximate analytical solutions of the DGLAP evolution equations have been reported in recent years with significant phenomenological success [9][10][11].
The DGLAP equation predicts a sharp growth of the gluon distribution function as x grows smaller, which is also clearly observed in DIS experiments at HERA. This sharp growth will eventually have to slow down in order not to violate the unitarity bound [12] on physical cross sections. It is a known fact that hadronic cross sections comply with the Froissart bound [12], which derives from the general assumptions of analyticity and unitarity of the scattering amplitude. The Froissart bound indicates that the total cross section does not grow faster than the logarithm squared of the energy, i.e., σ_total = (π/m_π^2)(ln s)^2, where m_π is the scale of the range of the strong force [13]. Gluon recombination is commonly believed to provide the mechanism responsible for the unitarization of the cross section at high energies, or a possible saturation of the gluon distribution function at small-x. In other words, the number of gluons at small-x will be so large that they will spatially overlap, and therefore gluon recombination will be as important as gluon splitting. In the derivation of the linear DGLAP equations, however, such gluon recombination effects are not taken into account.
The proton structure function F_2(x, Q^2) has been measured down to x ∼ 10^{-5}, but still in the perturbatively accessible region, by the H1 Collaboration at HERA [4]. These data have been included in the recent global analyses by the MRST [14] and CTEQ [15] collaborations. The DGLAP evolution equations can describe the available experimental data quite well in a fairly broad range of x and Q^2 with appropriate parameterizations. But the DGLAP approach cannot provide a good description when trying to fit the H1 data simultaneously in the region of large Q^2 (Q^2 > 4 GeV^2) and in the region of small Q^2 (1.5 GeV^2 < Q^2 < 4 GeV^2) [4,16]. This implies that towards smaller values of x and (or) Q^2 (but still Q^2 ≥ Λ^2, Λ being the QCD cut-off parameter) it is possible to observe gluon recombination effects, which lead to nonlinear power corrections to the DGLAP equations. These nonlinear terms lower the growth of the gluon distribution in this kinematic region, where α_S is still small but the density of partons becomes very large.
Therefore, the corrections from higher-order QCD effects, which suppress or shadow the growth of parton densities, have become a center of intensive study in the last few years.
Gribov, Levin, Ryskin, Mueller and Qiu (GLR-MQ) performed a detailed study of this region in their pioneering papers and suggested that these shadowing corrections could be expressed in a new evolution equation known as the GLR-MQ equation [17][18]. This equation involves a new quantity, G_2(x, Q^2), the two-gluon distribution per unit area of the hadron. The main features of this equation are that it predicts a saturation of the gluon distribution at very small-x, it predicts a critical line separating the perturbative regime from the saturation regime, and it is only valid on the border of this critical line [16,19]. An important property of the GLR-MQ equation is that it introduces a characteristic momentum scale Q_s^2, which is a measure of the density of the saturated gluons; it grows rapidly with energy and is proportional to 1/x^λ with λ ≃ 0.2 [20]. Gribov, Levin and Ryskin first suggested a nonlinear evolution equation in which the evolution kernels, which they called the gluon recombination functions, are constructed from the fan diagrams [17]. Later, Mueller and Qiu calculated the gluon recombination functions at the double leading logarithmic approximation (DLLA) in a covariant perturbation framework [18].
The GLR-MQ equation is broadly regarded as a key link from the perturbative region to the non-perturbative region. There has been much work inspired by the GLR-MQ approach showing that gluon recombination leads to saturation of the gluon density at small-x [21][22]. The predictions of the GLR-MQ equation for the gluon saturation scale were studied in Ref. [16]. A new evolution equation, named the modified DGLAP equation, was derived by Zhu and Ruan [23], where the application of the AGK (Abramovsky-Gribov-Kancheli) cutting rule [24] in the GLR-MQ equation was argued in a more general setting; there the Feynman diagrams are summed in a quantum field theory framework instead of using the AGK cutting rule. In Ref. [25] parton distribution functions in the small-x region are numerically predicted by using a modified DGLAP equation with GRV-like input distributions.
Moreover, some studies of the GLR-MQ terms in the framework of extracting the PDFs of the free proton can be found in Ref. [26]. Other nonlinear evolution equations relevant at high gluon densities have also been derived in recent years, and the structure functions from DIS have been analyzed in the context of saturation models [27][28][29].
The solution of the GLR-MQ equation is particularly important for understanding the nonlinear effects of gluon-gluon fusion caused by the high gluon density at small enough x. The solution of nonlinear evolution equations also provides a determination of the saturation momentum, which incorporates physics beyond that of the linear evolution equations commonly used to fit DIS data. Various studies on the solutions and viable generalizations of the GLR-MQ equation have been carried out in great detail in the last few years [30][31][32]. In the present work we obtain a solution of the nonlinear GLR-MQ evolution equation for the calculation of the gluon distribution function in leading order. This paper addresses interesting questions about the validity of the well-known Regge-like parametrization in the region of moderate photon virtuality. We also calculate the Q²-evolution of the gluon distribution function and compare the results with the predictions of different parametrizations, namely GRV1998LO [33], MRST2001LO [14], MSTW2008LO [34], and the EHKQS model [16]. Finally, we present our conclusions.
Theory
The GLR-MQ equation is based on two processes in the parton cascade: the emission induced by the QCD vertex g → g + g, with a probability proportional to α_S ρ, and the annihilation of a gluon by the same vertex g + g → g, with a probability proportional to α_S² ρ² r², where ρ = xg(x, Q²)/(πR²) is the density of gluons in the transverse plane, πR² is the target area, and R is the correlation radius between two interacting gluons. Normally, this radius should be smaller than the radius of a hadron. It is worthwhile to mention that R is non-perturbative in nature, and therefore all physics that happens at distance scales larger than R is non-perturbative [30]. Here, r is the size of the parton (gluon) produced in the annihilation process; for DIS, r ∝ 1/√(Q²). Clearly, at x ∼ 1 only the production of new partons (emission) is essential because ρ ≪ 1, but as x → 0 the value of ρ becomes so large that the annihilation of partons becomes important.
To take the interaction and recombination of partons (mainly gluons) into account, a small parameter is introduced which enables us to estimate the accuracy of the calculation, given by the ratio of the annihilation and emission probabilities, W = α_S r² ρ ∼ (α_S/Q²) xg(x, Q²)/(πR²). In terms of the gluon distribution function this leads to the nonlinear evolution equation

∂²xg(x, Q²)/∂ln(1/x)∂lnQ² = (3α_S/π) xg(x, Q²) − γ (α_S²/Q²R²) [xg(x, Q²)]²,   (3)

which is named the GLR-MQ evolution equation. The factor γ was calculated by Mueller and Qiu [18] to be γ = 81/16 for N_c = 3. Now, to study the Q²-evolution of the gluon distribution function, we can rewrite Eq. (3) in a convenient form [31],

∂G(x, Q²)/∂lnQ² = ∂G(x, Q²)/∂lnQ²|_DGLAP − (81/16)(α_S²(Q²)/R²Q²) ∫_x^1 (dω/ω) G²(ω, Q²),   (4)

where the first term on the r.h.s. is the usual linear DGLAP term in the double leading logarithmic approximation and the second term is nonlinear in the gluon density.
Here, the representation G(x, Q²) = xg(x, Q²) for the gluon distribution is used. To simplify our calculations we consider a variable t such that t = ln(Q²/Λ²), where Λ is the QCD cutoff parameter. Then Eq. (4) becomes

∂G(x, t)/∂t = ∂G(x, t)/∂t|_DGLAP − (81/16)(α_S²(t)/R²Q²) ∫_x^1 (dω/ω) G²(ω, t),   (5)

with Q² = Λ² exp(t). As gluons are the dominant partons at small-x, ignoring the quark contribution to the gluon distribution function, the first term on the r.h.s. of Eq. (5) can be expressed as [35]

∂G(x, t)/∂t|_DGLAP = (3α_S(t)/π) [ (11/12 − N_f/18 + ln(1−x)) G(x, t) + ∫_x^1 dω ( (ωG(x/ω, t) − G(x, t))/(1−ω) + (ω(1−ω) + (1−ω)/ω) G(x/ω, t) ) ].   (6)

The strong coupling constant α_S(t) in leading order has the form [35]

α_S(t) = 4π/(β₀ t),

where β₀ = (11/3)N_c − (4/3)T_f is the one-loop coefficient of the QCD β-function and N_f is the number of quark flavors. Here we consider N_c = 3, T_f = (1/2)N_f, and N_f = 4.
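For concreteness, the sketch below evaluates this leading-order running coupling at a few scales using the Λ value quoted later in the paper (0.192 GeV) and N_f = 4; it is a plain numerical illustration of the formula above, not code used by the authors.

```python
import math

LAMBDA = 0.192   # QCD cutoff parameter in GeV (value used later in the paper)
NC, NF = 3, 4
TF = 0.5 * NF
beta0 = (11.0 / 3.0) * NC - (4.0 / 3.0) * TF   # one-loop beta-function coefficient

def alpha_s(Q2):
    """Leading-order strong coupling, alpha_s(t) = 4*pi / (beta0 * t)."""
    t = math.log(Q2 / LAMBDA ** 2)
    return 4.0 * math.pi / (beta0 * t)

for Q2 in (2.0, 4.0, 10.0, 20.0):   # GeV^2, spanning the kinematic range studied
    print(f"Q^2 = {Q2:5.1f} GeV^2 ->  alpha_s = {alpha_s(Q2):.3f}")
```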
At small-x, the behavior of structure functions is well explained in terms of Regge-like behavior [36,37]. The small-x behaviour of structure functions at fixed Q² reflects the high-energy behavior of the total cross section with increasing CM energy squared s, since s = Q²(1/x − 1) [38]. The Regge pole exchange picture [37] would therefore appear quite appropriate for the theoretical description of this behaviour. The Regge behavior of the sea-quark and antiquark distributions at small-x is given by q_sea(x) ∼ x^(−α_P), corresponding to a pomeron exchange with an intercept of α_P = 1, whereas the valence-quark distribution at small x, q_val(x) ∼ x^(−α_R), corresponds to a reggeon exchange with an intercept of α_R = 0.5. A power-law x dependence of the parton densities is often assumed at moderate Q², and leading order calculations in ln(1/x) with fixed α_S predict a steep behavior xg(x, Q²) ∼ x^(−λ_G), where λ_G = (3α_S/π) 4 ln 2 ≃ 0.5 for α_S ≃ 0.2, as appropriate for Q² ∼ 4 GeV².
Moreover, Regge theory provides an extremely simple and economical parameterization of all total cross sections [39,40]. It has been suggested in Refs. [41,42] that it is feasible to use Regge theory for the study of the DGLAP evolution equations. The strategy adopted here for the determination of the gluon distribution function with the nonlinear correction is also based on Regge-like behavior [43].
Regge behavior is believed to be valid at small-x and at some intermediate Q², where Q² must be small, but not so small that α_S(Q²) is too large [44,45]. Moreover, as discussed in [40], Regge theory is expected to be applicable when W² is much greater than all the other variables, and models based upon this idea have been successful in describing the DIS cross-section when x is small enough (x < 0.01), whatever the value of Q² [20,46]. Therefore, to solve the GLR-MQ equation, we consider a simple Regge-like behavior of the gluon distribution function at small-x, given as

G(x, t) = M(t) x^(−λ_G)   (10)

and

G(x/ω, t) = M(t) (x/ω)^(−λ_G) = G(x, t) ω^(λ_G),   (11)

where M(t) is a function of t and λ_G is the Regge intercept for the gluon distribution function. This form of Regge behaviour is well supported by the work of the authors in Refs. [40,47,48]. According to Regge theory, the high-energy, i.e. small-x, behaviour of both gluons and sea quarks is controlled by the same singularity factor in the complex angular momentum plane [37]. Moreover, as the values of the Regge intercepts for the spin-independent singlet, non-singlet and gluon structure functions should all be close to 0.5 in quite a broad range of small-x [48], we expect our theoretical results to best fit the experimental data and parametrizations at λ_G ≈ 0.5.
Substituting Eqs. (6), (10) and (11) in Eq. (5), we get an equation for M(t) (Eq. (12)). Performing the integrations and rearranging the terms, Eq. (12) takes the form of a first-order nonlinear (Bernoulli-type) differential equation in M(t), linear in M(t) through the DGLAP term and quadratic in M(t) through the recombination term (Eq. (13)), with coefficients defined in Eqs. (14) and (15). Eq. (13) can be solved in closed form (Eq. (16)), where Γ is the incomplete gamma function and C is an integration constant. Although Regge behavior is not in agreement with the double-leading-logarithmic solution, namely G(x, t) ∝ exp{[C ln(t) ln(1/x)]^(1/2)}, the range where x is small and Q² is not very large is actually the Regge regime. Accordingly, in the limit of large t the nonlinear term becomes negligible and the solution reduces to its linear counterpart (Eq. (17)). However, in the region where t is not very large, the corrections from the nonlinear term in Eq. (16) cannot be neglected, and Eq. (16) does not reduce to Eq. (17). Thus we can expect that the solution given by Eq. (16) is only valid in the region of small-x and intermediate values of Q² (or t).
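The structure just described can be made concrete numerically. The sketch below integrates a generic Bernoulli-type evolution dM/dt = a(t)M − b(t)M², with a(t) ∝ α_S(t) and b(t) ∝ α_S²(t)/(R²Q²) as schematic stand-ins for the coefficients of Eq. (13); the prefactors and the input value M(t₀) are illustrative assumptions, not the paper's exact expressions, and the example is only meant to show how the quadratic term tames the linear growth.

```python
import math

LAMBDA, NF = 0.192, 4                  # GeV, number of flavors (paper's values)
beta0 = 11.0 - 2.0 * NF / 3.0          # one-loop beta coefficient
alpha_s = lambda t: 4.0 * math.pi / (beta0 * t)

R = 5.0                                # correlation radius in GeV^-1 (assumed)

def dMdt(t, M, c_lin=1.0, c_nl=81.0 / 16.0):
    """Schematic Bernoulli-type evolution: linear DGLAP-like growth minus a
    recombination term ~ alpha_s^2 M^2 / (R^2 Q^2); prefactors are assumed."""
    Q2 = LAMBDA ** 2 * math.exp(t)     # since t = ln(Q^2 / Lambda^2)
    return c_lin * alpha_s(t) * M - c_nl * alpha_s(t) ** 2 * M ** 2 / (R ** 2 * Q2)

# fixed-step RK4 from Q^2 = 2 GeV^2 up to 20 GeV^2
t = math.log(2.0 / LAMBDA ** 2)
t_end = math.log(20.0 / LAMBDA ** 2)
M, h = 3.0, 1e-3                       # M(t0) = 3 is an assumed input value
while t < t_end:
    h = min(h, t_end - t)
    k1 = dMdt(t, M)
    k2 = dMdt(t + h / 2, M + h * k1 / 2)
    k3 = dMdt(t + h / 2, M + h * k2 / 2)
    k4 = dMdt(t + h, M + h * k3)
    M += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
print(f"M at Q^2 = 20 GeV^2: {M:.3f}")
```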
Now, to determine the Q²-dependence of G(x, Q²), we apply the initial condition G(x, t) = G(x, t₀) at t = t₀, from which we obtain the value of the constant C (Eq. (19)). The constant C can thus be evaluated by considering an appropriate input distribution G(x, t₀) at a given value of Q₀². Substituting C from Eq. (19) in Eq. (16), we obtain the Q²-evolution of the gluon distribution function for fixed x in leading order (Eq. (20)).
Thus we have obtained an expression for the Q²-evolution of the gluon distribution function G(x, t) in leading order by solving the nonlinear GLR-MQ evolution equation semi-numerically. From the final expression given by Eq. (20) we can easily calculate the Q²-evolution of G(x, Q²) for a particular value of x by taking an appropriate input distribution at a given value of Q₀².
Results and discussion
In this paper we have solved the nonlinear GLR-MQ evolution equation in order to determine the Q²-dependence of the gluon distribution function, and we compare our results with several global parametrizations. The MRST2001LO set includes the new precise data on DIS from HERA together with constraints from hard-scattering data. MSTW2008 presented updated parton distribution functions determined from a global analysis of hard-scattering data within the standard framework of leading-twist fixed-order collinear factorisation in the MS-bar scheme. These parton distributions supersede the previously available MRST sets and can be used for the first LHC data taking and for the associated theoretical calculations. In the EHKQS model [16], the effects of the first nonlinear corrections to the DGLAP evolution equations are studied by using the recent HERA data for the structure function F₂(x, Q²) of the free proton and the parton distributions from CTEQ5L and CTEQ6L [15] as a baseline. By requiring a good fit to the H1 data, they determine initial parton distributions at Q₀² = 1.4 GeV² for the nonlinear scale evolution. In Ref. [16] it is shown that the nonlinear corrections enhance the agreement with the F₂(x, Q²) data in the region of x ∼ 3×10⁻⁵ and Q² ∼ 1.5 GeV². Our computed values of G(x, Q²) are plotted against Q² in Fig. 1 for x = 10⁻², 10⁻³, 10⁻⁴ and 10⁻⁵, respectively. In all graphs the input distribution G(x, t₀) at a given value of Q₀² is taken from GRV1998LO to test the Q²-evolution of G(x, Q²). In our analysis we consider the kinematic range 2 GeV² ≤ Q² ≤ 20 GeV², where we expect our solution to be valid.
The average value of Λ in our calculation is taken to be 0.192 GeV. It is observed from the figures that our results show behaviour very similar to that obtained from the different global parametrizations and also to the EHKQS model.
We have also investigated the effect of the nonlinearity on our results for R = 2 GeV⁻¹ and R = 5 GeV⁻¹. For this analysis our computed values of G(x, Q²) for the two values of R, obtained from Eq. (20), are plotted against Q² in Fig. 2(a) for x = 10⁻², 10⁻³, 10⁻⁴ and 10⁻⁵, respectively. Here, the input distribution is taken from the MSTW2008 global parametrization at a given value of Q₀². We have also performed an analysis to check the sensitivity of our results to the free parameter λ_G. Fig. 2(b) shows the results for the Q²-dependence of G(x, Q²) obtained from the solution of the nonlinear GLR-MQ equation, Eq. (20), for three different values of λ_G; we observe that the results become very sensitive to λ_G as x decreases.
Conclusion
We have solved the nonlinear GLR-MQ evolution equation by considering the Regge-like behaviour of the gluon distribution function. Other functional forms of the solution have been considered in Refs. [49-51]; we are also interested in obtaining a solution of the nonlinear GLR-MQ equation in such a form and plan to present it in a future paper.
We observe that the gluon distribution function increases with increasing Q², as usual, in agreement with perturbative QCD fits at small-x; however, with the inclusion of the nonlinear terms, the Q²-evolution of G(x, Q²) is slowed down relative to the DGLAP gluon distribution. For the gluon distribution, the nonlinear effects are found to play an increasingly important role at x ≤ 10⁻³.
The nonlinearities, however, vanish rapidly at larger values of x. It is also interesting to observe that the nonlinearity increases with decreasing value of R, as expected. The differences between the results at R = 2 GeV⁻¹ and at R = 5 GeV⁻¹ increase as x decreases, which is very clear from Fig. 2(a).
The results also confirm that the gluon distribution function retains its steep behavior at R = 5 GeV⁻¹, whereas it is tamed at R = 2 GeV⁻¹, the effect becoming stronger with decreasing x and increasing Q². We have also investigated the sensitivity of our calculations to λ_G and found that the results are highly sensitive to λ_G as x goes on decreasing.
"Physics"
] |
Photo-Ionization of Noble Gases: A Demonstration of the Hybrid Coupled Channels Approach
We present here an application of the recently developed hybrid coupled channels approach to the photo-ionization of noble gas atoms: Neon and Argon. We first compute multi-photon ionization rates and cross-sections for these inert gas atoms with our approach and compare them with reliable data available from R-matrix Floquet theory. The good agreement between the coupled channels and R-matrix Floquet results shows that our method treats multi-electron systems on par with the well-established R-matrix theory. We then apply the time dependent surface flux (tSURFF) method with our approach to compute total and angle-resolved photo-electron spectra from Argon with linearly and circularly polarized 12 nm wavelength laser fields, a typical wavelength available from Free Electron Lasers (FELs).
Introduction
Photo-ionization has been a useful tool in understanding the electronic structure of materials for several decades. The availability of highly tunable, high photon flux sources like FELs and synchrotrons has deepened our dependence on photo-ionization experiments by providing very accurate structural information [1]. Noble gas atoms are chemically inert due to their closed shell electronic configuration. This makes them attractive systems for experimental studies. In the field of strong field physics, they have been extensively used to study ionization properties and core-hole dynamics, and they were used in proof of principle experiments to demonstrate time resolved electron spectroscopy. Krypton atoms were used in [2] to demonstrate that attosecond transient absorption spectroscopy can be used to observe the real time motion of valence electrons. The presence of the Cooper minimum in high harmonic spectra from Argon is considered proof that high harmonic generation carries electronic structural information, and it has attracted many photo-ionization studies, for example [3,4]. Inert gas atoms like helium and neon have also been widely used to investigate double ionization [1], a process that can be used to understand electron correlation in photo-emission. Using attosecond streaking, time delays in photo-emission from Neon were measured in [5]. It was experimentally found that the 2s and 2p electrons are emitted with a relative time delay of 20 as. This remains an unexplained result to date. The closest theoretical estimate so far has come from R-matrix theory, which predicts around 10 as [6] for the time delay. The difficulty in producing accurate theoretical estimates stems from the difficulties in the numerical treatment of the many body problem.
In the theoretical domain, the major roadblock in understanding these photo-ionization processes is the multi-dimensionality of the wavefunction, which leads to a very unfavorable scaling of numerical solvers for the time dependent Schrödinger equation (TDSE). In the weak field regime, it may be possible to use perturbation theory to compute the ionization properties. In [7], multi-photon perturbation theory was used to compute two-, three- and four-photon ionization cross-sections of helium. However, in [8] it has been shown that even in the "perturbative" regime, resonances in helium can lead to non-perturbative effects in photo-ionization, pointing to the limits of applicability of multi-photon perturbation theory. Multi-photon perturbation theory is also limited in its application to multi-electron systems, as computing the multi-electron scattering states and the whole set of intermediate states involved in a multi-photon ionization process can be an impractical task. Therefore, one resorts to numerical solutions of the TDSE even in the perturbative regime.
As a full dimensional numerical solution for the multi-electron TDSE is not feasible, several methods have been developed in the past decade that only use a part of the Hilbert space that is seemingly important for the ionization process. Some of them include the multi-configuration time-dependent Hartree-Fock method [9], the time dependent Configuration Interaction method [10], the time dependent restricted-active-space configuration-interaction method [11], the time dependent R-matrix method [12], and the coupled channels method [13]. However, in terms of multi-photon ionization of atoms, R-matrix theory is the main source of available theoretical data. There have been many studies on multi-photon ionization of noble gas atoms performed using R-matrix theory, for example [12,14,15].
We recently developed a hybrid coupled channels method [16] to study the photo-ionization of multi-electron systems. The method combines multi-electron bound states from quantum chemistry and one-electron numerical basis sets to construct N-electron wavefunctions that are used as basis functions to solve the TDSE. This method, in conjunction with the time dependent surface flux method [17,18], can compute accurate single photo-electron spectra. We present here an application of our method to the photo-ionization of noble gas atoms, Neon and Argon. We compute multi-photon ionization rates and cross-sections and compare them with reliable data available from the R-matrix Floquet approach. We find that our results are in good agreement with the R-matrix Floquet (RMF) calculations. This shows that our method treats the ionization of multi-electron systems on par with the well established R-matrix theory. We then compute photo-electron spectra from Argon with linearly and circularly polarized 12 nm wavelength laser fields. The results presented here are the first steps towards computing photo-electron spectra at long wavelengths that are currently inaccessible to any theoretical approach that considers multi-electron effects.
Hybrid Coupled Channels Method
We solve the N-electron TDSE,

i ∂|Ψ(t)⟩/∂t = H(t)|Ψ(t)⟩,   (1)

in dipole approximation using a hybrid anti-symmetrized coupled channels (haCC) basis composed of multi-electron states from quantum chemistry and a numerical one-electron basis. We refer the reader to [16] for an elaborate description of the approach and present here the salient features of the method. We discretize the N-electron wavefunction as

|Ψ(t)⟩ = Σ_{i,I} A[|i⟩ ⊗ |I⟩] C_{iI}(t) + |G⟩ C_G(t),   (2)

where A indicates anti-symmetrization, C_G and C_{iI} are the time dependent coefficients, |i⟩ represents a numerical one-electron basis, and the |I⟩ are (N−1)-electron wavefunctions which are chosen to be the eigenstates of the single-ion Hamiltonian obtained at the Multi-Reference Configuration Interaction Singles Doubles (MR-CISD) [19] level of quantum chemistry. |G⟩ is chosen as the ground state of the N-electron system, also obtained at the MR-CISD level of quantum chemistry. As correlated states need many ionic states to be correctly represented, we include the ground state explicitly in the basis for the sake of efficiency. We use the COLUMBUS [19] quantum chemistry code to compute these states. The basis is suitable for studying single ionization problems, and it can represent an active electron in a polarizable core. By active electron we mean that the basis set representing this electron is flexible enough to represent bound as well as continuum states. The active electron is represented using a high order finite element basis, |f_i(r)⟩, for the radial coordinate and spherical harmonics, Y_{l_i m_i}, for the angular coordinates.
Inserting the basis ansatz Equation (2) into the TDSE Equation (1) leads to a set of coupled ordinary differential equations for the time dependent coefficients, schematically of the form i S (dC/dt) = H(t) C, where S is the overlap matrix of the non-orthogonal basis. We solve them with an explicit fourth order Runge-Kutta solver with an automatic step size controller. A mixed gauge representation of the dipole operator is used for the reasons discussed in [20]. To absorb the wavefunction at the box boundaries we use infinite range exterior complex scaling (irECS) [21]. Finally, we employ the time dependent surface flux (tSURFF) method [17,18] to compute photo-electron spectra.
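As an illustration of the propagation scheme described above, the following minimal sketch advances coefficient equations of the form i dC/dt = M(t) C with a classical RK4 step and a simple step-doubling error controller; the matrix is a random Hermitian stand-in (the real haCC matrices, including the overlap factor, are assumed to be folded into M), so this demonstrates only the numerical scheme, not the physics.

```python
import numpy as np

def rhs(t, C, M):
    # i dC/dt = M C  ->  dC/dt = -i M C ; M stands in for the haCC matrices
    return -1j * (M @ C)

def rk4_step(t, C, h, M):
    k1 = rhs(t, C, M)
    k2 = rhs(t + h / 2, C + h / 2 * k1, M)
    k3 = rhs(t + h / 2, C + h / 2 * k2, M)
    k4 = rhs(t + h, C + h * k3, M)
    return C + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(C, t0, t1, M, tol=1e-8):
    """RK4 with step doubling: compare one full step against two half steps
    and adapt the step size to keep the local error near tol."""
    t, h = t0, (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        full = rk4_step(t, C, h, M)
        half = rk4_step(t + h / 2, rk4_step(t, C, h / 2, M), h / 2, M)
        err = np.linalg.norm(half - full)
        if err < tol:                  # accept the step, keep the better estimate
            t, C = t + h, half
        # shrink on rejection, grow (capped) when the error is small
        h *= min(5.0, 0.9 * (tol / max(err, 1e-16)) ** 0.2)
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
M = (A + A.T) / 2                      # Hermitian stand-in "Hamiltonian"
C0 = np.zeros(6, dtype=complex)
C0[0] = 1.0
C = propagate(C0, 0.0, 10.0, M)
print(f"norm deviation after propagation: {abs(np.linalg.norm(C) - 1.0):.2e}")
```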
One of the main advantages of a coupled channels ansatz is that the time propagation scales quadratically with the number of ionic channels included and is independent of the number of electrons. In our haCC scheme, the ionic states are directly read from the output of a quantum chemistry calculation. This gives us the flexibility to treat ionic states at different levels of quantum chemistry. Any coupled channels scheme based only on ionic bound state channels also suffers from several limitations. The description of the polarization of the ionic core is incomplete without the ionic continuum. The quantum chemistry ionic states based on Gaussian orbitals will not have the exact asymptotic behavior. These limitations can lead to certain inaccuracies in our calculations. However, the high dimensionality of the multi-electron wavefunction prevents us from going beyond this kind of approximation, and all multi-electron TDSE solvers suffer from similar limitations.
One- and Two-Photon Cross-Sections of Neon
In this section, we compute the one- and the two-photon ionization cross-sections of Neon and compare them with results from experiments and from R-matrix theory. We use in our Neon basis four ionic states: the three-fold degenerate 1s²2s²2p⁵ states and the 1s²2s2p⁶ state. This implies that we have four possible ionization channels. The configurations used to represent the states are only symbolic: as we compute them using Configuration Interaction theory [19], each multi-electron state is composed of several configurations. In our time dependent approach we compute cross-sections using Equation (51) in [12], which relates the n-photon ionization cross-section σ⁽ⁿ⁾ (in units cm²ⁿ/sⁿ⁻¹) to the total ionization rate Γ (in a.u.) and the photon flux, where I is the intensity in W/cm², ω is the laser frequency in a.u., α is the fine structure constant, and a₀, t₀ are the atomic units of length and time, respectively, in cm and s. Γ is computed in our time dependent approach by monitoring the rate at which the norm of the wavefunction in a certain inner region drops. We use for our computations a 150-cycle continuous wave laser pulse with a 3-cycle cos² ramp up and ramp down and with an intensity of 10¹² W/cm². Calculations were performed with a simulation volume radius of up to 100 a.u. and an angular momentum expansion of up to L_max = 6 for the active electron basis.
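A minimal sketch of this rate-based procedure is given below: an exponential is fitted to the decay of the inner-region norm to extract Γ, which is then converted to a generalized n-photon cross-section via Γ = σ⁽ⁿ⁾Φⁿ with photon flux Φ = I/(photon energy). The sample numbers (norm trace, intensity) are invented for illustration, and the conversion shown is the textbook flux relation, a stand-in for the exact Equation (51) of [12].

```python
import numpy as np

# invented example data: inner-region norm N(t) sampled in the steady-state part
t = np.linspace(0.0, 4000.0, 200)          # time in atomic units
norm = np.exp(-2.0e-5 * t)                 # fabricated exponential decay

# extract the ionization rate from a linear fit to ln N(t)
Gamma_au = -np.polyfit(t, np.log(norm), 1)[0]   # rate in a.u.

T0 = 2.4188843e-17    # atomic unit of time in s
EH = 4.359744e-18     # Hartree in J

def cross_section(Gamma_au, n, intensity_W_cm2, omega_au):
    """sigma_n = Gamma / Phi^n with photon flux Phi = I/(photon energy),
    giving cm^(2n) s^(n-1); a textbook stand-in for Eq. (51) of [12]."""
    flux = intensity_W_cm2 / (omega_au * EH)    # photons / (cm^2 s)
    return (Gamma_au / T0) / flux ** n          # rate converted to s^-1 first

sigma2 = cross_section(Gamma_au, n=2, intensity_W_cm2=1e12, omega_au=0.6)
print(f"two-photon cross-section ~ {sigma2:.2e} cm^4 s")
```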
Figure 1 shows the one-photon ionization cross-sections of Neon in the photon energy range 50-125 eV from haCC together with the experimental results published in [22]. We find a very good agreement between the experimental results and our calculations.

Figure 1. One-photon ionization cross-sections of Neon as a function of photon energy from haCC and from experiments [22]. haCC(4): ionic basis consists of both the 1s²2s²2p⁵ states and the 1s²2s2p⁶ state.
Figure 2 shows the two-photon cross-sections of Neon from the haCC, RMF and time dependent R-matrix (TDRM) methods. haCC(3) indicates computations with only the 1s²2s²2p⁵ ionic states, and haCC(4) indicates computations including both the 1s²2s²2p⁵ states and the 1s²2s2p⁶ state. Firstly, we find that the haCC(3) and haCC(4) calculations give identical results. This is consistent with the knowledge that the 1s²2s2p⁶ ionization channel is strongly closed [14]; hence, there is no influence of this state on the two-photon cross-sections. The R-matrix calculations [12] and the haCC calculations are in overall good agreement. The resonance structure at 16.83 eV photon energy corresponds to the 1s²2s²2p⁵3s state [12]. The peak heights of the resonant structure in all the computations agree very well. The peak is broader in the haCC and TDRM results compared to the RMF results; a contribution to this width comes from the finite bandwidth of the laser pulse. In principle, the RMF results are exactly comparable to a result from a time dependent method only in the continuous wave limit. There is also an additional oscillation in the haCC cross-sections which is not present in the R-matrix results. This oscillation is stable with respect to variation of the active electron discretization parameters. By construction, haCC does not include any double continuum; if double-continuum channels are included in the R-matrix calculations, this could be one possible source of the differences. Other possible sources may lie in the description of the atomic structure. The TDRM calculations in [12] were performed with a 20 a.u. inner region, an L_max = 5 angular momentum expansion, and 60 continuum functions per angular momentum of the continuum electron. In general, a numerical discretization of the continuum as used in haCC yields more accurate results compared to the spectral discretization of the continuum used in [12]. A more exact definition of the discretization used for the calculations in [12] would be needed for an analysis of these differences. Apart from these minor differences, it should be emphasized that this agreement is achieved without any adjustment of parameters, which provides a quantitative confirmation of all the results.
Five-Photon Ionization Rates from Argon
In this section, we compute the five-photon ionization rates from Argon and compare them with RMF calculations at a laser intensity of 10¹³ W/cm². We use in our Argon basis four ionic states: the three-fold degenerate [Ne]3s²3p⁵ states and the [Ne]3s3p⁶ state. This implies we have four possible ionization channels. Again, the configurations used to represent the states are only symbolic, and in practice we use Configuration Interaction theory to treat them.
Figure 3 shows the five-photon ionization rates from the haCC computations and from RMF theory [14]. We use a simulation volume radius of 40 a.u. and an angular momentum expansion up to L_max = 9 for the active electron basis. The ionization rates are computed by monitoring the rate at which the norm of the wavefunction in the simulation box drops. We use continuous wave laser pulses with ramp up and ramp down for our calculations. Hence, the rate at which the norm of the wavefunction drops reaches a steady state for any given simulation box size. We find that our haCC computations are in very good agreement with the RMF results. Both approaches produce the two resonances, 3p⁵4p ¹S at 364 nm and 3p⁵4p ¹D at 370 nm. The resonant structures are broader with the haCC method due to the finite bandwidth of the laser pulse.
Photo-Electron Spectra from Argon with 12 nm Wavelength Laser Fields
As photo-electron spectra are a typical quantity measured in photo-ionization experiments, such as those at FELs, we present as a demonstration photo-electron spectra from Argon at a typical FEL wavelength of 12 nm (ħω ≈ 105 eV). This wavelength has been of experimental interest and has also attracted theoretical attention recently [23].
Figure 4 shows the total photo-electron spectra from Argon with linearly and circularly polarized 12 nm wavelength laser pulses. The exact pulse parameters are in the figure caption. The pulse shape used is a cos² envelope on the vector potential, of the form A_{z/x}(t) = A_{0z/x} cos²(πt/(cT)) cos(ωt + β) for |t| < cT/2, where A_{0z/x} is the peak vector potential of the z or x component, T is the single cycle duration, c is the number of laser cycles, and β is the carrier envelope phase. Here, the xz plane is the polarization plane for the circularly polarized laser pulses.
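The sketch below implements a vector potential of this form and obtains the electric field as E = −∂A/∂t by numerical differentiation; the peak value A₀ and the π/2 relative phase used to emulate circular polarization are illustrative choices consistent with the description, not the authors' exact parameters.

```python
import numpy as np

def vector_potential(t, A0, omega, cycles, beta=0.0):
    """cos^2-envelope vector potential, nonzero only for |t| < cycles*T/2."""
    T = 2 * np.pi / omega                    # single cycle duration
    env = np.cos(np.pi * t / (cycles * T)) ** 2
    env = np.where(np.abs(t) < cycles * T / 2, env, 0.0)
    return A0 * env * np.cos(omega * t + beta)

# 12 nm -> photon energy ~103 eV ~ 3.8 a.u.; 15-cycle pulse as in Fig. 4
omega, cycles, A0 = 3.8, 15, 0.05            # a.u.; A0 is an illustrative value
T = 2 * np.pi / omega
t = np.linspace(-cycles * T / 2, cycles * T / 2, 20001)

Az = vector_potential(t, A0, omega, cycles)                  # linear: z only
Ax = vector_potential(t, A0, omega, cycles, beta=np.pi / 2)  # add for circular

Ez = -np.gradient(Az, t)                     # E = -dA/dt (atomic units)
print(f"peak |E_z| = {np.abs(Ez).max():.3f} a.u.")
```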
Figure 4 shows the one- and two-photon ionization peaks. The two-peak structure in the spectrum is a result of ionization into two different channels. Single photon ionization to [Ne]3s²3p⁵ is the dominant ionization process with these pulse parameters. Single photon ionization is a linear process, and ionization with circular polarization can be understood as a simple sum of ionization from two perpendicular linearly polarized laser fields. The single photon peaks with circular polarization are twice as large as the single photon peaks with linear polarization, supporting this fact.
Figure 5 shows the partial wave decomposition and the angle resolved spectra corresponding to the single photon ionization peaks with linear polarization. The partial wave decomposition shows the typical dipole selection rules. The spectrum corresponding to the [Ne]3s3p⁶ ionization channel, which is the inner structure in the angle resolved spectra, has a node in the plane perpendicular to the laser polarization: in order to ionize into this channel, the s electron is ionized into an l = 1 continuum, resulting in the node. The outer structure, corresponding to ionization into the [Ne]3s²3p⁵ channels, is a superposition of s and d waves. With circular polarization, the photo-emission is nearly uniform in all directions in the plane of laser polarization, and it is a sum of dipole emissions into all directions.
Conclusions
The hybrid coupled channels technique was shown to be a promising tool for studying the single ionization dynamics of multi-electron systems in [16]. The applications of the method presented here strengthen this observation. The applications considered here are the computation of multi-photon cross-sections, ionization rates, and fully differential photo-electron spectra of inert gas atoms. We computed one- and two-photon cross-sections from Neon and five-photon ionization rates from Argon. The good agreement between the haCC results and the RMF results shows that haCC can treat multi-electron systems on par with the well established multi-electron theories. However, the haCC approach promises to reach a step ahead of the other multi-electron theories in terms of the flexibility that it possesses due to a direct interface to state of the art quantum chemistry and its compatibility with the efficient tSURFF spectra method. haCC can be used to compute photo-electron spectra from multi-electron systems at long wavelengths, which have not been accessible with any multi-electron method so far. As a first step in this direction, we presented total and angle resolved photo-electron spectra from Argon at an XUV wavelength.
Figure 3. Five-photon ionization rates as a function of wavelength. The peak intensity of the laser fields used is 10¹³ W/cm². The RMF results are from [14].
Figure 4. Total photo-electron spectra from Argon with linearly and circularly polarized 15-cycle, 12 nm wavelength, cos² envelope laser pulses with a peak intensity of 9×10¹³ W/cm². The figure shows the one- and two-photon ionization peaks.
"Chemistry",
"Physics"
] |
Development and validation of two analytical strategies for the determination of glucosides of acidic herbicides in cereals and oilseed matrices
The aim of the present research was the development and validation of a selective and reliable method for the indirect and direct determination of acidic herbicide glucosides. Enzymatic deconjugation was investigated as a mild alternative to harsh alkaline hydrolysis. Various enzymatic options for deconjugation were exploited. One out of nine tested specific enzymes proved to be practical and repeatable for different matrices and concentration ranges, leading to the complete deconjugation of the glucosides. The method was validated according to the SANTE/11312/2021 guideline for cereals and oilseeds and for a rice-based infant formula. Additionally, for four acidic herbicide glucosides available on the market, a quantitative method for direct determination of the intact glucosides was optimized and validated. In both methods, the average recoveries were within 70-120%. The limits of quantification (LOQ) achieved were 10 µg kg⁻¹ and 2.5 µg kg⁻¹ for the intact glucosides and the free acids, respectively, in cereals and oilseeds. For the rice-based infant formula, the LOQ was 1 µg kg⁻¹ (3 µg kg⁻¹ for dichlorprop). To confirm its applicability, the deconjugation approach was tested for fifteen samples (cereals, oilseeds, and citrus) with incurred residues. Comparisons were made between the method without deconjugation and two methods with deconjugation: the here proposed enzymatic deconjugation and the more commonly used alkaline hydrolysis. The inclusion of enzymatic deconjugation during sample preparation led to an increase of up to 2.7-fold compared to analysis without deconjugation. Enzymatic deconjugation gave results comparable to alkaline hydrolysis for 13 out of 15 samples. Supplementary information: The online version contains supplementary material available at 10.1007/s00216-023-04898-y.
Introduction
Pesticides can be absorbed by plants, becoming a target of primary or secondary metabolism depending on their molecular structures. The molecular size determines the phyto-availability and also the main diffusion process [1]. Phenoxy acidic herbicides such as 2,4-dichlorophenoxyacetic acid (2,4-D), 2-methyl-4-chlorophenoxyacetic acid (MCPA), dichlorprop, and haloxyfop can be applied as free acid, salt, or ester. Esters (more lipophilic) tend to penetrate the leaves better, after which they are hydrolyzed into the free acids responsible for the herbicidal activity.
After absorption and distribution, there are three main metabolic phases through which a pesticide (or any other xenobiotic) can be biotransformed in plants, and within these phases several enzymes and co-factors play a crucial role [2]. Glucosylation of xenobiotics is one of the main phase II metabolism routes, and it serves an essential function as a defense mechanism. This process occurs by the addition of a sugar molecule (primarily glucose) to the native compound, or to the phase I metabolite if the primary compound does not intrinsically contain a suitable functional group in its chemical structure. Other possible types of conjugation take place with the addition of amino acids, fatty acids, and alcohol molecules. Nevertheless, the main phase II metabolic pathway in plants results in the generation of the respective glucoside metabolite.
Although the conjugates are not easily absorbed by animals or humans (due to their high polarity), it can be assumed that the chance of their being hydrolyzed in the gastrointestinal tract, thus releasing the native active compounds, is quite high. For this reason, during the last decade, the residue definitions (RDs) and the maximum residue levels (MRLs) of many pesticides were re-evaluated in accordance with Article 12 of Regulation 396/2005/EC [3], and in a number of cases they include conjugates and esters besides the parent compound. The majority of pesticides with a carboxy- or phenolic substituent include conjugates in the RD [4]. Inclusion of the glucoside metabolites as such is only possible for a limited number of compounds for which the analytical reference standards are available. In other cases, only indirect determination is possible, after deconjugation and determination of the total free acid content. For this purpose, the most common procedure is based on the use of strong acidic and/or alkaline conditions to release the sugar from the parent pesticide. However, in some cases this approach can cause further degradation of the molecule, making correct quantification of the parent compound impossible. The use of milder conditions, such as enzymatic deconjugation, can avoid this. Although esters are relevant from a regulatory point of view, there are two main reasons why they are often not included in existing methods. Firstly, the number of esters per single pesticide can be very high; for instance, there are 23 different esters of the herbicide 2,4-D commercially available. Thus, the development of a method capable of detecting all the different esters of several pesticides is a very complex task. Additionally, after application in the field, esters are rapidly hydrolyzed to the free and active form, which is why their content in the final product is usually negligible. The rapid conversion of esters into the free acidic forms and the importance of monitoring the glucoside conjugates are described in detail in the WHO/FAO Joint Meeting on Pesticide Residues (JMPR) reports for 2,4-D, MCPA, and haloxyfop [5][6][7]. Although there is no report available for dichlorprop, the same conclusion can be drawn based on the chemical structure it has in common with 2,4-D and MCPA. These reports not only confirm the low stability of the esters as such, but also emphasize the need for methods capable of detecting conjugates.
In the past, several studies have been published in which the deconjugation step was tested for the evaluation of the total content of acidic pesticides in cereal-based commodities. Most of these investigations described a comparison of extraction techniques with and without the deconjugation process.
One of the first studies was reported in 1971 by Chow et al. [8]. The investigation focused on determining the presence of conjugates of MCPA in treated wheat, and on evaluating alkaline hydrolysis (before the addition of extraction solvent) for the quantification of the total herbicide residue. The authors reported an increase in the MCPA content when alkaline hydrolysis was included during the sample preparation.
Løkke proposed a procedure for the analysis of 2,4-D and dichlorprop in cereals including a chemical hydrolysis step followed by an enzymatic deconjugation step [9]. This sample preparation procedure was compared with two other approaches not entailing any hydrolytic steps. The final results showed approximately ten and five times higher detection of dichlorprop and 2,4-D, respectively, when the hydrolytic process was included during the sample preparation.
An extensive study was carried out by Chkanikov et al. [10]. The main objectives were the investigation of the different metabolic pathways of 2,4-D in several plants (including cereals) and the use of an acidic protocol for the hydrolysis of the metabolites and the consequent release of the free 2,4-D. The concentration of 2,4-D increased approximately three times when hydrolysis was performed.
Two different extraction procedures including a hydrolysis step for the determination of mecoprop residues in barley were compared by Cessna [11]. In one of the two procedures the alkaline deconjugation was carried out before the extraction, and in the second one after the extraction. The latter protocol showed better repeatability. Nevertheless, no increase in the free acid signal was reported by the author.
In 2007, a standardized method for the analysis of acidic pesticides in wheat flour, including the option of alkaline hydrolysis, was delivered to the participants of the European Proficiency Test for Single Residue Methods (EUPT-SRM2) [12]. As reported in [4], the wheat test material was cultivated applying MCPA in the field, and thus contained the relative conjugated residue. The same approach was followed in 2009 for the EUPT-SRM4: oat containing incurred residues of dicamba was delivered to the participants. Based on the final results, both EUPTs showed a tangible rise of the MCPA (7.1-fold increase) and dicamba (2.5-fold increase) concentrations after the alkaline hydrolysis.
In 2017, an analytical approach for the determination of acidic pesticides together with their esters and conjugates was reported [13]. This procedure is based on an alkaline hydrolysis step (30 min at 40 °C) followed by the Quick Easy Cheap Effective Rugged Safe (QuEChERS) extraction, for the residue determination of 2,4-D, dichlorprop, fluazifop, haloxyfop, MCPA, and 4-(4-chloro-2-methylphenoxy)butanoic acid (MCPB). Due to the unavailability of acidic pesticides' conjugates as standards, the method development was carried out using esters for the evaluation of the hydrolysis step. Nevertheless, the method was applied to twenty food samples characterized by the presence of incurred residues of acidic pesticides. The application of an alkaline hydrolysis step always showed an increase in the final concentration of the respective free compounds. Additionally, the method was tested in six different German laboratories (not involved during the method development), and a residue amount up to six times higher was detected when the hydrolysis step was included in the analytical procedure.
Recently, analytical standards for several phenoxyacid glucoside conjugates have become available. In the present research, an analytical method for the intact content of acidic herbicides' glucosides and a second method for the analysis of their respective free acids after enzymatic deconjugation were developed and validated. The latter can be considered a selective way to obtain the deconjugation of the glucoside metabolites. Moreover, enzymatic deconjugation is milder than the harsh chemical deconjugation methods, which may result in degradation of certain parent pesticides. In the analytical observational report from the EU Reference Laboratory for Pesticides Requiring Single Residue Methods, poor recoveries of the free acid fenoxaprop were reported after the alkaline hydrolysis of its ester [4]. The poor stability of this pesticide under alkaline conditions was also confirmed by conducting the hydrolysis directly on a mixture of free acids.
In this work, various enzymes and conditions for deconjugation were investigated to achieve quantitative conversion into the free acids. For the optimum enzyme and conditions, the method was validated for cereals (wheat), oilseeds (linseed), and a baby food product (rice-based). In addition, a QuEChERS-based method for direct determination of the four commercially available glucoside conjugates was also validated for cereals and oilseeds. Finally, the suitability of the enzymatic deconjugation strategy was tested in fifteen samples with incurred residues, including a comparison with the acetate-buffered QuEChERS extraction technique with and without alkaline hydrolysis.
Samples
Wheat flour, linseed, and rice-based infant formula samples were purchased in a local organic shop and were employed for validation purposes. Wheat flour and rice-based infant formula were not subjected to any pre-treatment. The linseed sample was homogenized by a conventional milling procedure at ambient temperature. Samples were stored at −20 °C until use.
Sample preparation
Sample preparation was based on acetate-buffered QuEChERS [14] with some modifications, reported below. In general, 2.50 ± 0.05 g of sample was weighed into a 50-mL centrifuge tube, 7.5 mL of water was added, and a 30-s vortex mix was carried out. For the recovery studies, samples were fortified at this stage. Afterwards, 10 mL of ACN (1% AA) extraction solvent was added and the tubes were agitated in an automatic axial extractor (Agytax®) for 3 min. Next, a salt mixture of magnesium sulphate (4 g) and sodium acetate (1 g) was added, and the tube was immediately shaken manually and then for 1 min in the Agytax machine, to induce phase separation. The tubes were then centrifuged at 4000 r.p.m. for 5 min at 10 °C. Finally, 500 µL of extract was pipetted into a Mini-UniPrep PTFE filter vial (0.45 µm), whereupon it was ready to be injected into the UHPLC-MS/MS system.
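For readers following the numbers, the sketch below converts an instrument reading in the final extract (ng/mL) back to a sample concentration (µg/kg) using the masses and volumes of this protocol; the example reading is invented, and the simple single-factor conversion (neglecting any water-partitioning correction) is an assumption for illustration.

```python
# QuEChERS back-calculation: 2.50 g of sample extracted into 10 mL of ACN (1% AA).
sample_mass_g = 2.50
extract_volume_mL = 10.0

def extract_to_sample(conc_extract_ng_per_mL):
    """Convert extract concentration (ng/mL) to sample concentration (ug/kg).
    ng/mL * mL / g = ng/g = ug/kg, so the factor here is 10 / 2.5 = 4."""
    return conc_extract_ng_per_mL * extract_volume_mL / sample_mass_g

reading = 2.5   # ng/mL in the final extract (invented example)
print(f"{reading} ng/mL in extract -> {extract_to_sample(reading):.1f} ug/kg in sample")
```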
Intact glucosides: direct analysis
For the analysis of the intact glucoside content in wheat, the procedure reported above was used with one modification: the ACN (1% AA) and the 7.5 mL of water were added at the same time. This decision was initially based on the low recoveries obtained for MCPA-glucoside (< 40%) in the early stages of method development. The presence of intrinsic enzymes capable of partially hydrolyzing glucosides (undesirable within this approach) was hypothesized and then confirmed. The prompt addition of the acidified acetonitrile leads to the denaturation of proteins, causing the loss of the enzymes' biological activity ("Method development" section).
Intact glucosides: indirect analysis (after enzymatic conversion into the free acid)
Regarding the sample preparation procedure involving the enzymatic deconjugation, 7.5 mL of acetate-buffered water (0.25 M, pH 4.0) containing 1 U mL⁻¹ of enzyme was added to the 2.50 g of sample, together with the glucoside standards and the free acids' ILISs. Subsequently, a 24-h deconjugation at 37 °C in a water bath was carried out. At the end of the incubation period, 10 mL of ACN (1% AA) was added and all the following extraction steps were carried out as reported above. Figure 1 illustrates a simplified scheme of the sample preparation.
Alkaline hydrolysis of samples with incurred residues
Deconjugation by alkaline hydrolysis was based on [4]. For this, 2.50 ± 0.05 g of sample was weighed into a 50-mL centrifuge tube; then, 7.5 mL of water was added together with the free acids' ILISs, followed by a 30-s vortex mix. Afterwards, 10 mL of ACN (1% AA) and 2 mL of NaOH (5 N) were added and the tubes were agitated in the Agytax machine for 3 min. The tubes were placed in a water bath for 120 min at 40 °C. After the tubes had cooled down, 2 mL of H₂SO₄ (5 N) was added and the tube was shaken vigorously. Finally, all the following extraction steps were carried out as reported above.
Instrumental conditions
The analyses were carried out using a Nexera X2 LC system (Shimadzu, Kyoto, Japan). The system was equipped with two LC-30AD pumps, a DGU-20A 5R degassing unit, a CTO-20AC oven, and a SIL-30AC autosampler. The UHPLC system was coupled to a hybrid quadrupole/linear ion trap mass spectrometer 6500+ QTRAP (Sciex Instruments, Concord, Ontario, Canada). An electrospray ion source (ESI) was employed for ionization. An Acquity UPLC BEH C18 column (130 Å, 1.7 µm, 2.1 mm × 100 mm) from Waters Corp. (Milford, MA, USA) was employed as the separation column and was maintained at a constant temperature of 35 °C. Water with 0.1% formic acid and ACN with 0.1% formic acid were elution solvents A and B, respectively. The chromatographic run was carried out applying the following gradient: 0-7 min, 10-90% B; hold for 2 min; 9-9.3 min, 90-10% B. The injection volume, the flow rate, and the injector temperature were 2 μL, 0.3 mL min⁻¹, and 15 °C, respectively. The total run time was 12 min, including the re-equilibration period.
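The gradient program can be written compactly as a list of time/%B breakpoints with linear interpolation in between, as in the sketch below; the breakpoints are taken from the text, while the helper itself is just an illustration of how the program is evaluated.

```python
# LC gradient as (time_min, %B) breakpoints, linear interpolation in between:
# 0-7 min: 10 -> 90% B, hold to 9 min, 9-9.3 min: 90 -> 10% B, hold to 12 min.
GRADIENT = [(0.0, 10.0), (7.0, 90.0), (9.0, 90.0), (9.3, 10.0), (12.0, 10.0)]

def percent_B(t_min):
    """%B at time t (minutes) by linear interpolation between breakpoints."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 12-min run")

for t in (0, 3.5, 7, 8, 9.3, 12):
    print(f"t = {t:4} min -> {percent_B(t):5.1f} %B")
```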
The QTRAP-MS system was employed in multiple reaction monitoring (MRM) mode, applying unit mass resolution for both Q1 and Q3. The ESI source was operated in negative ionization mode. The following parameters were applied to the MS system: ion spray voltage, −4500 V; curtain gas flow, 30 L min⁻¹; interface temperature, 400 °C; the ion source gases were maintained at values of 40 and 70, respectively. For quantitative purposes, two selected reaction monitoring transitions for each compound were acquired, applying a dwell time of 20 ms. Formic acid adducts were monitored for all glucoside compounds. All the MS/MS transitions together with the relevant parameters (declustering potential, entrance potential, collision energy, and cell exit potential) are reported in Table S2. Data were processed using SCIEX OS software (v.2.2).
Method validation parameters
The validation of the methods was performed according to the SANTE/11312/2021 guideline [15]. Briefly, the linearity for each analyte was assessed by means of a six-point calibration (for details, see the supplementary material). The deviation of the back-calculated concentration from the true concentration should be ≤ 20%. The matrix effect was assessed by comparing the response of the standards in matrix with that of standards prepared in solvent. Recovery experiments were performed by fortifying the matrices at different levels (n = 6 for each level). The average recoveries should be between 70 and 120%.
The precision (or repeatability) of the method was assessed by calculating the relative standard deviation (RSD%) of each analyte in the three matrices spiked at the levels reported in the respective tables. According to SANTE/11312/2021, the RSD% at each spiking level should be ≤ 20%. The limit of quantification (LOQ) was defined as the lowest concentration of the analyte that has been validated with acceptable trueness (recovery) and precision (RSD%) by applying the complete analytical method and identification criteria (± 0.1 min retention time shift and ion ratio within ± 30% of the average of calibration standards from the same analytical sequence).
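These acceptance criteria are straightforward to compute; the sketch below evaluates the average recovery and RSD% for one invented set of six replicates, and the back-calculated concentration deviation for one invented calibration point, mirroring the checks described above (all numbers are fabricated for illustration only).

```python
import statistics as st

def recovery_stats(measured, spiked):
    """Average recovery (%) and RSD (%) for one fortification level."""
    recs = [100.0 * m / spiked for m in measured]
    return st.mean(recs), 100.0 * st.stdev(recs) / st.mean(recs)

def bcc_deviation(back_calculated, nominal):
    """Deviation (%) of a back-calculated calibration concentration."""
    return 100.0 * (back_calculated - nominal) / nominal

# invented example: six replicates at a 10 ug/kg fortification level
avg_rec, rsd = recovery_stats([9.1, 9.8, 10.4, 9.5, 8.9, 10.1], spiked=10.0)
print(f"avg recovery = {avg_rec:.1f}% (accept 70-120%), RSD = {rsd:.1f}% (<= 20%)")

# invented example: one calibration point nominally at 2.5 ng/mL
print(f"BCC deviation = {bcc_deviation(2.62, 2.5):+.1f}% (accept within +/- 20%)")
```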
Method development
During the method development, the intact glucoside recoveries at three fortification levels (n = 18) in wheat appeared to be good for haloxyfop glucoside (avr. 90%), fairly good for 2,4-D glucoside and dichlorprop glucoside (avr. 74% and avr. 72%, respectively), but unacceptable for MCPA-glucoside (avr. 36%). Nevertheless, the RSD% was always < 20%. Besides, satisfactory recovery percentages were obtained for all four glucosides in linseed (ranging between 76 and 103%). A hypothesis was first formulated and then confirmed: wheat samples contain intrinsic enzymes capable of partially hydrolyzing glucosides once water is added to the wheat flour. For the determination of the intact glucosides, hydrolysis should be prevented, and the intrinsic enzymes needed to be deactivated. To this end, the acidic ACN was added together with the water, rather than first soaking with water and then adding the acetonitrile for extraction. The effectiveness of this approach was confirmed by comparing three slightly different sample preparation approaches, herein briefly described: (I) addition of water (5 min soak) followed by addition of acidic ACN; (II) addition of water (30 min soak) followed by addition of acidic ACN; (III) addition of water in conjunction with acidic ACN. The outcomes of the confirmatory experiment are illustrated in Fig. 2. The blue bars show the average recoveries (%) obtained with the conventional QuEChERS procedure (I): initial homogenization of the sample with water, followed by the analyte fortification and the addition of the organic solvent. As described at the beginning of the paragraph, MCPA-glucoside was the compound with the lowest recovery. The red bars show the results when a longer time (30 min) was used for soaking after the fortification with glucosides (to the aqueous solution) and before the addition of the organic solvent (II). A severe reduction in the recovery % for all the compounds is evident: the average recovery for MCPA-glucoside decreased to 11%, and to 28, 39, and 63% for 2,4-D glucoside, dichlorprop glucoside, and haloxyfop glucoside, respectively. When water and acidic ACN were added at the same time (III) and before the metabolite standards (green bars), all the recovery values improved, to approx. 70% for MCPA-glucoside and > 90% for the other three compounds.
Fig. 2 Confirmatory experiment for the evaluation of intrinsic enzymatic activity in wheat. Recoveries % are relative to the average values obtained at the three fortification levels. Blue bars = addition of water (5 min soak) followed by addition of acidic ACN (conventional procedure); red bars = addition of water (30 min soak) followed by addition of acidic ACN; green bars = addition of water in conjunction with acidic ACN.
MCPA-glucoside was found to be the compound most susceptible to the enzymatic activity. Of the four glucosides tested, the phenoxyacid glucosides were more prone to deconjugation by wheat enzymes than the aryloxyphenoxyacid glucoside (haloxyfop glucoside). This might be caused by the presence of an additional aromatic ring (the pyridine substituent) in the haloxyfop glucoside structure, which probably plays a crucial role in the binding interaction between the active site of the enzyme and the metabolite.
Method validation
The validation was carried out for both the wheat and linseed matrices. To compensate for matrix effects, quantitation was performed using matrix-matched calibration lines in the range of 1.25-25 ng mL⁻¹ (6 calibration levels) in triplicate. The linearity results are reported in Table S3. The deviation of the back-calculated concentration percentage (BCC%) was generally within ± 10%.
Performance criteria for the four glucosides were assessed at 10, 20, and 50 µg kg⁻¹ (six replicates each) in both matrices (Table 1). The recovery results were similar in both matrices. The lowest recovery values were 67% and 68%, at a level of 50 µg kg⁻¹, for MCPA-glucoside in wheat and linseed, respectively. The RSD% was ≤ 12% and ≤ 10% for wheat and linseed, respectively.
Matrix effects were calculated by comparing the response of the analytes in matrix and in solvent at the same concentration level (25 ng mL⁻¹). Detailed data are provided in Table S4. Significant suppression (t-test, α = 0.05) was observed in most cases, between −18 and −25% for wheat, and between −22 and −31% for linseed. Ion ratios were within ± 30% of the average value of the calibration standards.
For all four glucosides and for both matrices, an LOQ of 10 µg kg⁻¹ was set, according to the SANTE/11312/2021 guideline.
Method development
The aim of this analytical approach was to optimize a procedure for the full and selective deconjugation of the metabolites followed by QuEChERS extraction, and to quantify the respective free acids by UHPLC-MS/MS. In order to properly quantify the free acid content starting from the metabolites, the differences in molecular weights were taken into account.
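This molecular-weight correction amounts to multiplying the glucoside concentration by the ratio of the free-acid and glucoside molar masses; since glucosylation adds a C₆H₁₀O₅ unit (162.14 g/mol), the sketch below derives the factor for each analyte. The free-acid average molar masses are standard literature values, and treating the conjugate mass as simply the acid plus 162.14 is the assumption made here.

```python
GLUCOSE_UNIT = 162.14   # g/mol added upon glucosylation (C6H10O5)

# average molar masses of the free acids, g/mol (literature values)
FREE_ACIDS = {"2,4-D": 221.04, "MCPA": 200.62,
              "dichlorprop": 235.06, "haloxyfop": 361.70}

def acid_equivalent(glucoside_conc, acid_name):
    """Convert a glucoside concentration to its free-acid equivalent by
    the mass ratio MW(acid) / MW(glucoside)."""
    mw_acid = FREE_ACIDS[acid_name]
    return glucoside_conc * mw_acid / (mw_acid + GLUCOSE_UNIT)

for name in FREE_ACIDS:
    print(f"{name:12s}: conversion factor = {acid_equivalent(1.0, name):.3f}")
```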
The glucosidase activity of several enzymes was investigated. Eight enzymes, and a mixture of two of the eight, were tested to evaluate their activity towards the glucosides of acidic herbicides. Preliminary tests were performed in an acetate-buffered water solution (0.23 M) of a mixture of 2,4-D glucoside and haloxyfop glucoside at an initial concentration of 25 ng mL⁻¹. The enzyme concentration was arbitrarily set at 1 U mL⁻¹, following the supplier recommendations for the optimum temperature and pH (Table S1). The incubation time was set at 1, 4, 16, and 24 h. Figure 3 illustrates the decrease in [c]% at different time intervals obtained with the nine different enzymes for the 2,4-D and haloxyfop glucosides. The decrease of the glucoside concentrations and the increase of the respective free acidic herbicides (data not reported here) were monitored. After 24 h of incubation, most of the enzymes tested did not lead to quantitative deconjugation. Two enzymes, α- and β-glucosidase (fungus Aspergillus niger) and β-glucosidase (Thermotoga maritima), showed promising results in standard solutions, leading to a full conversion of the two glucosides into their respective native herbicides after 16 h of incubation. The α- and β-glucosidase from the fungus Aspergillus niger proved to be more practical, with an incubation temperature of 37 °C compared to the 90 °C required for the β-glucosidase from Thermotoga maritima. Therefore, the enzyme from the fungus Aspergillus niger was selected for testing the deconjugation in the presence of matrix and for the further validation experiments.
The influence of the matrix and of the enzyme concentration on the deconjugation was also evaluated (Table S5). The experiment was carried out by fortifying each glucoside at 20 ng mL⁻¹ in water solution and in three different commodities, namely wheat, linseed, and dry peas, the latter as an additional matrix in which acidic herbicides have been found in our laboratory. These samples were incubated for 24 h at 37 °C with 0.1 U mL⁻¹ and 1 U mL⁻¹ of enzyme. Afterwards, acetate-buffered QuEChERS extraction was performed before sample analysis. The % decrease of each glucoside was monitored. As reported in Table S5, the use of a 10 times lower enzyme [c] resulted in insufficient activity for the full conversion of all the glucosides. The presence of matrix revealed an increase in the deconjugation efficiency compared to water for all the glucosides in the three matrices, except for haloxyfop glucoside in wheat. Furthermore, the use of 1 U mL⁻¹ is necessary to obtain the full conversion of the glucosides into the respective free acidic compounds. Only in wheat did the remaining [c]% of dichlorprop glucoside and haloxyfop glucoside, using 1 U mL⁻¹ of enzyme at 24 h of incubation, amount to 10% and 3.4%, respectively.
Based on these results, the α- and β-glucosidase from the fungus Aspergillus niger (enzyme no. 2) at a concentration of 1 U mL⁻¹ was selected for the validation of the method. The enzymatic deconjugation reaction was conducted at 37 °C for 24 h.
Method validation (wheat and linseed)
The developed method was validated for wheat and linseed by spiking with the acidic herbicides' glucosides and quantitative determination of the generated free acidic compounds. The calibration line was prepared in solvent in a range from 0.25 up to 20 ng mL⁻¹ (six calibration levels) in triplicate. The ILISs of the four free acidic herbicides were added (at a concentration of 10 ng mL⁻¹) to the matrix before the deconjugation step to compensate for matrix effects and for any losses during the incubation and extraction. Hence, the recoveries as determined here are apparent recoveries [16], rather than recoveries as defined in SANTE/11312/2021 [15]. The linearity results obtained for both matrices are reported in Table S6. The BCC% was ≤ ± 10% in all cases. The RSD% of the triplicate injections of calibrants was ≤ 11%, except for 2,4-D (25% at 0.25 ng mL⁻¹).
The trueness of the method was assessed at five levels in the range 2.5 to 38 µg kg−1 (n = 6) and the results are reported in Table 2. All the recovery results were between 70 and 120%. The lowest recovery was obtained for dichlorprop (76%) at 7 and 14 µg kg−1, while values above 100% were observed for haloxyfop (max 117% at 19 µg kg−1) in the linseed matrix. All the recovery values calculated in the wheat matrix were in a range between 89 and 98%. The total average recovery was 92% and 97% in wheat and linseed, respectively. The RSD% was ≤ 13% (avg. 7%) and ≤ 9% (avg. 5%) for wheat and linseed, respectively. Matrix effects were not assessed, as correction was made by the use of ILISs. Ion ratios were within ± 30% of the average value of the calibration standards. For all four acidic herbicides' conjugates, an LOQ of 2.5 µg kg−1 was applicable in both matrices, according to the SANTE/11312/2021 guideline.
Fig. 3 Evaluation of the enzymatic activity, shown in one panel per glucoside (e.g. HALOXYFOP GLUCOSIDE). The number preceding the enzyme name refers to Table S1
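As a worked illustration of the trueness figures reported above, the sketch below computes the apparent recovery % and its RSD% for one spike level. The six replicate values are invented for demonstration, and the acceptance window (70-120%, RSD ≤ 20%) follows SANTE/11312/2021.

```python
import numpy as np

def apparent_recovery(measured, spiked):
    """Mean apparent recovery % (ILIS-corrected result vs. spike level) and RSD%."""
    rec = 100.0 * measured / spiked
    return rec.mean(), 100.0 * rec.std(ddof=1) / rec.mean()

# Hypothetical n = 6 replicates at a 19 ug/kg spike, expressed as free acid
measured = np.array([18.1, 17.6, 18.9, 17.9, 18.4, 18.2])
mean_rec, rsd = apparent_recovery(measured, spiked=19.0)
print(f"recovery {mean_rec:.0f}%, RSD {rsd:.1f}%")   # acceptance: 70-120%, RSD <= 20%
```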
Method validation (rice-based infant formula)
The enzymatic deconjugation approach, developed and validated for cereal-based matrices, was also applied to a rice-based infant formula. In accordance with Article 4 of Regulation (EU) 2016/127, the total residue of haloxyfop in infant formulas should not exceed a concentration of 3 µg kg−1 [17]. Due to the low concentration requested by the Regulation, the validation of a method for the intact glucoside content in rice-based infant formula was not included in the validation plan, inasmuch as the LOQ obtained for wheat and linseed (10 µg kg−1) exceeds the MRL set for haloxyfop. The sample preparation and the analytical approach were the same as reported in the "Method validation (wheat and linseed)" section, but lower calibration and fortification levels were tested. The calibration line was prepared in solvent in a range from 0.1 up to 5 ng mL−1 (six calibration levels) in triplicate. The ILISs of the four free acidic herbicides were added at a concentration of 2 ng mL−1 to compensate for matrix effects and for any potential sample preparation losses. Table S7 shows the linearity results for the free acidic herbicides.
In order to fulfill the requirements of the EU Regulation, the method validation criteria were assessed by fortifying the rice-based infant formula at two very low concentrations: 1 and 3 µg kg−1. Table 3 reports the trueness results based on six replicates for each level. The recoveries at 3 µg kg−1 ranged between 50% (dichlorprop) and 84% (2,4-D), while the RSD% was always ≤ 10%. At the lowest fortification level, dichlorprop was not detected, MCPA and 2,4-D recoveries were ≥ 70%, and the haloxyfop recovery was slightly below 70%. In this case too, matrix effects were not assessed because ILISs were used. The extracted ion chromatograms of the free acidic herbicides' signals at the lowest achievable fortification levels are shown in Fig. 4. The peaks of 2,4-D (Fig. 4a and b), MCPA (Fig. 4c and d), and haloxyfop (Fig. 4e and f) correspond to the 1 µg kg−1 fortification level, while the peak of dichlorprop (Fig. 4g and h) corresponds to the 3 µg kg−1 fortification level. The ion ratios were within ± 30% of the average of the calibration standards, with the exception of haloxyfop at the 1 µg kg−1 level, where the signal of the qualifier was very close to the detection limit. The LOQ values were 3 µg kg−1 for dichlorprop and 1 µg kg−1 for 2,4-D, MCPA, and haloxyfop. Consequently, the method herein proposed is suitable for the determination of concentrations ≥ 1 µg kg−1 of the sum of free haloxyfop and its glucoside in rice-based infant formulas.
Analysis of real samples with incurred residues
Fifteen matrices with incurred residues were subjected to analysis in order to test the applicability of the proposed enzymatic deconjugation strategy, and to compare the method to (a) a method without deconjugation and (b) an existing method that uses alkaline hydrolysis. The analyzed matrices belong to three different commodity groups according to the SANTE/11312/2021 guideline, namely high oil content and very low water content; high starch and/or protein content and low water and fat content; and high acid content and high water content. Two of the four free acidic herbicides were detected in the real samples, namely 2,4-D and haloxyfop. Three samples contained both pesticides, while 2,4-D alone was detected in twelve matrices.
The analysis of incurred samples was carried out applying three different approaches: acetate-buffered QuEChERS, acetate-buffered QuEChERS including alkaline hydrolysis (considered to convert "any" conjugates and esters into free acids), and acetate-buffered QuEChERS after the enzymatic deconjugation of glucosides developed and validated in this work. The comparison of the three approaches helps to confirm the applicability of the enzymatic method relative to the two most commonly used methods of analysis.
The results are summarized in Table 4.
In most of the cases, the inclusion of the deconjugation/hydrolysis steps led to an increase of the total free acid content. For haloxyfop in oil seeds, this was a factor of 1.4-1.7. For 2,4-D, this ranged from no increase to a factor of 1.6 in dry matrices (oil seed, cereals, peas), and a factor of 2.1-2.7 in citrus. The differences in total free acids between the two deconjugation methods were minor for 13 of the 15 samples. The exceptions concerned haloxyfop in sunflower seed meals S1 and S2: no increase with the enzymatic method versus a 1.7-fold increase with alkaline hydrolysis. This could indicate that for this particular pesticide/matrix combination other conjugates and/or esters may have been present in the sample material. The highest increase in concentration was observed for 2,4-D in grapefruit S2 (sample no. 12), for which a 2.7-fold increase was found when either enzymatic or alkaline hydrolysis was included in the sample preparation.
Noteworthy is the detection of the peak of 2,4-D glucoside in all three grapefruit samples, as well as in lemon and mandarin. The peak identification was confirmed against the solvent standard and further by overspiking the extract with the analytical standard (illustrated in Fig. 5, with more details in Figure S1).
Although in citrus fruits the pesticide is applied in the form of esters relatively shortly before harvest, to reduce preharvest drop of mature fruit, our results confirm that esters in this case are of little or no relevance for the total 2,4-D residue. This is in line with the reported rapid hydrolysis of esters after application [5].
Conclusion
In the present research, a novel method for the determination of the total free acidic herbicide content after enzymatic deconjugation was proposed and validated in three relevant matrices. This approach can be considered reliable and specific for releasing the parent pesticide from the glucoside metabolite(s), without risking any further hydrolysis, as may happen with alkaline hydrolysis. The α- and β-glucosidase (fungus Aspergillus niger) demonstrated satisfactory deconjugation power together with high repeatability in the presence of all three matrices included in this study.
A method for the direct determination of the intact glucoside content was also optimized and validated.The method demonstrated good extraction recoveries and repeatability, although the few analytical standards on the market make it applicable to real samples only for a limited number of pesticide conjugates.
In both cases, method validation was successful and LOQs down to 1 µg kg −1 were achieved.
Based on the results obtained for the analysis of samples with incurred residues, the enzymatic deconjugation is comparable to the existing alkaline hydrolysis method, with the benefit of preventing hydrolysis of other functional groups present in the parent pesticides. On the other hand, the enzymatic approach proved to be more time-consuming, as it requires a 24-h incubation.
Fig. 1
Fig. 1 Combined sample preparation scheme for (in)direct analysis of phenoxy acid glycosides. *For the evaluation of intact glucosides in wheat, ACN (1% AA) has been added together with the water in order to inactivate intrinsic enzyme activity, and the deconjugation step is omitted
Fig. 5
Fig. 5 Extracted ion chromatogram (EIC) for 2,4-D glucoside in a mandarin sample using conventional QuEChERS extraction without deconjugation. Extract of the sample without (blue) and with (pink) addition of the standard of 2,4-D glucoside
Table 3
Spiked levels of glucoside conjugates, expressed as free acids, apparent recoveries %, and relative standard deviation % (RSD%) obtained for the analysis of free acidic herbicides after the deconjugation of their respective glucosides in the rice-based infant formula
Table 4
Results relative to the quantification of 2,4-D and haloxyfop in samples with incurred residues extracted by the three different strategies
THE MAIN FEATURES OF THE EVOLUTION OF THE HORIZONTAL BAR IN THE FIRST HALF OF THE 19TH CENTURY
The horizontal bar, as a gymnastic apparatus, was invented by the founder of the German gymnastic system, Ludwig Jahn, in 1811, and was afterwards introduced and developed by him and his students. Jahn's primitive wooden horizontal bar was nothing more than a bar pinned-fixed between two trees. Later on, purpose-built horizontal bars were manufactured, fixed on the ground, with two pilasters and in various arrangements and sizes. The exercises were mostly static, dynamic and close to the bar, since these wooden horizontal bars did not allow large swings. At the same time, a special elaboration was established for the bar, in order to make it endurable and easy to use. In the middle of the 19th century metal bars began to appear and favourably influenced the quality and the quantity of the exercises. At the same time, this apparatus and the parallel bars were established as the dominant apparatuses of the German gymnastic system. The purpose of this study was to research and demonstrate the main features of the evolution of the horizontal bar (exercises, rules and apparatus) during the first half of the 19th century.
INTRODUCTION
The horizontal bar is called the "king" of the apparatuses of gymnastics, since the alternations of the various exercises and grips, the speed of the giant swings and particularly the spectacular aerial phases in the program, as well as in the dismounts, make it exciting and popular. Moreover, since this apparatus is the last one in the Olympic series of apparatuses, it is considered by spectators and athletes to be the crown of the apparatuses. It is also important to note that several apparatuses (for women and men) have been enriched with many elements from the horizontal bar (Kaimakamis, 2001). The fact is that a new apparatus was invented and won a dominant position in gymnastics and in the German gymnastic system. This makes it important to research its historical development and its position in physical education and sports in a critical period for sports, such as the first half of the 19th century.
The horizontal bar and the parallel bars were developed and established as the representative apparatuses of the German gymnastic system, since they were connected with a national and political-social ideology. That is why the enemies of these political and social ideologies persecuted these apparatuses (Krüger, 1993; Schmidt, 2011). Nevertheless, these apparatuses were established, as mentioned above, as the basic gymnastic apparatuses, not only in the German gymnastic system, but also in competitive and school gymnastics.
This study attempts to research, record and highlight the invention and evolution of the horizontal bar in the first half of the 19th century. The position of this apparatus in physical education, in gymnastic systems and in sport in general was also studied.
The method used is that of field research, and the data collection was based on archival and historical sources, focused mainly on the forms and construction technique of the horizontal bar and the type of exercises performed on it. In order to provide fair and objective interpretations, the study includes a brief reference to the German gymnastic system and its founder in the introductory part.
BRIEF REFERENCE TO GUTS MUTHS'S AND LUDWIG JAHN'S PRIMITIVE HORIZONTAL BARS
Guts Muths (1759-1839) was the most important figure of Humanism. In his world-famous book "Gymnastik für die Jugend" (1793) (gymnastics for young people), he presented a kind of horizontal bar that looked like the goalpost of football. Apart from the description and the method of manufacture of this apparatus, the author proposes numerous exercises (mainly from hang), but he does not name it or classify it in a particular category of gymnastic apparatuses (Guts-Muths, 1793). His compatriot Ludwig Jahn (1778-1852), who is considered to be the "father" of the horizontal bar, did so two decades later.
In the first outdoor public gymnasium, which opened near Berlin in 1811, Ludwig Jahn, among other things, invented, named (Reck), classified and established the horizontal bar as an apparatus. He also proposed and classified the various exercises that could be performed on this new apparatus. Jahn's first horizontal bar was nothing more than a thick wooden bar, fixed-pinned between two trees (Fig. 1, 2). Later on, the apparatus was improved and evolved by Jahn and his students. The above apparatus became popularly known through the book titled "Die Deutsche Turnkunst" by Jahn and Eiselen, published in 1816 (Jahn & Eiselen, 1816).
Jahn (1816) describes this first horizontal bar as follows: "It is an endurable, round bar, horizontal to the ground, which is fixed on pilasters. Between these pilasters there is a distance greater than the height of a man. For beginners the height of the apparatus must be up to the shoulders or the head of the trainees, and for advanced ones higher, in order to catch it with a bounce" (p. 147). During the following years the horizontal bar was established, like the parallel bars (whose inventor is also considered to be Jahn), as the dominant apparatus of the German gymnastic system (Borrman, 1978; Krüger, 1993).
THE EVOLUTION OF THE HORIZONTAL BAR IN THE DECADES OF 1830 AND 1840
As the years passed, trainers and trainees, in collaboration with carpenters, constructed ever more stable and easy-to-use horizontal bars of different heights. In the decades of 1830 and 1840, various forms and variants of wooden horizontal bars (fixed to the ground) were constructed and used. The horizontal bar became stable, with a thinner and stronger bar, and some more complicated exercises were performed on it. The main grips used were those which are used nowadays, that is the over grip, the under grip, the mixed grip, the cross grip and the dorsal grip (Schwobe, 1988).

In his book published in Athens in 1837 and entitled "Summary of Gymnastics", George Pagontas (1837) devotes seven pages to the horizontal bar. The author describes the first wooden "fixed" horizontal bar and then refers to the kinds of exercises and the way they were performed on this apparatus. He divides the exercises into two major categories: those performed from hang (swings) and those performed from support. He suggests numerous exercises from various positions and in various ways (hangs, pulls, rotations, etc.), which are mostly static and dynamic, without large amplitude and near the axis of rotation (the bar) (Fig. 5) (Pagontas, 1837).

In these decades, the construction and elaboration of the bar, and of the horizontal bar generally, passed through the following three phases (Spieth, 1989; Gasch, 1920). First phase: for the elaboration of the bar, trunks of tall, straight and young trees (pine, beech, elm, maple, and apple) were cut, peeled and left to dry; they were then cleaned well, rubbed with oil and finally scraped with sandpaper. A similar elaboration was followed for the pilasters, which were strong and fixed on the ground. The exercises performed on this horizontal bar were static and dynamic, without amplitude and swing. At the same time, horizontal bars were manufactured in straight, polygonal and circular arrangements (Fig. 3, 4) (Gasch, 1920). Second phase: trunks of young trees without knots (predominantly pine, beech, oak) were chosen and stored in a dry place for a long time. After the relevant elaboration, two pieces of wood were glued together with the grain (the so-called "waters") running in the same direction. In this way the bar became more endurable. However, the bar was still thick and fragile (diameter 6-8 cm), so it was difficult for trainees to perform exercises with big swings and amplitude (Schwobe, 1988). Third phase: after the relevant elaboration of the wood that was to become a bar, an iron or steel rod was placed in the centre of the bar, as in a writing pencil. This bar had more endurance and could be a little thinner, which made the performance of even more difficult and complicated exercises easier, since the grip was more convenient.

During this period, numerous exercises were performed from various positions (mainly from support and hang) and in various ways, like turns, static or dynamic, without great amplitude and swing. Gymnasts performed ever more exercises (even if they differed little from one another) and did not care whether the exercises were useful or difficult.

As the horizontal bar was improved during the decades of 1830 and 1840, exercises with amplitude and swing were performed. The dismounts were proportional to the level of the exercises, simple and harmless, so the athletes did not use landing mats, but only soil or sand. They were performed after a swing forward or backward (from hang), from removal from support, or with a circle with tuck over the bar. In 1837 the German Ernst Eiselen (Jahn's student and collaborator) published 46 "Gymnastics Tables", which included apparatuses, exercises and supports. These tables were used for several years by the pre-gymnasts and trainers of that period as a valuable and helpful manual (Pahncke, 1983).

Regarding Eiselen's horizontal bar, it was an improved "fixed apparatus", whose bar and pilasters were wooden. The improvement of the apparatus is evident in the exercises, whose main feature is the stretched body and the relatively large swings from the frontal hang (Fig. 6, 7) (Eiselen, 1861; Pahncke, 1983).
THE INTRODUCTION OF THE METAL BAR
According to the German historian Edmund Neuendorff (1875-1961), the first to introduce the iron bar in Germany, and perhaps in the whole world, was J. Carl Lion, in 1850. Another author, Wassmannsdorff, states that a steel bar was also used in Heidelberg in 1852 (Neuendorff, 1930). It is certain that in the middle of the 19th century a rapid development of horizontal bar exercises is observed, mainly due to the introduction of the metal bar. Those involved in the sport of gymnastics know that the quality and functionality of the apparatus has a direct impact on both the quality and quantity of the exercises, as well as on the protection and safety of the athletes (Spieth, 1989; Kaimakamis, 2001).
As mentioned above, in the sport of gymnastics the exercises and the apparatuses are in constant interaction (Gross & Leikov, 1994). So, once the bar became metal, that is thinner, more flexible, tougher and easier to use, the quality of the exercises, the way of performance and their variety changed considerably.
With the introduction of the metal bar there was no significant change in the general shape of the horizontal bar. It remained stable on the ground (fixed), with thick wooden pilasters (Fig. 8), and shortly afterwards a metal bar with adjustable height appeared (Fig. 9) (Gregenow & Samel, 1919). This improvement, from the wooden to the metal bar, created a shift towards exercises with more amplitude and swing. Thus, the kip and the giant swing, two spectacular and useful "key" exercises, appeared and were performed by athletes in the years that followed. The giant swing was well known to acrobats of past years (Grigoras, 1358/1997; Diem, 1967; Kaimakamis et al., 2011), and it was also displayed in Eiselen's tables (1837, p. 82), but it is not known whether the gymnasts of that era used to perform it. After 1850 the gymnasts began to perform the above exercise more often with the A and B grips, that is, forward and backward.
According to the historian of gymnastics Josef Göhler (1987, 1992), the kip was first performed by the trainer Karl Kunz in Leipzig. During the years that followed, this exercise was performed with different variations, even with one hand (the other hand grasping the forearm of the arm with which the athlete was performing).
The new bars had several advantages, but they were still far from being perfect and effective, since the gymnasts used them soon after they came out of the foundry, without any special elaboration. In order to avoid slipping, the gymnasts painted the bar with a special paint or wrapped it with a special skin (Gasch, 1920; Spieth, 1989). The appropriately elaborated bar had a diameter of 6-8 cm. From the decade of 1830 onwards the constructors put a special metal plate on the bar that made it tougher. Since Jahn's era, the athletes had begun to use several horizontal bars in a row or in a polygonal arrangement (hexagon, octagon) and at various heights. The exercises, especially up to the end of the decade of 1830, were static, dynamic and rotational, near the bar, without amplitude and great swing.
CONCLUSION
Ludwig Jahn is considered to be the "father" of the horizontal bar, since he invented and named it, and he also proposed and classified the various exercises that could be performed on the new apparatus. Jahn's first horizontal bar was nothing more than a thick wooden bar fixed-pinned between two trees. Later, it was pinned on two thick wooden pilasters, which were fixed to the ground at various heights and in various arrangements.
In the first half of the 19th century, the horizontal bar was wooden and fixed to the ground in various forms and dimensions, while the exercises were mostly static and dynamic, without amplitude and swing, since this apparatus did not allow such exercises. At the same time, various ways of elaborating the wooden bar were invented, in order to make it strong, flexible and easy to use.
In the middle of the 19th century the metal bar came into wide use, and it had a direct impact on the exercises. Since then, exercises with great amplitude and swing were performed, together with the static and dynamic ones.
Figure 5.
Figure 5. Pagontas' horizontal bar was stable, with a wooden bar and pilasters, without the possibility of adjusting the height (Pagontas, 1837, Table of gymnastic shapes).
Figures 8. & 9.
Figures 8. & 9. Stable horizontal bars with wooden pilasters and a metal bar. Part of the pilasters was inside the ground. In the second one there is the possibility of adjusting the height of the bar (Gregenow & Samel, 1919, p. 7).
Stability of black holes with non-minimally coupled scalar hair to the Einstein tensor
General relativity admits a plethora of exact compact object solutions. The augmentation of Einstein's action with non-minimal coupling terms leads to modified theories with rich structure, which, in turn, provide non-trivial solutions with intriguing phenomenology. Thus, assessing their viability under generic fluctuations is of utmost importance for gravity theories. We consider static and spherically-symmetric solutions of a Horndeski subclass which includes a massless scalar field non-minimally coupled to the Einstein tensor. Such a theory possesses second-order field equations and admits an exact black hole solution with scalar hair. Here, we study the stability of this solution under axial gravitational perturbations and find that it is linearly stable. The qualitative features of the ringdown waveform depend solely on the ratio of the two available parameters of spacetime, namely the black hole mass $m$ and the non-minimal coupling strength $\ell_\eta$. Finally, we demonstrate that the gravitational-wave ringdown transitions between three distinct patterns as the ratio $m/\ell_\eta$ increases: a state which is dominated by photon-sphere excitations and maintains a typical quasinormal ringdown, an intermediate long-lived state which exhibits gravitational-wave echoes and, finally, a state where the ringdown and echoes are depleted rapidly, giving way to an exponential tail.
I. Introduction
Compact objects play a decisive role in contemporary astrophysics, as their relativistic collisions may provide crucial information concerning astrophysical processes in extreme-gravity conditions. The latest gravitational-wave (GW) detections by ground-based interferometers [1-5] have provided insight into the strong-field regime. The early stage of the gravitational ringdown of black hole (BH) mergers, described by quasinormal modes (QNMs) [6-9], further contributes to the understanding of their relaxation properties, as well as of the governing theory of gravity. Nevertheless, a conclusive identification of the underlying gravitational theory has not yet been achieved. Therefore, it is expected that future ground-based and space-borne detectors will improve our perception of gravitational interactions, and in particular will shed light on the existence of exotic compact objects (ECOs) [10-19], which may possess unexpected multipolar and near-horizon structures that differ significantly from those of BHs.
ECOs are spacetime solutions of general relativity (GR) and modified gravity theories that describe compact objects with exotic properties and intriguing multipolar structure [20], such as BHs which evade the 'no-hair' theorem and give rise to additional spacetime parameters (besides the mass, spin and charge), wormholes which evade singularities and connect Universes [11], as well as horizonless compact objects which possess unexpected near-horizon structures that expel the event horizon and subsequent singularities through reflective centrifugal barriers [21]. The majority of ECOs which possess a photon sphere can naturally mimic BHs when perturbed and produce prompt ringdown waveforms in the time domain which are identical to those of BHs [22,23]. This occurs due to the indifference of photon-sphere excitations to external perturbations. The dominant effects of the ECO ringdown only appear at late times, in the form of successively damped repetitions of subsequent photon-sphere excitations, known as echoes, which occur due to the entrapment of perturbations inside potential wells and the formation of quasibound states [24-27]. These modes represent the actual QNM content of the ECO, which in the frequency domain is dramatically different from the QNMs of BHs [28]. In what follows, we will loosely refer to the spectral content of the prompt ringdown as the QNM spectrum whenever the echo timescales are sufficiently large, even though these modes do not necessarily correspond to the actual QNMs of the full eigenvalue problem.
Even though GR has withstood many experimental tests, remaining consistent with plenty of observations so far, modified theories of gravity attempt to describe phenomena where GR seems to fail, such as the construction of viable cosmological models for inflation and dark energy [29]. The most general scalar-tensor theory of gravity in four dimensions whose Lagrangian is constructed from the metric tensor and a non-minimally coupled scalar field is Horndeski's theory [30]. This gravity theory contains subclasses that preserve a classical Galilean symmetry [31,32], lead to second-order field equations and are free of ghost instabilities [33-36]. The most extensively studied subclass of Horndeski theory is represented by a Lagrangian with a scalar field non-minimally coupled to the Einstein tensor.
On large scales, the non-minimal coupling term of the aforementioned subclass possesses very intriguing effects on inflationary dynamics. From an inflationary model-building point of view, it allows for a very effective implementation of a slow-roll phase, due to the fact that it acts as a friction mechanism [37,38], allowing potentials such as the Standard-Model Higgs [39], thus making it a very attractive term in Horndeski theory. A generalization of the nonminimal kinetic term and its application to inflation was recently analyzed in [40][41][42]. During the inflationary phase of the Universe in GR, scalar and tensor perturbations result in spectra which are red-shifted [43]. One then expects that the presence of a non-minimal kinetic term will magnify the red-shift behavior of the perturbation spectra because of the friction effect, which is subsequently related to the decrease of the Hubble parameter H during inflation (for a review of the effects of the non-minimal kinetic term in inflation see [44]). On the contrary, if the scalar fields are phantom fields with negative kinetic energy non-minimally coupled to the Einstein tensor, then the spectra of scalar and tensor perturbations produced during an inflationary phase are typically blue-shifted [45]. Their dynamics in cosmological setups, as well as the instabilities at which tachyons or ghosts appear in the infrared region around the present Hubble scale were discussed in [46]. Beyond inflation, the non-minimal kinetic coupling has also been utilized to construct cosmological models [47].
Apart from cosmological applications, the particular subclass of Horndeski theory allows the construction of BH solutions with scalar hair [48-52]. Consequently, an important aspect of these compact objects is their stability. Regarding the formation of stable hairy BHs, the 'no-hair' theorem should be evaded [53,54], which translates to the existence of a balance mechanism to outweigh the gravitational force outside the BH event horizon. A typical example is offered by holography. A charged scalar field theory embedded into an anti-de Sitter (AdS) Lagrangian leads to the formation of horizon hair, as a result of the counterbalance between the attractive gravitational and the repulsive electromagnetic force [55,56]. Then, according to the gauge/gravity duality, such a mechanism allows a holographic phase transition which results in a conformal field theory describing a holographic superconductor on the AdS boundary [57-59], besides other interesting phenomena [60-64].
In the subclass of Horndeski theory in which a scalar field is coupled kinetically to the Einstein tensor, there is a direct coupling of matter to curvature, and there exist local solutions in which this coupling appears as a primary charge in the metric functions of the resulting hairy BH; it may play the role of an effective negative cosmological constant, even though the action is devoid of any cosmological constant term. An interesting aspect of such objects is their thermodynamical properties, as well as their viability as novel compact objects. One of the most important requirements for the viability of these objects is their stability against perturbations. To that end, the stability of the hairy BH solution found in [49] against linear scalar perturbations was recently assessed [65]. At the linearized level, the non-minimal coupling constant sources an effective asymptotic boundary where the effective potential of the wave equation that governs the propagation of scalar perturbations diverges. Such a boundary serves as a perfect reflector for incident scalar waves and generates a trapping region outside the photon sphere without the need of invoking a negative cosmological constant in the action of the theory. As a result, the ringdown signal of the BH exhibits successively damped echoes. Thus, scalarized compact objects in the particular Horndeski class can serve as alternatives to the standard echo sources, which possess trapping regions beyond the photon sphere due to near-horizon structures [15,21-23,66-71].
Gravitational perturbations in modified theories of gravity provide information regarding the velocity with which GWs travel. The recent observations of GW170817 and its electromagnetic counterpart GRB 170817A imply that GWs travel at the speed of light, with deviations smaller than a few parts in $10^{15}$. The consequences of this experimental result for models of dark energy and modified gravity theories were discussed in [72,73]. In particular, these constraints on the speed of GWs were used to test some classes of Horndeski theory. A detailed discussion of the effects of the kinetic coupling on the speed of GWs in the subclass of Horndeski theory in which the scalar field is coupled to the Einstein tensor is provided in [74]. It was found that while the kinetic energy of a minimally coupled scalar field does not change under the cosmological evolution, the kinetic energy of the scalar field coupled to the Einstein tensor changes as the Universe expands. At the inflationary epoch it acts as a friction term and drives inflation with steep potentials, while as the Universe expands its contribution to the cosmological evolution becomes less important; at the late cosmological epoch it is negligible, and thus GWs propagate at the speed of light at late cosmological times.
In any case, Horndeski theories do predict a modified speed of GW propagation. Even so, recent studies [75-77] demonstrate that with analogue versions of Horndeski gravity, which are based on teleparallel gravity constructed with a non-vanishing torsion tensor, one can devise a more general Horndeski theory where GWs propagate with the speed of light without eliminating the coupling functions $G_4(\Phi, X)$ and $G_5(\Phi, X)$ that were highly constrained in standard Horndeski theory. Hence, in the teleparallel approach one is able to restore these terms, creating an interesting way to revive this theory of gravity. Even though our analysis still lies in a curvature-based formulation of gravity, it is still very interesting that there are ways of evading the tight constraints of Horndeski theory.
The purpose of this work is twofold. First, we investigate the effect of the kinetic coupling of the scalar field to the Einstein tensor on the stability of local solutions of the particular subclass of Horndeski theories. We will work with the hairy BH solution [49] for which scalar perturbations have been analyzed recently [65,78] for a wide range of the kinetic coupling. Under these analyses the hairy BH was found to be linearly stable with echoes being present at late times on the ringdown waveform. Here, we perform a first step towards gravitational modal stability, thus extending our previous test scalar field analysis to axial gravitational perturbations. Since the kinetic coupling appears as a primary charge in the metric functions, we expect to get a better understanding on the stability of such objects, although a complete picture of gravitational modal stability can only be discerned when one considers not only the axial but also the polar sector of fluctuations which generally couple to the scalar hair present in scalar-tensor theories. Our second goal is to investigate how the kinetic coupling affects the ringdown waveform and attempt to assess the appearance of GW echoes in the parametric space of such geometries.
The work is organized as follows. In Section II we review the BH solution of the Horndeski theory with a scalar field kinetically coupled to the Einstein tensor. In Section III we discuss the general framework of axial gravitational perturbations and we derive the effective potential of the considered BH solution. In Section IV we demonstrate the numerical scheme of time-domain integration. In Section V we study the evolution of the axial gravitational perturbations and finally in Section VI we conclude this work.
II. Black hole solution with a scalar field kinetically coupled to the Einstein tensor
In what follows, we will consider static solutions of a scalar-tensor theory in which the scalar field is kinetically coupled to the Einstein tensor. This is part of the most general scalar-tensor theory which yields second-order field equations, namely the Horndeski theory. The full Lagrangian is given by
$$\mathcal{L} = \mathcal{L}_2 + \mathcal{L}_3 + \mathcal{L}_4 + \mathcal{L}_5\,,$$
with
$$\mathcal{L}_2 = K(\Phi, X)\,, \qquad \mathcal{L}_3 = -G_3(\Phi, X)\,\Box\Phi\,,$$
$$\mathcal{L}_4 = G_4(\Phi, X)\,R + G_{4,X}\left[(\Box\Phi)^2 - (\nabla_\mu\nabla_\nu\Phi)^2\right]\,,$$
$$\mathcal{L}_5 = G_5(\Phi, X)\,G_{\mu\nu}\nabla^\mu\nabla^\nu\Phi - \frac{G_{5,X}}{6}\left[(\Box\Phi)^3 - 3\,\Box\Phi\,(\nabla_\mu\nabla_\nu\Phi)^2 + 2\,(\nabla_\mu\nabla_\nu\Phi)^3\right]\,,$$
where $X = -\nabla_\mu\Phi\nabla^\mu\Phi/2$. We consider a particular subset of Horndeski theory with non-trivial $\mathcal{L}_2 = K(\Phi, X) = 2\varepsilon X$ and $G_4(\Phi, X) = (8\pi)^{-1} - \eta X$ terms. Up to a total derivative, the theory is described by the following action
$$S = \int d^4x \sqrt{-g}\left[\frac{R}{8\pi} - \left(\varepsilon g^{\mu\nu} - \eta G^{\mu\nu}\right)\partial_\mu\Phi\,\partial_\nu\Phi\right]\,, \quad (2.2)$$
where $g_{\mu\nu}$ is the metric tensor, $g = \det(g_{\mu\nu})$, $R$ is the scalar curvature, $G_{\mu\nu}$ is the Einstein tensor, $\Phi$ is a real massless scalar field and $\eta$ is the non-minimal kinetic coupling parameter with dimensions of length squared. The parameter $\varepsilon$ equals $\pm 1$: the case $\varepsilon = +1$ corresponds to a canonical scalar field with positive kinetic term, while the case $\varepsilon = -1$ corresponds to a phantom scalar field with negative kinetic energy. Even though in the original Horndeski theory the kinetic energy of the scalar field is positive, in this work we also consider the case where the scalar field's kinetic energy is negative. Varying the action (2.2) with respect to the metric tensor $g_{\mu\nu}$ and the scalar field $\Phi$ provides the field equations
$$G_{\mu\nu} = 8\pi\left[\varepsilon\,T_{\mu\nu} + \eta\,\Theta_{\mu\nu}\right]\,, \quad (2.3a)$$
$$\left(\varepsilon\,g^{\mu\nu} - \eta\,G^{\mu\nu}\right)\nabla_\mu\nabla_\nu\Phi = 0\,, \quad (2.3b)$$
where $T_{\mu\nu} = \nabla_\mu\Phi\nabla_\nu\Phi - \frac{1}{2}g_{\mu\nu}(\nabla\Phi)^2$ is the minimal kinetic contribution (2.4) and $\Theta_{\mu\nu}$ (2.5) collects the terms generated by the non-minimal coupling to the Einstein tensor.
A static and spherically-symmetric BH solution to the aforementioned theory has been found in [49], where the scalar field of the theory depends only on the radial coordinate. The solution requires the constraint $\varepsilon\eta < 0$, which leads to the definition of the coupling parameter $\ell_\eta$ with dimensions of length (2.6). In terms of the line element, the BH solution corresponds to $g_{\theta\theta}(r) = r^2$ with $r \in (0, +\infty)$; the remaining metric components are given in (2.8), while the scalar hair of the theory is given in (2.9), from which it follows that the hair diverges at the event-horizon radius $r = r_h$. It is important to note that the asymptotic behavior of the lapse function (2.8a) when $r \to +\infty$ becomes $F(r) \sim r^2/\ell_\eta^2$, where the term $\ell_\eta^2$ assumes the form of an effective cosmological scale, similar to an actual cosmological radius, with dimensionality of length squared in geometrized units. Even so, we have to stress that the action does not contain any negative cosmological constant term, and the emergence of this effective scale is solely due to the non-minimal coupling of the scalar field to the Einstein tensor.
Another important note is that the equation of motion of the scalar field, (2.3b), can be expressed as the conservation of the Noether current that corresponds to the shift symmetry of the Galileon, i.e. $\Phi \to \Phi + \delta\Phi$, where $\delta\Phi$ is constant. It can straightforwardly be found that the current is
$$J^\mu = \left(\varepsilon\,g^{\mu\nu} - \eta\,G^{\mu\nu}\right)\nabla_\nu\Phi\,.$$
The BH solution satisfies the physical requirement that the norm of this current does not diverge at the horizon, by virtue of (2.6). The scalar hair, however, diverges at the horizon, as one can readily see from (2.9). One can also deduce from (2.9) that the metric components can be expressed in terms of the scalar hair. As such, the scalar hair of the theory can be understood as an intrinsic part of the geometry. Finally, we note that due to the Bianchi identity, $\nabla_\mu G^{\mu\nu} = 0$, equation (2.3a) leads to the differential consequence $\nabla_\mu\left(\varepsilon\,T^{\mu\nu} + \eta\,\Theta^{\mu\nu}\right) = 0$. The substitution of expressions (2.4) and (2.5) into the Bianchi identity yields Eq. (2.3b). In other words, the equation of motion of the scalar field (2.3b) is a differential consequence of the system (2.3a), due to general covariance and the absence of further degrees of freedom. Let us also note that the solution reproduces the Schwarzschild BH in the limit $\ell_\eta \to +\infty$; therefore the metric can be understood as a hairy BH generalization of the Schwarzschild spacetime with effective AdS asymptotics, in which the spin-0 degree of freedom also acquires dynamics from the kinetic mixing with the graviton, i.e. the $G^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi$ term.
As one can observe, the metric functions (2.8) of the BH solution [49] depend only on the parameter $\ell_\eta$, besides the mass parameter $m$. Therefore, they do not contain the information of whether the produced compact object is made of normal or phantom matter, as long as $\varepsilon\eta < 0$. Even though in [49] the scalar field is assumed to be canonical ($\varepsilon > 0$) with the non-minimal coupling being negative ($\eta < 0$), we have checked that the same solution is obtained when the scalar field is phantom ($\varepsilon < 0$) and the non-minimal coupling is positive ($\eta > 0$).
Finally, let us mention that in [38] it was argued that the subclass of the Horndeski theory under consideration yields wormhole solutions as well. We wish to note here that the wormhole solution derived there is just a coordinate artifact of the BH and does not correspond to a traversable wormhole. Let us consider the BH metric we previously described and perform a coordinate transformation together with a coordinate redefinition (2.14). Note that this coordinate transformation covers the BH region $r > a$ twice. Fixing $a > r_h$, the corresponding geometry covers only the region $r > r_h$ twice. Performing the coordinate transformation (2.14), one finds the metric (2.16), which is the solution derived in [79], modulo the form of the $g_{tt}$ component, which was left there as an indefinite integral; in particular, the integral factor in the $g_{tt}$ component found in [79] can be solved exactly. Obviously, a compact object cannot change nature due to a coordinate transformation; thus, the metric (2.16) is just the BH solution (2.8) written in a "bad" coordinate system and was falsely identified as a wormhole. This result is in accordance with the findings in [80] regarding the absence of static and spherically-symmetric wormhole solutions in the particular subclass of Horndeski theory.
III. Axial gravitational perturbations: general analysis
In this section we will carry out an analysis of axial perturbations of the BH solution using Chandrasekhar's method [81]. The most general metric for an axisymmetric non-stationary spacetime is given by
$$ds^2 = -e^{2f_0}dt^2 + e^{2f_1}dr^2 + e^{2f_2}d\theta^2 + e^{2f_3}\left(d\varphi - q_0\,dt - q_1\,dr - q_2\,d\theta\right)^2\,. \quad (3.1)$$
This result is found by use of the Cotton-Darboux theorem, which states that any three-dimensional metric, $g = g_{ij}\,dx^i dx^j$, can always be brought to a diagonal form by a local coordinate transformation. It is clear that the background metric of our solutions can be described by $q_i = 0$. In this gauge, axial perturbations are described by non-vanishing values of $q_i$, while polar perturbations are described by $f_i \to f_i + \delta f_i$ and $q_i = 0$.
For the purposes of writing down the explicit form of the equations (2.3a) for the most general form of the metric (3.1), we obtain the components of the curvature tensors via Cartan's structure equations. We choose tetrads adapted to the metric (3.1), with $e^{(0)}{}_\mu = (e^{f_0}, 0, 0, 0)$ and the remaining legs aligned with the $r$, $\theta$ and $\varphi$ directions. The reasoning behind these tetrads is that they associate the perturbations to a single tetrad leg and thus allow for a decoupling of the equations of motion at first order. The spin connections can be derived from the tetrad postulate, where the zero-torsion condition singles out the Levi-Civita connection. Using Cartan's second structure equation, we can derive the Riemann tensor (3.5), and from it all the tensors necessary for the equations of motion of the underlying gravity theory. From (2.3a), we know that $G_{ab} = \tilde{T}_{ab}$, where $\tilde{T}_{ab} = 8\pi\left[\varepsilon\,T_{ab} + \eta\,\Theta_{ab}\right]$ (note that the indices $a, b$ are Lorentz and not spacetime indices). However, the Einstein and stress-energy tensors acquire contributions from the perturbations. From the linearization of the equations of motion we find that only the $G_{03}$, $G_{13}$, $G_{23}$ components are important at first order. In fact, the equation $\delta G_{03} = \delta \tilde{T}_{03}$ is degenerate, i.e. it is automatically satisfied by the other two equations. In particular, we find the results (3.7a)-(3.7d). Using a suitable field redefinition, the differential equations obtained from $\delta G_{ab} = \delta \tilde{T}_{ab}$, using (3.7a)-(3.7d), read (3.9a) and (3.9b). Differentiating (3.9a) and (3.9b) by $\theta$ and $r$ respectively, and adding them together, yields the differential equation (3.10) that governs axial perturbations. Using a separation of variables, the angular part of equation (3.10) can be recognized as the known ultraspherical differential equation, whose solutions are the Gegenbauer polynomials, i.e. the angular component of the perturbation is the same as in the Schwarzschild case. This is to be expected, since both spacetimes are spherically symmetric. Thus, setting $F = R(r, t)S(\theta)$, where in Appendix A we explicitly show that $S(\theta) = C^{-3/2}_{\ell+2}(\theta)$, (3.10) can be rewritten as (3.11), where $\ell$ is the angular quantum number of the perturbation. In order to continue, it proves useful to introduce a further definition which simplifies equation (3.11). By then defining a scalar field $u$ and using the tortoise coordinate transformation $h = dr/dr_*$, we find that the equation governing the axial perturbations reads (3.15). If we consider a time dependence of the form $u(r, t) \sim u(r)\exp(i\omega t)$, then (3.15) yields a non-trivial wave equation of the form
$$\frac{d^2 u}{dr_*^2} + \left[\frac{\omega^2}{A B} - V(r)\right]u = 0\,, \quad (3.16)$$
with the corresponding Regge-Wheeler-like potential $V(r)$ given in (3.17). It is important to note that, by fixing the metric components to those of the BH solution, Eq. (3.17) asymptotes to the standard Schwarzschild Regge-Wheeler effective potential in the limit where $\ell_\eta \to +\infty$. Another crucial aspect of Eq. (3.16) is the presence of the multiplicative factor $1/AB$ on the gravitational perturbation frequency $\omega$, which signifies the existence of a modified speed of GW propagation [74].
Equation (3.16) demonstrates that one can reduce the problem of axial gravitational perturbations around the compact object under consideration to a single one-dimensional scattering problem with an effective potential. By applying the procedure outlined above to the BH solution (2.8)-(2.9), we find the corresponding effective potential that a gravitational perturbation experiences. An illustration of these potentials for various choices of the angular index of the perturbation is given in Fig. 1. The effective potential possesses a peak for sufficiently large $\ell_\eta$, i.e. close to the Schwarzschild limit, which is located arbitrarily close to the photon sphere at $r = 3m$. For a myriad of BH solutions, Ref. [82] demonstrates that the angular frequency and instability timescale of null geodesics trapped in unstable circular orbits at the photon sphere are associated with the oscillation frequency and decay rate of eikonal QNMs. In turn, the existence of such a centrifugal potential barrier is responsible for the prompt ringdown and photon sphere QNMs found in the response of a plethora of perturbed BHs [83-89]. In what follows, we will show that the aforementioned analogy only holds at the GR limit, and away from it such duality is broken.
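To make the GR limit concrete, the sketch below evaluates the standard axial (spin-2) Regge-Wheeler potential of a Schwarzschild BH, to which Eq. (3.17) asymptotes as $\ell_\eta \to +\infty$, and locates its peak near the photon sphere $r = 3m$. This is only the limiting potential, not the full expression (3.17) of the hairy solution.

```python
import numpy as np

def regge_wheeler(r, m, ell):
    """Axial (spin-2) Regge-Wheeler potential of a Schwarzschild BH of mass m."""
    f = 1.0 - 2.0 * m / r
    return f * (ell * (ell + 1) / r**2 - 6.0 * m / r**3)

m, ell = 0.1, 2
r = np.linspace(2.01 * m, 20.0 * m, 4000)
V = regge_wheeler(r, m, ell)
# The peak lies near the photon sphere r = 3m (they coincide in the eikonal limit)
print(f"peak at r = {r[np.argmax(V)] / m:.2f} m; photon sphere at r = 3 m")
```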
Asymptotically, the potential approaches a constant positive value, a behavior very different from the case of scalar perturbations [65], but one which still encodes the effectively non-asymptotically-flat nature of spacetime. A similar behavior was also observed in [90]. More specifically, at spatial infinity the BH potential at zeroth order yields $V(r \to \infty) \sim C + \mathcal{O}(1/r)$, where the constant $C$ depends on the parameters $\ell$ and $\ell_\eta$ and can be calculated only numerically. A similar asymptotic behavior was also observed in [91] for the case of vector perturbations on the scalarized BH considered here.
We must note here the dimensionality of the various quantities appearing in the following figures, in order to avoid repetition and cluttering of our discussion. In the geometrized units utilized here, the BH mass $m$ and the coupling $\ell_\eta$ have dimensions of length, while the perturbation $u$ is dimensionless, as is the ratio $m/\ell_\eta$, which makes it an appropriate scale for our analysis. In turn, the effective potential $V(r)$ has dimensions of inverse length squared, as expected, while the frequency $\omega$ has dimensions of inverse length.
IV. Time-domain integration scheme
Here, we briefly demonstrate the numerical scheme of time-domain integration, first proposed in [92], which yields the temporal response of a metric perturbation as it propagates on a fixed background spacetime. By defining $u(r_*, t) = u(i\Delta r_*, j\Delta t) = u_{i,j}$, $V(r(r_*)) = V(r_*) = V(i\Delta r_*) = V_i$, $A(r_*) = A(i\Delta r_*) = A_i$ and $B(r_*) = B(i\Delta r_*) = B_i$, equation (3.16) takes a discretized form that can be stepped forward in time. Then, by using as initial condition a Gaussian wave-packet of the form $\psi(r_*, t) = \exp\left[-(r_* - c)^2/(2\sigma^2)\right]$ and $\psi(r_*, t < 0) = 0$, where $c$ and $\sigma$ correspond to the median and width of the wave-packet, we can derive the time evolution of $u$, where the Courant-Friedrichs-Lewy (CFL) condition requires $\Delta t/\Delta r_* < 1/(A_i B_i)$. To calculate the precise values of the potential $V_i$, we integrate numerically the equation for the tortoise coordinate and then solve with respect to the corresponding radial coordinate. Moreover, we require the vanishing of perturbations at radial infinity by imposing the reflective boundary condition $u_{i_{max},j} = 0$, since our solution is effectively asymptotically AdS [8,93]. For further details regarding the numerical scheme and its convergence see Appendix B.
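A minimal sketch of such an explicit finite-difference evolution is given below. The continuum equation is assumed here to take the form $\partial_t^2 u = AB\,(\partial_{r_*}^2 u - V u)$, inferred from Eq. (3.16) and the quoted CFL condition rather than copied from the scheme of [92]; the grid functions `AB` and `V` are user-supplied (in the Schwarzschild limit, `AB = 1` with the Regge-Wheeler potential above), and for brevity a crude Dirichlet condition is applied at both grid edges, whereas the text imposes $u = 0$ only at the outer, AdS-like boundary.

```python
import numpy as np

def evolve(V, AB, dr, dt, n_steps, c, sigma):
    """Explicit FD evolution of the assumed form d2u/dt2 = AB (d2u/dr*2 - V u).

    V, AB : arrays sampled on a uniform tortoise grid r*_i = i * dr.
    Initial data: Gaussian pulse at t = 0, u(r*, t < 0) = 0.
    """
    n = V.size
    rstar = dr * np.arange(n)
    assert dt / dr < 1.0 / np.max(AB), "CFL condition violated"
    u_prev = np.zeros(n)                                   # u at t < 0
    u_now = np.exp(-(rstar - c) ** 2 / (2.0 * sigma**2))   # Gaussian wave-packet
    lam = (dt / dr) ** 2
    history = [u_now.copy()]
    for _ in range(n_steps):
        lap = np.zeros(n)
        lap[1:-1] = u_now[2:] - 2.0 * u_now[1:-1] + u_now[:-2]
        u_next = 2.0 * u_now - u_prev + AB * (lam * lap - dt**2 * V * u_now)
        u_next[0] = u_next[-1] = 0.0   # crude Dirichlet at both edges (see lead-in)
        u_prev, u_now = u_now, u_next
        history.append(u_now.copy())
    return np.array(history)

# Schwarzschild-limit demo: AB = 1 with the Regge-Wheeler potential above,
# naively sampled in r as a stand-in for the tortoise grid (illustration only)
m, ell = 0.1, 2
r = np.linspace(2.05 * m, 40.0 * m, 2000)
V = (1 - 2 * m / r) * (ell * (ell + 1) / r**2 - 6 * m / r**3)
wave = evolve(V, np.ones_like(V), dr=r[1] - r[0], dt=0.5 * (r[1] - r[0]),
              n_steps=4000, c=10 * m, sigma=m)
```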
V. Evolution of axial gravitational perturbations
Regardless of the fact that the spacetime utilized here does not describe a BH immersed in a Universe with a negative cosmological constant, it is meaningful to compare it with Schwarzschild-AdS BHs, since $\ell_\eta$ introduces an effective cosmological scale to the geometry considered. In what follows, we will adopt the categorization from [94] regarding AdS BHs, determined by two dimensionful parameters, namely the AdS radius $r = R_{AdS}$ and the event horizon radius $r = r_h$. The BH solution in the present study depends also on two parameters: $m$ and $\ell_\eta$. The value of the mass controls the position of the event horizon $r_h$ (and consequently of the photon sphere), while $\ell_\eta$ dictates the value of the effective cosmological radius $r = R_{eff}$. However, one key difference between the two solutions is that the first is a 'bald' BH embedded in an AdS Universe, whereas the scalarized solution is 'dressed' with scalar hair whose existence creates the effective AdS-like asymptotics. In this sense, the parameter $\ell_\eta$ controls the strength of the scalar hair and, as a consequence, the value of $R_{eff}$. In our case, the effective cosmological radius is given by $R_{eff} = \sqrt{3}\,\ell_\eta$. One may categorize BHs in an AdS Universe [94] as (i) small size BHs with $r_h \ll R_{AdS}$, (ii) intermediate size BHs with $r_h \sim R_{AdS}$ and (iii) large BHs with $r_h \gg R_{AdS}$. We utilize a similar classification to distinguish between small ($r_h \ll R_{eff}$), intermediate ($r_h \sim R_{eff}$) and large hairy BHs ($r_h \gg R_{eff}$) (see Fig. 2).
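A trivial helper encoding this categorization is sketched below; the factor-of-five tolerance standing in for the '$\ll$' and '$\gg$' relations is a hypothetical choice, since the text does not fix a sharp threshold.

```python
def classify_bh(r_h, ell_eta, tol=5.0):
    """Small/intermediate/large hairy BH via r_h against R_eff = sqrt(3) * ell_eta.

    `tol` is a hypothetical factor standing in for '<<' and '>>'.
    """
    R_eff = 3.0 ** 0.5 * ell_eta
    if tol * r_h < R_eff:
        return "small"
    if r_h > tol * R_eff:
        return "large"
    return "intermediate"

print(classify_bh(r_h=0.2, ell_eta=10.0))   # 'small': r_h << R_eff
```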
In what follows, we apply the numerical procedure outlined above to calculate the temporal response of axial gravitational perturbations on the BHs of the above categories. In the following figures we obtain the perturbation response at a position arbitrarily close to the event horizon, $r - r_h = 10^{-5}$, though we have checked that the same results are obtained if we calculate the response at any position outside the event horizon. Furthermore, we have performed some typical tests to ensure the validity of the integration method. Specifically, we have calculated the response of gravitational perturbations on the BH considered here in the limit where $\ell_\eta \to \infty$, where the effect of the scalar hair is suppressed (we have chosen $\ell_\eta = 1000$, though even for $\ell_\eta = 10$ the potential $V(r)$ converges to the Regge-Wheeler one). By using the Prony method [95] we can extract the spectral content of the temporal response at the large-coupling limit and investigate whether the modes extracted solely from the prompt ringdown agree with the standard axial gravitational QNMs of Schwarzschild BHs [81]. In Fig. 3 we show the prompt ringing of small BHs for $\ell = 2$ gravitational perturbations. We only consider the case where the BH mass is $m = 0.1$, in order to obtain a clear ringing phase, since for the range of couplings we considered, from $\ell_\eta = 4$ to $100$, the echo timescales are very large and the extraction of QNMs from the prompt ringdown is possible. Figure 3 indicates that decreasing $\ell_\eta$ has a marginal effect on the spectrum, while for $\ell_\eta = 100$ the extracted mode asymptotes to the fundamental Schwarzschild QNM with accuracy $\sim 0.1\%$.
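A least-squares variant of the Prony fit used for this test can be sketched as follows. The implementation and the $u \propto e^{-i\omega t}$ sign convention are our own (flip the sign of the output to match the paper's $e^{+i\omega t}$ convention); the synthetic check uses the known fundamental $\ell = 2$ Schwarzschild value $M\omega \approx 0.374 - 0.089i$.

```python
import numpy as np

def prony(signal, dt, p):
    """Least-squares Prony fit: u_j ~ sum_k C_k z_k^j with z_k = exp(-i w_k dt).

    Returns the p complex frequencies w_k = i ln(z_k) / dt.
    """
    n = signal.size
    # Linear prediction: u_j = -sum_{m=1}^{p} a_m u_{j-m}, solved in least squares
    A = np.column_stack([signal[p - m: n - m] for m in range(1, p + 1)])
    a = np.linalg.lstsq(A, -signal[p:], rcond=None)[0]
    roots = np.roots(np.concatenate(([1.0], a)))
    return 1j * np.log(roots) / dt

# Synthetic check: real damped sinusoid at the fundamental l = 2 Schwarzschild
# QNM, M w ~ 0.374 - 0.089i (units of 1/M); p = 2 matches its two modes
dt = 0.1
t = np.arange(0.0, 60.0, dt)
u = np.real(np.exp(-1j * (0.374 - 0.089j) * t))
print(np.round(prony(u, dt, p=2), 3))   # expect the pair +-0.374 - 0.089i
```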
For completeness, we have further calculated the instability timescale of null geodesics (Lyapunov exponents) at the photon sphere [82] and found that at the GR limit the correspondence between null geodesics and eikonal (large $\ell$) QNMs still holds. This is expected, since at this limit the BH approaches Schwarzschild and GWs propagate with the speed of light. Therefore this analysis serves as another validity test of our numerical results and corroborates the existence of a modified propagation speed of GWs. In fact, for the case with $m = 0.1$, $\ell_\eta = 100$ ($m/\ell_\eta = 10^{-3}$), the instability timescale of null geodesics and the extracted decay rate of the fundamental QNM for $\ell = 10$ (approximately eikonal) axial perturbations have a percentage difference of less than 0.5%. On the other hand, when $\ell_\eta$ is not large enough, the propagation speed of GWs is modified, in accordance with Eq. (3.16), and this leads to a significant inconsistency between null geodesics and eikonal fundamental QNMs, as expected. For example, for $m = 5$, $\ell_\eta = 1$ ($m/\ell_\eta = 5$), the instability timescale of null geodesics and the extracted decay rate of the fundamental QNM for $\ell = 10$ (approximately eikonal) axial perturbations have a percentage difference of $\sim 50\%$. Therefore we conclude that the fundamental eikonal QNMs are not always associated with null geodesics of the spacetime, as was also shown in [96]. In Fig. 4 we display the evolution of perturbations for small BHs ($m/\ell_\eta \sim \mathcal{O}(10^{-2})$). The initial quasinormal ringdown is quite similar to that of a Schwarzschild BH. Such behavior is expected and can be attributed to the high value of $\ell_\eta$ relative to the mass. Perturbations with higher angular number decay faster and with higher frequency, since more energy is carried away from the photon sphere. This phenomenon is expected, since similar behavior appears for gravitational perturbations and QNMs in Schwarzschild BHs [8]. The late-time behavior, however, shows that instead of a power-law cutoff the field settles to a constant value, which is related to the asymptotic value that the effective potential acquires (see Fig. 1) and to the expectation of late-time echoes. The eventual late-time tail should be more evident for large BHs, since echoes will be washed out rapidly at the event horizon.
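The geodesic side of this comparison is elementary at the GR limit and can be reproduced as below: for Schwarzschild, the photon-sphere orbital frequency and Lyapunov exponent coincide, $\Omega_c = |\lambda| = 1/(3\sqrt{3}\,m)$, and the eikonal estimate of [82] reads $\omega \approx \ell\,\Omega_c - i(n + 1/2)|\lambda|$. The snippet checks this limiting formula only, not the hairy case.

```python
import numpy as np

m, ell, n = 0.1, 10, 0
omega_c = 1.0 / (3.0 * np.sqrt(3.0) * m)      # photon-sphere orbital frequency (r = 3m)
lam = omega_c                                  # Schwarzschild: |lambda| equals Omega_c
w_eik = ell * omega_c - 1j * (n + 0.5) * lam   # eikonal QNM estimate [82]
print(f"Re w = {w_eik.real:.3f}, Im w = {w_eik.imag:.3f}, "
      f"decay timescale = {1.0 / abs(w_eik.imag):.3f}")
```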
In Fig. 5 the evolution of perturbations for $m < \ell_\eta$ ($m/\ell_\eta \sim \mathcal{O}(10^{-1})$) is displayed. The most obvious effect one observes is the emergence of echoes following the initial quasinormal ringdown. In this parametric region the relation between the mass of the BH and the coupling $\ell_\eta$ becomes more transparent. By keeping $\ell_\eta$ fixed and increasing the mass, perturbations have to travel a shorter distance between the photon sphere and the effective AdS boundary induced by the scalar field, leading to repetitions in the signal which appear on shorter timescales. Analogously, similar behavior is obtained when one keeps the mass fixed and decreases the coupling. This pattern was also observed in [65] for the case of scalar perturbations, though test scalar fields travel with the speed of light, in contrast to the axial gravitational waves in our analysis, which have a variable propagation speed (see Eq. (3.16)). This means that a null geodesic analysis, similar to that in [21-23], where the echo timescales are approximated by the time that light takes to travel from a boundary to the photon sphere and back, is rendered inapplicable. Our case is much more intricate, since one cannot consider null geodesics anymore but rather has to analyze waves traveling in a dispersive medium with varying propagation speed in different regimes. We have performed a simple null geodesic analysis, and the results we obtained are as expected: for large $\ell_\eta$ the propagation speed of GWs approaches that of light and the echo timescales can be properly approximated, while as the coupling decreases the echo timescales predicted by null geodesics become completely inconsistent with the actual timescales of echoes obtained by our numerical integration. Nevertheless, such an investigation reinforces our discussion regarding the existence of a modified speed of GW propagation. To obtain a complete picture regarding the effect of the ratio $m/\ell_\eta$ on the BH's response to fluctuations, we have plotted in Fig. 6 the time evolution of perturbations for a wide range of masses, keeping $\ell_\eta$ fixed. As $m$ grows the echoes are replaced by quasinormal oscillations, while further increase of the mass leads to a single quasinormal ringdown followed by a late-time tail. We conclude that this behavior stems from the shape of the effective potential, which decreases in amplitude as $m$ increases. This leads to an increasingly smaller region where trapped modes, which lead to echoes, can occur, and thus the quasinormal ringing of the BH dominates over the echoes, which are quickly suppressed.
When the mass becomes comparable to (m/η ∼ O(10⁰)) or significantly larger than η (m/η ∼ O(10²)), negative wells develop in the effective potential in the vicinity of the event horizon (see Figs. 7, 8). Despite the formation of negative wells, the time-domain profiles show an exponential decay of the signal without any indication of a linear instability. On the contrary, more massive objects lead to signals with shorter quasinormal ringing stages, due to the absence of a photon sphere peak, and with faster decay rates, even though the corresponding effective potentials develop even deeper negative wells. The exponential nature of the eventual late-time behavior of perturbations is related to the effective AdS asymptotics of our spacetime, which requires the imposition of reflective boundary conditions at infinity, and is in agreement with what occurs for perturbations of AdS BHs [97]. The tail in these cases appears because echoes are subdominant and vanish very rapidly at the event horizon, thus the asymptotic behavior is probed faster. We expect that even perturbations of the small BHs under study will eventually possess an exponential tail, but at much later times that our numerical scheme cannot probe. From the above, we conclude that the scalarized BH spacetime is modally stable under axial gravitational perturbations, with the qualitative features of the response depending solely on the ratio m/η. In Fig. 9 we demonstrate the above statement for the case of intermediate-size BHs. Our numerics show that a similar analogy occurs regardless of the BH's size. Even though the source of echoes in our BH is related to the asymptotics of the spacetime, and not to the nature of the near-horizon structure, our results are in accordance with perturbations of wormholes with decreasing throat radii [71] and of black-bounce models, which transition from regular BHs to wormholes [98].
VI. Conclusions
In this work we studied static and spherically symmetric solutions of a Horndeski subclass that includes a massless scalar field non-minimally coupled to the Einstein tensor. Such a theory admits an exact BH solution 'dressed' with scalar hair whose existence induces an effective negative cosmological constant, even though the BH does not reside in an AdS Universe. We have studied the modal stability of such solutions under axial gravitational perturbations, using time evolution techniques and complementary QNM extraction to solve the linearized gravitational wave equation. Our results indicate that the BH under study is linearly stable against axial perturbations, with decaying temporal responses akin to ringdown waveforms. The qualitative features of the ringdown waveform depend solely on the ratio of the two available parameters of the spacetime, namely the BH mass m and the non-minimal coupling strength η. We have further demonstrated that as m/η increases, the gravitational-wave ringdown transitions between three distinct response patterns, namely a state with a typical quasinormal ringdown (m/η ≲ 10⁻²), an intermediate long-lived state which exhibits gravitational-wave echoes (10⁻² ≲ m/η ≲ 10⁻¹), and a state where the ringdown and echoes are rapidly depleted, giving way to an exponential tail (m/η ≳ 10⁻¹).
Although our findings point towards linear stability, we only considered the axial sector of gravitational fluctuations. In general, one must investigate the polar sector of gravitational perturbations as well in order to establish a complete stability analysis. This extension can be extremely challenging, both as regards reducing the perturbation equations to a one-dimensional Zerilli-like equation and as regards the stability of the spacetime itself, since the polar degrees of freedom generically couple to the scalar hair in scalar-tensor theories. A first step in this direction is the consideration of radial perturbations, which are a good proxy for polar ones [99][100][101][102]. Radial perturbations can also couple the scalar field with the metric components, and can thus serve as a more sensitive probe of the overall linear stability of the hairy BHs under consideration.
Besides temporal evolution techniques, another interesting direction would be a complete frequency-domain analysis of axial and polar gravitational QNMs, which is still lacking for this particular family of BH solutions, in a manner similar to Refs. [103,104] where scalar QNMs have been discussed. Furthermore, since the BH geometry under study possesses a propagation speed for GWs that differs from that of light, it will be paramount to investigate potential observational imprints in order to disentangle possible degeneracies between GW phase modifications and environmental effects, and to avoid misinterpreting GWs in modified gravity as strongly-lensed GR GWs [105,106].
Finally, in a recent analysis [107], a class of mechanical models was studied in which a canonical degree of freedom interacts with another one with a negative kinetic term, i.e. with a ghost. Surprisingly, it was shown that the classical motion of the system is completely stable for all initial conditions, even though one would expect such a system to be unstable due to the presence of a ghost field. In our case, we have dealt with a conceptually analogous system, consisting of a scalarized BH for which the kinetic energy of the scalar hair can be positive or negative (first degree of freedom) provided that the strength of the non-minimal coupling to the Einstein tensor has the opposite sign (second degree of freedom), being attractive or repulsive, respectively. Regardless of the case, we find that the BH is stable under axial perturbations, thus providing an illustration that the classical mechanics analysis in [107] can potentially apply to BH physics.
A. Solution of the angular differential equation

In this appendix, we present the solution of the angular part of the differential equation (3.10), where A is the separation constant. By performing the change of variables x = cos θ we obtain a differential equation that is very similar to the Legendre differential equation, albeit with one sign change. This differential equation is called the ultraspherical or Gegenbauer differential equation. There exist three alternate forms of the equation that yield the same result; we present the two forms of interest here.
First form: the first form has solutions in terms of P^m(x) and Q^m(x), the Legendre functions of the first and second kind, respectively. Note that m = −2.
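For orientation, a generic Gegenbauer-type equation, i.e. a Legendre-type equation with one sign changed, can be written as follows; the parametrization with α and n below is a standard textbook form that we supply for reference, not the paper's own equation:

$$(1-x^2)\,y''(x) - (2\alpha+1)\,x\,y'(x) + n(n+2\alpha)\,y(x) = 0,$$

whose solutions reduce to the associated Legendre functions $P^{m}(x)$ and $Q^{m}(x)$ for suitable parameter choices.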
B. Convergence tests
Here, we discuss in depth our numerical scheme, which is briefly analyzed in Section IV. The essential equations in play are Eqs. (4.1) and (4.2), together with the CFL condition and the vanishing of perturbations at radial infinity. In terms of the tortoise coordinate r_*, we observe that when r tends to infinity, r_* tends to a finite constant which we denote by r_*^max. The implications of this behavior of r_* are twofold: firstly, the reflective boundary condition in terms of r_* takes the form u(r_*^max, t) = u_{imax,j} = 0 and, secondly, our region of interest in the (r_*, t) diagram lies to the left of the vertical line r_* = r_*^max, as seen in Fig. 10. It is important to note that the value of the finite constant r_*^max is proportional to the coupling η, i.e. r_*^max ∼ η (see Table I). This means that the value of η dictates the range of r_*, since r_* ∈ (−∞, r_*^max]. A second important consequence of this proportionality is that, as η increases, we also need to increase the number of grid points N in order to keep the value of Δr_* sufficiently small. To better understand why this occurs we need to delve into the technical details of the procedure executed by our code. The first step is to find the function r(r_*) by numerically solving the differential equation of the tortoise coordinate

dr(r_*)/dr_* = f(r(r_*)) g(r(r_*)),

together with the condition r(r_* = 0) = 1.00001 r_h, which fixes the integration constant. Hence, after the integration we have r(r_* → −∞) → r_h, r(r_* = 0) = 1.00001 r_h and r(r_* → r_*^max) → ∞, meaning that r_* ∈ (−∞, r_*^max]. However, in order to define a numerical grid on which to perform the time evolution of u, we need to work on a finite interval of r_*. We do so by choosing a sufficiently large negative value (which we denote by r_*^min) as the second end of the interval. Thus, in the context of the numerical integration we work on the interval r_*^numerical ∈ [r_*^min, r_*^max], even though in principle r_* ∈ (−∞, r_*^max]. The final step of our code, which calculates the time-domain profiles, expects as inputs the values of r_*^min, r_*^max and the number N of grid points in the r_* direction. It then calculates the spatial step Δr_* from the relation

Δr_* = (r_*^max + |r_*^min|)/N, (B2)

and the time step from Δt = c Δr_*, where c is a positive constant satisfying the CFL condition that should not be confused with the speed of light. The fact that r_*^min is the same throughout all of our evolutions and that r_*^max ∼ η implies, through Eq. (B2), that as η increases we also need to increase N in order to keep the value of Δr_* sufficiently small (see Table II).

Table I. Reference values indicating the proportionality r_*^max ∼ η.
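A minimal sketch of this grid construction is given below. The metric functions are AdS-Schwarzschild-like placeholders (so that r_*^max is finite, with L playing the role of the effective AdS scale set by η), the tortoise definition dr/dr_* = f g is taken as written above, and all numerical values (r_*^min, N, the CFL factor c) are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# AdS-Schwarzschild-like placeholders; substitute the paper's f(r), g(r).
m, L = 0.5, 5.0
f = lambda r: 1.0 - 2.0 * m / r + r**2 / L**2
g = lambda r: 1.0 - 2.0 * m / r + r**2 / L**2

# Event horizon: root of f(r) = 0
r_h = brentq(f, 1e-6, 10.0 * L)
r0 = 1.00001 * r_h                    # fixes the constant: r(r_* = 0) = r0

# dr/dr_* = f g  =>  r_*^max = integral of 1/(f g) from r0 to infinity
integrand = lambda s: 1.0 / (f(s) * g(s))
rstar_max = quad(integrand, r0, np.inf, limit=200)[0]
rstar_min = -60.0                     # finite stand-in for -infinity

N = 4000
dr_star = (rstar_max + abs(rstar_min)) / N    # Eq. (B2)
c = 0.5                                       # CFL factor, not the speed of light
dt = c * dr_star

print(f"r_h = {r_h:.4f}, r_*^max = {rstar_max:.4f}")
print(f"dr_* = {dr_star:.5f}, dt = {dt:.5f}")
```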
Finally, we can produce convergence curves to provide quantitative information regarding the accuracy of our numerical integration scheme. To produce these curves, we first calculate the values of u(r_*, t) at a given point as the grid spacing Δr_* is reduced by increasing N. We denote these values by u(r_*, t)|_N. We then use the value of u(r_*, t) for the maximum number of grid points (i.e. the smallest grid spacing Δr_*) as a reference value indicating the best approximation to the true value of u at that point; we denote that value by u(r_*, t)|_best. To calculate the error we subtract each value u(r_*, t)|_N, for every different N, from the best approximation and take the absolute value, i.e.

|Error|_N = | u(r_*, t)|_best − u(r_*, t)|_N |. (B3)
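A sketch of the error computation of Eq. (B3) is shown below; evolve() is a stand-in with artificial second-order behaviour and should be replaced by the actual time-evolution code:

```python
import numpy as np

def evolve(N, u_true=0.123456789):
    # Stand-in with artificial second-order convergence; replace with the
    # actual finite-difference integrator returning u(r_* = 0, t = t_probe).
    return u_true + 1.0 / N**2

Ns = [400, 800, 1600, 3200, 6400, 12000]
u_vals = {N: evolve(N) for N in Ns}
u_best = u_vals[max(Ns)]                       # best approximation

for N in Ns[:-1]:
    err = abs(u_best - u_vals[N])              # |Error|_N, Eq. (B3)
    print(f"N = {N:6d}   |Error|_N = {err:.3e}")
```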
Figure 11. Left: Convergence curve for m = 0.1, η = 100, corresponding to a case where we obtain a clear prompt ringdown. As u(r_*, t)|_best we choose the value of u for N = 12000 grid points, i.e. u(r_*, t)|_12000, indicating the best approximation. All the points are extracted at r_* = 0 and t/m = 225.228. Right: Convergence curve for m = 0.5, η = 5, corresponding to a case where we obtain echoes after the initial ringdown. As u(r_*, t)|_best we choose the value of u for N = 2800 grid points, i.e. u(r_*, t)|_2800, indicating the best approximation. All the points are extracted at r_* = 0 and t/m = 234.637.
The diagrams in Fig. 11 demonstrate that our code achieves numerical convergence irrespective of whether the compact object responds with a clear ringdown or with a signal containing echoes, i.e. in rather different regions of our parameter space (m, η). Even though for the first case (Fig. 11, left) the code converges rapidly, we expect that the same will occur for the second case (Fig. 11, right) if we further increase the number of grid points. As a final note, we stress that even though the chosen values of the grid points N are very different for the two convergence curves in Fig. 11, the corresponding grid spacing Δr_* is of the same order of magnitude in both cases, as can be seen in Table II | 10,614.8 | 2021-09-06T00:00:00.000 | [
"Physics"
] |
Intrinsic Visible Plasmonic Properties of Colloidal PtIn2 Intermetallic Nanoparticles
Abstract Materials that intrinsically exhibit localized surface plasmon resonance (LSPR) in the visible region have predominantly been researched as nanoparticles (NPs) composed of coinage metals, namely Au, Ag, and Cu. Here, as coinage-metal-free intermetallic NPs, colloidal PtIn2 NPs with a C1 (CaF2-type) crystal structure are synthesized by a liquid-phase method and evidently exhibit LSPR at wavelengths similar to face-centered cubic (fcc) Au NPs. Computational simulations pointed out differences in the electronic structure and photo-excited electron dynamics between C1-PtIn2 and fcc-Au NPs; a reduced interband transition and stronger screening, arising from the smaller number of bound d-electrons compared with fcc-Au, are the unique origins of the visible plasmonic nature of C1-PtIn2 NPs. These results strongly indicate that intermetallic NPs are promising for the development of alternative plasmonic materials through tuning of their crystal structure and composition.
The size and morphology of each NP were characterized by TEM (HITACHI HT7820) operating at 200 kV. NPs were purified and a drop of solution was drop-cast on carbon-coated Cu grids for TEM. The atomic-resolution analytical TEM (JEOL JEM-ARM200CF) observation was operated at an acceleration voltage of 120 kV. A Wiener filter was applied to the original TEM-EDX images for noise reduction [41] using DigitalMicrograph software (Gatan).
The crystal structure was confirmed at multiple locations with and without the filter.
HAADF images and LSPR properties of single NPs were obtained by EELS combined with monochromated STEM (JEOL JEM-ARM200F). STEM-EELS was conducted at an acceleration voltage of 200 kV. We heated the SiN membrane at 400°C in the STEM experiments to prevent contamination from organic molecules. The crystal structure of PtIn2 at 400°C was confirmed by high-temperature XRD (Figure S10).

The crystalline structure of the NPs at room temperature was characterized by XRD (PANalytical AERIS). More-detailed XRD patterns were obtained by synchrotron X-ray radiation experiments performed at SPring-8 BL02B2 (0.5272 Å). In situ high-temperature XRD was measured with a furnace (PANalytical X'Pert PRO MPD).

The elemental ratios of the synthesized NPs were determined by EDX (AMETEK Apollo XF) combined with SEM (HITACHI S-4800).

The optical response of LSPR was determined by UV-vis spectroscopy (HITACHI UH5700) with quartz cuvettes. SERS spectra were obtained with a Laser Raman Microspectroscopy System Nanofinder 30 (Tokyo Instruments Inc.) equipped with a 532-nm diode-pumped solid-state laser (SOC J050GS-1H) at an output power of 5.132 mW through a 50× microscope objective (Olympus MPlanFLN) over the range of 590-1819 cm⁻¹. The exposure time was 1 s without accumulation (only 1×) to avoid heavy damage to rhodamine 6G (R6G). A charge-coupled device camera (1024 px × 255 px) operating at −70°C was used as the detector. To verify that the spectra were reproducible, more than 10 different areas were measured for each sample.
Synthesis of Pt seed NPs
To obtain PtIn2 NPs, Pt seed NPs were synthesized first. Oleic acid (31.5 mmol, 10 mL) and oleylamine (97.2 mmol, 32 mL) were mixed in a 100-mL three-necked flask, followed by stirring at 220°C in an N2 atmosphere. Pt(acac)2 (0.5 mmol, 196.68 mg) was added to oleylamine (24.3 mmol, 8 mL) in a 50-mL three-necked flask, followed by stirring and heating in an N2 atmosphere. After the Pt(acac)2 dissolved in the oleylamine, the solution turned bright yellow at ca. 120°C. The mixture was quickly injected into the preheated oleic acid/oleylamine solution. After injection, the color of the solution changed to blackish brown within 1 min and to black after 5 min. The solution was maintained at 220°C for 30 min and cooled to room temperature. A four-fold scale-up was carried out through this method.

Pt seed NPs were well dispersed in chloroform. To remove excess oleic acid and oleylamine, 15 mL of ethanol was added as a poor solvent to 25 mL of crude solution collected with chloroform, followed by centrifugation for 10 min at 9390×g; the NPs were then stored in chloroform.
Synthesis of PtIn2 NPs
C1-PtIn2 NPs were synthesized by reacting Pt seed NPs with In amide complexes in oleylamine under strongly basic conditions [26]. The expected reaction is as follows: a strong Brønsted base deprotonates a long-chain primary amine, which reacts with InCl3 to form an In oleylamide [26]. The in-situ formed amide is then reduced or thermally decomposed to react with the Pt seed NPs to form the target product. In this method, the size of the PtIn2 NPs is larger than that of the Pt seed NPs, and the formation of indium oxide during the reaction is also suppressed because of the absence of oxygen atoms in the precursors and solvents.

Pt seed NPs were additionally purified for the next step. Pt NPs (0.5 mmol of Pt atoms) were collected in a 100-mL three-necked flask with chloroform, added to distilled oleylamine (30.4 mmol, 10 mL), and sonicated for a few minutes. Then, chloroform was removed by evaporation, the sample was additionally dried under vacuum (45 min at 120°C) and heated to 180°C under N2. The LDA-oleylamine solution was injected, followed after 30 s by injection of the InCl3-oleylamine solution. The reaction solution was maintained at 180°C for 2 h and then cooled to room temperature. Although the oleylamine used as solvent provided dispersibility of the NPs as a ligand, the dispersibility of the synthesized NPs was improved by injecting 2 mL of oleic acid when the temperature reached 60°C during cooling, before opening to the air. Larger-sized PtIn2 NPs were obtained by setting the heating temperature to 240°C to induce the fusion of NPs (Figure 2d).
As is clear from the phase diagram (Figure S3), the region where PtIn2 is stable is narrow.
Under different reaction conditions, Pt2In3 and Pt3In7 phases were readily evident.

The sizes of the NPs were statistically determined from the TEM images. Size distributions of the NPs are shown in Figure S2.
Purification of C1-PtIn2 NPs
There were two steps of purification: (1) removal of oleylamine and (2) size separation. In step (1), 20 mL of ethanol was added to 15 mL of a chloroform-diluted sample solution and centrifuged at 9390×g for 10 min. In step (2), a small quantity (~5 mL) of ethanol was added to the chloroform dispersion, and the mixture was centrifuged. The centrifugal acceleration was adjusted depending on the NP size to be separated. These steps were repeated several times and the target size of PtIn2 NPs was collected. To prevent a decrease in ligand concentration, oleylamine and oleic acid were added during the purification. PtIn2 NPs were well dispersed in chloroform and retained their absorption for at least two years, even when stored as a dispersed solution after purification.
Synthesis and purification of Au NPs
Oleylamine-capped Au NPs were synthesized as reported [42]. Briefly, oleylamine (8.8 mmol, 2.9 mL) was added to toluene (48 mL) in a 200-mL three-necked flask, followed by stirring and boiling at ca. 120°C. Then, a solution of HAuCl4·4H2O (53.7 mg, 0.13 mmol) dissolved in oleylamine (1.2 mL) and toluene (2.0 mL) was quickly injected. The reaction mixture changed to bright magenta within 5 min and gradually deepened over the course of 25 min. Heating was stopped after 2 h and 17-nm-sized Au NPs were precipitated by adding 100 mL of methanol.

The precipitates were centrifuged at 9390×g for 5 min with methanol as a poor solvent.

The Au NPs were purified again by redispersion in toluene. Small-sized Au NPs were removed by centrifugation using only toluene.
In situ high-temperature XRD of C-supported PtIn2 NPs
To prevent interparticle fusion as much as possible, a PtIn2 NP chloroform dispersion was added to a carbon (CABOT Vulcan XC-72) chloroform dispersion and left to stand. After washing with ethanol, the C-supported sample was dried thoroughly and packed in an Al2O3 holder. The sample was covered with a readily combustible polymeric film, Prolene Thin-Film (Chemplex), to prevent the sample from blowing off during evacuation.

In the high-temperature XRD measurements, a vacuum of ca. 7 Pa was first drawn. Then, the XRD pattern was measured at room temperature, following which the temperature was increased at a constant heating rate of 10°C min⁻¹ and the in situ XRD pattern was measured every 100°C from 100°C to 400°C. The measurement was repeated 3 times (ca. 2 h) at 400°C to fully confirm the phase stability at 400°C, the same temperature used in the EELS measurements.

Finally, XRD measurements were carried out again after the temperature decreased to 40°C.

XRD patterns of the C support, the Al2O3 holder, and the Prolene film were obtained as reference measurements under atmospheric conditions at room temperature.
The quality factors (Q-factors) were evaluated from the measured extinction spectra of the NP dispersions as Q = ω/Γ, where ω is the center of the plasmon resonance and Γ its full width at half maximum.
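A minimal sketch of this evaluation is given below, assuming the extinction spectrum is available as energy/extinction arrays; the half-maximum interpolation approach and the synthetic Lorentzian test (tuned to reproduce Q ≈ 2.8) are ours, not the paper's procedure:

```python
import numpy as np

def q_factor(energy_eV, extinction):
    """Q = w/G for a single-peak extinction spectrum."""
    i0 = int(np.argmax(extinction))
    w = energy_eV[i0]                          # resonance centre
    half = extinction[i0] / 2.0
    # half-maximum crossings on either side of the peak (linear interpolation)
    left = np.interp(half, extinction[: i0 + 1], energy_eV[: i0 + 1])
    right = np.interp(half, extinction[i0:][::-1], energy_eV[i0:][::-1])
    return w / (right - left)

# Synthetic Lorentzian: centre 2.32 eV (~534 nm), FWHM 0.83 eV  ->  Q ~ 2.8
E = np.linspace(1.0, 4.0, 2000)
spec = 1.0 / (1.0 + ((E - 2.32) / (0.83 / 2.0)) ** 2)
print(f"Q = {q_factor(E, spec):.2f}")
```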
XRD pattern simulation
The XRD pattern of PtIn2 was simulated using CrystalMaker X and CrystalDiffract 6.9 (CrystalMaker Software Ltd.) (Figure S14). The occupancy of Pt at the Pt sites in the C1 structure was taken as an independent variable, and the occupancy of the In sites was taken as a dependent variable so that the overall molar ratio Pt : In = 1 : 2 was maintained (Table S1).

As shown in Figure S14, the superlattice peaks of the C1 structure are, from the lower-angle side, 111, 200, and 311. For example, the intensity of the 111 peak is smaller for lower Pt occupancy of the Pt sites.
Theory and computational details
The time-dependent Kohn-Sham equation solved in the Scalable Ab-initio Light-Matter simulator for Optics and Nanoscience (SALMON) is as follows (in atomic units):

i ∂ψ_j(r, t)/∂t = [ −(1/2)∇² + v_ion(r, t) + v_H(r, t) + v_xc(r, t) + v_ext(r, t) ] ψ_j(r, t),

where v_ion(r, t), v_H(r, t), v_xc(r, t), and v_ext(r, t) are the nuclear attraction potential, Hartree potential, exchange-correlation potential, and external potential of an applied laser field, respectively. In this study, we adopted a sin-type laser pulse whose field E(t) oscillates at the laser frequency ω under an envelope whose parameter τ determines the laser pulse duration. The electron dynamics under the laser pulse were analyzed by Fourier transforming the induced density change Δρ(r, t), with a normalization factor set by the pulse. We used only the imaginary part because the sin-type wave was used [Eq. (3)]. The imaginary part of Δρ(r, ω) then exhibits the spatial configuration of the resonant response of the photoexcited electrons [4]. The Fourier-transformed electric field was also analyzed to evaluate the photogenerated electric field around the NPs.
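A toy sketch of this Fourier analysis is given below, using a synthetic induced-dipole signal in place of a real-time TDDFT output; the resonance energy, damping, and sampling are placeholders:

```python
import numpy as np

hbar_eV_fs = 0.6582                  # hbar in eV*fs
dt = 0.002                           # time step (fs), placeholder
t = np.arange(0.0, 20.0, dt)         # 20 fs propagation, as in the text

# Synthetic induced dipole: damped oscillation at a 2.3 eV resonance.
w_res = 2.3 / hbar_eV_fs
dip = np.exp(-t / 8.0) * np.sin(w_res * t)

# Imaginary part of the Fourier transform (sin-type pulse -> use Im part).
energies = np.linspace(1.0, 4.0, 400)
response = np.array([np.trapz(dip * np.exp(1j * (E / hbar_eV_fs) * t), t).imag
                     for E in energies])

peak = energies[np.argmax(np.abs(response))]
print(f"resonant response peaks at ~{peak:.2f} eV")
```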
We investigated the photoexcited electron dynamics of large clusters of ca. 600 atoms, Au561 and Pt249In432, with SALMON [43]. The geometrical structures were determined from bulk crystal data. All the electrons other than the Au 5d¹⁰6s¹ electrons were replaced with effective core potentials obtained by the Troullier-Martins scheme implemented in the fhi98PP program [44,45]. For the exchange-correlation potential, the local density approximation functional was used. The computation box and grid spacing were set to 46 Å × 46 Å × 46 Å and Δx = Δy = Δz = 0.25 Å, respectively. The propagation time was 12.8 fs for the oscillator strength and 20.0 fs for the laser-induced electron dynamics. The applied laser pulse intensity was set to 10⁹ W cm⁻².

[Figure S14 caption fragments: 220 peak intensity set as 100%; Pt occupancy at Pt sites vs. f) 111/220, g) 200/220, and h) 311/220 intensity ratios of the XRD simulation results.]
Figure S1.

Figure S1. Rietveld refinement of 44-nm PtIn2 NPs. We measured the synchrotron XRD pattern of 44-nm PtIn2 NPs (red, solid line) with λ = 0.527 Å. We conducted Rietveld analysis by using the standard pattern of Pt and two different patterns of PtIn2 with different sizes obtained from Ref. [32]. The total simulated result (blue dotted line) is the sum of the simulation results for two different-sized C1-PtIn2 (violet and light blue dotted lines) and Pt (brown dotted line). The reference patterns for PtIn2 (violet solid line, PDF #01-071-5016) and Pt (brown solid line, PDF #00-004-0802) calculated at λ = 0.527 Å and the difference between the experimentally observed and simulated patterns (gray solid line) are shown. Rwp was 6.87% and the goodness-of-fit (GOF) was 3.45. The NPs comprised the C1-PtIn2 phase (92.6 wt%) with smaller fcc-Pt-based crystallites (7.4 wt%).
Figure S5. Figure S6.

Figure S5. Dependence of λmax of PtIn2 NPs on the surrounding refractive index. We plotted the λmax values of C1-PtIn2 NPs dissolved in cyclohexane (n = 1.4268, red), chloroform (n = 1.4467, green), and toluene (n = 1.4978, blue) against the refractive indices of the solvents. PtIn2 NPs (17 nm) were used because of their high solubility in these solvents. The λmax is proportional to the refractive index of the solvent, which also supports the LSPR character of C1-PtIn2 NPs. The sensitivity of the LSPR λmax to changes in the surrounding refractive index is Δλmax/RIU = 42.4.
Figure S7.

Figure S7. Line-scan EDS curves for Pt (blue), In (red), and O (yellow). a) The two line-scan positions (i, ii) and b) the line-scan curves. As seen in the line-scan curves in (i), Pt and In exist alternately. On the other hand, the longitudinal line-scan curves in (ii) clearly show the formation of a double-shell structure at the surface.
Figure S8.

Figure S8. STEM-EELS maps of a single PtIn2 NP. EEL spectra of a 44-nm PtIn2 NP exhibit higher-energy losses at a) 4.7-5.1 eV, b) 8.2-8.6 eV, and c) 14.0-14.4 eV. Panels (a) and (b) show large intensity at the surface of the NP. This corresponds to the high-energy LSPR reported here (i.e., higher-order LSPR), which we consider to be dipole and quadrupole modes. In contrast, interband transitions and/or bulk plasmons (c) correspond to large intensity at the center.
Figure S9.

Figure S9. Crystal structure analysis of a single PtIn2 NP by high-resolution TEM. a, b) High-resolution TEM images of a PtIn2 NP measured at 400°C after the EELS measurements. c) FFT image of the red square region of (b). The spots corresponding to the 200, 220, and 400 reflections of the C1 structure are indicated in yellow. d) Noise-filtered inverse FFT result of (c). The inset in (d) is the crystal lattice of C1-PtIn2, where lighter and darker dots correspond to Pt and In atoms, respectively. The atomic arrangement in (d) is similar to that in the inset.
Figure S10.

Figure S10. High-temperature XRD measurements of C1-PtIn2 NPs supported on carbon. The stability of PtIn2 at high temperatures was investigated every 100°C from room temperature. At room temperature, the peak of the Prolene film used for the cover appeared strongly, but this film burned out at 100°C. We observed the peak that corresponds to PtIn2 at high intensity even at 400°C, although it was slightly shifted towards lower angles due to the thermal expansion of the lattice. At 400°C, we identified a small quantity of Pt2In3 in the diffraction pattern as a binary phase with a broader FWHM than that of PtIn2, which did not increase over time. Thus, we propose that nearly unreacted Pt seed NPs contained as minor components reacted with the PtIn2 phase to give the Pt2In3 phase. The aforementioned examination strongly supports our proposal that an isolated PtIn2 NP can retain its C1 structure even during EELS measurements at 400°C.
Figure S11.

Figure S11. TEM images of Au NPs and optical properties of the SERS samples. a) TEM images of the Au NPs used for SERS. Nearly spherical NPs are 17 ± 4 nm in diameter. b) Normalised extinction spectra of the 24-nm C1-PtIn2 NP dispersion in chloroform (violet), the Au NP dispersion (pink), and the R6G solution (black), with λmax at 534, 527, and 530 nm, respectively. The quality factor of this resonance can be described by Q = ω/Γ, where ω is the centre of the plasmon resonance and Γ the FWHM. QPtIn2 = 2.80 was smaller than QAu = 7.98.
Figure S12.

Figure S12. SERS measurements. SERS spectra of R6G (black), R6G with 24-nm PtIn2 NPs (violet), and R6G with 17-nm Au NPs (pink) on a Si wafer with a 532-nm laser as the excitation light source. Although the R6G sample exhibited almost no clear peaks, both NPs drastically enhanced the SERS signals through the electric field enhancement effect of LSPR. As expected from the Q values, the SERS enhancement by the Au NPs was stronger than that by the PtIn2 NPs.
Figure S13.

Figure S13. SEM images of the substrates used for surface-enhanced Raman measurements: a) rhodamine 6G (R6G) with Au NPs, b) R6G with C1-PtIn2 NPs, and c) only R6G on a Si wafer. | 3,677.8 | 2024-01-09T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Effects of pore complexity on saturated P-Wave velocity and its impact in estimating critical porosity
Abstract In sandstone, the pore complexity of certain rock types is related not only to the mineral constituents, but also to other rock properties, including texture and porosity. In sandstones, texture is a primary factor contributing to pore geometry and structure. This condition causes a scattered relationship between P-wave velocity and porosity, although a clear trend exists. The purpose of this research, therefore, was to study the effects of pore complexity on the variation of P-wave velocity in saturated rocks and its impact on estimating critical porosity. In addition, the concepts of rock type and convergent point were applied to establish the correlation of saturated P-wave velocity with pore geometry and pore structure. The data were grouped into several rock types, each having its own empirical trend that tends to intersect the others at a convergent point. In conclusion, a very good relationship between saturated P-wave velocity and porosity was obtained by categorizing the rocks based on pore geometry and pore structure. Each rock group demonstrates similar pore geometry and a specific critical porosity value.
Introduction
In rocks, certain parameters, including porosity and velocity, are not independent, and several studies have been conducted to define their relationship. However, the literature on the effect of pore complexity on the correlation between porosity and P-wave velocity appears inadequate. At low porosity the relationship tends to be linear, but this is not the case at high values (Wyllie, Gregory, & Gardner, 1956), which suggests that the variation of P-wave velocity is greatly influenced by the pore complexity of the rock. At greater porosity the relationship becomes nonlinear (Raymer, Hunt, & Gardner, 1980), while at zero porosity the P-wave velocity achieves its maximum, further illustrating that the velocity of the mineral frame depends on the mineral type. Conversely, as the solid material decreases with increasing pore space, the velocity tends towards a minimum. Other studies show that the presence of clay in rocks also influences P-wave velocity, and that the type of clay distribution has separate effects (Minear, 1982).
Apart from clay volume, pore shape also exerts a dominant effect on P-wave velocity: more rounded pores (pore aspect ratio approaching 1) produce a further increase in P-wave velocity. Furthermore, clay volume has been used to classify the relationship between velocity and porosity (Han, Nur, & Morgan, 1986). This indicates that pore complexity arising from clay content also affects velocity.
In terms of the effects of pore complexity, other factors must be considered, including pore geometry, texture, and rock microstructure, such as grain- or matrix-supported fabric and the type and position of clay particles in the pore spaces (Khaksar & Griffiths, 2000). Rocks with poor grain sorting show a more efficient grain arrangement (Gutierrez, Dvorkin, & Nur, 2001), which produces a stronger rock structure and increased P-wave velocity. For dry rock, the relationship between V_p/V_s and porosity is influenced, among other factors, by the specific surface area (Fabricius, Baechle, Eberli, & Weger, 2007); this relationship is probably attributable to textural variations and porosity types. Simple textures tend to possess lower specific surface area (Sg) and a higher V_p/V_s ratio at equal porosity. Xu and Payne (2009) modeled the elastic properties of carbonate rocks by considering pore type: interparticle pores, rounded stiff pores (moldic, intraframe, vug), and microcracks such as fractures all present different P-wave velocity effects. Meanwhile, the variation of P-wave velocity is strongly influenced by microporosity, pore-network complexity, and pore size (Weger, Eberli, Baechle, Massaferro, & Sun, 2009). Another phenomenon is that sedimentary and igneous rocks exhibit distinct pore characteristics and critical porosities (Nur, Mavko, Dvorkin, & Gal, 1995); the critical porosity is affected by the rock's mineral constituents and by the products of geological processes (Mavko, Mukerji, & Dvorkin, 2009).
In relation to rock quality, a correlation between acoustic wave velocity and permeability exists when the data are grouped based on hydraulic units (Prasad, 2003). The hydraulic qualities are strongly influenced by the complexity of pore geometry and pore structure, and porosity and permeability values are strongly affected by pore arrangement (Wibowo, Permadi, & Bandung, 2013). Pore geometry and pore structure are expressed as a combination of porosity and permeability. The pore complexity correlates effectively with dry P-wave velocity and is easily separated for each rock type (Prakoso, Permadi, & Winardhie, 2016). By applying Nur's concept (Nur et al., 1995), each rock-type group was observed to have a specific critical porosity (Prakoso, Permadi, Winardhi, & Marhaendrajana, 2017).
In dry conditions the pore space is air-filled and does not significantly affect the rock bulk modulus. For saturated rocks, fluids including water, oil, or gas exist in the pore spaces, and the rock bulk modulus combines the bulk moduli of the rock frame and of the fluid present in the pore spaces. For dry conditions, each rock type is characterized by a specific critical porosity value, as proven earlier (Prakoso et al., 2017). However, it remained uncertain whether the concept applies to fluid-saturated rock conditions. This is important considering that reservoir conditions are fluid-saturated. The pressure effect also needs to be considered in order to study the variation of P-wave velocity and reservoir rock quality under subsurface conditions. This paper, therefore, studies the effects of pore geometry and pore structure on variations of saturated P-wave velocity, and the non-linear relationship between porosity and saturated P-wave velocity is further analyzed. The results potentially allow grouping of rocks using well data such as sonic and porosity logs.
2 Methods and data
Data used
Two data sets from two separate basins, comprising 117 core data from North West Java and 129 core data from Kutai, were analyzed in this study. Data set 1 is sandstone from the upper Cibulakan formation, of early to middle Miocene age, deposited during the subsidence/uplift phase. The reservoir rocks are generally shallow-marine deposits of tidal plains and correspond to quartz sandstone with glauconite or massive quartz sandstone. All core samples are composed of quartz-dominated minerals. The available parameters include porosity, permeability, V_p, and V_s. Data set 2 was obtained from the Kutai basin. These core data were acquired from the Balikpapan formation, of middle to late Miocene age. The sandstones were deposited in a deltaic system and are associated with fluvial deposits, distributary channels, and mouth bars. The lithology is dominated by fine-grained up to thick coarse-grained sandstone alternating with claystone and shale. Quartz is the major mineral in all samples. The data used include porosity, permeability, and P- and S-wave velocities. Porosity and permeability were measured in the laboratory using standard procedures adopted in the petroleum industry, while wave velocities were evaluated at dry and ambient conditions.
Fluid substitution and critical porosity
Velocity data were determined in the laboratory under dry conditions, so fluid substitution is necessary. Gassmann's method was applied to estimate the saturated P-wave velocity; it is widely employed for fluid substitution between dry (V_pdry) and saturated (V_psat) P-wave velocities (Gassmann, 1951). In this study, the conversion from V_pdry to V_psat was conducted assuming that water saturates the pore space. In addition, the P-wave velocity data were measured at atmospheric pressure, hence the need for a pressure correction. A workflow for correcting V_p for changes in pressure was therefore used, in which the influence of pressure is treated with the pore-space stiffness method. The pore-space stiffness constant (t) was calculated using the available compressional sonic log (DTp) data. The required data for converting the dry P-wave velocity (V_pdry) to the saturated P-wave velocity (V_psat) include core data (V_pdry, V_sdry, porosity, and grain density) and log data (depth, compressional sonic log DTp, and density log Rhob).
Two factors are considered in converting dry (V_pdry) to saturated (V_psat) P-wave velocity: the effect of fluid saturation, estimated using Gassmann's method (Gassmann, 1951), and the effect of pressure, treated using the pressure-velocity dependency approach combined with the pore-space stiffness method. The flow chart of the saturated P-wave velocity (V_psat) calculation is shown in Figure 1.
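A minimal sketch of the Gassmann step of this workflow is shown below; the quartz/water moduli are generic handbook values, not the paper's calibration:

```python
import numpy as np

def gassmann_vp_sat(vp_dry, vs_dry, rho_dry, phi,
                    K_min=36.6e9, K_fl=2.25e9, rho_fl=1000.0):
    """Saturated P-wave velocity via Gassmann fluid substitution (SI units).

    K_min: mineral (quartz) bulk modulus, K_fl: water bulk modulus --
    generic values, to be replaced by calibrated ones.
    """
    K_dry = rho_dry * (vp_dry**2 - 4.0 / 3.0 * vs_dry**2)
    mu = rho_dry * vs_dry**2                      # shear modulus is unaffected
    b = 1.0 - K_dry / K_min                       # Biot coefficient
    K_sat = K_dry + b**2 / (phi / K_fl + (1.0 - phi) / K_min - K_dry / K_min**2)
    rho_sat = rho_dry + phi * rho_fl              # add pore-water mass
    return np.sqrt((K_sat + 4.0 / 3.0 * mu) / rho_sat)

# Example: a 20%-porosity dry sample
print(gassmann_vp_sat(vp_dry=3500.0, vs_dry=2200.0, rho_dry=2200.0, phi=0.20))
```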
As previously reported, different rock lithologies show separate pore-space characteristics (Nur et al., 1995). Texture, mineralogy, and the diagenetic process are the primary factors determining the magnitude of the critical porosity (Mavko et al., 2009), which is the boundary between the consolidated-sediment domain and the suspension domain. The Nur equation can be applied to estimate critical porosity (Nur et al., 1995); it is a modification of the Voigt bound (Voigt, 1890):

B = B_m (1 − φ/φ_c). (1)

For fluid-saturated rock, the influence of the fluid bulk modulus is considered, and Equation (1) is written as

B = B_m (1 − φ/φ_c) + B_f (φ/φ_c), (2)

where B is the bulk modulus, B_m the mineral bulk modulus, B_f the fluid bulk modulus, φ the porosity, and φ_c the critical porosity.

Several studies have reported similar lithologies with varying pore complexities; this variation causes rocks to exhibit distinct fluid-flow capabilities. Based on Nur's critical porosity, and assuming that the pore pattern within each rock group does not change, each group appears to have its own critical porosity value.
Rock type approach
Pore complexity is one of the main factors determining the ability of rocks to flow fluid: rock samples with similar pore complexity show similar fluid-flow capability. In relation to pore complexity, each data point in a group obeys a similar relationship between pore geometry and pore structure (Wibowo et al., 2013), since rocks with equal deposition and diagenesis tend to acquire similar pore architecture. This was also pointed out by El-Khatib (1995) and Leverett (1941), where similarity of the capillary pressure curve profiles indicates similarity in pore size distribution (Burdine, Gournay, & Reichertz, 1950). Comparison of the pore architecture shows that such rocks possess similar values of the pore shape factor (F_s) and tortuosity (τ); the combination of pore shape factor (F_s) and tortuosity (τ) is widely known as the Kozeny constant (Kozeny, 1927). Assuming a capillary tube, the Kozeny equation can be rearranged so that the hydraulic radius (k/φ)^0.5 in Equation (3) is equivalent to the Kozeny constant and is expressed as a pore size. Based on Equation (3), Wibowo et al. (2013) confirmed the relationship of pore geometry and pore structure for rock-type estimation. The relationship is expressed as a power law (Equation (4)):

(k/φ)^0.5 = a (k/φ³)^b. (4)
The constant a denotes the pore efficiency, while b is the pore-structure exponent. For a smooth capillary tube, the maximum value of b is 0.5; smaller values of the pore-structure exponent b indicate a more complex pore arrangement and a lower-quality rock type. A plot of (k/φ)^0.5 against k/φ³ on a log-log scale produces a straight line whose maximum slope b is approximately 0.5. Lower values of the pore-structure exponent b correspond to inferior rock-type quality with reduced pore efficiency (E_p). Each group of sample data forms a straight-line trend, reflecting the similarity of the pore architecture resulting from a related geological process.
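A sketch of this rock-typing step is given below: synthetic core data are generated to follow Equation (4) for one rock type, and a and b are then recovered by a straight-line fit in log-log space (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic core data obeying Eq. (4) with a = 0.5, b = 0.35 (one rock type)
a_true, b_true = 0.5, 0.35
X = 10 ** rng.uniform(1.5, 4.5, 40)                       # pore structure k/phi^3
Y = a_true * X**b_true * np.exp(rng.normal(0, 0.05, 40))  # pore geometry (k/phi)^0.5

phi = Y / np.sqrt(X)        # follows from the two definitions: Y^2/X = phi^2
k = Y**2 * phi              # permeability consistent with both parameters

pore_geometry = np.sqrt(k / phi)
pore_structure = k / phi**3

# Straight-line fit in log-log space recovers the power-law parameters
b_fit, log_a = np.polyfit(np.log10(pore_structure), np.log10(pore_geometry), 1)
print(f"fitted a = {10**log_a:.3f}, b = {b_fit:.3f} (smooth-tube limit b = 0.5)")
```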
Rock quality is largely determined by pore complexity, expressed as the relationship between two pore parameters, termed pore geometry (k/φ)^0.5 and pore structure (k/φ³); this relationship is known as the power-law model (Wibowo et al., 2013). In this study, the power-law concept was applied to define the pore geometry (k/φ)^0.5 and pore structure (k/φ³). These two parameters are used to characterize groups of rocks with similar pore arrangement. As stated in Section 2.2, rocks with equal pore complexity are described by similarity in pore geometry and pore structure, and each rock group acquires its own critical porosity. This study combines the concepts of critical porosity and the power law to define the relationship of pore geometry, pore structure, and rock quality with acoustic wave velocity. This relationship is used to explain the scatter in the relationship between P-wave velocity and porosity, and the power law is used to characterize groups of rocks of similar quality and their relation to critical porosity.
Rock type identification
The rock-type chart based on the power-law model (Wibowo et al., 2013) is used to identify the rock types. Figure 2 shows the resulting data grouped into several rock types, each characterized by similar microscopic geological features. Rock texture is the dominant factor controlling the grouping. Generally, good-quality rock types are dominated by large grain size, proper sorting, and low hardness, and are further characterized by clean sandstone. Conversely, low-quality rock types tend to have smaller grain size, poor sorting, high hardness, and high clay volume.
3.2. Effect of pore complexity on pore-space stiffness and saturated P-wave velocity estimation

Gutierrez et al. (2001) and Fabricius et al. (2007) reported that pore rigidity is influenced by pore architecture. Therefore, the pore-space stiffness constant (t) is affected by pore complexity, and it is estimated by using B_dry obtained from the DTp and DTs log data. Where no DTs data exist, B_dry cannot be generated by Gassmann's fluid substitution method; B_dry was instead derived using the Poisson ratio and Biot's coefficient. Gregory (1977) provided steps to obtain B_dry from M_dry when the dry-rock Poisson's ratio is known. The assumption used is a non-dispersive medium, in which the difference in measurement frequency between core V_p and log V_p does not cause a velocity change.
Figures 3 and 4 demonstrate that the pore stiffness (t) is influenced by pore geometry and pore structure. A good-quality rock type possesses relatively simple pore geometry and pore structure, with a tendency for high variation between the pore stiffness (t) obtained from core and from log compared with low-quality rock types (Figure 3). This occurs because good-quality rock types have large grain size, cement dominated by clay minerals, low clay volume, and low hardness, so pressure has a significant effect on the bulk modulus. Low-quality rock, meanwhile, tends to be more solid, characterized by small grain size, complex pore geometry and pore structure, calcite-dominated cement, large clay volume, and high hardness, so increasing pressure does not significantly change the bulk modulus.

Figure 4 shows the linear relationship between B_m/B_dry − 1 and porosity. Based on Figure 4, the slope represents m = 1/t, and the pore-space stiffness constant is t = B_φ/B_m. Therefore, B_φ is estimated from the values of m and t.

The calculated V_psat is compared with the V_psat obtained from log data in Figure 5, which shows that the V_psat value from the above method is relatively close to the V_psat from the log. Therefore, the pore stiffness method and Gassmann's method are useful for evaluating the effects of pressure and fluid on P-wave velocity.
3.3. The relationship of V_psat with pore geometry (k/φ)^0.5 and pore structure (k/φ³)

The pore geometry is approximated by combining porosity and permeability (Wibowo et al., 2013) and is written as (k/φ)^0.5, whose value corresponds to a pore size derived from the Kozeny equation (Kozeny, 1927). The relationship between V_pdry, (k/φ)^0.5, and k/φ³ is classified into several rock types (Prakoso et al., 2016). Within a particular rock type, a larger hydraulic radius (k/φ)^0.5 and a simpler pore structure (k/φ³) tend to increase V_pdry. When the empirical correlation lines are drawn for each unit, a convergent point is established, indicating that at this convergence point all porous media possess indistinguishable properties, including wave velocity.

The approach is applied here to confirm the relationship between V_psat, pore geometry (k/φ)^0.5, and pore structure (k/φ³). The relationship between V_psat and (k/φ)^0.5 is clearly described; without grouping into rock types, a larger scatter is observed. Overall, the relationship shows V_pdry increases as (k/φ)^0.5 declines.

Assuming a very small capillary tube with a porosity of 1, the measured P-wave velocity reflects the fluid filling the pore spaces; under saturated conditions, water acts as the pore fluid. The convergence point of the relationship between V_psat, (k/φ)^0.5, and k/φ³ can then be determined. The empirical correlation lines of V_psat against (k/φ)^0.5 or k/φ³ therefore commence from the convergence point at (V_psat, (k/φ)^0.5) = (1496.7, 0.045) or (V_psat, k/φ³) = (1496.7, 0.002). This point is employed as the starting point for drawing the empirical regression line for each rock type (Figures 6 and 7).
Effect of pore complexity on the relationship of V_psat with porosity and critical porosity

Figures 8 and 9 show that, for saturated rock, the relationship between porosity and V_psat is grouped by rock type. Theoretical V_psat curves (dashed lines) are obtained by using the critical porosity model (Nur et al., 1995) for each rock type with different critical porosity values. The value of the critical porosity is obtained from the intersection of Nur's theoretical curve with the Reuss curve (Reuss, 1929). Better-quality rock types have larger critical porosity values than poorer-quality rock types. As shown in Figures 8 and 9, each rock type is represented by one of Nur's critical porosity curves and is characterized by a single critical porosity value. Furthermore, the presence of fluid in the pores and the pressure both increase the P-wave velocity; however, they do not change the relationship pattern between porosity and V_psat.
Conclusions
Several conclusions are drawn from this research:

1. Grouping the rock samples based on rock quality provides a more detailed description of the effects of pore attributes on the variation of P-wave velocity.

2. The relationship between V_psat and porosity is clearly distinguished based on similarities in the correlation between pore geometry and pore structure, indicating that the variation of P-wave velocity is greatly influenced by both parameters.

3. The relationship between V_psat and porosity shows that the critical porosity is affected by pore geometry, causing the critical porosity to vary for each rock type. Rock types with better pore quality demonstrate greater critical porosity than lower-quality rock types.

4. V_psat estimation combining the pore stiffness method and Gassmann's fluid substitution provides accurate results, relatively close to the V_psat values from log data.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Assumptions: the effect of pressure on porosity is small (constant porosity); pore pressure is constant; the mineral constituent of the rock is dominated by quartz, and water saturates the pore spaces; the bulk mineral and bulk fluid moduli are assumed constant (B_quartz and B_water); the effect of pressure on V_p and V_s is equal, hence the Poisson ratio is assumed constant, so the only factor influencing the change of B_dry is B_por; the medium is non-dispersive.
Figure 1.

Figure 1. Flow chart of the saturated P-wave velocity calculation.

Figure 4.

Figure 4. Relationship of pore-space stiffness versus porosity.

Figure 5.

Figure 5. Comparison of V_p calculation and V_p measurement.
Figure 8.

Figure 8. Plot of V_psat versus porosity (dashed lines are theoretical V_psat curves) for data set 1.

Figure 9.

Figure 9. Plot of V_psat versus porosity (dashed lines are theoretical V_psat curves) for data set 2. | 4,516.4 | 2020-01-01T00:00:00.000 | [
"Geology"
] |
Test Procedures for Change Point in a General Class of Distributions
Abstract: This paper is concerned with change point analysis in a general class of distributions. The quasi-Bayes and likelihood ratio test procedures are considered to test the null hypothesis of no change point. Exact and asymptotic behaviors of the two test statistics are derived. To compare the performances of the two test procedures, numerical significance levels and powers of the tests are tabulated for certain selected values of the parameters. Estimation of the change point based on these two test procedures is also considered. Moreover, the epidemic change point problem is studied as an alternative to the single change point model. A real data set with an epidemic change model is analyzed by the two test procedures.
The initial parameter θ₀ may be known or unknown. The change point k₀ (k₀ = 1, ..., n − 1) and the magnitude of change δ are unknown parameters. Without loss of generality, let δ ≥ 0. The following regularity conditions are needed.
The following example shows that these conditions are satisfied in two rich families of distributions.
Example 1. In the exponential family with density function f_θ(x) = h(x) exp{ϕ₁(θ)u(x) + ϕ₂(θ)}, condition (i) is satisfied under a mild additional condition on the family. In the location family where the derivative f′(·) exists, condition (i) is likewise satisfied under a mild additional condition; for example, it holds for the logistic distribution L(θ, 1). Condition (ii) is typically satisfied. Chernoff and Zacks (1964) considered the quasi-Bayesian change point analysis for independent normal observations. Kander and Zacks (1966) (KZ) extended the work of Chernoff and Zacks (1964) to the case of exponential family distributions. Nonparametric methods in change point analysis can be found in Brodsky and Darkhovsky (1993). Broemeling and Gregurich (1996) surveyed the Bayesian estimation of change points via resampling methods. An excellent reference on change point analysis is Csörgő and Horváth (1997). Gupta and Ramanayake (2001) used KZ's quasi-Bayes method to study the epidemic change point in the exponential distribution. For more references see Hjort and Koning (2002) and Habibi et al. (2005), among others.
In this note, we consider quasi-Bayes and likelihood ratio test procedures to detect a change in a general class of distributions. The paper is organized as follows. The quasi-Bayes test is studied in Section 2, where the exact distribution of the test statistic in some special cases and its asymptotic distribution in general are derived. Section 3 contains the exact and asymptotic distributions of the likelihood ratio test statistic. The performances of the two test procedures are compared in Section 4; estimation of the change point based on the two test procedures is also considered in this section. Section 5 considers the epidemic change point model, an alternative to the single change point model. Although this paper extends earlier work, its approach of presenting the results in terms of stochastic integrals is of interest; it also considers change point detection in a general class of distributions under both single and epidemic change point models, a topic not considered before.
Quasi-Bayes Test
In this section, following KZ, the quasi-Bayes test statistic is derived. Assume that k₀ = [nt₀] for some unknown t₀ ∈ (0, 1). We consider the point t₀ as a random variable with prior density π(t), t ∈ (0, 1). First, suppose that θ₀ is known. The marginal likelihoods of the sample under H₀ and H₁ are ∏_{k=1}^n f_{θ₀}(x_k) and ∫₀¹ π(t) ∏_{k=1}^{[nt]} f_{θ₀}(x_k) ∏_{k=[nt]+1}^n f_{θ₀+δ}(x_k) dt, respectively, and the marginal likelihood ratio under H₁ to that under H₀ follows. Following KZ, as δ → 0, the marginal likelihood ratio can be approximated by a statistic that is linear in the scores g(θ₀, x_k); partitioning [0, 1] into n equal subdivisions then yields the corresponding test statistic T_n^π for testing H₀. KZ derived the test statistic T_n^π in exponential families, and the test procedure based on T_n^π is locally most powerful (see KZ). Under the noninformative prior π(t) = 1 for t ∈ (0, 1), the test statistic T_n is obtained. Habibi et al. (2005) studied the behavior of this test statistic.
Example 2. The exact null distribution of T_n can be found in some special cases. In exponential families T_n reduces to the KZ test statistic, so the exact null distribution of T_n can be obtained for the normal, exponential, and binomial distributions (see KZ). The exact null distribution of T_n can also be found for the logistic distribution, as follows. Without loss of generality, let θ₀ = 0, and let F(·) be the distribution function of the standard logistic distribution L(0, 1). Let S_n = Σ_{i=1}^n (i − 1)F(X_i) and let g_n(·) be the density function of S_n. Then S_n is a linear function of T_n, so the null distribution of T_n follows from that of S_n. However, since the exact distribution of T_n^π (or T_n) is very complicated in many cases, the asymptotic distribution of T_n^π is considered in Theorem 1. Let σ² = I(θ₀), the Fisher information computed at θ₀.

Theorem 1. Assume regularity conditions (i) and (ii); then, under the null hypothesis, T_n^π is asymptotically normal.

Proof. Consider the stochastic process S_n(t), which converges weakly, in the Skorokhod metric d, to the standard Brownian motion W on [0, 1] (see Billingsley, 1968). Applying a continuous map Λ and integration by parts gives the result.

Remark 1. When the initial parameter θ₀ is unknown, θ₀ is substituted by θ̂₀, the maximum likelihood estimate of θ₀ under the null hypothesis, resulting in a modified test statistic with the same null limit (since θ̂₀ is consistent for θ₀).

Example 3. As a special case of Remark 1, consider a sequence of independent random variables X_i with initial mean θ₀. The initial mean θ₀ is unknown and is replaced by X̄_n, and the test statistic is formed from the centered observations. Next, the asymptotic distribution of T_n under the alternative hypothesis is considered; to do so, the following extra condition is assumed. Let μ_{θ₁} = E_{θ₁}(g(θ₀, X_{k₀+1})) and I(θ₀, θ₁) = Var_{θ₁}(g(θ₀, X_{k₀+1})).
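A toy sketch of the quasi-Bayes statistic in the normal-mean setting of Example 3 is given below; the score g(θ₀, x) = x − θ₀ and the exact-variance standardization are our reading of the construction, not the paper's precise normalization:

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi_bayes_stat(x):
    # T_n = sum_i (i - 1) (x_i - xbar), standardized to ~ N(0, 1) under H0;
    # the exact-variance standardization is ours (sigma = 1 assumed known).
    n = len(x)
    c = np.arange(n, dtype=float)              # weights i - 1, i = 1..n
    t_n = np.sum(c * (x - x.mean()))
    var = np.sum((c - c.mean()) ** 2)          # Var(T_n) when sigma = 1
    return t_n / np.sqrt(var)

n, k0, delta = 50, 25, 1.0
x_null = rng.normal(0.0, 1.0, n)
x_alt = x_null + delta * (np.arange(1, n + 1) > k0)

print(f"H0: Z = {quasi_bayes_stat(x_null):+.2f}")
print(f"H1: Z = {quasi_bayes_stat(x_alt):+.2f}")
```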
Corollary 2. Under (i), (ii), (iii), and H₁, the corresponding asymptotic distribution of T_n holds. Although deriving Corollary 2 from Theorem 1 is straightforward, we present a brief proof.

Corollary 3. The approximate power of the size-α test based on T_n follows from Corollary 2 as the corresponding normal tail probability.

Remark 3.

We can estimate the location of the change point using the quasi-Bayes test. To see this in detail, consider the special case X_i = θ₀ + δ I(i ≥ k₀ + 1) + N_i, where the N_i are iid random variables from the N(0, 1) distribution and δ > 0. The change point estimator k̂_n based on the quasi-Bayes test maximizes a process U_[nt], which is close to its mean function E(U_[nt]). To study the limiting behavior of U_[nt], it is therefore enough to study the limiting behavior of E(U_[nt]); it is easy to see that E(U_[n·]) → U(·), a deterministic limit function. This shows the consistency of t̂_n = k̂_n/n, that is, t̂_n →^p t₀ as n → ∞ (see Bai, 1994).
Likelihood Ratio Test
Here, the likelihood ratio test is considered for testing the null hypothesis of no change point. First, assume θ₀ is known. The likelihood ratio function under H₁ to that under H₀ is formed accordingly; it is easy to verify that as δ → 0⁺ it can be approximated as in Section 2. One would reject H₀ whenever the observed value of T*_n is large. One can show that under the null hypothesis H₀, as n → ∞, the limiting distribution of T*_n holds, where N is distributed as a standard normal random variable (see Billingsley, 1968).
Remark 4.
The likelihood ratio test statistic $T^*_n$ is larger than the quasi-Bayes test statistic $T^\pi_n$. To see this, note that $T^\pi_n$ is a prior-weighted average of the tail score sums over the candidate change points, whereas $T^*_n$ is their maximum, and a maximum is never smaller than a weighted average. This shows that the critical values of the likelihood ratio test are larger than those of the quasi-Bayes test.
Remark 5. When the initial parameter $\theta_0$ is unknown, $\theta_0$ is again substituted by $\hat\theta_0$, the maximum likelihood estimate of $\theta_0$ under the null hypothesis, resulting in the test statistic $\bar T^*_n$. It is easy to show that, under some mild conditions, the analogous limit holds. Example 4. To illustrate Remark 5, consider the special case of independent normal observations with unit variance. Since $\theta_0$ is unknown, it is estimated by $\bar X_n$, and the test statistic is built from the partial sums of $X_i - \bar X_n$. Under the null hypothesis, the normalized statistic converges to a functional of $B(\cdot)$, the standard Brownian bridge on $[0,1]$. Since $B(\cdot) \stackrel{d}{=} -B(\cdot)$, the continuous mapping theorem implies that $\sqrt{n}\,\bar T^*_n \xrightarrow{d} \sup_{0<t<1} B(t)$. Under the null hypothesis, the random vector $v = (v_1, \ldots, v_{n-1})$ has a multivariate normal distribution $N_{n-1}(0, \Sigma)$ (see Hawkins, 1977). Under $H_1$, $v \sim N_{n-1}(\delta\mu, \Sigma)$, where $\mu = (\mu_1, \ldots, \mu_{n-1})$. The exact distribution of $T^*_n$ is thus the distribution of the maximum of a multivariate normal vector, and the $\alpha$-th quantile of $T^*_n$ is the $\alpha$-th equi-quantile of a multivariate normal distribution, which is considered by Genz (1992).
Remark 6. The change-point estimator $\hat k_n$ based on the likelihood ratio test when $\delta > 0$ is obtained from the extremum of the process $V_k$. This fact suggests plotting $V_k$ for $k = 1, 2, \ldots, n-1$; the first point $\hat k_n$ at which $V_k$ attains its minimum is the likelihood ratio change-point estimator.
Comparisons
In this section, we compare the performance of the quasi-Bayes and likelihood ratio tests by studying their significance levels and powers. The significance levels of the quasi-Bayes and likelihood ratio tests are $\alpha_n$ and $\alpha^*_n$, respectively, defined as the null rejection probabilities at the nominal critical values. In what follows, we compare the rate of convergence of $\alpha_n$ and $\alpha^*_n$ to $\alpha$ in the case of the logistic distribution. For a given $n$, we compute $\alpha_n$ and $\alpha^*_n$ using a Monte Carlo experiment with $R = 20000$ repetitions. Let $\hat\alpha_{nR}$ ($\hat\alpha^*_{nR}$) be the proportion of the $R$ repetitions in which the null hypothesis $H_0$ of no change is rejected by the quasi-Bayes test (likelihood ratio test). The SLLN guarantees that $\hat\alpha_{nR}$ ($\hat\alpha^*_{nR}$) (see Tables 1 and 2) is close to $\alpha_n$ ($\alpha^*_n$). The rates of convergence of $\alpha_n$ and $\alpha^*_n$ to $\alpha$ appear satisfactory, although $\alpha_n$ seems to converge to $\alpha$ slightly faster.
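A sketch of this size experiment is given below; `stat_fn` and the critical values are placeholders to be supplied from the asymptotic laws, or from a preliminary Monte Carlo run such as those in the earlier snippets.

```python
import numpy as np

def empirical_size(stat_fn, crit, n, reps=20000, seed=3):
    """Fraction of null (standard logistic) samples on which the test
    rejects: the Monte Carlo estimate of the size alpha_n."""
    rng = np.random.default_rng(seed)
    rejections = sum(stat_fn(rng.logistic(size=n)) > crit for _ in range(reps))
    return rejections / reps

# `qb_stat`, `lr_stat` and the critical values are hypothetical names: in
# practice they come from the statistics defined above and their limiting
# distributions (Theorem 1, sup W(t)) or from a preliminary simulation.
# print(empirical_size(qb_stat, qb_crit, n=50))
# print(empirical_size(lr_stat, lr_crit, n=50))
```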
Approximate Power of the Two Tests
Here, we compare the powers of the two test procedures for logistic observations $L(\delta, 1)$. The power $\beta_\alpha(\delta)$ of the quasi-Bayes test (see Corollary 2) is given in Table 3 for $\alpha = 0.05$ and $k_0 = 1, 3, \ldots, 49$. Table 3 also contains the power $\beta^*_\alpha(\delta)$ of the likelihood ratio test, which is estimated using a Monte Carlo simulation study with $R = 20000$ repetitions. In order to keep the table of reasonable size, only the case of sample size $n = 50$ and magnitudes of change $(\delta_1, \delta_2) = (0.09, 1)$ with a significance level $\alpha = 0.05$ is reported. Table 3 shows that the power of the quasi-Bayes test is larger than the power of the likelihood ratio test in all cells. The power of the likelihood ratio test is very small for $\delta_1 = 0.09$. Both tests achieve higher power when $k_0$ occurs at the beginning of the sequence.
Epidemic Change Point
The epidemic change point model is an alternative to the single change point model. Yao (1993) published a survey of the available test procedures together with their comparisons. Brodsky and Darkhovsky (1993) constructed estimators for change points and studied their properties. In this section, the epidemic change point is considered in a general class of distributions. Epidemic change point analysis has many applications in practice, and studying it in a general class of distributions is a topic of interest. Consider a sequence of independent random variables $X_1, \ldots, X_n$ whose density functions are $f_{\theta_i}(x_i)$, $\theta_i \in \Theta$, $i = 1, \ldots, n$. One has to test the null hypothesis $H_0: \theta_1 = \cdots = \theta_n = \theta_0$ against the alternative hypothesis $H_1$: $\theta_i = \theta_0$ for $i = 1, \ldots, k_0$; $\theta_i = \theta_0 + \delta$ for $i = k_0 + 1, \ldots, k_1$; and $\theta_i = \theta_0$ for $i = k_1 + 1, \ldots, n$.
Similar to Section 2, the quasi-Bayes test rejects $H_0$ when $T^{\pi e}_n$ is large, where $T^{\pi e}_n$ is the epidemic analogue of $T^\pi_n$, obtained by placing the prior on the pair of change points $(k_0, k_1)$.
Remark 7.
When $\theta_0$ is unknown, it is estimated by $\hat\theta_0$. The above asymptotic distributions of the quasi-Bayes and likelihood ratio statistics continue to hold with $W(\cdot)$ replaced by $B(\cdot)$, the standard Brownian bridge on $[0,1]$.
Stanford Heart Transplant Data
The data set (taken from Kalbfleisch and Prentice, 1980) contains 35 patients with known age groups. The average survival time of the patients was indexed by age group. There is reason to suspect the existence of an epidemic change in the sequence. To check this possibility, we applied the two test procedures to this data set. The p-values of the quasi-Bayes and likelihood ratio tests are 0.0235 and 0.0552, respectively. We can reject the null hypothesis of no change in favor of an epidemic change for this data set. The ML estimates of the two change points are 29 and 48 years, respectively.
"Mathematics"
] |
Differences in Walking Pattern during 6-Min Walk Test between Patients with COPD and Healthy Subjects
Background To date, detailed analyses of walking patterns using accelerometers during the 6-min walk test (6MWT) have not been performed in patients with chronic obstructive pulmonary disease (COPD). Therefore, it remains unclear whether and to what extent COPD patients have an altered walking pattern during the 6MWT compared to healthy elderly subjects. Methodology/Principal Findings 79 COPD patients and 24 healthy elderly subjects performed the 6MWT wearing an accelerometer attached to the trunk. The accelerometer features (walking intensity, cadence, and walking variability) and subject characteristics were assessed and compared between groups. Moreover, associations were sought with the 6-min walk distance (6MWD) using multiple ordinary least squares (OLS) regression models. COPD patients walked with a significantly lower walking intensity, lower cadence and increased walking variability compared to healthy subjects. Walking intensity and height were the only two significant determinants of 6MWD in healthy subjects, explaining 85% of the variance in 6MWD. In COPD patients, age, cadence, walking variability measures and their interactions were also significant determinants of 6MWD (total variance in 6MWD explained: 88%). Conclusions/Significance COPD patients have an altered walking pattern during the 6MWT compared to healthy subjects. These differences in walking pattern partially explain the lower 6MWD in patients with COPD.
Introduction
The 6-minute walk test (6MWT) is commonly used to assess functional exercise performance in patients with chronic obstructive pulmonary disease (COPD) [1]. It is a practical, relatively simple test which has gained importance in evaluating the functional status of patients with COPD [2]. Moreover, a poor 6-minute walk distance (6MWD, <350 meters) has prognostic value in patients with COPD [2].
The 6MWD cannot be confidently predicted from conventional descriptors of COPD, such as the Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage or the Medical Research Council (MRC) scale [3]. Therefore, it is necessary to assess functional exercise performance in daily clinical practice in patients with COPD.
Walking patterns are generally influenced by the trade-off between the requirements to minimize energetic costs and to maintain stability [4]. Indeed, walking is particularly unstable in the medio-lateral direction (Figure 1). To compensate for balance disturbances during walking, active adjustment of the step width (largely through lateral foot placement) is necessary, resulting in variability in the walking pattern [4]. Then again, to reduce the energetic costs of walking, variability in the walking pattern needs to be minimized [5]. In COPD patients, different clinical characteristics, such as decreased lower-limb muscle function [6] and a disturbed balance [7], may compromise the ability to balance the energetic and stability requirements posed by walking. Hence, the walking pattern during the 6MWT is most probably different between patients with COPD and healthy elderly subjects [8].
Detailed analyses of walking patterns during the 6MWT have not yet been performed in patients with COPD. Yentes et al. recently reported gross walking abnormalities in patients with COPD, such as the presence of a limp or shuffle [9]. These authors used a qualitative assessment of gait abnormalities and did not assess the spatiotemporal aspects of walking abnormalities. The latter would enable a direct comparison of the walking pattern between patients with COPD and healthy elderly subjects, and an examination of the association between walking pattern, 6MWD and clinical characteristics, like weight, height, the degree of airflow limitation and exercise-induced symptoms of dyspnea and fatigue.
Features derived from tri-axial accelerometers attached to the lower back can be used to measure walking variability. Accelerometers may also allow monitoring walking abnormalities in patients with COPD, as was done before in patients with chronic heart failure [10]. Moreover, routine assessment of exercise performance in a home-based setting in the context of telemedicine seems possible when close associations between accelerometer features and walking distance are found. Therefore, the aim of this study was to determine walking patterns during the 6MWT of COPD patients and healthy elderly subjects. A priori, the authors hypothesized that patients with COPD have a different walking pattern during 6MWT compared to healthy elderly subjects, which is related to the reduced 6MWD in COPD independent of the degree of airflow limitation.
Participants
Patients were recruited prospectively during a three-day pre-rehabilitation assessment period at CIRO+, a centre of expertise for chronic organ failure in Horn, the Netherlands [11]. Exclusion criteria were exacerbation-related hospitalization within 4 weeks prior to assessment and the use of a rollator, which is expected to affect the walking pattern [12]. Moreover, patients were excluded from the analyses if they were not able to walk at least one 6MWT continuously for six minutes; in all cases, non-continuous walks resulted in a worse 6MWD. This exclusion is necessary to obtain reliable measures of walking variability [13]; moreover, walking variability cannot be measured over non-walking time.
In total, 93 patients were enrolled, of whom 14 (15%) stopped during both 6MWTs. Therefore, 79 COPD patients were included in the analyses (n = 8 GOLD 1, n = 36 GOLD 2, n = 28 GOLD 3, n = 7 GOLD 4). None of the remaining patients received long-term oxygen therapy (LTOT). All measurements were part of the routine baseline assessment for pulmonary rehabilitation [11]. Furthermore, 24 healthy elderly subjects were recruited; these volunteers had participated in previous trials [14]. None of the healthy subjects used physician-prescribed drugs. The study complied with the Declaration of Helsinki and was approved by the local university's ethics committee (NL30763.068.09). Informed consent was provided by all participants.
Study protocol
Participants performed two 6MWTs on consecutive days [15]. During both tests an accelerometer (Minimod, McRoberts, The Hague, The Netherlands; size: 8.5 × 5.0 × 1.0 cm, weight: 70 g, range ±2 g, 100 Hz sampling frequency) was attached to the trunk at the level of the sacrum using an elastic belt to collect raw signalling data. Data obtained during the 6MWT resulting in the highest distance were used for further analyses. Prior to and immediately after each 6MWT, participants were asked to report dyspnea and fatigue on a ten-point Borg scale. The best 6-minute walk distance was expressed as a percentage of the predicted values [16].
Post-bronchodilator forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were determined using spirometry, with reference values from Quanjer et al. [17]. Moreover, in the COPD patients, residual volume (RV) and total lung capacity (TLC) were determined using whole-body plethysmography to calculate the RV/TLC ratio as a measure of air-trapping. Height and weight were assessed to obtain the body mass index (BMI, body weight in kilograms divided by squared height in meters, kg/m2). Patients also underwent a physical examination and medical history taking [18]. Bio-electrical impedance analysis (Bodystat 1500) was used to determine fat-free mass (FFM), and disease-specific equations were used to calculate the fat-free mass index (FFMI) [19].
Data Analysis
For the data analyses of the accelerometer signals, 5 seconds at the beginning and end of the test were excluded to be sure that possible group differences in walking pattern were not due to the start and/or stop of the 6MWT. Dedicated software written in Matlab was used to analyse the remaining 350 seconds of raw acceleration data. The software included algorithms to calculate the walking intensity, spatio-temporal aspects of gait and medio-lateral stability. Walking intensity was calculated from the integral of the modulus of the accelerometer output [20]. For this purpose, the accelerometer output was low-pass filtered with a fourth-order Butterworth filter (20 Hz). The absolute value of the residual signal was taken to rectify the signal. After this process, the area under the curve over the complete measurement was calculated by integrating the signal over a period of 350 seconds. This integration was done separately for all three measurement directions (i.e. frontal, horizontal and sagittal planes). The integral of the modulus of the accelerometer output was then obtained by summation of these values [20]. Onsets of support phases were determined from forward accelerations as described by Zijlstra et al. [21]. During the transition from single to double support (i.e. after contra-lateral foot contact), the forward acceleration of the lower trunk changes sign from positive to negative. The peak forward acceleration preceding the change of sign coincides with the instant of foot contact; this acceleration peak was therefore taken as the instant of a left or right foot contact. Consequently, the strides (= 2 steps) were identified. The cadence (strides/min) was calculated from the mean stride times. The inter-stride trunk acceleration variability was calculated using an unbiased autocorrelation coefficient procedure [22], which has previously been used to identify walking patterns of frail and fatigued elderly [5,22]. The autocorrelation function estimates how a time series is correlated with itself over different time lags. For a time series of trunk accelerations during walking, autocorrelation coefficients can thus be produced to quantify the peak values at the first and second dominant periods, representing phase shifts equal to one step and one stride, respectively. Variability, as measured by the autocorrelation coefficients, was calculated for the anterior-posterior, vertical and medio-lateral directions [23]. A higher autocorrelation coefficient indicates lower between-stride time variability (range: 0 to 100%).
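The sketch below outlines the three feature computations described above for a tri-axial signal sampled at 100 Hz. The function names are ours, and the step-detection rule is simplified (a plain peak search rather than the full zero-crossing rule of Zijlstra et al.), so it should be read as an illustration of the pipeline, not a reimplementation of the study's Matlab software.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # sampling frequency (Hz)

def walking_intensity(acc):
    """Integral of the modulus of the low-pass-filtered (4th-order
    Butterworth, 20 Hz), rectified acceleration, summed over the three
    measurement directions (acc: array of shape (N, 3))."""
    b, a = butter(4, 20.0, btype="low", fs=FS)
    filtered = filtfilt(b, a, acc, axis=0)
    return np.trapz(np.abs(filtered), dx=1.0 / FS, axis=0).sum()

def cadence_strides_per_min(forward_acc):
    """Strides/min from successive peaks of the forward acceleration, each
    taken as a (left or right) foot contact; a stride equals two steps."""
    peaks, _ = find_peaks(forward_acc, distance=int(0.3 * FS))
    step_times = np.diff(peaks) / FS
    mean_stride = 2.0 * step_times.mean()
    return 60.0 / mean_stride

def stride_regularity(signal, stride_lag):
    """Unbiased autocorrelation coefficient at a phase shift of one stride;
    higher values (toward 1, i.e. 100%) mean lower between-stride
    variability."""
    x = signal - signal.mean()
    n = len(x)
    ac = (x[:-stride_lag] @ x[stride_lag:]) / (n - stride_lag)
    return ac / np.var(x)
```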
Sample Size and Power
Sample size calculations are based on the outcomes of Moe-Nilsson et al., using the inter-stride trunk acceleration variability of fit and frail older adults [22]. Twenty-two participants in each group would provide 80% power at alpha 0.05 (two-tailed) to detect a difference between COPD patients and healthy subjects of 8% with a standard deviation of 10%. To cover a larger spectrum of COPD severity by having enough patients in disease stages 1/2 and 3/4, 93 COPD patients were enrolled in the study.
Statistical Analysis
Statistical analysis was done using SPSS software (version 15.0, SPSS Inc.). Data are reported as mean ± standard deviation (SD) or percentages, as appropriate. GOLD stages 1 and 2, and GOLD stages 3 and 4, were combined for further analyses due to the small numbers of GOLD stage 1 (n = 8) and GOLD stage 4 patients (n = 7). Comparisons were conducted with one-way analysis of variance or chi-square tests, as appropriate. Accelerometer features (walking intensity, cadence, variability in the anterior-posterior, vertical and medio-lateral directions), subject characteristics (gender, age, height, weight and FEV1) and perceived dyspnea and fatigue (before and after the best 6MWT) were tested in their association with the 6MWD via multiple ordinary least squares (OLS) regression models per group. Previously, different patterns were found for the variability in the medio-lateral direction versus the anterior-posterior and vertical directions between frail or fatigued persons and fit persons [5,22]; therefore, the interactions between the variability measures for the different directions were also tested. Multicollinearity tests were carried out; variables were retained in the model if the variance inflation factor was smaller than 5.0. A top-down procedure was used for the selection of the final model variables. Accelerometer features probably rely, at least in part, on the walking speed. Therefore, a posteriori, walking variability measures were compared in a subset of healthy subjects and COPD patients who had on average a comparable walking distance (range of 6MWD: 560 m-640 m). Differences in walking patterns between the best and worst 6MWT in patients with COPD are described in the Text S1, table S1 and table S2. A priori, results were considered statistically significant when the p-value was ≤0.05.
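A sketch of this regression workflow in Python/statsmodels is shown below (the study itself used SPSS). The data file and column names are hypothetical; the formula includes the two interaction terms between variability directions that the final COPD model retained.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# One row per subject; file and column names are placeholders.
df = pd.read_csv("6mwt_features.csv")

# Multicollinearity check: retain predictors with VIF < 5.0.
X = add_constant(df[["intensity", "cadence", "ac_ap", "ac_v", "ac_ml",
                     "age", "height"]])
vif = pd.Series([variance_inflation_factor(X.values, i)
                 for i in range(X.shape[1])], index=X.columns)
print(vif)

# OLS model for 6MWD, including the tested interactions between the
# variability directions (ac_ap:ac_v and ac_v:ac_ml).
model = smf.ols("six_mwd ~ intensity + cadence + age + height"
                " + ac_ap * ac_v + ac_v * ac_ml", data=df).fit()
print(model.summary())
```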
Characteristics
Healthy subjects and COPD patients had a similar gender distribution, age and BMI (table 1). As expected, GOLD stage 3/4 patients had the worst 6MWD, also after correction for confounding variables, like height, weight, age and gender [16]. COPD patients experienced more fatigue and dyspnea during the 6MWT compared to healthy elderly subjects.
Accelerometer features
Accelerometer features showed that COPD patients walked at a significantly lower intensity and a lower cadence (table 2). Differences in intensity and cadence were also found between patients in GOLD stages 3/4 and GOLD stages 1/2. Moreover significantly increased variability (as measured by the lower autocorrelation coefficients) was found for the medio-lateral acceleration in the COPD group compared to healthy controls.
Determinants of 6MWD
Walking intensity correlated most strongly with the 6MWD in healthy subjects (r = 0.902, p<0.001) and COPD patients (r = 0.872, p<0.001) (figure 2). The correlations between 6MWD and FEV1 in COPD (r = 0.452, p<0.001) and between walking intensity and FEV1 in COPD (r = 0.495, p<0.001) were both significant. No significant correlations between these variables were found in healthy subjects. The results of the OLS regression models used to test associations between subject characteristics, accelerometer features and 6MWD for healthy subjects and COPD patients (all GOLD stages) are summarised in table 3. The model variables explained 85% and 88% of the variability in 6MWD in healthy subjects and COPD patients, respectively. Walking intensity and height were the only two significant determinants of 6MWD in healthy elderly subjects. In patients with COPD, age, cadence, walking variability measures and their interactions were also included.
Discussion
The present study provides the first comprehensive evaluation of qualitative and quantitative measures of the walking pattern during 6MWT in patients with moderate to very severe COPD. It extends previous work on the 6MWT by providing detailed information on walking variability. On average, COPD patients walk with a lower intensity, a lower cadence and show a higher medio-lateral variability during 6MWT in comparison with healthy elderly subjects. The difference in medio-lateral variability remained even if the walking speed was similar. Moreover, walking variability was associated with functional exercise capacity in COPD patients, but not in healthy controls. These results indicate an altered walking pattern in COPD patients.
Increased variability in the medio-lateral direction (largely through lateral foot placement) is an active control strategy to compensate for balance disturbances in order to maintain stability in the anterior-posterior direction (the direction of propulsion). In the present study, walking variability was higher in the medio-lateral direction in COPD patients compared to the control group (table 2), suggesting larger balance disturbances during the 6MWT in the patients. This may at least in part contribute to the relatively high energetic costs of a 6MWT in patients with COPD [24]. Moreover, this may also partially explain why patients with COPD experience abnormalities in day-to-day walking [25], including falls [26]. Indeed, similar deviations in walking patterns were previously observed in frail elderly who fell at least once during the last year or who used a walking aid [22].
A high positive association was found between walking intensity and 6MWD in both COPD patients and healthy controls (figure 2). Previously, similar findings were reported in patients with chronic heart failure [10,27]. These high associations create future possibilities for the routine assessment of exercise performance of patients in a home-based setting in the context of telemedicine. Moreover, multiple accelerometer features and subject characteristics explained 88% of the variability in 6MWD in the patients with COPD. Next to familiar determinants of the 6MWD in COPD (i.e., height and age), a higher walking intensity, a higher cadence, lower variability in the anterior-posterior and vertical directions, and a higher variability in the medio-lateral direction were also significantly associated with a higher 6MWD (table 3). Moreover, interactions between the variability in the anterior-posterior and vertical directions and between the vertical and medio-lateral directions were found. The degree of airflow limitation (e.g., FEV1) did not significantly explain the variance in 6MWD in the multiple model.
Previously, the cadence and the walking intensity have been studied in patients' home environments to evaluate daily performance in COPD patients or to study the effects of pulmonary rehabilitation [28]. The present study shows that walking variability is also a clinically relevant variable in COPD patients, as it significantly contributes to the prediction of 6MWD. This is an important finding, as a lower 6MWD has been related to more exacerbation-related hospitalizations and higher mortality rates in patients with COPD [2]. The design of this study was cross-sectional. Future investigations using the current methodology should include repeated measures to investigate the effect of a comprehensive pulmonary rehabilitation program on the walking pattern in patients with COPD. Indeed, faster walking may be more stable than slow walking [29]. On the other hand, this study showed that even the COPD patients with the same walking speed as healthy subjects have an increased variability in the medio-lateral direction. This strongly suggests that the differences found in walking variability between COPD patients and healthy subjects cannot be attributed only to differences in walking speed. It is therefore more likely that in COPD patients walking stability is influenced by dyspnea [30], altered breathing dynamics [31], reduced arm swing [32], lower muscle strength and/or coordination [5,6], disturbed balance [7,33], or a combination thereof. Accelerometers may be helpful to evaluate 6-min walking patterns as an index of treatment outcome. Moreover, the current findings should be reproduced in the patients' own environment.
Participants who stopped during both 6MWTs were excluded from this study, which was necessary to obtain reliable measures of walking variability [13]. As a result, the 6MWD of the COPD patients was higher compared to previous studies [15,34]. Then again, the current mean 6MWD of 494 m is well within the range in 6MWD observed in the ECLIPSE study [3]. The Modified Medical Research Council (MMRC) Dyspnea Scale was not assessed; therefore, it cannot be excluded that dyspnea may have contributed to the 6MWD in patients with COPD [3]. The present results are hypothesis-generating rather than definitive. Future studies are warranted to corroborate the present findings and to explain why patients with COPD have a different walking pattern compared to healthy elderly subjects. This may be due to a variety of factors [5,6,7]. Moreover, the current findings generate a clear rationale to study walking patterns in detail using tri-dimensional analyses, including electromyographic activity of lower-limb muscles [35].
In conclusion, patients with COPD have a different walking pattern during the 6MWT compared to healthy elderly subjects, as objectified using accelerometer signals. In addition to walking intensity, cadence and walking variability are important variables associated with 6MWD in patients with COPD. These differences in walking pattern partially explain the reduction in 6MWD in patients with COPD.
Supporting Information
Text S1 Intra-individual differences between best and worst 6-min walk tests. (DOCX)

Table 3 notes — Abbreviations: AC-AP: autocorrelation coefficient in anterior-posterior direction; AC-V: autocorrelation coefficient in vertical direction; AC-ML: autocorrelation coefficient in medio-lateral direction. Effects of the following variables were also tested, but no statistical significance was detected: gender, weight, FEV1, perceived dyspnea and fatigue (before/after the 6MWT), and the interaction between the autocorrelation coefficients in the anterior-posterior and medio-lateral directions. doi:10.1371/journal.pone.0037329.t003
"Medicine",
"Biology"
] |
Design principles for long-range energy transfer at room temperature
Under physiological conditions, ballistic long-range transfer of electronic excitations in molecular aggregates is generally expected to be suppressed by noise and dissipative processes. Hence, quantum phenomena are not considered to be relevant for the design of efficient and controllable energy transfer over significant length and time scales. Contrary to this conventional wisdom, here we show that the robust quantum properties of small configurations of repeating clusters of molecules can be used to tune energy transfer mechanisms that take place on much larger scales. With the support of an exactly solvable model, we demonstrate that coherent exciton delocalization and dark states within unit cells can be used to harness dissipative phenomena of varying nature (thermalization, fluorescence, non-radiative decay and weak inter-site correlations) to support classical propagation over macroscopic distances. In particular, we argue that coherent delocalization of electronic excitations over just a few pigments can drastically alter the relevant dissipation pathways which influence the energy transfer mechanism, and thus serve as a molecular control tool for large-scale properties of molecular materials. Building on these principles, we use extensive numerical simulations to demonstrate that they can explain currently not understood measurements of micron-scale exciton diffusion in nano-fabricated arrays of bacterial photosynthetic complexes. Based on these results we provide quantum design guidelines at the molecular scale to optimize both energy transfer speed and range over macroscopic distances in artificial light-harvesting architectures.
I. INTRODUCTION
Over the last decades, research into the role of coherent excitonic delocalization in the dynamics of photosynthetic membranes has shown that strong coherent coupling in subunits of tightly coupled pigments can result in short-ranged excitonic delocalization in the steady state [1][2][3][4][5][6]. Delocalization within these domains, typically restricted to individual proteins termed antenna complexes, is essential for modeling transient and steady-state optical spectra of the full light-harvesting ensemble. Additionally, excitonic delocalization within antenna complexes is a crucial ingredient for a modular description of dynamics over longer distances and timescales [7][8][9], which, as observed experimentally [10][11][12][13], can be interpreted as a series of incoherent transfer steps described by simple rate processes. These energy-transfer rates depend on the properties of the states involved, and thereby rely heavily on the steady-state excitonic delocalization within antenna complexes, as has been shown by numerically exact calculations [4,5,14].
The modular architecture found in biological light-harvesting membranes, whereby antenna complexes containing a few pigments self-aggregate into larger structures, offers the potential for artificial solar energy conversion and molecular electronics based on such a modular design [15][16][17][18]. This is made possible by the high degree of experimental control that is today available for the integration of synthetic and biological structures and for the directed assembly of photosynthetic antenna complexes isolated from living organisms [19][20][21][22][23][24] or of supramolecular dye arrays [25][26][27][28][29][30][31]. To facilitate the realization of these devices, however, a theoretical understanding is needed of the mechanisms involved in the to-date unexplained observations of large diffusion lengths in several of these light-harvesting architectures. For instance, the observed micron-scale diffusion of excitations in nanofabricated arrays of purple bacteria antenna complexes and phycobilisome proteins [19][20][21] exceeds by more than one order of magnitude the theoretical expectation for diffusion based on experimentally determined parameters under physiological conditions. On the one hand, this shows that an important enhancement of the diffusion can be achieved with hybrid technologies. On the other hand, it stresses the necessity for theory to establish and verify physical design principles by which delocalization of electronic excitations over a few pigments can enable the observed long-range energy transfer.
One viable strategy for improving the diffusion lengths of excitons is to extend their lifetime, thus allowing them to propagate across longer distances. Recent studies on the role of dark states in solar energy conversion have provided valuable insight into the potential advantages of protecting excitations from losses due to fluorescence in order to promote charge separation [32][33][34][35][36]. These models, though, do not consider the microscopic origin of the interactions, thereby omitting the conditions that enable or inhibit the active participation of dark states in the dynamics. Because dark states cannot be excited directly by light and typically do not couple efficiently to the propagating bright states, their participation in the exciton propagation is often overlooked and requires a careful reconsideration of the typical models used to describe resonant energy transfer [37,38]. A further challenge is presented by the need of accounting for non-radiative decay due to the interaction between electronic excitations and vibrational motion of pigment molecules, which is responsible for the usually low fluorescence yield of light-harvesting complexes [39,40]. In fact, the competition between radiative and non-radiative decay channels can drastically modify the dissipation landscape, leading to scenarios in which optically dark states have a much shorter lifetime than bright states. Moreover, the dynamics of excitons can be influenced by the presence of correlations between the local vibrational environments of each pigment. Most research on the topic has been focused on clarifying the extent to which these correlations influence the ultrafast spectroscopy of excitons, often leading to contrasting predictions [41]. However, more relevant for our case, weak inter-site correlations can be expected to play a role on much longer timescales too and thus influence non-radiative decay.
In the present work, we provide a theoretical model that identifies the desirable features of spectral structures and exciton delocalization within unit cells (subunits of tightly coupled pigments) that support efficient energy transfer. This model builds on the formation and participation of dark states in the dynamics in order to achieve long-range diffusion across arrays of these subunits. We show how this is possible by a combination of excitonic delocalization within unit cells and close proximity between unit cells. Identifying each LH2 complex as a unit cell of the transfer chain, we show that a model that can explain the long-range energy propagation reported in LH2 arrays does not need to resort to previously hypothesized [42][43][44] long-range quantum coherence involving several LH2 complexes [39,[45][46][47][48]. Indeed, we demonstrate that a theoretical description that is consistent with available experimental data concerning structure and optical response of the LH2 antenna [39,[45][46][47]49] can be developed, which reproduces the experimentally observed exciton diffusion length [20]. This theoretical description therefore presents desirable features for transport across photosynthetic membranes. The low-energy part of the excitonic spectrum of a unit cell comprises states that are protected against dissipation, while high-energy excitons offer fast pathways for energy transfer to neighboring units by virtue of their delocalization. A combination of these two features allows for robust long-range energy transfer.
The remainder of this work is organized as follows. In Section II.A we present the exactly solvable model which contains the necessary features to discuss long-range exciton propagation across a modular array of excitonic unit cells.
Here we discuss several energy-transfer regimes and their relation to the underlying excitonic properties. In Section II.B we introduce the bacterial antenna complex LH2 and provide a thorough characterization of its excitons relevant to energy transfer. In Section II.C we present the results of our simulations of a linear array of LH2 complexes, and we discuss how the mechanisms presented in Section II.A apply to a real-world application, backed by existing experimental findings. Section III summarizes the main results of this work and sets a broader context in which these can provide useful guidelines for energy transfer design in molecular materials.

A. Exactly solvable model

The beneficial role provided by exciton delocalization and the formation of states protected against dissipation in excitonic energy transfer can be readily understood in a model system made up of dimeric unit cells (Fig. 1a). Let us consider a homogeneous linear array of unit cells, each of them described by a Hamiltonian $\hat H^{(n)}$ (Eq. (1)), where the state $|i_n\rangle$ describes an electronic excitation of the $i$-th pigment belonging to the $n$-th unit cell; its parameters are the pigment energies $\varepsilon_i$ and the intra-dimer coupling $J_{12}$. Energy transfer across the full array is made possible by the interaction between pigments of different unit cells, described by the inter-cell coupling operator $\hat V^{(n,m)}$ with dipole-dipole matrix elements $V_{ij}$. If the single-pigment dephasing rate $\gamma$ and the intra-dimer coupling $J_{12}$ dominate over the inter-dimer dipole-dipole couplings $V_{ij}$ (the hierarchy of Eq. (2)), then delocalized intra-dimer excitons defined by $\hat H^{(n)}|\alpha_n\rangle = E_\alpha|\alpha_n\rangle$ form. These may be used to describe the (incoherent) energy transfer between adjacent dimers [8]. We refer to "incoherent" or "classical" transfer interchangeably, meaning that inter-unit-cell coherences of the density operator of the full chain are negligible ($\langle i_n|\rho|j_m\rangle \approx 0$ for $n \neq m$), which then results in classical diffusion across subunits exhibiting local coherent dynamics. The hierarchy of interactions in Eq. (2) is commonly fulfilled in photosynthetic membranes and nano-engineered arrays, where pigments aggregate in antenna complexes within sub-nanometer distances, while the inter-complex distances can span several nanometers, as we will discuss later in detail. For the configuration of antiparallel transition dipoles (i.e. $d_1 = -d_2 = d\,e_1$, indicated by red arrows in Fig. 1a), the $n$-th dimeric unit supports a dark (bright) exciton $|d_n\rangle$ ($|b_n\rangle$) given by the symmetric (antisymmetric) coherent superposition of single-pigment states, namely $|d_n\rangle, |b_n\rangle = (|1_n\rangle \pm |2_n\rangle)/\sqrt{2}$ when $\varepsilon_1 = \varepsilon_2$. Within this framework, a quantum master equation description of the full chain Hamiltonian in the presence of dephasing and relaxation mechanisms [50] can be replaced by the classical rate equations (Eq. (3)) for the population $p_n^\alpha$ of the $\alpha$-th exciton on the $n$-th dimeric unit cell. Here pairs of subindices $\alpha\beta$ ($\alpha\alpha'$) label excitons on different (the same) unit cells. The rates that describe the transfer of excitations between unit cells, $W_{\alpha\beta}$, their overall decay rate $\Gamma_\alpha$, and their thermalization rate $R_{\alpha\alpha'}$, depend on the characteristics of the quantum states and their environments within these cells. Injection of excitations into the array can be spatially dependent and is taken to occur with rates $I_n^\alpha$ on site $n$.

Figure 1. (a) Linear chain of dimerized unit cells separated by a distance $l$, consisting of interacting transition dipole moments; sketch of the geometry and level diagram of two neighboring unit cells. Energy transfer between subunits is considered to be incoherent. (b) Distance dependence of the couplings between unit-cell excitons. For large distances (far field, $l/l_0 \gtrsim 3.2$), the coupling between bright states $V_{bb}$ dominates (red line). At short distances, dark states are mainly involved in the energy transfer, which proceeds through the couplings $V_{dd}$ and $V_{bd}$ (blue and purple lines). Due to these couplings, the stationary populations $\bar p_b$ and $\bar p_d$ deviate from those in the thermal equilibrium regime that characterizes the far-field limit (gray dotted line). (c) Effective decay rate $\bar\Gamma$ (left) and diffusion length $l_{\rm diff}$ (right) as a function of the local exciton-phonon coupling, quantified by $R = \sqrt{R_{bd}R_{db}}$. The inter-dimer distance is fixed to $l/l_0 = 1.78$, corresponding to the yellow line in (a). For low fluorescence yield and anti-parallel dipoles (blue line), the coupling to local phonons is beneficial for long-range energy transfer as a consequence of a reduced decay rate. A larger fluorescence yield (red line) or a change in exciton symmetry (green line) leads to an unfavourable scaling with respect to $R$: coupling to local phonons hinders long-range energy transfer. The model parameters take the values $\gamma = (30\ \mathrm{fs})^{-1} = 177\ \mathrm{cm}^{-1}/\hbar$, $l_0 = 1$ nm, $d = 5$ D.

The rates $W_{\alpha\beta}$ can be obtained from the overlap between homogeneous lineshape functions; they depend on the coupling matrix elements $V_{\alpha\beta} = \langle\alpha_n|\hat V^{(n,n+1)}|\beta_{n+1}\rangle$ between unit-cell eigenstates and on the relative dephasing rate $\gamma_{\alpha\beta}$ between these states via Eq. (4), as explained in more detail in Appendix D. The dephasing rate between non-overlapping excitons, $\gamma_{\alpha\beta} = \gamma_\alpha + \gamma_\beta$, is the sum of the linewidths $\gamma_\alpha = \sum_{\alpha'} R_{\alpha'\alpha}/2$, which are typically dominated by pure dephasing $R_{\alpha\alpha}$ over the intra-unit-cell thermalization rates $R_{\alpha'\neq\alpha}$. The thermalization rate $R_{\alpha'\neq\alpha} \propto \mathcal{J}(|E_\alpha - E_{\alpha'}|)\,|n(E_\alpha - E_{\alpha'})|$ is proportional to the phonon spectral density $\mathcal{J}(\hbar\omega)$ and to the thermal boson occupation number $n(\hbar\omega)$ across the excitonic manifold. This leads to a Boltzmann distribution of the excitons when injection, loss and inter-unit-cell transfer are much slower than thermalization. Excitonic pure dephasing $R_{\alpha\alpha}/2 = \sum_i |\langle i_n|\alpha_n\rangle|^4\,\gamma \equiv P_\alpha^{-1}\gamma$ is slower than the pigment's pure dephasing $\gamma$ by a factor given by the inverse participation ratio $P_\alpha^{-1}$ [51]. Consequently, excitonic delocalization ($P_\alpha > 1$) results in slower dephasing rates for unit-cell excitons in comparison to individual pigments, which, based on Eq. (4), implies an enhancement of the transfer rates between excitons in neighboring units with increasing unit-cell delocalization. For this dimeric system, where $P_\alpha = 2$, delocalization of the states $|b_n\rangle$ and $|d_n\rangle$ over two pigments results in a two-fold speed-up of the inter-dimer transfer rates $W_{dd}$ and $W_{bb}$, as compared to unit cells where strong dephasing $\gamma$ or mild coupling $J_{12}$ prevents the formation of delocalized excitons $|\alpha_n\rangle$. Excitonic delocalization within unit cells also redistributes the optical transition dipole strength of individual pigments, resulting in an exciton fluorescence rate $\chi_\alpha\Gamma_{\rm rad}$, where $\Gamma_{\rm rad}$ is the single-pigment fluorescence rate and $\chi_\alpha = \sum_{ij}\langle\alpha_n|i_n\rangle\,\chi_{ij}\,\langle j_n|\alpha_n\rangle$ characterizes the optical brightness of an exciton. The brightness quantifies the number of sites participating in the fluorescence from exciton $\alpha$ [52]; the matrix $\chi_{ij}$ is determined by Eq. (5), in which spherical Bessel functions of the first kind $j_\nu$ appear, $\lambda$ is the wavelength associated to the pigment's $Q_y$ transition, and $\mathbf{r}_{ij} = r_{ij}\,\mathbf{n}_{ij}$ is the relative position of pigments $i$ and $j$. In the limit $r_{ij} \ll \lambda$, the brightness reduces to the usual measure of superradiance $|D_\alpha|^2/d^2$, i.e. the relative dipole strength, where $D_\alpha = \sum_i \langle i|\alpha\rangle\,d_i$, leading to a superradiant $|b\rangle$ ($\chi_b \approx 2$) and a subradiant $|d\rangle$ ($\chi_d \approx 0$) exciton for the dimeric unit cells of Fig. 1a.
Two ideas that will play a major role later are worth stressing at this point. First, we note that, while dark states do not couple significantly to electromagnetic fields, they still play a central role in the energy transfer dynamics across different unit cells. Intuitively, one expects that, if the distance between unit cells is much larger than their internal size ($l \gg l_0$), two neighboring dimers couple via their global dipoles $D_\alpha$. However, if the dimers are placed sufficiently close that their distance becomes comparable to their spatial extent ($l \sim l_0$), the dipole approximation for their mutual interaction breaks down and states with vanishing dipole strength can start to interact via their higher multipole moments, thus gaining some coupling strength $V_{dd}$. This interaction can even exceed that of the bright states, $V_{bb}$, in certain configurations, as shown in Fig. 1b, where the coupling strengths $V_{\alpha\beta}$ are plotted against the inter-dimer distance for the arrangement shown in Fig. 1a. For distances $l \lesssim 3.2\,l_0$, dark states couple more strongly than bright states. As a result, energy transfer through the dark manifold can be achieved much faster than through its bright counterpart. Secondly, we note that typical light-harvesting complexes show a rather low quantum yield of fluorescence $\phi$, i.e. most of the optically generated excitations are lost through non-radiative decay channels. This means that dark states, although protected against fluorescence, could easily be more dissipative than bright states, as we show below. As discussed in Appendices B and C, non-radiative decay rates are influenced by the presence of correlations in the vibrational environments of single pigments. When these correlations are taken into account, the non-radiative decay rate is distributed among the excitons analogously to what happens to the radiative rates (i.e. superradiance). In fact, static correlations between local vibrational environments can be interpreted as arising from the presence of delocalized vibrations coupling to different sites (see Appendix C for a detailed explanation). Under these circumstances, the excitonic decay rates gain a prefactor $\kappa_\alpha$ which depends on the phase with which each site contributes to the excitonic wavefunction. Depending on the specific exciton, localized excitations can interfere constructively (destructively) to yield non-radiative rates which can be larger (smaller) than their single-pigment counterpart. In our case, the non-radiative decay rate of the symmetric (antisymmetric) state $|d_n\rangle$ ($|b_n\rangle$) is enhanced (reduced) by a factor $\kappa_{d(b)} = 1 \pm e^{-l_0/r_c}$ with respect to its single-pigment value $\Gamma_{\rm non\text{-}rad}$, where $r_c$ is the correlation length of the pigment's vibrational environment. Thus, the total decay rate of exciton $|\alpha_n\rangle$ is given by $\Gamma_\alpha = \chi_\alpha\Gamma_{\rm rad} + \kappa_\alpha\Gamma_{\rm non\text{-}rad}$. Note that, for large correlation lengths, the non-radiative decay from the bright state can be significantly reduced compared to the one from the dark state, i.e. $\kappa_b \ll \kappa_d$. Since both radiative and non-radiative decay in light-harvesting complexes are typically much slower than exciton thermalization, the decay of photoexcitations in a single light-harvesting unit occurs from a thermalized exciton distribution with the average rate $\bar\Gamma_{\rm th} = \sum_\alpha \Gamma_\alpha\,p^{\rm th}_\alpha$ (Eq. (6)), where $p^{\rm th}_\alpha \propto e^{-E_\alpha/k_B T}$.
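The breakdown of the dipole approximation at short inter-cell distances can be checked numerically by summing point-dipole couplings between the constituent pigments and rotating to the bright/dark exciton basis. The sketch below uses a toy stacked-dimer geometry rather than the exact arrangement of Fig. 1a; in this symmetric placement $V_{bd}$ vanishes, and the dark-dominated regime sets in below $l \approx 1.4\,l_0$ instead of the $3.2\,l_0$ quoted above, illustrating that the crossover distance is geometry-dependent.

```python
import numpy as np

def dipole_coupling(d1, d2, r):
    """Point-dipole interaction between two transition dipoles separated
    by r (prefactor omitted, arbitrary units)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (d1 @ d2 - 3.0 * (d1 @ rhat) * (d2 @ rhat)) / rn**3

l0 = 1.0                                           # intra-dimer distance
d1, d2 = np.array([0, 1, 0.]), np.array([0, -1, 0.])  # antiparallel dipoles

for l in (1.1, 1.2, 1.5, 2.0, 3.0, 5.0):           # inter-dimer distances
    # pigments stacked along y within each dimer; dimer B shifted by l in x
    pos = {('A', 1): np.array([0, -l0/2, 0.]), ('A', 2): np.array([0, l0/2, 0.]),
           ('B', 1): np.array([l, -l0/2, 0.]), ('B', 2): np.array([l, l0/2, 0.])}
    dip = {1: d1, 2: d2}
    V = np.array([[dipole_coupling(dip[i], dip[j], pos[('B', j)] - pos[('A', i)])
                   for j in (1, 2)] for i in (1, 2)])
    # rotate to excitons: |d> = (|1>+|2>)/sqrt(2) (dark for antiparallel
    # dipoles), |b> = (|1>-|2>)/sqrt(2); rows/cols ordered (dark, bright)
    u = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Vex = u @ V @ u.T
    print(f"l = {l:4.1f}  V_dd = {Vex[0,0]:+.4f}  V_bb = {Vex[1,1]:+.4f}")
```

At large $l$ the dark-dark coupling falls off as $l^{-5}$ while the bright-bright coupling retains the $l^{-3}$ far-field behavior; at short $l$ the higher multipoles make $V_{dd}$ the larger of the two.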
This average decay rate is an experimentally accessible quantity, as is the quantum yield of fluorescence $\phi = \sum_\alpha \chi_\alpha\Gamma_{\rm rad}\,p^{\rm th}_\alpha/\bar\Gamma_{\rm th}$. The latter only determines the overall relative contribution of radiative and non-radiative decay, whereas the factors $\chi_\alpha$ and $\kappa_\alpha$ determine how these rates are distributed across the excitonic manifold. So far, we have discussed how delocalization, transition dipole geometry and environmental correlations determine the dissipation properties of the excitonic states within unit cells, radiative and non-radiative alike, and how the finite size of the unit cell in densely packed arrays opens up the possibility to engage dark states in the propagation dynamics. We now have all the necessary ingredients to determine how these properties influence the diffusion length of excitons across the light-harvesting array. As we are interested in diffusion over macroscopic distances, it is useful to describe the position of a dimeric unit cell in terms of a continuous variable $x = nl$ as $l \to 0$. Thus, bright and dark state populations at discrete sites, $p_n^\alpha(t)$, are replaced by the respective densities $p_\alpha(x,t)$, and Eq. (3) takes the form of two coupled continuous diffusion equations, which allow for an analytical solution of the steady-state density $p_\alpha(x,\infty)$, as detailed in Appendix A. If we assume that injection and decay happen on a much slower timescale than transfer and relaxation, the solutions for local driving at $x = 0$ take the simple form $p_\alpha(x,\infty) = (\bar p_\alpha/(2 l_{\rm diff}))\,e^{-|x|/l_{\rm diff}}$, with $\bar p_b + \bar p_d = 1$. The diffusion length is given by $l_{\rm diff} = l\sqrt{\bar W/\bar\Gamma}$, introducing the effective transfer and decay rates $\bar W$ and $\bar\Gamma$ as weighted averages over the populations $\bar p_\alpha$. These are determined by the ratio $\bar p_d/\bar p_b$, which only depends on the rates that provide mixing between the bright and dark manifolds. In particular, we observe that the presence of symmetric inter-dimer bright-to-dark transfer $W_{db} = W_{bd}$ causes a deviation of the steady-state populations from their thermal values, resulting in an increase of the stationary dark-state population. Thus, the onset of this non-equilibrium state, which can be observed at short inter-dimer separations $l$ when the rates $W_{bd} = W_{db}$ become comparable to $R_{bd}$ and $R_{db}$ (Fig. 1b, gray dotted line), leads to a modification of the effective transfer and decay rates with respect to their equilibrium values. In order to study the effect of dark states on energy transfer, we focus on the dependence of the diffusion length $l_{\rm diff}$ on the average intra-dimer relaxation rate $R = \sqrt{R_{bd}R_{db}}$, quantifying the strength of local electron-phonon coupling, at a fixed lattice spacing $l$ such that bright-to-dark transfer and relaxation take place on a similar timescale and exciton transfer through dark states is faster than through bright states (Fig. 1b, yellow vertical line).
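The dependence of $l_{\rm diff}$ on the mixing rates can be reproduced with a few lines of code. The closure below for $\bar p_d/\bar p_b$ and for the averaged rates is one plausible reading of the rate model (the index convention for $R_{bd}$, $R_{db}$ and the factor of two for the two neighbors are our assumptions), so the numbers are qualitative only; the example parameters mimic the low-fluorescence-yield case with a fast but lossy dark state.

```python
import numpy as np

def diffusion_length(W_bb, W_dd, W_bd, W_db, R_bd, R_db,
                     Gamma_b, Gamma_d, l=1.0):
    """Two-state (bright/dark) closure of the rate model on a chain.
    W_xy: inter-cell transfer into state x from state y on a neighbor;
    R_bd (R_db): intra-cell relaxation d -> b (b -> d); Gamma_x: total
    decay rates. Returns l_diff = l * sqrt(W_eff / Gamma_eff)."""
    to_dark = R_db + 2.0 * W_db        # leaves b towards d (two neighbors)
    to_bright = R_bd + 2.0 * W_bd      # leaves d towards b
    p_d = to_dark / (to_dark + to_bright)
    p_b = 1.0 - p_d
    W_eff = p_b * (W_bb + W_db) + p_d * (W_dd + W_bd)
    G_eff = p_b * Gamma_b + p_d * Gamma_d
    return l * np.sqrt(W_eff / G_eff)

# Lossy, fast dark state (phi ~ 0.1): stronger local relaxation shifts
# population to the slower but longer-lived bright state and extends l_diff.
for scale in (0.1, 1.0, 10.0):
    ld = diffusion_length(W_bb=0.2, W_dd=1.0, W_bd=0.5, W_db=0.5,
                          R_bd=scale * 1.0, R_db=scale * 0.4,
                          Gamma_b=1e-3, Gamma_d=1e-2)
    print(f"relaxation scale {scale:>4}:  l_diff = {ld:.1f} l")
```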
First, we consider the ideal situation in which the main decay channel is radiative (φ = 0.9): Most of the dissipation takes place at the low-energy bright state, which propagates more slowly than the higher-energy dark exciton. In this scenario, long range energy transfer clearly benefits from the establishment of a non-equilibrium state with more population in the dark state. In fact, when reducing R, e.g. by decreasing the local exciton-phonon coupling, we are moving population from the slow, dissipative bright state to the fast, less dissipative dark state. Furthermore, we are at the same time reducing the effective decay rate Γ and increasing the effective transfer rate W , resulting in larger diffusion length l diff (Fig. 1c, red line).
In a more realistic scenario, we expect non-radiative decay to play a much bigger role. Let us then set φ = 0.1, closer to values observed for biological light-harvesting complexes. In doing so, we also adjust the single-pigment decay rates $\Gamma_{\rm rad}$ and $\Gamma_{\rm non\text{-}rad}$ to ensure that the effective decay rate at thermal equilibrium, $\bar\Gamma_{\rm th}$ from Eq. (6), remains the same as in the case just considered. Since now the main decay pathway is the non-radiative decay from the fast high-energy exciton, an increase in the relaxation rate R would lead to a longer-lived excitation, but also to a slower propagation. However, the former prevails: the reduction of the effective decay rate $\bar\Gamma$ is large enough to ensure a longer-ranged propagation (Fig. 1c, blue line). In other words, by increasing the coupling to local phonons, we counter-intuitively increase the spatial extent of energy transfer, at the price of making it slower. This energy transfer regime can be seen as a natural extension, to systems with realistic non-radiative decay channels, of the dark-state protection scheme proposed in the context of quantum photocells [32][33][34][35][36] and recently applied to energy transfer [53]. Our generalized "dark"-state protection mechanism makes use of states that are protected against non-radiative decay (and are therefore "dark") to extend the lifetime of the excitons, which can then diffuse across longer distances.
Lastly, we observe that, while non-radiative decay pathways seem to fundamentally limit the transport efficiency of light-harvesting architectures, one can work around this constraint by exploiting the symmetry of the excitonic wavefunction. Let us consider an arrangement in which the dipoles within each dimeric unit cell are parallel. In this situation, the intra-dimer coupling $J_{12}$ becomes negative and the low-energy exciton is now the symmetric state $(|1_n\rangle + |2_n\rangle)/\sqrt{2}$. While most of the optical dipole strength still resides in the low-energy level, this state now also becomes the most sensitive to non-radiative decay. Thus, we are left with a high-energy exciton which shows little dissipation and fast transfer. As shown in Fig. 1c (green line), this situation is similar to the one that we considered for φ = 0.9. The important difference, however, is that in this case non-radiative decay is fully taken into account, but its effect is mitigated by exploiting excitonic delocalization within a unit cell and the ensuing redistribution of decay pathways.
B. Bacterial light-harvesting units
The simple dynamical model considered so far allows for analytical expressions that facilitate the identification of the different mechanisms on which we base our explanation of the to-date unexplained experimental observations of long-range energy transfer in nano-engineered arrays of LH2 photosynthetic complexes of the purple bacterium Rb. sphaeroides [20]. These complexes consist of a protein holding two concentric bacteriochlorophyll (BChl) rings (Fig. 2a), with the inner B850 ring consisting of 18 strongly interacting BChl pigments, which at room temperature exhibit an excitonic delocalization across about 3-6 pigments, as determined by superradiance measurements [39,[45][46][47][48]. This subunit mediates the transfer between LH2 complexes under physiological conditions, whereas the B800 pigments support localized excitations which extend the absorption range of the LH2 complex and regulate oxidation [55].
We consider the full B850 Hamiltonian $\hat H^{(n)}$ describing the interactions among the $Q_y$ transitions of its 18 BChls, sketched with red arrows in Fig. 2a, and study the delocalization properties of the single-ring excitons $|\alpha\rangle$, with $\alpha = 1, \ldots, 18$. For realistic LH2 complexes, we need to consider different excitonic energies $E_\alpha$ arising from realizations of the pigment energies $\varepsilon_i$ in order to describe the inhomogeneities (static disorder) due to local protein configurations. Our choices of spectral density, nearest-neighbor couplings, static disorder, geometry and magnitude of the transition dipoles are justified by previous independent analyses of experimental observations [6,39,51,[56][57][58][59] and, when incorporated into our model, they reproduce the observed absorption spectra and superradiance enhancement, as shown in Fig. 2b-c (for details of the calculation and parameters, see Appendices D and E).
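As a rough check of these delocalization numbers, one can diagonalize a disordered nearest-neighbor ring. The sketch below uses a uniform coupling of 300 cm⁻¹ and Gaussian site disorder of 200 cm⁻¹ as stand-in values (the paper's Hamiltonian has alternating couplings, specific dipole orientations and a fitted disorder model), and computes the participation ratio $P_\alpha = (\sum_i |\langle i|\alpha\rangle|^4)^{-1}$ across the manifold.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 18          # B850 pigments
J = 300.0       # nearest-neighbor coupling (cm^-1), stand-in value
sigma = 200.0   # static disorder of site energies (cm^-1), assumed

def ring_excitons():
    """Diagonalize one disorder realization of a nearest-neighbor ring and
    return exciton energies and participation ratios (energy-ordered)."""
    eps = rng.normal(0.0, sigma, N)
    H = np.diag(eps)
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = J
    E, C = np.linalg.eigh(H)            # columns of C are excitons |alpha>
    P = 1.0 / np.sum(C**4, axis=0)      # participation ratio P_alpha
    return E, P

E, P = np.mean([ring_excitons() for _ in range(1000)], axis=0)
print("disorder-averaged participation ratios across the manifold:")
print(np.round(P, 1))
```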
This parametrization of the B850 ring results in cooperative fluorescence from $\langle\sum_\alpha \chi_\alpha p^{\rm th}_\alpha\rangle \approx 3$ pigments on average (where $\langle\cdot\rangle$ represents the average over static disorder), which is consistent with the experimental observations of superradiance in LH2 [39]. Moreover, fixing the decay rate of an isolated LH2 complex and the quantum yield of fluorescence to the experimentally observed values $\bar\Gamma_{\rm th} = (1\ \mathrm{ns})^{-1}$ and $\phi = 0.1$ allows us to obtain the distribution of decay rates $\Gamma_\alpha$ across the exciton manifold. A modest value of $r_c = 5$ Å for the correlation radius of the pigments' vibrational environments [1,60] is sufficient to generate a significant redistribution of the dissipation rates to the higher-energy part of the excitonic manifold (Fig. 2c), as we would expect for an anti-parallel arrangement of neighboring dipoles. The participation ratio shown in Fig. 2c exhibits a maximum $P_\alpha \approx 8$, also consistent with previous estimates [39], which underlies excitonic delocalization constrained to small portions of the B850 ring (Fig. 2a).
We should note that for realistic LH2 center-to-center distances $l$, the aggregation into arrays does not disrupt the excitonic manifold of single rings, as can be expected from the similarity of the optical spectra of diluted and densely packed arrays [20]. Typical physiological conditions and lipid-reconstituted membranes exhibit a center-to-center distance $l \approx 8$ nm, with increasing inter-complex distances for larger lipid concentration [61][62][63]. On the other hand, the process of nano-fabrication of LH2 arrays exploits host-guest interactions on a nano-imprinted substrate and does not involve lipids [20], which allows us to assume the 6.2 nm diameter of the LH2 β-helices of Rb. sphaeroides [64] as the absolute minimum for $l$. Hence, it is reasonable to consider center-to-center separations of $l \gtrsim 6.5$ nm in the nano-engineered arrays.
To assess the robustness of the single-ring excitonic manifold against the coherent interaction between neighboring LH2s, we proceed to diagonalize the full two-ring Hamiltonian, $(\hat H^{(1)} + \hat H^{(2)} + \hat V^{(1,2)})|\bar\mu\rangle = E_{\bar\mu}|\bar\mu\rangle$, and present in Fig. 2c the average over static disorder of the participation ratios, relative dipole strengths and dissipation rates of the two-ring excitons. Notice that even though the participation ratio $P_{\bar\mu}$ in the LH2 pair is slightly larger, very minor changes occur in the distributions of optical brightness $\chi_{\bar\mu}$ and dissipation rates $\Gamma_{\bar\mu}$ with respect to the single-ring eigenstates. As a consequence, optical absorption spectra are only slightly affected by the coherent interaction between LH2 rings (Fig. 2b). This result confirms that the coherent electronic interaction for realistic values of $l$ does not perturb significantly the excitonic structure of isolated rings. The robustness of the single-ring excitonic manifold can be understood by noticing that the maximum coupling between any two LH2 excitons residing on different rings (averaged over static noise and relative rotations of coplanar rings) is below 40 cm⁻¹ even for $l = 6.5$ nm, which is much smaller than the nearest-neighbor interactions within each ring, ≈ 250-350 cm⁻¹ [51,57]. One last argument in favor of the robustness of the single-ring excitonic manifold to the coherent coupling between rings is provided by looking at the residual inter-ring coherence after exciton thermalization.

Figure 2. (a) Structure of the LH2 complex [49]: B850 (violet) and B800 (blue) rings are composed respectively of 18 and 9 BChls, whose $Q_y$ transition dipoles are indicated by red arrows. Electronic excitations are partially delocalized and undergo relaxation within the exciton manifold, decay to the ground state or transfer to neighboring rings. (b) Experimental absorption spectrum [54] (gray dots) and theoretical fit of the B800 and B850 bands of LH2. The inclusion of coherent inter-ring couplings in an LH2 pair (green to violet dashed lines) leads only to small deviations from the single-ring absorption (yellow line). (c) Energy distribution of participation ratio (top), brightness (center) and decay rates (bottom) of B850 excitons, for a single ring (yellow line) and for a coherently coupled B850 pair at different inter-ring distances (green to violet dots). For a single ring, the underlying distribution is shown (yellow shading). Most delocalized states lie in the mid-energy range, superradiant excitons ($\chi_\alpha > 1$, above the gray dashed line) occupy the low-energy end of the spectrum, whereas the states with stronger dissipation are high-energy dark states. Two-ring excitons are slightly more delocalized, but all two-ring quantities lie within the single-ring distributions. (d) Steady-state density matrix of two coherently coupled B850 rings undergoing local relaxation and dephasing, in the single-ring exciton basis, for $l = 6.5$ nm (green) and 8.5 nm (violet). Excitons are organized in ascending energy and grouped according to the ring to which they belong. Populations are set to zero to increase contrast. (e) Average energy transfer rates $W_{\alpha\beta}$ between two B850 rings for $l = 6.5$ nm (green) and 8.5 nm (violet). Thicker and darker lines correspond to faster transfer. The three fastest transfer pathways are highlighted (red dashed lines). For short $l$, the excitons in the mid-to-high energy range are mostly involved in energy transfer. Averages are performed over $10^4$ realizations of static disorder.
To do so, we set up a Lindblad equation describing the coherent interaction between two rings together with local thermalization and dephasing,

ρ̇ = −i[Ĥ⁽¹⁾ + Ĥ⁽²⁾ + V̂⁽¹,²⁾, ρ] + (D⁽¹⁾ + D⁽²⁾)ρ, (11)

and solve for the steady state ρ_ss. Since both radiative and non-radiative decay take place on a much slower timescale, we neglect them here. The constant of proportionality between the thermalization rates R_αα′ and the spectral density J(|E_α − E_α′|) estimated via fluorescence line-narrowing experiments [60] is chosen such that the ≈ 200 fs timescale of equilibration in LH2 [56] is reproduced. As we are interested in demonstrating that the stationary inter-ring coherence is typically negligible, we average the absolute value of the matrix elements |ρ_ss,αβ| over static disorder to avoid ensemble dephasing. The results are shown in Fig. 2d for two inter-ring distances. Even for the shortest distance (l = 6.5 nm), the average steady-state coherence |ρ_ss,αβ| between any two single-ring excitons |α⟩₁ and |β⟩₂ is much smaller than 1/2, which is the value it would take for the maximally coherent superposition (|α⟩₁ + |β⟩₂)/√2.

At this point, we have established that for center-to-center distances l ≥ 6.5 nm, the incoherent energy transfer between neighboring B850 rings can be treated based upon the single-ring eigenstates. Now we proceed to analyze the mechanisms that underlie this incoherent energy transfer between neighboring LH2s. In Fig. 2e we show how different single-ring excitons residing on two neighboring LH2 rings are connected via the average transfer rates W_αβ. As we saw in the previous section, a shorter separation between unit cells leads to a substantial participation of optically dark states in the energy transfer process, which can surpass the bright states in terms of transfer speed. In fact, also in this case, when reducing the distance l from 8.5 nm to 6.5 nm, the fastest transfer pathways (red dashed lines) shift from the low-energy part of the spectrum, where most of the dipole strength resides, to higher-energy excitons, which show larger delocalization. This finding underlines that the interaction in densely packed arrays does not depend on the exciton transition dipoles but rather on their delocalization, as quantified by the participation ratio.
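To make the structure of Eq. (11) concrete, the following is a minimal numerical sketch of a Lindblad steady-state calculation for two coherently coupled few-level units with local thermalizing jumps and pure dephasing. All Hamiltonian parameters and rates are arbitrary placeholders, not the LH2 values used in the text; the point is only to illustrate how ρ_ss is obtained and how small the residual inter-unit coherence comes out.

```python
import numpy as np

# Toy version of Eq. (11): two "rings" with two exciton levels each, coupled
# coherently by V, with intra-ring thermalization and pure dephasing.
# All numbers are hypothetical placeholders (cm^-1-like units), not LH2 values.
E = np.array([0.0, 200.0, 20.0, 220.0])      # levels |a1>, |b1>, |a2>, |b2>
H = np.diag(E).astype(complex)
V = 30.0                                      # weak inter-ring coupling
for i in (0, 1):
    for j in (2, 3):
        H[i, j] = H[j, i] = V

kT = 208.0                                    # ~300 K in cm^-1
def thermal_jumps(lo, hi, rate_down=50.0):
    """Thermalizing jump operators within one ring, obeying detailed balance."""
    down = np.zeros((4, 4)); down[lo, hi] = np.sqrt(rate_down)
    up = np.zeros((4, 4)); up[hi, lo] = np.sqrt(rate_down * np.exp(-(E[hi] - E[lo]) / kT))
    return [down, up]

Ls = thermal_jumps(0, 1) + thermal_jumps(2, 3)
Ls += [np.sqrt(20.0) * np.diag(np.eye(4)[k]) for k in range(4)]  # pure dephasing

# Column-stacking convention: vec(A rho B) = (B.T kron A) vec(rho)
I4 = np.eye(4)
Liouv = -1j * (np.kron(I4, H) - np.kron(H.T, I4))
for L in Ls:
    LdL = L.conj().T @ L
    Liouv += np.kron(L.conj(), L) - 0.5 * (np.kron(I4, LdL) + np.kron(LdL.T, I4))

# Steady state spans the kernel of the Liouvillian
w, v = np.linalg.eig(Liouv)
rho_ss = v[:, np.argmin(np.abs(w))].reshape(4, 4, order='F')
rho_ss /= np.trace(rho_ss)

print("inter-ring coherence |rho[a1,a2]| =", abs(rho_ss[0, 2]))  # expected << 1/2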
C. Nano-engineered LH2 arrays
Armed with these facts, we now proceed to discuss the origin of the long-range diffusion observed in [20]. In this experiment, simultaneous excitation with a continuous-wave, diffraction-limited laser beam and imaging of the spatial profile of emission through confocal fluorescence detection enabled the read-out of exciton propagation lengths of up to 2 µm in quasi-1D assemblies of LH2 complexes. To the best of our knowledge, theoretical models could only explain such diffusion by ignoring static disorder and underestimating dephasing [6,65,66], resulting in long-range excitonic delocalization across approximately 40 pigments [42][43][44], a value that is in conflict with the experimental observations [39,[45][46][47][48].
In order to examine this experiment, we determine the rates of the Pauli master equation, Eq. (3), for stochastic realizations of the pigment energies ε_i and relative orientations of a 1D array of 1001 coplanar LH2 complexes, as shown schematically in Fig. 3a, and study the stationary exciton distribution p^n_α. For moderate excitation power, which allows us to remain in the single-excitation sector, the driving can be modelled to take place on a single ring via incoherent B800-to-B850 energy transfer (Appendix F). Exciton distributions arising from a Gaussian laser profile can then be obtained by convolution of the driving profile with the solution for localized driving (Appendix A).
Despite the presence of static disorder and the multi-level structure of each unit cell, the stationary population distribution across the LH2 array is still characterized by an exponential profile around the ring at which the driving takes place (Fig. 3a). This allows us to characterize the population profile by a single parameter, l_diff. When considering an initial Gaussian beam 400 nm wide (full width at half maximum), our model is able to reproduce the experimentally observed micron-range exciton propagation lengths if l = 6.5 nm (Fig. 3b), while natural distances of 8.0-8.5 nm yield a barely noticeable spread of the exciton density. This result suggests that the LH2 packing density is a key factor in determining the spatial extent of energy transfer. With a distance of l = 6.5 nm, a competition between thermalization and transfer leads to the establishment of a non-equilibrium steady-state exciton population within the antenna units, p̄_α, which has a larger weight on high-energy dark states than the thermal distribution, as shown in Fig. 3c. This is a clear signature that the non-equilibrium transfer across these arrays partially proceeds via high-energy dark states, which, as explained above, rely on excitonic delocalization within each ring unit cell.
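As a simple illustration of how l_diff is extracted, the sketch below fits an exponential profile to synthetic steady-state populations; the profile amplitude and noise level are assumed placeholders, not the actual simulation output behind Fig. 3.

```python
import numpy as np

# Hypothetical steady-state populations p_n on a 1001-site chain with local
# driving at the center: exponentially localized, as found for the LH2 array.
l = 6.5e-9                                   # lattice constant (m), assumed
n = np.arange(1001) - 500
l_diff_true = 0.8e-6                         # assumed diffusion length (m)
p = np.exp(-np.abs(n) * l / l_diff_true)
p *= (1 + 0.05 * np.random.randn(p.size))    # mimic residual disorder-average noise

# Extract l_diff from a log-linear fit on one side of the injection point
x = n[n > 0] * l
slope, _ = np.polyfit(x, np.log(p[n > 0]), 1)
print("fitted l_diff = %.2f um" % (-1 / slope * 1e6))
```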
Despite being optically dark, these high-energy excitons are more sensitive to non-radiative decay than their low-energy bright counterparts, as shown in Fig. 2c. Therefore, energy transfer and decay dynamics have competing effects on the diffusion length: local exciton-phonon interactions leading to exciton thermalization slow down the transfer of excitons by moving population away from the fast-propagating high-energy states, while at the same time granting the excitons more time to propagate, since low-energy excitons are less sensitive to non-radiative decay. Whether this leads to a larger diffusion length or not depends on the specifics of the system under consideration. In order to test this possibility, we artificially slow down exciton thermalization, increasing its timescale from 200 fs to 2 ps. In practice, this corresponds to forcing the steady-state exciton distribution in a given ring, p̄^n_α, to be further away from the thermal equilibrium distribution p^th_α. The resulting shift of population towards the higher excitons leads to shorter-ranged transfer (Fig. 3d, orange dots), revealing that these light-harvesting arrays operate in the generalized "dark" state protection regime. The relevance of such a shelving mechanism is also confirmed by the behavior of the effective transfer rate W and decay rate Γ, defined in analogy with Eqs. (8) and (9). (Note that the presence of static disorder in the array forces us to keep track of the unit-cell index n and to average over both forward and backward transfer rates.) For faster relaxation, i.e. larger coupling to local phonons, both effective transfer and decay rates are significantly decreased, as captured by Fig. 3d. As noticed recently [67], the transfer rate W benefits from the engagement of dark states at small inter-ring separations. Their participation allows for an increase in W that exceeds the one predicted by interactions mediated only by the collective dipoles D^(n)_α and D^(n+1)_β of two neighboring B850 rings. The relevance of the finite size of the unit cells naturally makes the transfer more dependent on their geometrical details and relative arrangement. Indeed, we notice for example that a systematic out-of-plane angle of just 5°, as observed in lipid-reconstituted membranes [64,68], which slightly increases the distance between the closest pigments in neighboring rings, slows down the effective transfer rate and therefore decreases the diffusion length (Appendix F). However, the apparently beneficial involvement of fast-propagating dark states in the dynamics is countered by their sensitivity to non-radiative decay. Thus, in any situation in which we are interested in increasing the spatial extent of energy transfer rather than its speed, we need to consider the presence of realistic (radiative and non-radiative) decay channels and design the energy transfer process accordingly. Closely packed LH2 arrays seem to naturally operate in a parameter regime that sustains a generalized "dark" state protection mechanism, where fast exciton thermalization provides shielding against non-radiative decay.
Finally, we notice from Fig. 3d that our model produces energy transfer rates in excellent agreement with existing theoretical estimates obtained for larger inter-ring separations (l ≳ 7.5 nm) [67,69], matching the conditions typically observed in reconstituted and biological light-harvesting membranes [68,70]. In this regime, inter-complex energy transfer proceeds from a completely equilibrated excitonic manifold (Fig. 3c), far from the non-equilibrium regime in which "dark" state shelving becomes relevant. This suggests that the design guidelines discussed in this work, while relevant for tightly packed nano-engineered systems, might be of secondary importance for more sparsely assembled biological LH2 membranes.
III. CONCLUSIONS
In conclusion, we have shown with the help of an analytically solvable model that room-temperature excitation energy transfer can benefit from quantum dynamics within modular unit cells, and demonstrated that the resulting design principles apply in photosynthetic membranes with realistic physiological parameters, as well as in nano-fabricated architectures. The resulting hybrid "quantum-classical" design can increase both the speed and the propagation range thanks to the participation of dark states enabled by excitonic delocalization within unit cells. On the one hand, improved speed can be achieved when the packing density of unit cells is made sufficiently large for the coupling of individual pigments on different unit cells to benefit from contributions of high-lying dark states. Crucially, close packing allows exciton populations to depart from a thermal distribution, increasing the overall diffusion rate via non-equilibrium energy transfer. On the other hand, the exciton propagation length is extended by intra-unit-cell relaxation, which biases electronic populations towards low-energy excitons that are less sensitive to non-radiative decay and therefore increase the overall time window over which energy transfer can take place. This not only exemplifies the beneficial interplay of quantum coherent dynamics and environmental noise [71][72][73] but also provides basic mechanisms that underpin the micron-range propagation of excitations observed in artificial arrays of LH2 photosynthetic complexes. These can be explained by the speed-up of inter-complex transfer rates induced by dense packing of light-harvesting units and the protection from non-radiative decay provided by low-energy excitons. Although we do not expect this transfer mechanism to be at play in biological LH2 membranes due to their large inter-complex separations, it could be tested on nano-engineered platforms in experiments where the exciton diffusion length is measured for light-harvesting arrays prepared with different LH2 packing densities. Further corroborating evidence for the mechanism that we propose is derived from the experimental observation of reduced exciton lifetimes in reconstituted LH2 membranes compared to isolated complexes [61]. Our model already captures this trend correctly in terms of an increased population of the high-energy excitons, and thus could serve as a solid starting point for a more thorough quantitative analysis of this effect. While the multiscale model used in this work contains all the necessary elements to discuss general energy transfer strategies in light-harvesting arrays, further improvements could be achieved by employing more refined theoretical descriptions of the light-harvesting units [74]. We plan to do so in the future by applying some recently developed numerical techniques allowing for an exact treatment of the non-Markovian exciton dynamics [75].
The fact that the speed and spatial extent of energy transfer can be directly related to excitonic delocalization suggests the possibility of using partial delocalization restricted to single unit cells (due to the magnitude of noise in real room-temperature scenarios) as a resource to optimize the range of propagation of electronic excitations in technological applications, with the goal of outperforming the already extremely high efficiency of natural photosynthesis. The general nature of these design principles hints that this energy transfer scheme might find applications in a broad class of excitonic materials, not limited to the specific architecture explicitly discussed in this work, although it will be the task of future research to probe its technological feasibility.

APPENDIX A: ANALYTICAL SOLUTION OF THE MINIMAL MODEL

In this appendix, we present the main steps leading to the analytical solution of the minimal model describing incoherent energy transfer across a linear array of dimerized unit cells, which locally support quantum delocalization. We start from Eq. (3) in the main text, i.e. the discrete diffusion equation of a linear array composed of dimerized unit cells, each hosting two levels, |b⟩_n and |d⟩_n (bright and dark), which can hop to the neighboring cells and are subject to intra-cell relaxation and fluorescence. Writing explicitly the equations for both the bright and dark components for local injection of excitations at site n = 0, we obtain the coupled rate equations (A1)-(A2), in which p^n_{b(d)}(t) denotes the population of the bright (dark) state of the n-th dimer. The continuum limit is achieved by identifying p^n_α(t)/l = p_α(x,t) (α = b, d) with x = nl and taking the inter-dimer separation l to be vanishingly small, so that x becomes a continuous variable. To simplify the discussion, we assume that the cross-rates are equal, i.e. W_db = W_bd = w, which is the case for the configuration assumed in the main text. If W_bd ≠ W_db, a drift term is introduced in the diffusion equation, and the final exciton distribution is no longer symmetric around the injection point x = 0. Thus, we obtain the two coupled diffusion equations (A3)-(A4). Introducing the vectors p = (p_b, p_d)^T, I = (I_b, I_d)^T and the corresponding rate matrices, we can rewrite Eqs. (A3)-(A4) more compactly as Eq. (A6). Rewriting Eq. (A6) in terms of the Fourier transform p̃(q,t) = ∫ dx e^{−iqx} p(x,t) and considering the stationary state at t → ∞, we obtain the solution as Eq. (A7). The solution can be brought to a much more transparent form if we assume that the decay described by Γ_α takes place on a much slower timescale than inter- and intra-dimer energy transfer. Within this approximation, which is typically satisfied in light-harvesting complexes, we work to leading order in Γ_α, where Γ and p̄_α were defined in Eq. (9) and Eq. (10). This simple form allows us to sum the series in Eq. (A7) to a closed form, where W was defined in Eq. (8). Taking the inverse Fourier transform leads to the final form of the solution, the exponential exciton distribution discussed in the main text. As one would expect, if inter-exciton conversion due to the cross-rates R_bd, R_db and w is much faster than the decay timescale, both the bright and the dark manifold share the same final spatial distribution, differing only by the normalization constant p̄_α. It is easy to check that we would get the same exponential distribution if we considered a diffusion process involving unit cells containing a single level rather than two, with transfer and dissipation rates W and Γ.
Therefore, a dimeric unit cell results in the same type of diffusion as a monomeric one, with the large-scale diffusion properties tunable by changing the local dimer parameters. A completely analogous procedure leads to the solution of the case W_bd ≠ W_db, which we do not present in detail here. The imbalance between these two rates does not change the definitions of p̄_α, W and Γ (which only depend on the average cross-rate w = (W_bd + W_db)/2), but leads to a drift coefficient proportional to Δ = (W_bd − W_db)(p̄_b − p̄_d). As a final result, the exciton distribution is still exponentially localized around the injection point x = 0, but the diffusion lengths for x < 0 and x > 0 differ. Thus, the diffusion is not symmetric. As a possible application, this property might be used to design excitonic wires that are able to switch the dominant direction of diffusion upon small changes of their unit-cell properties, for example implementing artificial photoprotection.
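The equivalence between the dimeric chain and an effective monomeric diffusion process can be checked numerically. The sketch below builds the discrete rate equations of the dimerized chain described above (inter-cell hopping, equal cross-rates, intra-cell relaxation, slow decay, local injection), solves for the steady state, and extracts an exponential diffusion length. All rate values are hypothetical placeholders.

```python
import numpy as np

# Minimal-model check: steady state of the discrete rate equations for a chain
# of N dimerized cells with levels b and d. The rate values are assumed, but
# they respect the structure of the model described in this appendix.
N = 401; n0 = N // 2
W_bb, W_dd, w = 1.0, 3.0, 0.5        # hopping rates (1/ps), assumed
R_db, R_bd = 5.0, 1.0                # intra-cell relaxation d->b and b->d
G_b, G_d = 1e-3, 3e-3                # slow decay rates

idx = lambda n, a: 2 * n + a          # a = 0 (bright), 1 (dark)
M = np.zeros((2 * N, 2 * N))
def add_rate(src, dst, r):            # population flow src -> dst at rate r
    M[dst, src] += r
    M[src, src] -= r

for n in range(N):
    add_rate(idx(n, 1), idx(n, 0), R_db)
    add_rate(idx(n, 0), idx(n, 1), R_bd)
    M[idx(n, 0), idx(n, 0)] -= G_b
    M[idx(n, 1), idx(n, 1)] -= G_d
    for m in (n - 1, n + 1):
        if 0 <= m < N:
            add_rate(idx(n, 0), idx(m, 0), W_bb)
            add_rate(idx(n, 1), idx(m, 1), W_dd)
            add_rate(idx(n, 0), idx(m, 1), w)   # cross-rates W_bd = W_db = w
            add_rate(idx(n, 1), idx(m, 0), w)

I = np.zeros(2 * N); I[idx(n0, 0)] = 1.0       # local injection into the bright state
p = np.linalg.solve(M, -I)                     # steady state: M p + I = 0

p_cell = p[0::2] + p[1::2]                     # total population per unit cell
x = np.arange(N) - n0
slope, _ = np.polyfit(x[x > 50], np.log(p_cell[x > 50]), 1)
print("diffusion length (lattice units):", -1 / slope)
```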
Although in the case of the exactly solvable model we consider a completely homogeneous system, it is possible to show that the presence of moderate static disorder does not modify the exponential shape of the stationary exciton distribution, but only leads to a reduction of the diffusion length. This explains why, also in the case of the LH2 array presented in the main text, we still obtain an exponential distribution. To grasp the effects of static disorder, we consider a linear array of monomeric unit cells with a slight random inhomogeneity in the transfer rates. The transfer rate from site n to site n + 1 (and vice versa) is given by W_n = W + δW_n, where we assume ⟨δW_n⟩ = 0 and ⟨δW_n δW_m⟩ = σ² δ_nm. The average population distribution p(x) for small disorder can be obtained by expanding the solution in the form of Eq. (A7) to second order in δW_n and taking the ensemble average. This leads to an exponential average exciton distribution whose diffusion length is reduced by a correction of order σ²/W², independent of the specific distribution of the rate fluctuations δW_n.
To conclude this section, we note that the stationary solution p(x,∞) for local exciton injection δ(x) is sufficient to determine the exciton profile p′(x,∞) for any other (normalized) injection profile g(x). In fact, defining ĝ(q) as the Fourier transform of g(x), we immediately obtain that the new solution satisfies p̃′(q,∞) = ĝ(q) p̃(q,∞). Transforming back to real space, we obtain the new solution for generic driving as a convolution between the solution for local driving and the generic driving profile, namely p′(x,∞) = ∫ dx′ g(x′) p(x − x′,∞). The same principle allows us to draw conclusions for exciton propagation in an LH2 array upon driving with a Gaussian laser profile, using only the results of simulations for local driving.
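A minimal numerical rendition of this convolution argument, with an assumed diffusion length and a 400 nm Gaussian beam:

```python
import numpy as np

# Convolve the local-driving solution p(x) with a normalized Gaussian beam
# profile g(x); the diffusion length below is an assumed placeholder.
dx = 5e-9                                    # grid step (m)
x = (np.arange(2001) - 1000) * dx
l_diff = 0.8e-6
p_local = np.exp(-np.abs(x) / l_diff) / (2 * l_diff)

fwhm = 400e-9                                # laser spot, full width at half maximum
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum() * dx                            # normalize the driving profile

p_beam = np.convolve(g, p_local, mode='same') * dx   # p'(x) = (g * p)(x)
print("peak broadening factor:", p_local.max() / p_beam.max())
```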
APPENDIX B: MICROSCOPIC ORIGIN OF NON-RADIATIVE DECAY
In this appendix, we derive a microscopic Hamiltonian describing vibrationally induced non-radiative decay of a photo-excited molecular state. We start from the ab initio Hamiltonian of a single chromophore and determine the form of the non-adiabatic coupling at the basis of non-radiative decay in the usual harmonic approximation for intra-molecular vibrations. We then use this result to compute the internal conversion rate for delocalized excitonic states of a molecular aggregate.
Molecular Hamiltonian
The full Hamiltonian of a molecule is given by [50]

H(r, p; R, P) = T_el(p) + V_el-el(r) + V_el-nuc(r; R) + V_nuc-nuc(R) + T_nuc(P), (B1)

where T denotes the kinetic energy and V the Coulomb interactions. The sets of electronic coordinates and momenta are denoted by r and p, whereas the nuclear degrees of freedom are described by R and P. The typical challenge in condensed matter physics is to find approximate eigenstates and eigenvalues of this Hamiltonian. The huge difference between electronic and nuclear masses justifies the usual Born-Oppenheimer (BO) approach, where the electronic degrees of freedom are treated on a fully quantum level (i.e. r → r̂ and p → p̂), whereas the nuclear coordinates are kept fixed and their momenta are initially neglected. This allows one to find the adiabatic electronic eigenstates by diagonalization of the electronic Hamiltonian at fixed R,

Ĥ_el(R) |φ_i^R⟩ = ε_i^R |φ_i^R⟩. (B2)

The eigenstates |φ_i^R⟩ and eigenenergies ε_i^R, which take the name of potential energy surfaces (PES), depend parametrically on the nuclear coordinates. Now we express the Hamiltonian (B1) in terms of these adiabatic electronic states. Although their interpretation as physically relevant states depends on whether the nuclear kinetic energy is negligible or not, they always form a perfectly legitimate orthonormal basis of the electronic Hilbert space at fixed R. Making use of (B2), we see that

H = Σ_i ε_i^R |φ_i^R⟩⟨φ_i^R| + Σ_{i,j} |φ_i^R⟩⟨φ_i^R| T_nuc(P) |φ_j^R⟩⟨φ_j^R|. (B3)

Note that the second term on the right-hand side does not necessarily reduce to δ_ij T_nuc(P), due to the parametric dependence on R of the electronic states. However, if the nuclei are considered as classical particles, the nuclear kinetic energy does not affect the electronic adiabatic wavefunction in any way, and the full molecular Hamiltonian (B1) takes the well-known BO form

Ĥ_BO = Σ_i [T_nuc(P) + ε_i^R] |φ_i^R⟩⟨φ_i^R|. (B4)

The adiabaticity of this Hamiltonian is reflected by the fact that the nuclear motion does not couple different electronic states: when the electrons are in the state |φ_i^R⟩, the nuclei evolve according to the effective Hamiltonian h_i(R, P) = T_nuc(P) + ε_i^R. Introducing a diagonal nuclear mass tensor M and expanding the i-th PES for small deviations from its stable equilibrium configuration R_i (i.e. close to its minimum ε_i^{R_i}), the nuclear Hamiltonian h_i becomes

h_i ≈ (1/2) P^T M⁻¹ P + ε_i^{R_i} + (1/2) (R − R_i)^T H (R − R_i)
    = ε_i^{R_i} + (1/2) Σ_k [π_k² + ω_k² (ξ_k − ξ_k^i)²], (B5)

where H ≥ 0 is the Hessian matrix of the PES ε_i^R at R_i. On the second line, we introduced the mass-rescaled normal coordinates and momenta ξ = U M^{1/2} R and π = U M^{−1/2} P, where U is the unitary transformation that diagonalizes the mass-rescaled Hessian, i.e. U^T D U = M^{−1/2} H M^{−1/2}. D has only diagonal entries, corresponding to the squares of the normal frequencies ω_k². As usual, we have neglected the Duschinsky mixing, that is, we have assumed that the Hessian H at the nuclear equilibrium configuration is independent of the specific PES, so that it is diagonalized by the same transformation and yields the same eigenvalues (i.e. vibrational frequencies) for the PESs in which we are interested.
At this point, we can easily quantize the nuclear normal modes by redefining them in terms of a set of bosonic annihilation (creation) operators b̂_k (b̂†_k). Setting ħ = 1 from now on, we have ξ̂_k = (b̂_k + b̂†_k)/√(2ω_k) and π̂_k = i√(ω_k/2) (b̂†_k − b̂_k). The quantized vibrational Hamiltonian (B5) then becomes

ĥ_i = ε_i^{R_i} + Σ_k s_k^i ω_k + Σ_k ω_k b̂†_k b̂_k − Σ_k ω_k √(s_k^i) (b̂_k + b̂†_k),

where we have introduced the Huang-Rhys factors s_k^i = (ξ_k^i)² ω_k/2, which in general depend on the PES under consideration.
Since our ultimate goal is to describe the dynamics of photo-excitations, we focus on two electronic states, |φ_g^R⟩ and |φ_e^R⟩, describing the electronic ground state and the first optically excited state. We neglect any further dependence of these states on the nuclear coordinates and refer to them simply as |φ_g⟩ and |φ_e⟩. If we refer all energies and nuclear coordinates to the minimum of the PES of the electronic ground state, the BO Hamiltonian (B4) takes the usual spin-boson form with pure-dephasing interaction,

Ĥ_BO = ε_e |φ_e⟩⟨φ_e| + Σ_k ω_k b̂†_k b̂_k + |φ_e⟩⟨φ_e| Σ_k g_k (b̂_k + b̂†_k), (B9)

with ε_e = ε_e^{R_e} + Σ_k s_k^e ω_k and g_k = −ω_k √(s_k^e). The adiabatic electron-phonon coupling can then be described by the spectral density

J(ω) = Σ_k g_k² δ(ω − ω_k). (B10)
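For completeness, the completing-the-square step behind these expressions can be written out explicitly; this is a standard manipulation, consistent with the definitions of ξ̂_k and s_k^e given above (the constant zero-point energy is dropped):

```latex
\begin{aligned}
h_e &= \varepsilon_e^{R_e} + \tfrac{1}{2}\sum_k\left[\pi_k^2 + \omega_k^2\,(\xi_k-\xi_k^e)^2\right] \\
    &= \varepsilon_e^{R_e} + \tfrac{1}{2}\sum_k\left(\pi_k^2 + \omega_k^2 \xi_k^2\right)
       - \sum_k \omega_k^2\, \xi_k^e\, \xi_k + \sum_k \tfrac{1}{2}\,\omega_k^2 (\xi_k^e)^2 .
\end{aligned}
```

Quantizing with ξ̂_k = (b̂_k + b̂†_k)/√(2ω_k) and using s_k^e = (ξ_k^e)² ω_k/2 then gives

```latex
\hat h_e = \varepsilon_e + \sum_k \omega_k\, \hat b_k^\dagger \hat b_k
         + \sum_k g_k\,(\hat b_k + \hat b_k^\dagger),
\qquad
\varepsilon_e = \varepsilon_e^{R_e} + \sum_k s_k^e\,\omega_k,
\qquad
g_k = -\,\omega_k \sqrt{s_k^e},
```

which are exactly the spin-boson parameters quoted in (B9).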
Non-adiabatic electron-phonon coupling
Now, we turn back to (B3) and consider the effect that the nuclear kinetic energy has on the adiabatic electronic wavefunction. For this, we take into account the quantum nature of the nuclei as well, i.e. we consider R → R̂ and P → P̂ as quantum mechanical operators. Note that the second term on the right-hand side of (B3) acts non-trivially on both electronic and nuclear degrees of freedom. To better understand this, we consider how the nuclear kinetic energy operator acts on a product state formed by |φ_j^R⟩ and a nuclear state |χ_ν⟩. In coordinate representation, P̂ becomes the differential operator −i∂_R, so that ⟨r, R| T_nuc(P̂) |φ_j⟩|χ_ν⟩ produces, besides the term in which the derivatives act on the nuclear wavefunction alone, terms involving first and second derivatives of the adiabatic electronic wavefunction with respect to R, Eq. (B11). Using (B11), we can then compute the matrix element of the nuclear kinetic energy operator between two arbitrary electronic-vibrational states, Eq. (B12), where the three terms appearing in (B12) directly originate from those appearing in (B11). It is easy to spot the first of them as the one giving rise to the BO Hamiltonian (B4). On the other hand, the terms involving derivatives of the adiabatic electronic wavefunction with respect to the nuclear coordinates are not necessarily proportional to δ_ij, and therefore describe the non-adiabatic coupling between different PESs. Within the BO framework, they are neglected on the grounds that the electronic wavefunctions depend very weakly on the nuclear coordinates, so that their derivatives are negligible. In fact, electronic wavefunctions usually vary significantly over distances of 1-10 Å, corresponding to typical inter-nuclear separations. On the other hand, typical nuclei in organic molecules jiggle by about 0.01-0.1 Å around their equilibrium positions. On this length scale, therefore, electronic wavefunctions are expected to be fairly smooth and almost constant. Thus, the last two terms in (B12) can be regarded as higher-order contributions of a perturbative expansion whose zeroth-order term corresponds to the BO Hamiltonian. For simplicity, let us focus only on the lowest non-zero order of this non-adiabatic perturbation, i.e. the one involving the first derivative of the adiabatic electronic wavefunction. Let us define the quantity

A_k^{ij}(R) = i ⟨φ_i^R| ∂_{R_k} φ_j^R⟩,

which is closely related to the Berry connection. Note that we can always assume the phase of the adiabatic electronic wavefunctions to be independent of R, corresponding to a specific gauge choice for the Berry connection. This, together with the normalization condition ⟨φ_i^R|φ_i^R⟩ = 1, immediately implies that A_k^{ii}(R) = 0, meaning that this term only connects different electronic states and does not modify the vibrational Hamiltonian within a given PES. On top of that, the orthogonality condition ⟨φ_i^R|φ_j^R⟩ = 0 (i ≠ j) further imposes the hermiticity constraint A_k^{ij}(R)* = A_k^{ji}(R). As usually assumed for optical transitions, we neglect the dependence of the coupling between different PESs on the nuclear configuration. This means that A_k^{ij}(R) ≈ A_k^{ij}(R_i) is practically constant in the integral over R in (B12). We also neglect any further dependence of the adiabatic states on the nuclear coordinates, as we did in the previous section in order to recast the BO Hamiltonian (B4) into the spin-boson form (B9).
Thus, we can finally write the initial molecular Hamiltonian (B1), including the lowest-order non-adiabatic correction, as Ĥ_BO + Ĥ_non-ad, where the non-adiabatic term couples different electronic states through the nuclear momenta,

Ĥ_non-ad = Σ_{i≠j} |φ_i⟩⟨φ_j| Σ_k α_k^{ij} π̂_k,

and where in the last step we have rewritten the nuclear momenta in terms of the normal modes defined above and introduced the correspondingly transformed couplings α_k^{ij}. Thus, the non-adiabatic electron-phonon coupling can lead to transitions between different electronic states |φ_i⟩ and |φ_j⟩ initiated by the action of nuclear momenta. If we consider only two electronic states, namely the ground state g and one optically excited state e, we can see this as causing non-radiative transitions between g and e mediated by the coupling constant f_k := α_k^{ge} √(ω_k/2). This process goes under the name of internal conversion (IC), and is therefore determined by the spectral density

J_non-ad(ω) = Σ_k f_k² δ(ω − ω_k),

analogous to (B10).
Internal conversion in molecular aggregates
While it is generally true that the non-adiabatic couplings are much weaker than the pure-dephasing electron-phonon coupling, they still play a role on longer timescales, where other phenomena such as fluorescence enter into play. Therefore, we should devise a way to treat IC on the same footing as fluorescence in molecular aggregates. We consider an aggregate of N interacting two-level chromophores, each one described by the Hamiltonian (B13). We focus on the subspace spanned by the global ground state |g⟩ = |φ_g^1⟩ … |φ_g^N⟩ and the single excitations |i⟩ = |φ_g^1⟩ … |φ_e^i⟩ … |φ_g^N⟩. Assuming that the only site-dependent parameters of our model are the electronic excitation energies ε_i and the couplings J_ij, the Hamiltonian of the aggregate is

Ĥ = Σ_i ε_i |i⟩⟨i| + Σ_{i≠j} J_ij |i⟩⟨j| + Σ_{i,k} ω_k b̂†_{i,k} b̂_{i,k}
  + Σ_i |i⟩⟨i| Ĝ_i
  + Σ_i (|i⟩⟨g| F̂_i + h.c.). (B15)

On the first line, we recognize the free Hamiltonian of the excitonic system (diagonalized by excitons |α⟩ = Σ_i |i⟩⟨i|α⟩ with energy E_α) and the free Hamiltonian of environmental vibrations. On the second and third line, we find the system-bath interactions, mediated by the two bath operators

Ĝ_i = Σ_k g_k (b̂_{i,k} + b̂†_{i,k}), (B16)
F̂_i = Σ_k f_k i (b̂†_{i,k} − b̂_{i,k}), (B17)

which respectively cause pure dephasing in the site basis and non-radiative transitions between site excitations and the global ground state. Postponing a more detailed justification for later, we allow for correlations between different vibrational environments to be present when the bath is at equilibrium,

⟨Ĝ_i(t) Ĝ_j(0)⟩_eq = κ_ij G(t), (B18)
⟨F̂_i(t) F̂_j(0)⟩_eq = κ_ij F(t), (B19)

where the time evolution is computed with respect to the free bath Hamiltonian and the expectation value ⟨·⟩_eq is taken on the stationary state of the bath. The correlation functions G(t) and F(t) are assumed to be site-independent, and only depend on the temperature of the vibrational bath and on the spectral densities J(ω) and J_non-ad(ω). We assume that the internal timescale of the bath is sufficiently fast and the system-bath coupling sufficiently weak that the dynamics of electronic excitations can be described by a Lindblad equation in the exciton basis. We focus on internal conversion for now. Following the microscopic derivation outlined in [76], we obtain a Lindblad equation with jump operators |g⟩⟨α| and the rates appearing in Eq. (5). The rate Γ_non-rad is the single-pigment non-radiative rate, determined by

Γ_non-rad(ω) = 2π J_non-ad(|ω|) |n(ω) + 1|. (B21)
In principle, it depends on the frequency at which the specific transition to the ground state takes place (i.e. ω = E_α for the jump operator |g⟩⟨α|). However, since optical frequencies are much higher than vibrational and thermal energies, we can assume J_non-ad(ω) to be fairly small and constant across the excitonic energies E_α. The correlations introduced in (B18) and (B19) are crucial for the determination of the distribution of the IC rate across the excitonic manifold, as shown in Fig. 4. For example, if the correlations are absent, i.e. κ_ij = δ_ij, Eq. (B21) predicts that the IC rate is essentially the same for all excitons (blue line). If, instead, we allow for positive correlations decaying exponentially as a function of the distance between chromophores with some characteristic correlation length r_c (i.e. κ_ij = e^{−r_ij/r_c}), excitons which delocalize over neighboring pigments show an increased decay rate (green and yellow solid lines). Correlations of this form are commonly used when modelling various optical spectra of pigment-protein complexes [60]. If the correlations are negative for neighboring pigments, the opposite behavior is observed (dashed lines). Optical correlations between transition dipole moments give the usual fluorescence profile (red line), with a few bright excitons at the bottom of the band. We will argue later in favor of the correlations between local vibrational environments that we have postulated, by considering some experimental results.
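The redistribution of the IC rate by bath correlations is easy to reproduce in a few lines. The sketch below computes κ_α = Σ_ij κ_ij ⟨α|i⟩⟨j|α⟩ for the excitons of a disordered nearest-neighbor ring with κ_ij = e^{−r_ij/r_c}; the ring geometry, coupling and disorder strength are hypothetical stand-ins for the B850 parameters.

```python
import numpy as np

# How correlated local baths redistribute the non-radiative rate across the
# exciton manifold: kappa_alpha = sum_ij kappa_ij <alpha|i><j|alpha>.
# All numbers below are hypothetical placeholders, not fitted B850 values.
N, J, r0 = 18, 300.0, 1.0                  # pigments, n.n. coupling (cm^-1), spacing
theta = 2 * np.pi * np.arange(N) / N
radius = r0 / (2 * np.sin(np.pi / N))      # circle with n.n. distance r0
pos = np.stack([np.cos(theta), np.sin(theta)], axis=1) * radius

H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = J
H += np.diag(100.0 * np.random.randn(N))   # static disorder, sigma = 100 cm^-1

E, C = np.linalg.eigh(H)                   # columns of C are exciton amplitudes

r_c = 2.0                                  # bath correlation length (units of r0)
r_ij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
kappa = np.exp(-r_ij / r_c)                # positive, exponentially decaying

kappa_alpha = np.einsum('ia,ij,ja->a', C, kappa, C)
print("relative non-radiative rates:", kappa_alpha / kappa_alpha.mean())
```

With positive correlations, excitons delocalized over neighboring pigments acquire larger κ_α, mirroring the solid lines in Fig. 4; replacing kappa with an anticorrelated pattern flips the trend.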
Other non-radiative decay pathways
So far, we have seen how IC can be described microscopically and how it influences the non-radiative decay of different optical excitations in a molecular aggregate. However, there could be other processes that compete with fluorescence and IC on the same timescale, and that could influence the effective decay rate from a given exciton. The other decay channel typically present in molecular systems is inter-system crossing (ISC) [77]. During this process, a singlet excitation S₁ generated by optical absorption from the singlet ground state S₀ can be turned into an excited triplet T₁ by means of spin-orbit interactions. Since the radiative transition from T₁ to S₀ is strongly suppressed, the excitation is lost non-radiatively, either by IC or by triplet-triplet energy transfer to other pigments present in the molecular aggregates (i.e. carotenoids), before it can flip the spin of a neighboring oxygen molecule, thus forming singlet oxygen, which can cause oxidative damage to the photosynthetic apparatus.

[FIG. 4 caption] Excitonic decay rates. Normalized decay rates from the excitonic manifold to the ground state for different processes in the B850 band of LH2: radiative (red), non-radiative with increasing bath correlation length (blue, green and yellow solid lines). The dashed lines correspond to models with the same correlation length, but assuming anticorrelated bath fluctuations for pairs of pigments with opposite transition dipole moments. The decay rates are normalized such that on average they all give the same decay rate from a thermal exciton distribution. The results are obtained as ensemble averages over 10⁴ realizations of static disorder.
If our focus is to follow the dynamics of excitons in a narrow band (i.e. the B850 band of the LH2 complex), we do not want to take all of these processes into account explicitly: we are only interested in the rate at which spin-orbit interactions transfer excitations from one adiabatic PES to another. As we explicitly worked out for IC, it has been shown that the spin-orbit interaction also results in couplings between adiabatic PESs mediated by the nuclear momentum [78]. This tells us that the effective Hamiltonian causing ISC will exhibit couplings of the same form as (B13), with properly redefined coupling constants. The resulting ISC rate, therefore, has the same form as (B21), i.e. it depends on the density of vibrational states at excitonic energies. Assuming that this density of states is sufficiently flat, as done before, the dependence of the rate on the excitonic state is also in this case dominated by the pattern of correlations between local baths. ISC can therefore be absorbed together with IC into a single rate κ_α Γ_non-rad, describing the non-radiative decay of population from exciton α.
APPENDIX C: PHYSICAL ORIGIN OF CORRELATED ENVIRONMENTS
In this appendix we argue in favor of the presence of correlations between the vibrational environments of single pigments belonging to the same pigment-protein complex, which take place on the timescale of non-radiative decay. First, we propose an estimate based on experimental results, and later move on to discuss a physical mechanism which can support inter-site vibrational correlations, relying on inter-molecular vibrational modes.
Phenomenological view
Let us step back for a moment from the microscopic derivation of internal conversion, and follow another approach. We start from experimental evidence, and use this knowledge to set up a phenomenological model of non-radiative decay of a molecular aggregate. The most valuable insight comes from comparing excited-state lifetimes (τ) and fluorescence quantum yields (φ) of isolated pigments with those of the aggregate. Table I shows these quantities in the case of bacteriochlorophyll a molecules (Bchl-a) and LH2 complexes [39]. Since the fluorescence quantum yield of LH2 is lower than that of Bchl-a, there must be some additional decay channels in LH2 that are not present in the monomers and which can therefore be interpreted as an effect of aggregation. To be more quantitative, let us define the fluorescence quantum yield

φ = (# of photons emitted)/(# of photons absorbed) = Γ_rad/(Γ_rad + Γ_non-rad) = Γ_rad τ,

and express the radiative and non-radiative decay rates as Γ_rad = φ/τ and Γ_non-rad = (1 − φ)/τ. The values reported in [39] allow us to determine the radiative and non-radiative decay rates for Bchl-a and LH2 (Table I). While the difference in radiative lifetimes between the monomer and the aggregate is easily explained by exciton delocalization and superradiance, a clear molecular mechanism underpinning the mismatch between the non-radiative decay rates is not known with certainty. Nonetheless, we can put forward a simple argument. Since the non-radiative decay in LH2 is about 3.5 times faster than in Bchl, we can imagine the existence of additional dissipation channels. In fact, we expect the same intra-molecular decay channels present in Bchl to be at play also in LH2. Thus, the additional dissipation present in LH2 must come from some other decay pathway that has no analogue in Bchl, and which must account for (Γ_non-rad^LH2 − Γ_non-rad^Bchl)/Γ_non-rad^LH2 = 71% of the non-radiative dissipation. The first reasonable candidate is the protein environment, which can offer additional IC pathways through vibrations and conformational changes. Since the B850 ring of LH2 is composed of dimeric subunits bound to the same protein, it makes sense to assume that the IC channels offered by the protein can in principle be correlated. Following this argument, we allow for correlations between the vibrational environments that couple to IC and ISC transitions. As a consequence, excitons that delocalize over neighboring pigments can experience a modified decay rate, as discussed above and shown in Fig. 4.
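The arithmetic of this argument is elementary and can be checked directly; the lifetimes and quantum yields below are placeholders chosen only to roughly reproduce the ~3.5× ratio quoted above, not the actual Table I values.

```python
# Back-of-the-envelope rates from lifetime tau and fluorescence quantum yield phi:
# Gamma_rad = phi / tau, Gamma_non-rad = (1 - phi) / tau. All input values are
# hypothetical placeholders, not the measured Table I data.
tau_bchl, phi_bchl = 3.0e-9, 0.18     # assumed monomer values (s, dimensionless)
tau_lh2, phi_lh2 = 1.0e-9, 0.10       # assumed aggregate values

g_nr_bchl = (1 - phi_bchl) / tau_bchl
g_nr_lh2 = (1 - phi_lh2) / tau_lh2
extra = (g_nr_lh2 - g_nr_bchl) / g_nr_lh2   # fraction with no monomer analogue

print("LH2/Bchl non-radiative ratio: %.1f" % (g_nr_lh2 / g_nr_bchl))
print("fraction from aggregation-induced channels: %.0f%%" % (100 * extra))
```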
Microscopic mechanism
We have seen how, within a BO framework, a coupling arises between (optical) electronic transitions and the vibrations of a molecule. A sudden change in the electronic state leads to a different electrostatic potential experienced by the nuclei, which thereby initiate their dynamics. For this reason, this coupling involves only vibrations of the nuclei over which the electronic states are delocalized. It is therefore clear that intra-molecular modes will experience this type of direct coupling to the electronic dynamics. However, if the chromophore is embedded in a protein environment, the vibrational modes of the protein will also influence the electronic energies, leading to a direct coupling between electronic excitations of the chromophore and longer-wavelength vibrational modes. This observation lies at the heart of theoretical descriptions of electronic resonances coupled to a shared phonon environment [79]. Thus, the pure-dephasing coupling in (B15) can be written explicitly in terms of these protein vibrational modes ĉ_q with frequency ν_q as

Σ_i |i⟩⟨i| Σ_q ξ_iq (ĉ_q + ĉ†_q). (C1)

By redefining the operators Ĝ_i of (B16) to include also the protein modes ĉ_q, we obtain the following bath correlation functions:

⟨Ĝ_i(t) Ĝ_j(0)⟩_eq = Σ_k g_k² [e^{−iω_k t} (n(ω_k) + 1) + e^{iω_k t} n(ω_k)] δ_ij + Σ_q ξ_iq ξ_jq [e^{−iν_q t} (n(ν_q) + 1) + e^{iν_q t} n(ν_q)]. (C2)
While the term on the first line (corresponding to intra-molecular modes) vanishes for i ≠ j, the second term (arising from protein modes) is able to generate correlations between sites. Moreover, since protein motion takes place on a larger and slower scale than intra-molecular vibrations, it is reasonable to assume that two neighboring pigments i and j couple to the protein motion with the same phase (ξ_iq ξ_jq > 0). This results in positive inter-site correlations, and can thus lead to the redistribution of non-radiative decay rates discussed in Appendix B.
Another possible mechanism able to generate positive inter-site vibrational correlations is based on the idea that the vibrational motion of a chromophore can mechanically couple to the slow vibrations of the protein environment. To formalize this idea, let us consider a slightly modified version of the model described by the Hamiltonian (B15). To simplify the discussion, we consider a molecular aggregate composed of only two pigments, i.e. i = 1, 2, and we neglect the direct coupling between electronic excitations and protein modes (C1). If the two pigments are bound to the same protein, it makes sense to assume that their intra-molecular modes b̂_{1,k} and b̂_{2,k} couple to a set of common modes ĉ_q with frequency ν_q. Since these shared modes are associated with the protein structure, it is reasonable to assume that they are much slower and effectively classical, meaning that their energy ν_q is much smaller than the thermal energy k_B T. We assume that the intra-molecular modes of the two pigments couple to the protein vibrations with the same strength and phase. This choice makes sense if we think of the two pigments as having opposite orientation and being embedded in an elastic medium (see Fig. 5). The Hamiltonian of the vibrational bath then reads (C3), where we have neglected the counter-rotating coupling terms to simplify the following treatment, although they could be included. Defining the symmetric and antisymmetric combinations of intra-molecular modes, b̂_{±,k} = (b̂_{1,k} ± b̂_{2,k})/√2, the bath Hamiltonian can be brought to diagonal form in terms of new normal modes with frequencies ω̃_k and ν̃_q. Thanks to its unitarity, the transformation can easily be inverted, and the bath operators Ĝ_i defined in (B16) can be re-expressed in terms of the new normal modes (C6), with correspondingly transformed system-bath couplings. Analogous expressions can be found for the operators F̂_i defined in (B17).
Let us now focus on the bath correlation functions for the pure-dephasing environment introduced in Eq. (B18), evaluated on the thermal state ρ_eq ∝ e^{−Ĥ_B/k_B T}. (The system-bath coupling leading to internal conversion can be treated in a completely analogous way.) Using the new normal-mode decomposition of Ĝ_i (C6), we obtain the correlation functions, and to lowest order in the inter-site vibrational coupling η_kq we find the perturbative expressions (C8). Note that, in order to be perfectly consistent, we should also keep the second-order correction to G_k. However, since the protein modes generally have much smaller energies ν_q than both k_B T and the intra-molecular modes ω_k, their thermal occupation number is much higher, i.e. n(ν_q) ≫ n(ω_k). Therefore, the correction arising from the first line in (C8) is negligible with respect to the one originating from the second line. Taking into account all these assumptions, we obtain

⟨Ĝ_1(t) Ĝ_1(0)⟩_eq ≈ Σ_k g_k² [e^{−iω_k t} (n(ω_k) + 1) + e^{iω_k t} n(ω_k)], (C11)

while the inter-site correlation ⟨Ĝ_1(t) Ĝ_2(0)⟩_eq involves, to the same order, products of intra-molecular and protein-mode factors of the form [e^{−iω_k t} (n(ω_k) + 1) + e^{iω_k t} n(ω_k)] × [e^{−iν_q t} (n(ν_q) + 1) + e^{iν_q t} n(ν_q)]. (C12)
Note that both these equations can be rewritten in terms of positive spectral densities: in the case of (C11) it is the one defined in (B10), whereas for (C12) we can define a corresponding inter-site spectral density ΔJ(ω). Taking the Fourier transform of these correlation functions, we obtain the rates that allow us to write down a Lindblad equation for the reduced dynamics of the excitons, which are proportional to the spectral densities J(ω) and ΔJ(ω).
At this point, we can see that our particular choice of coupling the local intra-molecular modes to the common protein modes with the same phase leads to the establishment of positive inter-site correlations (ΔJ(ω) > 0). If we chose to couple the local modes with opposite phase instead, we would end up with negative correlations between the vibrational environments.
APPENDIX D: LINESHAPES, LINEAR SPECTRA AND TRANSFER RATES
In this section we determine the lineshapes of an excitonic system (i.e. our unit cells), which allow for the calculation of both linear optical spectra and excitonic transfer rates. Consider an excitonic system with Hamiltonian Ĥ = Σ_α E_α |α⟩⟨α|, which we identify as our unit cell, where Greek letters denote exciton states of the unit cell, |α⟩ = Σ_i |i⟩⟨i|α⟩. (Here we relax the convention used in the main text, according to which primed indices refer to the same subunit.) Within a single unit cell, the excited-state population thermalizes due to the interaction between electronic and vibrational degrees of freedom of both intra- and inter-molecular origin. The main electron-phonon coupling mechanism is described by the pure-dephasing interaction appearing in the aggregate Hamiltonian (B15). Under the conditions of weak electron-phonon coupling and fast vibrational relaxation, this process can be described by a Lindblad equation. In the presence of correlations κ_ij between different local environments, we can write down the resulting Lindblad dissipator, in which a first group of terms describes population transfer across different excitons and a second group describes pure dephasing processes. The corresponding rates, given in (D2) and (D3), involve ω_αβ = E_α − E_β and the single-pigment optical dephasing rate γ. Note that, when the local vibrational environments are not correlated, i.e. κ_ij = δ_ij, the factors involving excitonic amplitudes in (D2) and (D3) reduce to the spatial overlap between excitons and their participation ratio. Lindblad dynamics in the exciton basis results in an exponential decay of the optical coherence between exciton α and the ground state g with a rate γ_α = γ ζ_αα + Σ_{β(≠α)} R_βα/2, whereas the inter-exciton coherence decays with a rate γ_αβ = γ_α + γ_β − 2γ ζ_αβ. This leads to simple expressions when computing optical absorption and emission spectra. These are related to the absorption and emission tensors, A(ω) and E(ω) respectively, defined by their matrix elements in terms of two-time correlation functions of the exciton transition operators. Here σ̂_µ(t) denotes the Heisenberg time evolution of the annihilation operator of exciton µ, σ̂_µ = |g⟩⟨µ|, and ⟨·⟩_{g(e)} denotes the average over the equilibrium ground (excited) state.
Evolving the transition dipole operator through the dual of the dissipator D results in the simple expression σ̂_µ(t) = |g⟩⟨µ| e^{−(γ_µ + iE_µ)t}, which leads to diagonal absorption and emission tensors, with each exciton having a Lorentzian lineshape, i.e. A_αβ(ω) = δ_αβ f_α(ω) and E_αβ(ω) = δ_αβ f_α(ω) p^th_α, where p^th_α is the thermal population of exciton α and

f_α(ω) = (1/π) γ_α / [(ω − E_α)² + γ_α²].

Once we have the lineshape of each exciton, we weight each individual lineshape by its associated brightness χ_α ≈ |D_α|²/d², and straightforwardly calculate the absorption spectrum of the unit cell as ω Σ_α χ_α f_α(ω). In the case of the B850 subunit, once averaged over static disorder, this expression gives excellent agreement with experimental results, as shown in the main text (Fig. 2b). If we introduce a second excitonic system, weakly interacting (with respect to the timescales associated with dephasing) with the first one via dipole-dipole couplings, we can calculate the incoherent energy transfer rate between excitons α and α′ on the two subunits via generalized Förster theory, according to which we have the rate of Eq. (D7), with V_αα′ = Σ_{i,i′} ⟨α|i⟩ V_ii′ ⟨i′|α′⟩, which reduces to Eq. (4) of the main text in the case of dimeric unit cells, once ħ is reintroduced.

APPENDIX E: GEOMETRY AND PARAMETERS OF THE LH2 MODEL

Fig. 6 and Table II summarize the geometry and the parameters that have been used in the simulations of the B850 rings. The ring structure is dimerized, meaning that each pigment i is identified by two indices, one specifying the dimer (n = 1, …, 9) and the other the position within the dimer (ν = 1, 2). The two bacteriochlorophyll (Bchl) molecules belonging to the same dimer are usually labelled α and β. A single ring is described by the Hamiltonian

Ĥ = Σ_{n=1}^{9} [(ε + δε_{n,1}) |n,1⟩⟨n,1| + (ε + δε_{n,2}) |n,2⟩⟨n,2| + J₁ (|n,1⟩⟨n,2| + H.c.) + J₂ (|n,2⟩⟨n+1,1| + H.c.)] + Σ′_{ij} J_ij |i⟩⟨j|,

where the primed sum indicates summation over all pairs of non-adjacent pigments and J_ij stands in this case for the dipole-dipole interaction between pigments. The fluctuations of the site energies δε_i are given by the sum of two Gaussian random variables: one with standard deviation σ_p, completely uncorrelated for different pigments, describing local energy shifts due to the slightly different protein environments; and one with standard deviation σ_0, which is the same for all pigments, describing global shifts of the ground-state energy. Throughout this work, the B800 ring enters the dynamics only during the excitation process, as we describe in more detail in the next section. However, we explicitly included the B800 rings in the calculation of the absorption spectra shown in Fig. 2b, for a better comparison with the experimental data. We model the B800 ring as a set of 9 transition dipoles arranged on a circle of radius 3.1 nm, concentric with the B850 ring and vertically displaced from it by 1.7 nm. We consider the B800 dipoles to be perfectly coplanar and tangent to the circle. In accordance with previous works [51], we set the ratio of B850 to B800 pigment dipole strength to 1.1. The site energy of the B800 pigments, ε_B800, has an average value of 1.25 × 10⁴ cm⁻¹, with Gaussian static disorder of standard deviation σ_B800 = 100 cm⁻¹, whereas the single-pigment optical dephasing rate is γ_B800 = 70 cm⁻¹ [51].
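A sketch of this construction, from a disordered dimerized ring Hamiltonian to a brightness-weighted Lorentzian absorption spectrum, is given below; the energies, couplings, disorder widths and dephasing width are hypothetical placeholders rather than the Table II values.

```python
import numpy as np

# Dimerized ring with static disorder; absorption as a brightness-weighted
# sum of Lorentzian lineshapes. All parameter values are assumed placeholders.
Nd = 9                                      # dimers -> 18 pigments
eps, J1, J2 = 12500.0, 300.0, 250.0         # site energy and couplings (cm^-1)
sig_p, sig_0 = 200.0, 100.0                 # local and global disorder widths
gamma = 80.0                                # Lorentzian width (cm^-1)

N = 2 * Nd
H = np.diag(eps + sig_p * np.random.randn(N) + sig_0 * np.random.randn())
for n in range(Nd):
    a, b = 2 * n, 2 * n + 1
    H[a, b] = H[b, a] = J1                       # intra-dimer coupling
    H[b, (a + 2) % N] = H[(a + 2) % N, b] = J2   # inter-dimer coupling

# Unit transition dipoles, tangent to the ring (dimer alternation ignored here)
theta = 2 * np.pi * np.arange(N) / N
d = np.stack([-np.sin(theta), np.cos(theta)], axis=1)

E, C = np.linalg.eigh(H)
D = C.T @ d                                 # collective exciton dipoles
chi = (D**2).sum(axis=1)                    # brightness |D_alpha|^2 / d^2

w = np.linspace(E.min() - 500, E.max() + 500, 2000)
lorentz = gamma / np.pi / ((w[None, :] - E[:, None])**2 + gamma**2)
absorption = w * (chi[:, None] * lorentz).sum(axis=0)
print("dipole-strength sum rule check:", chi.sum())   # should equal N
```

Averaging `absorption` over many disorder realizations reproduces the smooth band profile discussed in connection with Fig. 2b.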
The B800 pigments couple to each other and to the B850 pigments via dipole-dipole interactions. As a result, one may expect that excitons can in principle delocalize across both rings. However, the presence of static disorder is sufficient to destroy any inter-ring delocalization [51]. This can be seen in Fig. 7, where we plot the average population of an LH2 exciton |α⟩ (diagonalizing the total B800 + B850 Hamiltonian) on site |i⟩. The absence of significant B800-B850 coherent mixing thus justifies the approach adopted throughout the paper, where we only consider the indirect effect of incoherent B800-to-B850 energy transfer to populate the B850 manifold after initial laser excitation.

[FIG. 7 caption] Average population of an LH2 exciton |α⟩ on site |i⟩. The eigenstates are clearly divided into two blocks, representing the fact that there is negligible coherent mixing between B800 and B850 excitons. The ensemble average is obtained from 10⁴ realizations of static disorder.
APPENDIX F: SIMULATIONS OF AN LH2 ARRAY
In this section we give more details on the simulations of the full LH2 linear array. In analogy with the experiment of Escalante et al. [20], we look at the stationary exciton probability profile along the linear array upon continuous-wave driving, i.e. we look numerically for the steady state of Eq. (3) (with all rates dependent also on the specific subunit n, since every subunit exhibits a different realization of static disorder, i.e. different spectra and pigment positions, affecting relaxation, fluorescence, transfer and injection rates). In the experiment, the driving is provided by an 800 nm laser with a spatial intensity profile with a full width at half maximum of 400 ± 50 nm. In order to simplify the numerics, we instead consider injection on a single B850 complex at the center of the chain. In this way, we can simulate shorter chains (up to 1001 subunits) and be safely protected from systematic errors introduced by the finite size of the array. The results for spatially broad excitation can be recovered as shown in Appendix A for the exactly solvable model, i.e. by convolving the result for local injection with the desired excitation profile. We clarified that our injection profile is local, i.e. I^n_α ∝ δ_{n n₀}, where n₀ = 501 is the central site of the chain, but we have not yet specified into which states α of the ring the excitations are injected. Optical excitation at 800 nm cannot be absorbed by the B850 subunit (as seen in Fig. 2b); nevertheless, it can excite the B800 ring. We do not explicitly consider the B800 subunit in our model. However, excitations enter the B850 ring upon downhill B800 → B850 energy transfer. Due to the small coherent coupling between the two concentric rings, this transfer process is largely controlled by incoherent rates of the form of Eq. (D7); therefore, we can think of an indirect excitation of the B850 excitons with an energy distribution given by Eq. (F1), where ε_B800 and γ_B800 are the optical gap and dephasing rate of the B800 pigments, whose values are given in Appendix E. The disorder-averaged injection profile (F1) is shown in Fig. 8a.

[FIG. 9 caption] Energy transfer between tilted B850 rings. Total energy transfer rate from exciton β to any other exciton α on a neighboring ring (escape rate), as a function of the energy of the initial excitonic state β. Results for coplanar (tilted) rings are shown with solid (dotted) lines, for inter-ring distances ranging from 6.5 nm (green) to 8.5 nm (purple). Even a slight tilt of 5° out of the plane results in a significant reduction of the transfer rate at short inter-ring distances. Results are obtained as averages over 10³ realizations of static disorder.
The resulting stationary probability profiles are shown in Fig. 8b for different lattice steps l ranging from 8.5 nm to 6.5 nm, after averaging over 10³ realizations of static disorder. The probabilities p̄^n_α, shown as a function of the position x = (n − n₀)l, exhibit a clear exponential decay around the injection site in the middle of the array. A convolution of these exciton population profiles with a Gaussian injection profile of 400 nm full width at half maximum yields the distributions reported in Fig. 3b.
All simulations are performed for coplanar arrangements of B850 rings. As discussed in the main text, the nano-engineered arrays analyzed here allow for coplanar B850 rings. However, in the main text we also note that in biological light-harvesting membranes an angle of about 5° relative to the aggregation plane is observed. This small tilt leads to slower energy transfer, especially for short inter-ring distances, as shown in Fig. 9.
Lastly, we note that all simulated LH2 arrays feature a fixed inter-complex distance l throughout the length of the array. This clearly represents an approximation, since every realistic macromolecular assembly will show fluctuations in the lattice constant. While such large-scale geometric defects are expected to severely limit the range of ballistic energy transfer and eventually lead to diffusive transport, they only act as an additional source of static disorder in our model, which already describes energy transfer in the diffusive regime. Thus, in our case, this further noise source will lead to changes in the transfer rates but will not affect the scaling of the diffusion length with respect to the lattice constant, leaving our main conclusions unaltered. Nevertheless, the effect of lattice-constant fluctuations can be estimated by analyzing the dependence of the diffusion length l_diff on the inter-complex distance l, shown in Fig. 3d, which appears to be described by a monotonically decreasing convex function. In other words, the increase in diffusion length that we obtain by shortening the lattice constant l by an amount δl exceeds the decrease that we would obtain by expanding the lattice constant by the same amount to l + δl. Thus, when assuming a symmetric distribution of lattice constants around l, the averaging would lead to a slightly larger diffusion length. However, we note that this effect might be too small to be observed in our case, due to the presence of other sources of static disorder.
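The convexity argument above is an instance of Jensen's inequality and can be checked numerically; the functional form of l_diff(l) used below is an assumed convex, decreasing placeholder, not a fit to Fig. 3d.

```python
import numpy as np

# Jensen-inequality check: for a convex, decreasing l_diff(l), averaging over
# symmetric fluctuations of the lattice constant slightly increases the mean
# diffusion length. The functional form is a hypothetical placeholder.
l_diff = lambda l: 2.0 * (6.5 / (l * 1e9))**4   # um, assumed convex decrease

l0, dl = 7.5e-9, 0.3e-9
samples = l0 + dl * np.random.randn(100000)
print("l_diff at mean l:      %.3f um" % l_diff(l0))
print("mean l_diff over dist: %.3f um" % l_diff(samples).mean())
```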
Magneto-optical study of metamagnetic transitions in the antiferromagnetic phase of α-RuCl3
α-RuCl3 is a promising candidate material to realize the so-far elusive quantum spin liquid ground state. However, at low temperatures, the coexistence of different exchange interactions couples the effective pseudospins into an antiferromagnetic zigzag (ZZ) ordered state. The low-field evolution of the spin structure is still a matter of debate, and the magnetic anisotropy within the honeycomb planes is an open and challenging question. Here, we investigate the evolution of the ZZ order parameter by second-order magneto-optical effects, namely the magnetic linear dichroism and the magnetic linear birefringence. Our results clarify the presence and nature of metamagnetic transitions in the ZZ phase of α-RuCl3. The experimental observations show initial magnetic domain repopulation followed by a spin-flop transition for small in-plane magnetic fields (≈1.6 T) applied along specific crystallographic directions. In addition, using a magneto-optical approach, we detect the recently reported emergence of a field-induced intermediate phase preceding the suppression of the ZZ order. The results disclose the details of various angle-dependent in-plane metamagnetic transitions, quantifying the bond-anisotropic interactions present in α-RuCl3.
To date, several experimental techniques have been applied to map out the equilibrium phase diagram of α-RuCl3 in the temperature and magnetic field plane 33,[36][37][38][39][40][41][42][43][44][45]. Signatures of fractionalized excitations have been detected by various spectroscopy techniques 11,15,16,46,47, hinting towards a proximate spin-liquid behavior. Initially, this led to a wide spread in the reported values of the possible interaction strengths and a controversial discussion on the effective spin Hamiltonian capturing the experimental observations 48. Nowadays, the parameter space of the effective spin Hamiltonian and the size and sign of the present exchange interactions converge towards a unifying description of the physical properties of α-RuCl3. Especially the role of the symmetric off-diagonal exchange interaction Γ has been studied intensively, and Sears et al. recently reported that its size is comparable to the anisotropic ferromagnetic Kitaev exchange interaction K 22, indicating its key role in understanding the large anisotropic susceptibilities of α-RuCl3 for magnetic fields applied within (χ∥) and perpendicular (χ⊥) to the honeycomb planes 32,39,49. Moreover, the orientation of a magnetic field applied within the honeycomb planes has also been found to be crucial to resolve strongly angle-dependent low-energy excitations revealing fingerprints of a potential QSL state in α-RuCl3 19,50,51. Different experimental studies reported that α-RuCl3 undergoes several phase transitions at low temperatures and small finite applied fields before entering the quantum-disordered phase 44,[52][53][54][55][56], and the appearance of metamagnetic transitions has been predicted theoretically 31,57. The emergence of an intermediate field-induced transition at around 6.2 T, depending on the orientation of the external in-plane magnetic field either along or perpendicular to the Ru-Ru bonds 44,[54][55][56], has been reported and points towards the necessity of including anisotropic inter-layer exchange interactions in the model Hamiltonian. In contrast, the low-field response for in-plane fields up to ≈2 T is still only partially understood, since precise knowledge about the orientation of the order parameter is difficult to attain with thermodynamic probes. Therefore, its nature has been interpreted differently 15,33,39,52,58, while there are first experimental signatures revealing the necessity to consider the present bond anisotropy stemming from a small inequivalence in the Ru-Ru bond lengths in α-RuCl3 59. Nevertheless, the detailed magnetic field-temperature (B, T) phase diagram of α-RuCl3 is still under intense debate, and an experimental probe that couples sensitively to the ZZ order parameter is needed. Magneto-optical (MO) spectroscopy is a well-established and powerful contactless technique to explore magnetic ordering phenomena and the emergence of topological magnetic structures on small scales with remarkable sensitivity [60][61][62][63][64][65][66][67][68][69][70][71]. Especially the quadratic MO effects are perfectly suited to study the antiferromagnetic ordering vector (L = M↑ − M↓ ≠ 0), and recently, direct optical probing of ZZ antiferromagnetic order via optical spectroscopy has been reported 72.
The main quadratic MO effects (even in M) in reflection are named magnetic linear dichroism (MLD) and magnetic linear birefringence (MLB) 62,73,74 , which are defined for the reflection of linearly polarized light under normal incidence and depend on the difference in the diagonal components of the dielectric tensor 67 . In this context, the origin of MLD and MLB can be understood in terms of different absorption (reflection) coefficients parallel and perpendicular to the magnetization M or Néel vector L. These effects manifest themselves in the polarization rotation θ (MLD) of linearly polarized light upon reflection from the sample or give rise to an elliptical polarization η (MLB). They stem from the spin-orbit and anisotropic exchange interactions and can be related to spin-spin correlation functions 62,75–77 . For symmetry reasons, it follows that the considered second-order MO effects are to lowest order quadratic in the antiferromagnetic order parameter, such that the scaling (θ, η) ∝ L 2 holds 66 (Supplementary Information Notes 8, 9 and 10).
Motivated by these intriguing questions, we performed a systematic MO spectroscopy study to track the evolution of the ZZ order parameter in α-RuCl 3 in thermodynamic equilibrium. The orientation-dependent MO response reveals the effect of the present magnetic anisotropy for magnetic fields applied perpendicular and parallel to the Ru-Ru bonds within the honeycomb layers. We show that the remarkable sensitivity of MO spectroscopy helps to clarify the emergence of two different intermediate field-induced and orientation-dependent metamagnetic transitions. Our results provide a detailed picture of the low-field behavior, clarifying the influence of unidirectional bond-anisotropy within the honeycomb planes, and we derive a value for the anisotropy field strength.

Fig. 1 Magneto-optical experiment. a Experimental setup (HW half-wave plate, P polarizer, C chopper, M beam-splitting mirror, L lens, S sample, PEM photo-elastic modulator, A analyzer, PD photo-diode). E i and E r correspond to the incident electric field polarization and the one upon reflection from the sample, respectively. The inset displays the effect of the MO rotation Θ = θ + iη and the definition of the angles θ 0 , ϕ, θ. b Polarization dependence of the MO response as a function of the polarization orientation θ 0 of the incident light. Gray squares are data, the solid line is a fit according to Equation (2). c Universal temperature dependence of the MO response for three different incident polarization orientations θ 0 = 0 ∘ , 45 ∘ , 90 ∘ .
Magnetic linear dichroism
First, we explore the nature of the detected MO signal and its relation to the magnetic order. Here, we apply the Voigt geometry 67 , which is typically used to study antiferromagnets with spin alignments perpendicular to the light wave vector. Below, we derive the relation for the experimentally observed rotation of the polarization plane of the reflected linearly polarized light; the same consideration applies to the change in the ellipticity. The rotation θ is related to both the amplitude of the order parameter and the relative orientation of the Néel vector with respect to the electric field E of the incident linearly polarized light. In α-RuCl 3 the in-plane component of the Néel vector is oriented parallel to the ZZ chain direction. Figure 1a depicts the relative angles of the polarization of the incident light and of the Néel vector with respect to the vertical polarization, denoted θ 0 and ϕ, respectively. The MLD response is then given by 63,65

tan(θ MLD ) = [(r ∥ − r ⊥ )/(r ∥ + r ⊥ )] sin[2(ϕ − θ 0 )]. (1)

More generally, taking the presence of both linear and second-order MO effects into account, the total rotation of the polarization θ, in the limit of small rotations, can be expressed as

θ ≈ A Lin + A MLD sin[2(ϕ − θ 0 )]. (2)

The coefficients A Lin and A MLD are the amplitudes of the linear and quadratic MO effects, respectively. Since in the MLD geometry the incident light is normal to the surface, A Lin is mainly determined by the polar MO Kerr effect (PMOKE).
The coefficient A MLD = (r ∥ − r ⊥ )/(r ∥ + r ⊥ ), where r ∥ and r ⊥ are the amplitude reflection coefficients of the light polarized parallel and perpendicular to the Néel vector, depends quadratically on the in-plane component of the antiferromagnetic order parameter L, i.e. A MLD ∼ L 2 66 (Supplementary Information Note 10). It is worth noting that the above relation indicates that the MLD signal, in contrast with the linear MO response, is a harmonic function of the incident polarization θ 0 , which becomes maximal for ϕ − θ 0 = 45 ∘ ; this implies an extreme sensitivity of the second-order MO response to the orientation of the incident linearly polarized light with respect to the spin-pointing direction. Recently it has been reported that the presence of antiferromagnetic ZZ chains can give rise to a polarization-dependent MO response which is independent of the spin-pointing direction 72 . In α-RuCl 3 both the ZZ chain direction and the spin-pointing direction are collinear in small magnetic fields, such that both contribute to the MLD. Clearly, the MO response shown in Fig. 1b is polarization-dependent, which is manifested in a sinusoidal modulation as expected for MLD. At the same time, the MO response scales with A MLD ∝ L 2 (B, T). Figure 1c shows the temperature dependence of the ZZ order parameter studied for three different polarization orientations of the incident light, θ 0 . Clearly, the magnitude of θ depends on θ 0 and becomes zero for θ 0 = 45 ∘ , while the qualitative temperature-dependent behavior remains, as expected, similar. In subsequent measurements, we chose the s-polarized probe to obtain the maximum signal. Figure 2a shows the variations of θ MLD as a function of temperature. According to ref. 11 , the temperature dependence of the ZZ order parameter in zero magnetic field L(T, B = 0 T) follows a power law θ MLD ∝ L 2 ∝ (1 − T/T N ) 2β below the Néel temperature T N . The deduced Néel temperature T N = (7.19 ± 0.14) K indicates ABC-stacking in the samples under study, as opposed to ABAB-stacking accompanied by stacking faults, which causes further transitions above T N 56 . Within the experimental uncertainty, the data can be fitted using a critical exponent β = 0.19 ± 0.07, which is close to the 2D Ising universality class 78 . Figure 2b displays the phase diagram in the (B, T) plane extracted from the MLD-induced rotation at different fixed magnetic fields (iso-magnetic). We find the evolution of the order parameter L(T, B) to scale proportionally to (H − H c ) γ , where H c corresponds to the critical magnetic field above which the antiferromagnetic ZZ order is suppressed and the system enters the magnetically disordered state. The power-law fit yields a critical field value of H c = (7.48 ± 0.3) T and γ = 0.31 ± 0.07, which is again close to the theoretical value of 0.32 for the 2D Ising symmetry class 33 . These findings support that the MO response clearly displays the evolution of the order parameter. More details on the fitting of the critical behavior can be found in the SI.
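As an aside, the power-law analysis above is a standard critical-exponent fit. A minimal sketch of how such a fit can be carried out is given below; the data arrays are synthetic placeholders for illustration only and are not the authors' measurements or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, A, T_N, beta):
    """Power law theta_MLD ~ L^2 ~ (1 - T/T_N)^(2*beta) below T_N, zero above."""
    t = np.clip(1.0 - T / T_N, 0.0, None)  # reduced temperature, clipped to zero above T_N
    return A * t ** (2.0 * beta)

# Synthetic placeholder data (hypothetical), standing in for measured T (K) and theta (mdeg)
T = np.linspace(2.0, 8.0, 61)
theta = order_parameter(T, 100.0, 7.19, 0.19) + np.random.normal(0.0, 1.0, T.size)

popt, pcov = curve_fit(order_parameter, T, theta, p0=[100.0, 7.0, 0.3])
errs = np.sqrt(np.diag(pcov))
print(f"T_N = {popt[1]:.2f} +/- {errs[1]:.2f} K, beta = {popt[2]:.2f} +/- {errs[2]:.2f}")
```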
Magnetic field orientation-dependent measurements

Figure 2c displays the honeycomb structure, bond-anisotropy, and spin orientation in α-RuCl 3 . We point out the breaking of C 3 symmetry in α-RuCl 3 crystals, originating from inequivalent Ru-Ru bond lengths, which leads to the monoclinic C2/m space group 34,35 . Phenomenologically, the inequivalence in the Ru-Ru bond lengths changes the interactions along the stretched bond, indicated by J′, K′ and Γ′ (cf. Fig. 2c). This causes the pseudospins, which are tilted by an angle of ≈32 ∘ out of the honeycomb plane 22 , to have their in-plane projection preferentially oriented perpendicular to the stretched bonds. This is a key point that needs to be taken into account to understand the anisotropic MO response in the following results. Nevertheless, the local C2/m symmetry has been found to be broken in multidomain samples 52 , due to randomness in the monoclinic distortion of one Ru-Ru bond. Consequently, there are three possible symmetry-allowed ZZ domains in zero-field-cooled samples below T N , which are related by 120 ∘ rotations within the plane. However, applying a finite magnetic field along specific crystallographic orientations can change the spin-pointing direction within the honeycomb planes, as will be discussed in the following and as is schematically depicted in Fig. 2c via a spin-flop process.
Having established the iso-magnetic response of the order parameter L encoded in the MLD response θ MLD , we now turn to the iso-thermal MLD response of α-RuCl 3 at a temperature of 3 K and magnetic field strengths up to ±7 T applied within the honeycomb planes along two different crystallographic directions, in order to investigate the magnetic in-plane anisotropy. We studied two samples from the same batch which have been oriented along different crystallographic directions. For sample (a) the in-plane magnetic field B ab is applied perpendicular to a Ru-Ru bond, i.e., along one of the symmetry-equivalent {1, 1, 0} directions, while for sample (b) the field was directed parallel to the Ru-Ru bonds, i.e., along one of the symmetry-equivalent {1, 0, 0} directions (Fig. 2c). The MO measurements were conducted with the magnetic field swept continuously from 0 to ±7 T to systematically track the dynamical MO response (Supplementary Information Notes 2 and 3). Figure 3a shows that the MO response θ MLD differs significantly for the two distinct orientations of the in-plane magnetic field B ab , a first experimental evidence for the present in-plane magnetic anisotropy. The purely magnetic origin of this anisotropy is verified by the fact that the temperature-dependent evolution of θ MLD and η in zero magnetic field is similar for both samples, ruling out possible temperature-related effects like strain or thermoelastic changes (Supplementary Information Note 2). The absolute field-induced change in the rotation, defined as Δθ(B) = θ MLD (0 T) − θ MLD (±7 T), is of the order of 100 mdeg for both field configurations at 3 K, which is large and underlines the microscopic impact of strong spin-orbit interactions (~100 meV in α-RuCl 3 79 ) on the second-order MO responses. The value at 7 T was set to zero to extract the field-induced changes in the rotation between 0 and 7 T at constant temperature (Supplementary Information Note 3). Despite the difference in the full hysteresis loops for the different field orientations, we divide the MO response shown in Fig. 3a into three field regimes to allow a simple description and comparison.

Fig. 3 Magnetic linear dichroism of α-RuCl 3 for different sample orientations. a Full magnetic field sweep scans of the magneto-optical response for B ab ∥{1, 1, 0} and B ab ∥{1, 0, 0} at 3 K. The hysteresis loops can be divided into three field regimes I, II, and III, which are discussed in the main text. b Temperature dependence of the MLD response as the magnetic field is aligned along the {1, 1, 0} direction. Labels are discussed in the main text. c Linear MO response extracted from the MLD response for the two different field orientations. The light gray area indicates field regime I. d Zoom-in into field regimes I and II for two subsequent field sweeps, which are divided into four steps as discussed in the main text. The inset shows the zero-field time-dependent recovery of the MO signal after the field sweep. e High-field regime III. The kink in the rotation for B ab ∥{1, 1, 0} highlighted by the dashed line is connected to the transition towards the new AFM phase. The shaded area indicates the emergence of the intermediate magnetic transition.
In each regime, the MO signal is comparable, while at the crossovers a vivid anisotropic response is present. The first transition at ±1.6 T is a large and steep change in the rotation θ MLD (absolute reduction of ≈50 mdeg) for the magnetic field applied perpendicular to the Ru-Ru bonds, whereas the rotation changes only slightly for the in-plane magnetic field applied parallel to the bonds. The marked differences between the two differently oriented samples in regimes I and II are displayed in more detail in Fig. 3d. This clear difference indicates the presence of a metamagnetic transition, as will be discussed later. The second transition in the response occurs at around 6.2 T for the field applied perpendicular to the Ru-Ru bonds, where an anomaly in θ MLD is observed as a kink. Reducing the magnetic field strength continuously from ±7 T back towards 0 T causes the emergence of a significant hysteresis in the MLD response, opening at a field strength of ≈6 T. The hysteresis loops close at a field strength of ≈0.7 T for both field orientations, but open again for small fields close to 0 T. Here, the integrated hysteresis weight is a factor of ≈1.5 larger for the field applied perpendicular to the bonds than along the bonds, pointing again towards a difference in the in-plane anisotropy energy. Figure 3b reports the iso-thermal MO response as a function of the applied magnetic field. We found a similar behavior showing the three regimes I-III for three bath temperatures of 3, 4.5, and 6 K, corresponding to locations in the phase diagram deep inside the ZZ phase, in an intermediate range, and close to the critical transition temperature towards the quantum paramagnetic phase. This finding indicates the purely magnetic nature of the MO field response.
In contrast, the odd MO response scales linearly with the applied magnetic field for both samples and shows no pronounced hysteresis over a whole field sweep (Fig. 3c). However, in regime I small spikes can be observed for both samples.
In order to understand the nature of these spikes, MOKE measurements with the sample rotated by 45 ∘ with respect to the incident light wave vector have been performed. We find a clear, featureless linear scaling of the obtained rotation with the applied magnetic field (Supplementary Information Note 6). Therefore, we assume that the small spikes are related to second-order MO effects: since perfect symmetry cannot be achieved under real experimental conditions, minor differences between the MLD obtained from negative and positive fields are unavoidable.
Low-field magneto-optical response

Figure 3d compares the initial part of the hysteresis in regime I for both crystallographic orientations for subsequent field sweeps from 0 to 7 T. To emphasize similarities, we consider the normalized MO response. Here, we can divide the field-dependent evolution of the MLD into the following four distinct steps.
(i) The first step shows the following characteristics. First, it displays an initially large but gradual and monotonic change in θ MLD , which starts immediately when applying a small in-plane magnetic field. Second, it is seemingly independent of the field orientation in the honeycomb plane and terminates at a field strength of ≈0.7 T. Third, the absolute amplitude of this first step decreases for subsequent field sweeps, but displays a similar field dependence for both field orientations. Fourth, the MO response does not recover immediately when sweeping the field back to 0 T; instead, there is a slow recovery of the MO response within tens of seconds at zero field towards a saturating value (see inset of Fig. 3d). These distinct features of step 1, as we explain later, can reasonably be assigned to an initial field-driven gradual domain repopulation or ZZ domain switching.
(ii) In step 2, no further change in the magneto-optical response is observed for any of the subsequent field scans between 0.7 and 1.6 T for either in-plane field orientation. This indicates that in step 2 neither the ZZ chain orientation nor the spin-pointing direction and the amplitude of L are changed by the external magnetic field, showing that at around 0.7-1.6 T the field-driven changes between the three degenerate domains are terminated.
(iii) A further increase in the applied field strength then leads to a sharp change (step 3) at ≈1.6 T for the in-plane field aligned perpendicular to the bond. In contrast, this abrupt change is not present for the field directed along the bond. This is a clear indication of a field-induced metamagnetic transition, which we will discuss later in terms of a spin-flop transition originating from the intra-layer bond-anisotropy of the frustrated honeycomb magnet. This anisotropic change in rotation at a field of ≈1.6 T is, in contrast to step 1, completely reproducible in subsequent field sweeps. Further, step 3 is clearly observed and remains sharp at all temperatures, demonstrating that the field evolution of the MO response is reproducible in the entire ZZ phase (cf. Fig. 3b).
(iv) Further strengthening of the applied field leads to the emergence of another steady state of the obtained signal from 2 to 3 T (step 4), pointing towards a homogeneously field-aligned ZZ ordering. Here, similar to step 2, neither the ZZ chain nor the spin-pointing direction change. Moreover, the different steps can be clearly observed in subsequent measurements at different temperatures, as shown in Fig. 3b.
These different observations confirm the magnetic origin of the field response and provide first important key information to distinguish the effects of field- and anisotropy-related domain selection from an accompanying spin-flopped phase for in-plane applied magnetic fields. In the first step, the in-plane field leads to an immediate population of ZZ domains with the easy ZZ chain axis closer to the field direction 44,52,77,80 . This interpretation is further supported by a recent study of the thermal and magneto-elastic properties of α-RuCl 3 for fields applied in the ab-plane. It was found that under an in-plane magnetic field α-RuCl 3 shows lattice contraction along the {1, 1, 0} direction 41 . Especially in low fields up to ≈0.5 T the magnetostriction coefficient changes continuously, such that the initial ZZ domain structure and distribution are gradually changed by the external magnetic field. The fact that the MO hysteresis does not close while reducing the in-plane field back to 0 T points towards irreversible processes. This fits the initial domain repopulation picture and hysteretic magnetocaloric measurements below T N 41 .
High-field magneto-optical response
Turning to the high-field regime III, the kink which is observed only for one direction (Fig. 3e) indicates a clear dependence on the orientation of B ab with respect to the pseudospin bonds. This kink in the rotation is reproducible during different measurement cycles and is likely related to the previously reported first-order transition into an intermediate, differently ordered ZZ phase for in-plane magnetic fields of ≈6−6.5 T aligned perpendicular to the Ru-Ru bond 44 . We observed this kink only at low temperatures, since its intensity dramatically decreases with temperature (Fig. 6c), in agreement with recent reports 44,81 . We point out that the signatures of this metamagnetic transition appear much weaker compared to the spin-flop transition at a field of ≈1.6 T, which is related to the already suppressed amplitude of L close to the critical line and the fact that it might originate from a competition of anisotropic inter-layer exchange interactions as opposed to a change in the in-plane magnetic ordering 44 .
Low-field anomaly
In the following, we present the results of two individual experiments which further support the observed anisotropic MLD response at ≈1.6 T. Figure 4a shows the temperature dependence of the iso-magnetic MLB η. Since the birefringence originates from spin-spin correlations contributing to the magnetic energy, its derivative scales proportionally to the magnetic part of the specific heat 62,82,83 . The obtained curves for [(dη/dT)/T] B are shown in the inset of Fig. 4a; they resolve changes in η more clearly and display a typical λ-shape, as has been reported previously for the magnetic part of the specific heat at the transition of α-RuCl 3 20,84 . The red arrows indicate the peak values of these derivatives, which coincide with the vanishing MO rotation reported in Fig. 2b. As expected, the power-law fit (H − H c ) γ η gives values H c = (7.21 ± 0.09) T and γ η = 0.29 ± 0.04, similar to those from the MLD response. Indications of the transition at 1.6 T are also visible in the temperature dependence of η, where a small kink in the curves is indicated by a dashed line in Fig. 4b for all three temperatures.
Furthermore, we performed in-plane magnetometry measurements at 3 K (Fig. 5a) using a superconducting quantum interference device (SQUID, Quantum Design, MPMS). The obtained magnetization curves for both field orientations, as in the MO experiments, cross through the zero point, as expected for a compensated antiferromagnet, excluding any finite ferromagnetic contribution or background signal. Although at first glance the magnetization curves M(H) do not exhibit any clear signs of a transition at 1.6 T, the standard deviation of the magnetization measurements illustrated in Fig. 5b gives an indication (Supplementary Information Note 4). The spin-flop transition is of first order, i.e., close to the critical field value both the initial ZZ-oriented domains and some already spin-flopped domains coexist, such that the system is driven into a regime of large fluctuations stemming from the competition of anisotropy-stabilized ZZ and field-driven spin-flopped domains. These fluctuations reflect the instability of the coexistence of energetically different ZZ phases near the critical spin-flop field strength, which leads to discontinuous jumps in the magnetization accompanied by irreversible behavior, clearly visible in the standard deviation of the magnetization measurements in Fig. 5b. This effect has been observed previously in susceptibility measurements at the phase boundary between the antiferromagnetic and spin-flopped phases in the hexagonal antiferromagnet NiO, accompanied by an initial field-induced domain alignment 85 .
DISCUSSION
Based on these independent and consistent experimental observations, we elaborate on our main findings of the response to the magnetic field orientation along the two different crystallographic orientations and discuss the low-field response in detail. In zero field, the sample comprises three possible ZZ domains, in which the spin direction differs by 120 ∘ , with statistically distributed, unequal but likely comparable populations. A small external field applied within the ab-plane will then favor the ZZ domain(s) for which the Néel vector is most nearly perpendicular to B ab , in order to maximize the susceptibility. Such a scenario has been discussed for a similar threefold degenerate domain structure in the uniaxial antiferromagnet NiO 85 . This is perfectly in line with the initial changes in the MO response, indicative of the metastability of the domain population.
During the process of field-induced domain repopulation the volume of the three distinct ZZ domains changes, which is immediately reflected in changes of θ MLD 72 . However, if this domain repopulation were the only mechanism lifting the ZZ degeneracy, a further increase of the magnetic field strength should only affect the amplitude of the order parameter L, while its orientation should remain unchanged for small enough field strengths. It follows that no additional abrupt and anisotropic changes in θ MLD would be expected for just a gradual repopulation of the ZZ domains. In this regard, the key information provided by the MLD response is encoded in steps 2 and 3. In step 2 the system approaches a steady state in an interval of the applied field from ≈0.7 to 1.6 T, that is, the initial and field-orientation-independent domain reorientation is terminated. Hence, the clear difference at 1.6 T for the different orientations of the magnetic field gives experimental evidence that it has to be caused by a different mechanism than the already terminated domain repopulation. At 1.6 T an abrupt change appears for the field perpendicular to the bond, whereas no clear change is observable for the field along the bond. Hence, there is a competition between the externally applied field and the anisotropic intra-layer interactions stabilizing the orientation of the Néel vector in some domains, until the external field overcomes the anisotropy energy. This competition can be modeled based on a modified spin-flop theory and allows us to derive a value for the anisotropy constant K a (Supplementary Information Note 5). We start with the free energy of the system in the ground state, which contains the information about the spin-pointing direction and can be generally expressed as 77,86,87

F(α) = −(μ 0 H 2 /2)[χ − cos 2 α + χ + sin 2 α] + K a sin 2 (α − Ψ). (3)

Here μ 0 is the vacuum permeability, and χ + and χ − correspond to the extrema of the in-plane oscillating susceptibility χ ∥ (ϕ) of α-RuCl 3 , where χ + and χ − occur for the magnetic field applied parallel or perpendicular to one of the Ru-Ru bond directions, respectively 59 . The values of χ + and χ − at a temperature of 2 K have been reported in ref. 59 . The angle between the pseudospins and the magnetic field is parametrized by α, and Ψ is the angle between the applied field and the magnetic easy axis, which is perpendicular to one of the stretched Ru-Ru bonds. Equation (3) was numerically evaluated for every single ZZ domain and field orientation to find the angle α at fixed Ψ which minimizes the energy for each field value H. For this, α was varied between 0 and π for each field value H. For the field oriented along the stretched bond, α = π/2 does not change with increasing field. For anisotropy-stabilized domains at a finite angle with the applied field, we find a continuous change in α(H), which converges towards α = π/2 for increasing field strength. Only for the field applied perpendicular to the stretched bond is there a discontinuity in α(H). In addition, we consider the initial field-induced changes in the ZZ domain volumes on a phenomenological level, similar to ref. 77 . More information regarding the modeling of the MO response can be found in Supplementary Information Note 5.
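To illustrate the minimization just described, the following sketch evaluates Eq. (3) on a grid of α for each field value and tracks the energy-minimizing angle. All parameter values (susceptibilities, anisotropy constant, field range) are hypothetical placeholders; the authors' actual implementation is described in Supplementary Information Note 5.

```python
import numpy as np

mu0 = 4e-7 * np.pi                  # vacuum permeability (SI)
chi_plus, chi_minus = 0.50, 0.35    # hypothetical susceptibility extrema (dimensionless)
K_a = 1.0                           # hypothetical anisotropy constant (J/m^3)

def free_energy(alpha, H, psi):
    """Eq. (3): Zeeman gain of the transverse susceptibility vs. bond anisotropy."""
    zeeman = -0.5 * mu0 * H**2 * (chi_minus * np.cos(alpha)**2
                                  + chi_plus * np.sin(alpha)**2)
    return zeeman + K_a * np.sin(alpha - psi)**2

alphas = np.linspace(0.0, np.pi, 1801)      # grid of spin-field angles
for psi in (0.0, np.pi / 3, np.pi / 2):     # field vs. easy-axis angle of each ZZ domain
    # track the minimizing alpha as the field is ramped up (H in A/m, hypothetical range)
    alpha_min = [alphas[np.argmin(free_energy(alphas, H, psi))]
                 for H in np.linspace(0.0, 5e4, 200)]
    jump = np.max(np.abs(np.diff(alpha_min)))
    print(f"psi = {np.degrees(psi):5.1f} deg: largest step in alpha(H) = {jump:.3f} rad")
```

With these placeholder numbers, the run reproduces the qualitative behavior stated above: α(H) is constant at π/2 for Ψ = π/2, rotates continuously for intermediate Ψ, and jumps discontinuously (the spin flop) only for Ψ = 0.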
The model satisfactorily reproduces the field evolution of the MO response for both in-plane field directions (Fig. 6d). Further, the anisotropy constant K a can be estimated from the experimentally determined spin-flop field H sf = 1.6 T applied within the ab-plane: equating the Zeeman energy gain with the anisotropy energy in Eq. (3) for Ψ = 0 yields K a = (μ 0 H sf 2 /2)(χ + − χ − ). The derived value for the effective anisotropy field K a /μ 0 is 0.0027 T. The interpretation as a spin-flop transition is supported by recently reported in-plane susceptibility measurements, where a two-fold oscillation has been observed for fields below 2 T, pointing towards the intrinsic in-plane bond-anisotropy with a π-periodicity, as opposed to the characteristic sixfold oscillation emerging for larger field strengths, which indicates the field-induced reorientation of the order parameter and hence of the spin-pointing direction in α-RuCl 3 44,59 . However, this produces only small changes in the magnetization (cf. Fig. 5a), although the orientation of L changes remarkably. This reasonably explains why a change in the low-field susceptibility has been overlooked in previously reported magnetization data, which lack the sensitivity to small changes in L, although some changes at around 1.5 T in dM/dB have been found previously ( 39 , SI of ref. 43 ).

Figure 6a, b summarizes the experimental results. Independent of the in-plane field orientation there is an initial change in the ZZ domain population in regime I. According to the direction of the applied field, some of the initial domains become more populated at the expense of the others, but the general evolution of the rotation θ MLD in small fields is similar for both field orientations. The experimental data show that this process terminates at a field strength of ≈0.7 T. For the field range of 0.7-1.6 T, i.e., step 2, there are already energetically optimal field-aligned domains, although some anisotropy-stabilized domains persist for the field applied along the {1, 1, 0} direction. Then, at the transition into regime II at step 3, the external field wins the competition over the bond anisotropy for the magnetic field applied perpendicular to the Ru-Ru bonds, such that the pseudospins in all domains rotate perpendicular to the field direction, pushing α-RuCl 3 into a single polarized spin-flopped state. A further increase in the field strength, i.e., moving closer to the critical line of the phase transition towards the quantum paramagnetic state, results in a natural decrease of the observed rotation, i.e., of the magnitude of the order parameter. Before entering the quantum paramagnetic state above 7 T another anomaly is seen in the MO data. In contrast to the low-field transition, this anomaly at 6.2 T in dθ/dB is not immune to temperature changes and fades systematically for increasing temperatures from 2.8 to 3.5 K for fields applied perpendicular to the bond (cf. Fig. 6c). Its dependence on the in-plane field angle is illustrated by the fact that for B ab ∥{1, 0, 0} no signature of an intermediate field-induced transition in the vicinity of 6.2 T can be identified, since dθ/dB is almost constant even at the lowest temperature of 2.8 K. The fact that this intermediate field-induced transition strictly depends on the orientation of B ab with respect to the crystallographic axes conveys the emergence of an anisotropy-related novel magnetically ordered phase. This phase was named ZZ2 and its origin has been discussed in terms of inter-layer anisotropies very recently 44 .
It has been understood in terms of a field-induced inhomogeneous spin canting between different threefold and sixfold ZZ stackings for B ab ∥{1, 1, 0}, while the canting is homogeneous for B ab ∥{1, 0, 0} 44 . In addition, it has been discussed in terms of an inverse melting phenomenon 54 . It is worth noting that the MO response is sensitive to this transition and that its first-order character fits the observed large MO hysteresis. Nevertheless, neither a microscopic origin of this field-induced transition nor an estimate of the underlying inter-layer exchange interactions can be given based on our observations, which calls for future MO measurements at elevated magnetic fields.
In summary, we have reported magneto-optical spectroscopy measurements on the Kitaev spin liquid candidate material α-RuCl 3 . Our study establishes magneto-optical spectroscopy as a versatile experimental tool to elucidate exotic phases of quantum materials. We observed two intermediate metamagnetic transitions at ≈1.6 T and 6.2 T for the magnetic field applied perpendicular to the Ru-Ru bond, while neither of these transitions appears for the magnetic field aligned parallel to the Ru-Ru bonds. We clarified the nature of the low-field transition and discussed it in terms of a spin-flop transition, where the external field overcomes the anisotropy energy to align the Néel vector nearly orthogonal to the field direction. The effective anisotropy field has been determined to be 0.0027 T. Further, we confirmed the emergence of the previously observed high-field intermediate phase. Our results point out the importance of intra- and inter-layer bond anisotropies and the necessity to include them in future theoretical calculations. Besides that, we highlighted the importance of the in-plane field angle, which calls for future MO studies in the high-field proximate spin liquid phase of α-RuCl 3 . The spin-flop transition at a moderate field strength motivates further studies on α-RuCl 3 , as the reorientation of the Néel vector opens pathways to vary the magnetoresistance almost continuously. Here, measurements of the anisotropic magnetoresistance, which, like MLD and MLB, is even in the magnetization 63 , should be considered for future experiments to access precise control of the Néel vector 88 . This could in principle lead to a future implementation of α-RuCl 3 in antiferromagnetic spintronics 89 .
Sample growth, characterization, and orientation
High-quality α-RuCl 3 crystals were prepared by vacuum sublimation 37 . The different samples of the same batch were characterized by SQUID magnetometry, showing a sharp transition at around T N ≈ 7 K in zero applied field, corresponding to the ABC stacking order 35,56 . No additional magnetic transitions above T N are observed; such transitions have been related to a different stacking order ABAB with a two-layer periodicity or to strain-introduced stacking faults due to extensive handling or deformation of the crystals. This bulk technique can only provide a first indication of sample quality for an optics study. Since cleaving or polishing introduces strain, we refrained from any sample treatment and used an as-grown α-RuCl 3 sample. The temperature dependence of the equilibrium MO response shown in Fig. 1d shows a clear phase transition at T N ≈ 7 K, confirming good sample quality (Supplementary Information Note 1).
The different α-RuCl 3 samples have been oriented via a standard X-ray Laue-diffractometer at room temperature.
Experimental procedures and measurement techniques
For the study of the MO response of α-RuCl 3 , the high-quality as-grown samples were placed in a helium-cooled cryostat (Oxford Spectromag) reaching temperatures down to 2.2 K, inside the coils of a superconducting magnet with magnetic field strengths up to ±7 T. Figure 1a illustrates the experimental setup. The magnetic field was applied along different crystallographic directions within the crystallographic ab-plane, i.e., within the honeycomb layers. The polarization of the incident light was rotated by a half-wave plate and, after initial tests, set to the purely linear s-polarized setting, i.e., θ 0 = 0, for the maximum signal in zero field. The measurements of the second-order MO response were carried out in the so-called Voigt geometry 67 at near-normal incidence, such that the light wave propagation vector k was perpendicular to the honeycomb layer planes (k⊥ab) and to the magnetic field vector B ab (Fig. 1a). The MO measurements were performed using a continuous-wave laser with a wavelength of 532 nm; the laser was focused to a spot size of 200 × 200 μm on the sample with the power set to 50 μW. Detection of the MO response Θ = θ + iη was done using a polarization modulation technique, in which the relative phase of two orthogonal linear polarization components passing through a photoelastic modulator (PEM) is modulated 90 . The changes in rotation θ and ellipticity η were probed simultaneously (Supplementary Information Note 7).
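For orientation, the simultaneous extraction of θ and η behind such a PEM follows a standard polarimetry relation (a textbook result, not specific to this work): for small θ and η and a PEM retardation amplitude A oscillating at frequency ω, the detected intensity is approximately

```latex
I(t) \approx I_0 \left[ 1 + 2\eta\, J_1(A)\, \sin(\omega t) + 2\theta\, J_2(A)\, \cos(2\omega t) \right],
```

where J 1 and J 2 are Bessel functions of the first kind, so that lock-in demodulation at ω and 2ω yields the ellipticity and the rotation, respectively.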
DATA AVAILABILITY
The data supporting the findings of this study are available from the corresponding authors on reasonable request.
A Distributed Gamified System Based on Automatic Assessment of Physical Exercises to Promote Remote Physical Rehabilitation
I. INTRODUCTION
In Europe alone, physiotherapy for the 179 million European citizens living with a neurological condition is estimated to cost healthcare systems €798 billion [1]. Many require physical rehabilitation on a daily basis. In Spain, by way of example, the average cost of treating a single stroke patient is estimated at €27,711 a year [2]. Stroke patients suffer from a number of problems, including physical problems such as weakness and paralysis of one or more limbs, spasticity, instability, and changes in sensation. Weakness and paralysis are managed through physiotherapy. The incidence of stroke is increasing worldwide, particularly in low- and middle-income countries [3]. There is no doubt an unmet clinical need to be addressed [4].
Physical rehabilitation helps regain mobility and muscle control, improves balance and ultimately enhances the quality of life of patients. Rehabilitation requires that the patient performs prescribed exercises repeatedly. For example, a stroke patient will initially be assisted during face-to-face sessions in clinic, guided by a physiotherapist, and instructed to continue exercising at home. Unfortunately, patients do not always receive the recommended amount of therapy for a variety of reasons, including costs, availability of therapists in remote areas, or non-adherence to the prescribed therapy due to lack of motivation [4].
Technological solutions that support home rehabilitation go some way in addressing these problems, but there are important challenges relating to acceptance and adoption that are difficult to tackle. One problem is the patient's reluctance to use technology. The reasons for this are varied and include digital literacy, the perceived intrusiveness of video systems, and the inconvenience of wearable devices. Another major challenge is to demonstrate the benefits of using such technologies to patients, thereby improving physiotherapy compliance. This paper describes the use of gamification to improve compliance.
This article presents a system designed to support remote physical rehabilitation (home rehabilitation) which is capable of automatically recognizing, comparing and evaluating exercises in real time during a rehabilitation session. Exercise recognition is achieved by comparing the limb motion with predefined reference exercises. This comparison uses a variant of the DTW (Dynamic Time Warping) algorithm [5]. In computational terms, an exercise can be described by the motion of body parts (e.g. arm) and joints (e.g. elbow) that can be tracked; the term skeleton tracking is used. As such, each exercise can be described by a time series, allowing the limb/joint motion to be compared either to a reference motion for the purpose of exercise recognition or against the patient's previous attempts to detect progress (or lack thereof).
The system is low-cost and modular and will operate with any limb/joint tracking device. The interface is web-based; the user interacts via a browser. The system is scalable: it allows the integration of new movements and the definition of new physiotherapy routines, and it supports remote assignment, supervision, and monitoring by the physiotherapist. A preliminary evaluation with rehabilitation routines is conducted with multiple users (n = 27).
The system is designed around the Microsoft Kinect device, which provides skeleton tracking without the use of markers or other wearables; the Kinect output is joint trajectory information. Although a camera-based system such as the Kinect may be considered to infringe on privacy, it provides much richer information than a less intrusive system based on devices such as IMUs and, unlike the latter, has no worn component/sensor, making it easier to use.
Motivation is a key issue in adherence to rehabilitation. The system described in this paper addresses motivation through gamification: it motivates the patient to participate and offers visual feedback to the patient on their performance. The strategy employed for maintaining or increasing motivation is to provide the patient with a visual indication of performance and progress through a graphical user interface. The progress shown is a function of the DTW comparison algorithm output. It is important to reward any attempt.
The rest of the paper is structured as follows. Section II reviews the main research topics relevant to this work. Section III discusses the proposed architecture, while section IV details the main characteristics of the module responsible for comparing exercises, as well as the technical details of the automatic exercise recognition module. Subsequently, in section V the system is evaluated in terms of performance, usability and ease of use. Finally, section VI summarizes the main contributions of this work and introduces possible lines of future work.
II. RELATED WORK AND BACKGROUND
There are a number of tools addressing clinical or sports rehabilitation; this work focuses on clinical needs. Typically, the exercise performed by a patient is monitored and assessed employing computational methods [6] using either physical (worn) sensors for tracking the joints [7] or computer vision techniques [8]. Systems in the latter group often use the Kinect device [9]. The Kinect is a low-cost camera-based system; its effectiveness has been shown in the field of physical rehabilitation [10], [11].
The Kinect device, originally developed for entertainment, has received much attention and is now the most widely used device in technology-supported rehabilitation [11]. The Kinect has been the subject of numerous research works in the field of rehabilitation [9]. The reliability and validity of measurements are key: [12] investigated the accuracy of the Kinect joint tracking, thereby producing a tool to check the suitability of the Kinect for any given health application. Along the same lines, [10] demonstrated the validity of the Kinect for posture evaluation by comparing it to marker-based motion tracking systems. Several examples of using the Kinect in clinical settings have been reported. [13] monitor patients with psychomotor problems, such as body scheme disorders and left-right confusion; the system was evaluated with a group of 15 users with promising results. [14] investigated the feasibility of using a Kinect for unsupervised rehabilitation by analyzing the hand movement of a stroke patient while moving a virtual square inside a defined area. Similarly, [15] investigate the use of the Kinect for rehabilitation in stroke patients, but focusing on balance; the system was evaluated with 13 users who demonstrated some improvement following the sessions.
Recent years have seen a steady increase in the use of, and types of, technologies employed in rehabilitation [16]. According to [17], technology can help stroke patients regain some function during the recovery phase. Among the most common devices aimed at injury recovery are orthoses, exoskeletons, and other stationary equipment [18]. Stationary equipment requires attendance at a clinic, which may limit the frequency of sessions and hence recovery. Aside from the Kinect, other low-cost entertainment devices have been re-purposed for rehabilitation. In [19] a 14-day study using the Nintendo Wii TM is conducted with 41 patients to demonstrate the effectiveness of the device as a rehabilitation tool. Similarly, the PlayStation Move TM is used in [20] for burn rehabilitation. In [21] the Leap Motion TM device is used in conjunction with a video game to obtain information about a patient's hand movement to monitor the performance of the exercises.
When monitoring the patient as they perform an exercise, the goal is to determine how well they are performing with respect to a reference, which could be the patient on a previous day. The tracking of a limb in motion, in other words the joint position trajectory, can be expressed as a time series. Comparison of time series can be performed automatically without having to previously define a set of rules. [22] use the DTW algorithm to compare the joint trajectory during the execution of the exercises against a previously recorded reference. [23] compare joint trajectories using Hidden Markov Models (HMMs) [24]. Both approaches are widely used to compare movements over time, although DTW-based implementations in general provide better results than HMM-based analogues [25], depending on the nature and quantity of the data to be processed [26].
Another advantage of DTW is that identifying and classifying movements can be achieved automatically without the need for explicitly training the system, as described in [27]. In that work, exercise recognition is divided into three stages using the DTW algorithm: the patient's initial and final postures are compared, together with the angular trajectories of the extremities involved in the exercise. In [28], a variant of the DTW algorithm for comparing incomplete time series, called Open-End DTW (OE-DTW), is presented. The technique was successfully validated through the classification of exercises used during the rehabilitation of stroke survivors, irrespective of whether the exercise was performed correctly or incorrectly. [29] propose an exercise classifier based on fuzzy logic to solve the classic problem of overlapping body parts; this is an example of fuzzy pattern recognition [30]. Other authors combine the use of fuzzy logic, to deal with the uncertainty and vagueness of the data obtained by the tracking system, with the DTW algorithm to compare the exercises performed by a patient with the reference exercise [31].
III. ARCHITECTURE

A. OVERVIEW
A scalable architecture, shown in Figure 1, is proposed to remotely monitor patients as they perform an exercise. Patient movement is analyzed automatically and feedback is immediate following completion of the exercise. The exercises are recognized automatically during their performance; thus, patients can perform any prescribed exercise without informing the system. Finally, the results obtained are sent to the therapist for the assessment of the patients.
The comparison mechanism is achieved by applying the DTW algorithm to the exercise motion as performed by the patient and a reference, e.g., the patient on a previous day or the physiotherapist. The solution includes different gamification techniques to enhance patient engagement and motivation, thereby improving adherence.
Regarding the classification of exercises performed by the patient, the system is able to recognize them automatically. In other words, there is no need for the patient to explicitly select which exercise will be done next; instead, the system classifies the movement made by the patient according to the most similar exercise existing in the database. This feature, which may help patients with cognitive problems that affect speech to interact with the system in a more natural way, is currently at an experimental stage and has to be explicitly enabled by the user through the system settings. More details are provided in section IV-B.
B. ARCHITECTURAL DESIGN
The system was designed and implemented for ease of scalability and modularity. To this end, a multi-layer architecture was proposed with three modules having well-defined functionality to meet the key requirements of the system.
Given the requirement for remote monitoring, the architecture has to offer full network functionality. The network architecture implements two distinct roles, one for the patient and one for the clinician. The system interface is web-based, so that both patient and clinician interact with the system via a web browser. The interface shows actions specific to the role, that is, patient or clinician. The clinician's interface allows i) the creation and recording of exercises, ii) assigning (prescribing) exercises to a patient, and iii) monitoring their progress (viewing the patient's assessment). The patient's interface allows i) viewing prescribed exercises, ii) viewing their own results, and iii) viewing their progress. The patient's view incorporates gamification elements to enhance motivation.
The implementation consists of three independent but connected modules. Information flow between modules is multilayered and bidirectional. The implementation is described below.
• Capture module communicates with the capture device to retrieve RGB images and skeleton (joint tracking) information for further analysis. The module also detects voice commands, sending them to the processing module.
• Processing module processes the information received from the capture module. Its responsibilities include storing the information in the defined exchange format, automatically comparing exercises using the appropriate algorithm, classifying the exercise and executing the detected voice commands.
• Display module supports user interaction and provides visual feedback on the patient's performance using gamification techniques.
To share information between the different modules, network sockets are used as the inter-process communication (IPC) mechanism. This is an advantage when scaling the system horizontally, where each module can run on a different working node in the case of low-performance equipment. The data-flow diagram is shown in Figure 2, which shows the complete flow of information that takes place in the system.
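As an illustration of this design (a minimal sketch only; the host name, port, and JSON payload below are hypothetical, since the paper does not publish its wire format), a capture-side sender could stream serialized skeleton frames to the processing module over a TCP socket:

```python
import json
import socket

PROCESSING_ADDR = ("processing-node", 9000)  # hypothetical host/port of the processing module

def send_frame(sock, joints):
    """Serialize one skeleton frame (joint name -> (x, y, z)) and send it,
    newline-delimited so the receiver can split the stream into messages."""
    payload = json.dumps({"joints": joints}) + "\n"
    sock.sendall(payload.encode("utf-8"))

if __name__ == "__main__":
    # fails unless a processing module is listening; shown here for the message flow only
    with socket.create_connection(PROCESSING_ADDR) as sock:
        send_frame(sock, {"right_elbow": (0.12, 1.05, 2.30),
                          "right_wrist": (0.18, 0.86, 2.25)})
```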
In addition to network sockets, the display module also makes use of WebSockets technology to communicate with the web application and to transmit information to connected clients. This application provides the graphical interface and the common user entry point to the system.
The capture module implements motion capture, retrieving color images and skeleton and joint tracking information from the Kinect. When the capture module identifies a human skeleton, the position and rotation of each of its joints (3-D pose estimation) are recovered. This information is serialized and sent to the processing module, where it is stored in the defined exchange format.
Both the clinician and patient roles can interact with and use the motion capture functionality. The clinician captures an exercise using the video recording function. This exercise serves as a reference to demonstrate to the patient or to compare against the patient's exercises. The exercise is stored in BioVision Hierarchy (BVH) format [32], which maintains the 3-D transformation (position and rotation) of each joint hierarchically over time, in order to overlay the captured joints over the recorded video.
The hierarchy of joints to be stored depends directly on the exercise recorded. Thus, for exercises involving, for example, upper body movements, only those joints are stored. The selection of joints can be established by the therapist prior to recording the reference exercise.
Along with the BVH file, a new JSON file is also generated for the sole purpose of storing the positions of each joint, following the same sequence defined in the BVH file. This file serves as input to the exercise comparison algorithm, which will provide users with feedback on the correct execution of the exercises.
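The paper does not specify the JSON schema, so the following sketch only illustrates the idea of a per-joint position sequence mirroring the BVH frame order; every field name here is a hypothetical placeholder.

```python
import json

# Hypothetical per-frame joint positions, in the same order as the BVH frames.
frames = [
    {"right_shoulder": [0.10, 1.40, 2.30], "right_elbow": [0.12, 1.05, 2.30]},
    {"right_shoulder": [0.10, 1.41, 2.30], "right_elbow": [0.14, 1.10, 2.28]},
]

with open("exercise_positions.json", "w") as f:
    json.dump({"frame_rate": 30, "frames": frames}, f, indent=2)
```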
IV. REMOTE REHABILITATION

A. EXERCISE COMPARISON
A variant of the DTW algorithm [5], the FastDTW version [33], which offers linear time complexity, is employed to automatically analyze and evaluate the exercises performed by the patient.
The algorithm provides the optimal alignment of two time sequences by calculating a cost matrix obtained from the differences between data points at pairs of indices in the two sequences. The algorithm removes the time dimension, thus providing results independent of the time difference between the two sequences. This makes it especially useful for comparing time series in applications such as speech recognition and audio synchronization.
The exercise analysis process involves comparing the exercise performed by the patient, denoted r, with the reference, denoted m.
An exercise movement consists of a set of time-stamped points, one for each of the joints involved in the movement (see Figure 3). Each set is a temporal sequence of 3-tuples (x, y, z) ∈ R 3 , which indicate the position of the joint associated with the series at a given instant, obtained from the depth sensor camera. The values on each axis represent the position of the joint over time on that axis and can be viewed as a trajectory or curve. In this way the problem of movement comparison can be seen as a time series comparison problem. The DTW algorithm allows two time series to be compared by measuring the similarity between two temporal sequences which may vary in speed, i.e., irrespective of the time difference between the two sequences, yielding a numerical result dist after the comparison of both series such that dist ∈ Q ≥0 . Figure 4 shows two curves generated along the Y-axis for the right elbow joint (point 10) during an exercise in which the right arm is raised. The dashed curve corresponds to the movement performed by the therapist (reference exercise), while the continuous curve corresponds to that of the patient (exercise to be compared). The alignment of both curves obtained after the application of the DTW algorithm (dist = 5.2) is represented by the segments joining the curves. This value indicates the distance between the two curves: the closer this value is to 0, the greater the similarity between the two exercises, and the less significant the differences between the two curves. This dist value is calculated by the algorithm using the Euclidean distance between the curves as a metric. The comparison, at exercise level, is made by comparing the curves in the X, Y and Z axes of the movements made by the patient r and the therapist movement m at each of the points of the joint scheme (points 1 to 20) involved in the movement, applying the DTW algorithm to each joint independently. For each series, that is, for each movement of a joint i in the exercise, we obtain a distance that considers the distances in the axes X, Y and Z of that joint; this distance is denoted d DTW i .
Finally, the overall distance between the exercise performed by the patient r and the reference movement of the therapist m is calculated as the arithmetic mean of the distances obtained for each of the joints involved in the exercise,

D(r, m) = (1/j) Σ i=1..j d DTW i ,

with j the total number of joints.
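A minimal sketch of this per-joint comparison using the open-source Python fastdtw package (pip install fastdtw), which implements FastDTW; the data layout (a dict mapping joint names to (N, 3) arrays of positions) is our hypothetical stand-in for the system's exchange format:

```python
import numpy as np
from scipy.spatial.distance import euclidean
from fastdtw import fastdtw

def exercise_distance(patient, reference):
    """Mean of the per-joint DTW distances, D(r, m) = (1/j) * sum_i d_DTW_i.
    `patient` and `reference` map joint names to (N, 3) arrays of positions."""
    per_joint = []
    for joint, ref_traj in reference.items():
        # 3-D points along each trajectory are compared with the Euclidean metric
        d, _path = fastdtw(np.asarray(patient[joint]), np.asarray(ref_traj),
                           dist=euclidean)
        per_joint.append(d)
    return sum(per_joint) / len(per_joint)
```

The returned value plays the role of dist above: the closer it is to 0, the more similar the two exercises.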
The algorithm does not establish an upper limit defining a confidence interval for interpreting the results obtained. To overcome this limitation, a calibration phase is integrated to normalize the results. In this phase, the therapist has to perform a calibration exercise correctly and then perform it incorrectly (e.g., by remaining still). In this way, a lower limit close to 0 is obtained after performing the exercise correctly, and a higher one after performing it incorrectly. Once this threshold is defined, it is divided into equal intervals and each of them is associated with the corresponding feedback to be provided to the patient. Thus, if the distances obtained by applying the DTW algorithm to the calibration exercise are d r , for the exercise performed correctly, and d w , for the exercise performed incorrectly, the confidence intervals are calculated by obtaining a margin of error, e, that serves to relax the interpretation of the results obtained by the patient when performing the exercises. From this margin of error, the lower and upper limits of the intervals are then calculated from the addition of the correct and incorrect distances divided by the number of intervals we want to define (three in the case of this work), as shown in equation (1).
Thus, the first interval indicates a correct execution of the exercise, the second an acceptable execution, and the third an incorrect execution. In this way, the feedback provided to patients is discretized according to how they perform the exercises. This way of defining the intervals is flexible enough to prevent scores obtained by the DTW algorithm that lie close to the interval limits from being misinterpreted. Once the calibration is complete, the intervals obtained are sent to the patient, who can then proceed to perform the assigned exercises, for which the system will provide an appropriate evaluation based on performance.
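Since equation (1) is not reproduced here, the following sketch implements one plausible reading of the described scheme: three equal-width intervals spanning the calibrated range [d r , d w ], relaxed by a margin of error e, mapped to the three feedback levels (and to the 1-3 stars used for gamification later in the paper). The exact formula of the original equation (1) may differ.

```python
def build_intervals(d_r, d_w, e=0.1, n=3):
    """Split the calibrated range [d_r, d_w] into n equal intervals,
    widened by a relative margin of error e (one plausible reading of eq. (1))."""
    lo, hi = d_r * (1 - e), d_w * (1 + e)
    width = (hi - lo) / n
    return [(lo + k * width, lo + (k + 1) * width) for k in range(n)]

def stars(dist, intervals):
    """Map a DTW distance to 3 (correct), 2 (acceptable), or 1 (incorrect) stars."""
    for rank, (lo, hi) in zip((3, 2, 1), intervals):
        if dist <= hi:
            return rank
    return 1  # worse than the calibrated 'incorrect' distance

intervals = build_intervals(d_r=2.0, d_w=14.0)  # hypothetical calibration distances
print(intervals, stars(5.2, intervals))         # dist = 5.2, as in the Figure 4 example
```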
In addition, the joint positions must be normalized to avoid possible computational errors caused by a change of location of the patient or therapist with respect to the camera. To do this, the positions are recalculated with respect to one of the joints of the individual that does not affect the movement of the exercise; by default, the joint at the base of the neck. This allows both calibration and reference exercises to be recorded remotely by the therapist for the patient to repeat, making it possible to compare the two results.
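A sketch of this normalization (joint names hypothetical): each frame is re-expressed relative to the chosen anchor joint, here the base of the neck, so that the trajectories become invariant to where the subject stands.

```python
import numpy as np

def normalize_frames(frames, anchor="neck_base"):
    """Re-express every joint position relative to the anchor joint, frame by frame."""
    out = []
    for frame in frames:  # frame: dict mapping joint name -> (x, y, z)
        origin = np.asarray(frame[anchor])
        out.append({j: (np.asarray(p) - origin).tolist() for j, p in frame.items()})
    return out
```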
B. EXERCISE RECOGNITION
Currently, in the proposed system, the patient must select the exercise to perform, by voice command or through the user interface. Voice or touch interfaces may not be suitable for patients with more severe physical or cognitive disabilities. A mechanism that eliminates the need for the patient to select an exercise is therefore required.
The standard DTW algorithm operates on finite time series, in other words on a completed exercise. The Open-End DTW (OE-DTW) of Tormene et al. [28], a variant of DTW, allows comparison of incomplete time series: it provides the percentage of coincidence between two curves at every time instant of the series. This feature allows the comparison to begin as soon as movement is detected and to continue thereafter, so that it is not necessary to manually select an exercise. In addition, the OE-DTW algorithm can be used to identify the exercise attempted by the patient by comparing the incoming time series to all predefined references. As the exercise progresses, more data is collected to establish a more informed comparison.
In normal operation, the patient initiates the movement of the exercise he/she wishes to perform and the system detects the exercise being performed. To do this, the system periodically compares the positions of the joints with those stored for the existing reference exercises. When an optimal candidate is found, the exercise corresponding to that candidate is marked as definitive, and on completing the exercise, the patient is informed of his/her performance. The optimal candidate is the reference exercise that minimizes the distance between the joint trajectories of the patient and the reference, as indicated by the OE-DTW algorithm.
Formally, suppose the system has stored n rehabilitation routines performed by the therapist, $M = \{m_1, m_2, \dots, m_n\}$, and that the patient performs an exercise r. The problem is to find the model $m_k \in M$ that minimizes $D(r, m_k)$. An exercise can be seen as a set of series $\{S_i\}$, where each $S_i$ is the series containing the trajectory of joint i involved in the exercise, as identified by the capture device (see Figure 3). The series $S_i$ consists of the positions $e_{t_j}$ of joint i over time, i.e. $e_{t_j} = (x_{t_j}, y_{t_j}, z_{t_j})$, where $x_{t_j}, y_{t_j}, z_{t_j} \in \mathbb{R}$ represent the position of the joint along the X, Y and Z axes at the instant $t_j$, respectively.
Thus, the problem is to compare the partial joint trajectories of the incomplete exercise r performed by the patient with those of the references (M) up to the instant $t_j$. The resulting OE-DTW distance between exercise r and a model $m_k$ at an instant $t_j$ is calculated as

$$D^{t_j}(r, m_k) = \sum_{i=1}^{J} d^{t_j}_{DTW_i}(r, m_k),$$

where $d^{t_j}_{DTW_i}$ is the distance between r and $m_k$ at joint i, accounting for the distances along the X, Y and Z axes up to the time instant $t_j$, and J is the number of joints involved in the exercise.
At each time instant $t_j$, the system selects as the model for the movement executed by the patient the model $m_k$ that minimizes the distance, that is, $m_k = \arg\min_{k} D^{t_j}(r, m_k)$. As an example, consider a use case in which the patient has to perform repetitions of up to three different exercises (e.g. greeting with the right arm, raising the right arm, and advancing the right arm forward); the implementation has to find the best candidate while the patient performs the exercise over time. Therefore, when the user begins to perform a repetition of the exercise in which she/he has to wave the right arm, a comparison process is launched using the OE-DTW algorithm to try to classify the movement from the positions (x, y, z) of each of the joints detected up to that time instant. Several comparisons are then made in parallel with each of the reference exercises defined above. The minimum distance obtained after applying the algorithm to the three reference exercises indicates the exercise that has been recognized.
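A compact sketch of how OE-DTW can drive this recognition loop is given below. This is our own illustrative implementation for a single joint (the full system sums the per-joint distances, and the paper itself relies on an existing R implementation [28]); all names are hypothetical:

```python
import numpy as np

def oe_dtw(query: np.ndarray, ref: np.ndarray) -> float:
    """Open-End DTW (after Tormene et al.): align the full query
    (the movement observed so far) against *any prefix* of the reference,
    returning the best path-length-normalized alignment cost."""
    n, m = len(query), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(query[i - 1] - ref[j - 1])  # 3D joint distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Open end: the query may stop anywhere along the reference.
    return min(D[n, j] / (n + j) for j in range(1, m + 1))

def recognize(partial: np.ndarray, references: dict) -> str:
    """Return the name of the reference exercise closest to the partial one."""
    return min(references, key=lambda name: oe_dtw(partial, references[name]))
```

Called periodically as new frames arrive, `recognize` implements the argmin selection described above, with each call seeing a longer `partial` trajectory and therefore a more informed comparison.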
C. MOTIVATION AND GAMIFICATION
Patient motivation is essential for a positive outcome in rehabilitation in general, and more so in remote rehabilitation. Gamification has been shown to play an important role in the positive psychological effects of engagement in rehabilitation [34]. In this context, elements of gamification and serious games can help to engage the patient, especially when performing repetitive exercises over a long period of time [35].
The system does not require the patients to attend the rehabilitation center physically, so it must be capable of motivating them to ensure compliance with the prescribed physiotherapy. For this, the system provides different gamification mechanisms to maintain the patient's motivation, as shown in Figure 1: • Feedback based on stars: the result of assessing the patient's performance is a DTW distance. The star feedback system translates this number into a meaningful and understandable format for the patient. The number of stars the patient receives (i.e. minimum 1 and maximum 3) is determined by the DTW distance obtained from applying the algorithm when comparing the exercise performed by the patient with the reference. Thus, the number of stars is related to the three intervals defined during the calibration process. Patients are encouraged by the system to obtain the maximum number of stars per exercise.
• Scores, high scores and multipliers: the scores are directly related to the number of stars obtained per repetition. The score is computed from three quantities: s_base, a constant base score assigned to each interval resulting from calibration (i.e. 100, 200 and 500); d_DTW, the DTW distance scaled to the interval [0.1, 1.1]; and m, a multiplier that initially takes the value of 1 and increases to a maximum of 4 with each successful repetition, i.e. when the obtained distance falls within the second or third calibration interval. Although the score is a numerical representation of the obtained number of stars, registering high scores can motivate patients to make an extra effort when performing the same rehabilitation exercises day after day.
• Experience bar and level: to provide a sense of progression, the system incorporates a set of levels that the patient can reach by filling an experience bar. This gamification technique is oriented towards maintaining the patient's engagement over a long period of time. The bar is filled at the end of a rehabilitation session with the sum of the scores obtained during the routine; the amount of experience x_i required to reach level i is derived from a constant base amount of experience x_base (x_base = 1000), the target level to reach, t, and a constant k (k = 1.5) used to exponentially increase the experience needed to reach the next levels. The variable x_base is only used to reach level 2; at subsequent levels this variable is adjusted automatically based on the total score obtained by the patient during the rehabilitation session, in order to adapt the difficulty to their needs. The levels represent the main objective that the patient must achieve, as their attainment implies that the rehabilitation routines are being performed. In the same way, the physiotherapist can define achievements or rewards that the patient will unlock when reaching certain levels. These achievements are a useful indicator for the patient regarding progress in rehabilitation. A sketch of both mechanisms is given after this list.
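The exact score and leveling equations are omitted in the text, so the sketch below encodes one plausible reading; the constants (s_base values, the [0.1, 1.1] scaling, x_base = 1000, k = 1.5) are from the paper, while the functional forms are our assumptions:

```python
def score(interval_idx: int, d_dtw: float, multiplier: int) -> float:
    """ASSUMED form: s_base * m / d_dtw, so that smaller DTW distances
    (better executions) yield higher scores. s_base per calibration
    interval and the [0.1, 1.1] scaling of d_dtw are from the text."""
    s_base = {0: 100, 1: 200, 2: 500}[interval_idx]
    return s_base * multiplier / d_dtw

def xp_required(target_level: int, x_base: float = 1000.0, k: float = 1.5) -> float:
    """ASSUMED leveling curve x_t = x_base * k**(t - 2). It matches the
    stated constants and the claim that x_base alone suffices for level 2."""
    return x_base * k ** (target_level - 2)

print(score(0, 0.5, 2), xp_required(2), xp_required(5))  # 400.0 1000.0 3375.0
```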
A. ALGORITHM PERFORMANCE
The comparison of the exercises performed by the patient with the reference exercises is implemented with the DTW algorithm applied to the (x, y, z) coordinates of the joint trajectories. Computation begins on completion of the first repetition of the exercise and is followed by feedback to the patient. This process is repeated for each repetition.
The duration of the computation should be as short as possible to provide a satisfactory user experience for the patient, and there should be no interruptions between repetitions. The system was evaluated with a series of tests based on 7 upper trunk exercises with up to 3 repetitions. The selection of exercises for the evaluation was based on the approximate duration of the exercise and the joint trajectories during the exercise. Table 1 shows the results of the repetitions at the exercise level. The data collected were the duration of the movement and the execution times of the DTW algorithm for each repetition.
The algorithm execution times vary between a minimum of 817 ms and a maximum of 5002 ms for the proposed exercises. The relationship between execution time, duration of exercise and number of joints involved was investigated. The correlation coefficient between the duration of the performed exercise and the execution times was found to be r = 0.9595, indicating a strong positive correlation between these variables. This is explained by the number of samples accumulated during that time (i.e. positions of each joint over time). By contrast, the execution time of the algorithm was found not to be impacted to the same extent by the number of joints involved in the movement (r = 0.4569). Figure 5 shows a graphical representation of both cases.
These results demonstrate that the system can successfully compare exercises that include an arbitrary number of joints in their movements without compromising on computation speed. In addition, the algorithm execution times are acceptable given a real scenario where the patient would take breaks between repetitions.
The exercise recognition function was evaluated in two tests. In the first test, the exercise to detect and recognize was waving with the right arm within a set comprising 7 exercises; in the second test, the set comprised 3 exercises. In these tests, the running time of the OE-DTW algorithm was collected, as well as the results of the comparison at successive time intervals.
For the test to be successful, the exercise had to be recognized in less than 10 seconds, or in other words, in less time than the duration in which the reference exercise was recorded.
The charts in Figure 6 show the results obtained. In the first test with 7 reference exercises (left), the distance values are quite close, and the exercises become clearly distinguishable beyond 6 seconds, at which point the exercise is correctly recognized; that is, the distance obtained by the OE-DTW algorithm for the correct reference is significantly lower than those obtained after comparing with the other reference exercises. In the chart corresponding to the 3 reference exercises (right), this difference can be seen more clearly. The execution times of the algorithm (lower horizontal axis) increase as the duration of the movement increases, as demonstrated in the previous comparison tests. An additional correlation between the algorithm execution time and the number of existing reference exercises can also be seen in this case. This is due to the fact that the number of comparisons that the algorithm performs grows linearly with the number of exercises assigned to the patient's exercise routine.
All the tests in this section were performed on a workstation equipped with an Intel Core i7-7700 and 16 GB of RAM running a 64-bit version of Windows 10. The system makes use of the OE-DTW algorithm available as a software package for the R programming language [28].
B. PRELIMINARY CLINICAL EVALUATION
The system has been evaluated in terms of its usefulness and ease of use by a number of participants (n = 27) selected on the basis of their own experience of attending rehabilitation sessions following recent or historical physical injuries. Participants consisted of 16 men and 11 women, ranging in age from 22 to 51. The main reasons for attending physical rehabilitation sessions included ankle sprain, wrist injuries, low back pain, epicondylitis, cervical pain and fiber rupture, among others. This evaluation was conducted to examine the potential benefits that the system can provide to patients who require physical rehabilitation and can carry out the exercises at home.
Participants performed a two-exercise routine. The first exercise had to be repeated 3 times and consisted of waving with the right arm, lifting it above the head. The second had to be repeated twice and consisted of moving the right arm back and forth. These exercises were simple enough for the participants to understand their execution without any problem, so that their attention was focused on the system itself rather than on the execution of the exercises. Afterwards, they filled in a questionnaire with questions based on the TAM framework [36] to measure the perceived usefulness and the perceived ease of use of the system. These questions were scored on a Likert scale ranging from 1 (totally disagree) to 5 (totally agree).
The results obtained following analysis of the questionnaires are shown in Table 2. The mean values for the statements are higher than 4 points in most cases, indicating a positive view of the system by users. Only the PEOU4 statement stands out, with the lowest score (3.13) and the highest standard deviation (1.13). Even so, we can conclude that these results are satisfactory, since the system is not intended to replace face-to-face rehabilitation sessions with the therapist, but to complement them in order to democratize access to physical rehabilitation for people who cannot attend face-to-face sessions at the rehabilitation center.
The participants also left some open comments, indicating what they liked and disliked about the system. On the positive side, the comments referred to how motivating it was to perform the rehabilitation routine thanks to the gamification mechanisms, specifically the scores and multipliers; to the ease of understanding and replicating the exercises thanks to the demonstrative videos; and to the visualization of the joints on the video while the exercise is being performed. Regarding the negative comments, the participants indicated that they would also like to exercise the lower part of the body, and, for certain users, the tracking of the skeleton made by the capture device produced incorrect positions for certain joints, mainly those of the wrists.

TABLE 2. Statistical values relating the ''Perceived usefulness'' and ''Perceived ease-of-use'' dimensions to the scores provided by the participants using the system (1: totally disagree, 5: totally agree). Numbers in parentheses denote standard deviations.
VI. CONCLUSIONS AND FUTURE WORK
This paper presents a distributed gamified system to support home-based rehabilitation by remotely monitoring rehabilitation workouts as prescribed by a physiotherapist. Motivational aspects have been given special consideration to engage patients when performing exercises, and scalability has been designed in so that the system's functional capabilities can be extended as required.
The system automatically compares and evaluates the exercises performed by the patients to provide them with appropriate feedback. In addition, the system can recognize the exercise performed by the patient, so that it is not necessary to select the exercise to be performed at the start. The gamification system offers a motivating function that promotes compliance and improves adherence to ensure a positive outcome. In addition, the functionality of the system can be extended: its modularity allows modules to be exchanged to improve certain characteristics, such as the algorithm used to analyze exercises or the module responsible for recognizing the human skeletons of the users, among others. Moreover, the success of the evaluation testing conducted demonstrates the potential of this type of system in healthcare for facilitating the rehabilitation of patients and monitoring their recovery.
The system continues to be evaluated with more participants, specifically with stroke patients from the General Hospital Nuestra Señora del Prado (Talavera de la Reina, Spain). The goal is to evaluate it not only from a technological perspective, identifying possible technological and functional improvements, but also from a clinical perspective, in a clinical study to determine its mid-term usefulness for both patients requiring physical rehabilitation and clinicians. This is essential in the context of patients affected by neurological diseases, which represent the largest cause of disability worldwide.
As future work, the exercise comparison mechanism is intended to evolve into a learning-based solution that automatically weighs the joint positions based on how much they are involved in the performance of the exercise, so that even more accurate and faster results can be obtained. In addition, pattern recognition techniques are intended to be used to infer personalized rehabilitation routines depending on each patient's needs and their ability to adjust to the rehabilitation treatments.
"Medicine",
"Engineering",
"Computer Science"
] |
A Langevin-Type q-Variant System of Nonlinear Fractional Integro-Difference Equations with Nonlocal Boundary Conditions
We introduce a new class of boundary value problems consisting of a q-variant system of Langevin-type nonlinear coupled fractional integro-difference equations and nonlocal multipoint boundary conditions. We make use of standard fixed-point theorems to derive the existence and uniqueness results for the given problem. Illustrative examples for the obtained results are also presented.
Introduction
The Langevin equation provides a decent approach to describing the evolution of fluctuating physical phenomena. Examples include anomalous diffusion [1], the time evolution of the velocity of Brownian motion [2,3], diffusion with inertial effects [4], gait variability [5], harmonization of a many-body problem [6], financial aspects [7], etc. However, the failure of the ordinary Langevin equation to correctly describe dynamical systems in complex media led to several generalizations. One such example is the Langevin equation involving fractional-order derivative operators, which provides a more flexible model for fractal processes. For some recent results on the Langevin equation, see [8][9][10][11][12] and the references therein.
The topic of q-difference equations has evolved into an important area of research, as such equations are always completely controllable and appear in the q-optimal control problem [13]. Furthermore, the variational q-calculus is regarded as a generalization of the continuous variational calculus due to the presence of an extra parameter q whose nature may be physical or economical. The variational calculus on the q-uniform lattice is concerned with the study of the q-Euler equation and its applications to commutation equations, and isoperimetric and Lagrange problems. In other words, the q-Euler-Lagrange equation is solved for finding the extremum of the functional involved instead of solving the Euler-Lagrange equation [14]. There do exist q-variants of certain significant concepts, such as q-analogues of fractional operators, q-Laplace transform, q-Taylor's formula, etc.
Fractional-order operators are found to be of great utility in improving the mathematical modeling of several real-world problems. The variational principles based on fractional derivative operators lead to the class of fractional Euler-Lagrange equations [15]. In addition, one can find some interesting results on optimal control theories for fractional differential systems in the articles [16][17][18][19][20][21].
The popularity of fractional calculus in recent years led to the birth of the fractional analogue of q-difference equations (fractional q-difference equations); for instance, see [22,23].
One can find interesting results on nonlinear boundary value problems involving fractional q-derivative and q-integral operators, and different kinds of boundary conditions, in the articles [24][25][26][27][28][29][30][31][32][33][34][35][36][37]. In a recent work [38], the authors studied the existence of solutions for a nonlinear fractional q-integro-difference equation equipped with q-integral boundary conditions. However, there are only a few results on coupled systems of fractional q-integro-difference equations [39]. More recently, a coupled system of nonlinear fractional q-integro-difference equations with q-integral coupled boundary conditions was studied in [40].
The objective of the present work is to enrich the literature on boundary value problems for coupled systems of fractional q-integro-difference equations. Keeping in mind the importance of the fractional Langevin equation, we introduce and study a new problem consisting of a coupled system of Langevin-type nonlinear fractional q-integro-difference equations complemented with nonlocal multipoint boundary conditions. The proposed problem is interesting in the sense that it enhances the literature on fractional q-variants of Langevin equations with mixed nonlinearities in terms of the parameter q. On the other hand, the consideration of multipoint non-separated boundary conditions involving the values of the unknown functions together with their q-derivatives at the end points as well as at interior nonlocal positions of the given domain extends the scope of the present work to a more general situation (see also Section 5). For the motivation of nonlocal boundary conditions, we recall that nonlocal multipoint boundary conditions appear in feedback control problems, optimal boundary control of (finite) string vibrations arising from interior arbitrary positions, etc. For more details, see [41][42][43][44]. In precise terms, we investigate the following boundary value problem:

$$
\begin{cases}
{}^cD^{p_1}_q\big({}^cD^{p_2}_q+\lambda_1\big)x(t)=\alpha_1 f_1(t,x(t),y(t))+\beta_1 I^{\xi_1}_q g_1(t,x(t),y(t)), & 0\le t\le 1,\\
{}^cD^{r_1}_q\big({}^cD^{r_2}_q+\lambda_2\big)y(t)=\alpha_2 f_2(t,x(t),y(t))+\beta_2 I^{\xi_2}_q g_2(t,x(t),y(t)), & 0\le t\le 1,
\end{cases}
\tag{1}
$$

supplemented with nonlocal multipoint boundary conditions (2) of non-separated type, which involve the values of x, y and their q-derivatives at the end points and at the interior points $\eta_j \in (0,1)$ through the constants $\mu_1,\dots,\mu_4$, $\sigma_1,\dots,\sigma_4$ and $a_j, b_j, k_j, m_j$ (for instance, through terms of the form $\sum_{j=1}^n a_j\, y(\eta_j)$). Here ${}^cD^{p_i}_q$ and ${}^cD^{r_i}_q$ denote the fractional q-derivative operators of the Caputo type, $0 < p_i, r_i \le 1$, $0 < q < 1$, $I^{\xi_i}_q(\cdot)$ denotes the Riemann–Liouville q-integral of order $\xi_i > 0$, $f_i, g_i$ are given continuous functions, $\lambda_i \ne 0$, $\alpha_i, \beta_i \in \mathbb{R}$, $i = 1, 2$, and $a_j, b_j, k_j, m_j$, $j = 1,\dots,n$, are real constants, with $\mu_1, \mu_2, \mu_3, \mu_4, \sigma_1, \sigma_2, \sigma_3, \sigma_4 \in \mathbb{R}$ and $\eta_j \in (0,1)$, $j = 1,\dots,n$.
Here, one can notice that the right-hand sides of the fractional q-Langevin equations in the system (1) involve the usual as well as q-integral-type nonlinearities. These equations correspond to different combinations of nonlinearities, such as ordinary nonlinearities $f_1(t,x(t),y(t))$ and $f_2(t,x(t),y(t))$ for $\beta_1 = 0 = \beta_2$, purely q-integral-type nonlinearities $I^{\xi_1}_q g_1(t,x(t),y(t))$ and $I^{\xi_2}_q g_2(t,x(t),y(t))$ for $\alpha_1 = 0 = \alpha_2$, and so on. The paper is organized as follows. In Section 2, we recall some general concepts and results on q-calculus and fractional calculus. We then solve a linear variant of the given problem, which provides a platform to define the solution of the problem at hand. Section 3 is devoted to the main existence results, which are established with the aid of some classical fixed-point theorems. The paper concludes with an illustrative example.
Preliminaries on Fractional q-Calculus
Here, we recall some basic definitions and known results on fractional q-calculus.

Definition 1. Let β ≥ 0, 0 < q < 1, and let f be a function defined on [0, 1]. The fractional q-integral of the Riemann–Liouville type is $(I^0_q f)(t) = f(t)$ and

$$(I^{\beta}_q f)(t) = \frac{1}{\Gamma_q(\beta)} \int_0^t (t - qs)^{(\beta - 1)} f(s)\, d_q s, \qquad \beta > 0,\ t \in [0, 1],$$

where $(t - s)^{(\beta)}$ denotes the q-analogue of the power function and satisfies the relation

$$(t - s)^{(k)} = \prod_{i=0}^{k-1} (t - q^i s), \qquad k \in \mathbb{N}_0.$$

More generally, if $\alpha \in \mathbb{R}$, then

$$(t - s)^{(\alpha)} = t^{\alpha} \prod_{i=0}^{\infty} \frac{t - q^i s}{t - q^{\alpha + i} s}.$$

For 0 < q < 1, we define the q-derivative of a real-valued function f as

$$(D_q f)(t) = \frac{f(t) - f(qt)}{(1 - q)\,t}, \quad t \ne 0, \qquad (D_q f)(0) = \lim_{t \to 0} (D_q f)(t).$$

For more details, see [22].
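As a quick sanity check of the q-derivative in Definition 1 (a standard computation, not reproduced from the paper), the q-power rule follows directly from the definition:

$$D_q t^2 = \frac{t^2 - (qt)^2}{(1-q)\,t} = \frac{(1-q^2)\,t^2}{(1-q)\,t} = (1+q)\,t = [2]_q\, t, \qquad\text{and in general}\qquad D_q t^n = [n]_q\, t^{\,n-1}, \quad [n]_q = \frac{1-q^n}{1-q}.$$

Note that $[n]_q \to n$ as $q \to 1$, so the classical derivative is recovered in this limit.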
Definition 2 ([45]). The fractional q-derivative of the Riemann–Liouville type of order β ≥ 0 is defined by $(D^0_q f)(t) = f(t)$ and

$$(D^{\beta}_q f)(t) = \big(D^{[\beta]}_q I^{[\beta] - \beta}_q f\big)(t), \qquad \beta > 0,$$

where [β] is the smallest integer greater than or equal to β.
Definition 3 ([45]). The fractional q-derivative of the Caputo type of order β ≥ 0 is defined by

$$({}^cD^{\beta}_q f)(t) = \big(I^{[\beta] - \beta}_q D^{[\beta]}_q f\big)(t),$$

where [β] is the smallest integer greater than or equal to β.
Definition 4 (q-beta function). For any x, y > 0,

$$B_q(x, y) = \int_0^1 s^{x-1} (1 - qs)^{(y-1)}\, d_q s$$

is called the q-beta function.
Recall that the q-beta function can be expressed in terms of the q-gamma function as $B_q(x, y) = \dfrac{\Gamma_q(x)\,\Gamma_q(y)}{\Gamma_q(x + y)}$.
Lemma 5. Let $\Lambda \ne 0$, where Λ is defined in (7), and let $h_1, h_2$ be given continuous functions. Then the unique solution of the linear system of equations

$${}^cD^{p_1}_q\big({}^cD^{p_2}_q+\lambda_1\big)x(t)=h_1(t), \qquad {}^cD^{r_1}_q\big({}^cD^{r_2}_q+\lambda_2\big)y(t)=h_2(t), \qquad 0 \le t \le 1, \tag{4}$$

subject to the boundary conditions (2), is given by the representations (5) and (6), with the constants defined in (7) and (8). Proof. Applying the q-integral operators $I^{p_1}_q$ and $I^{r_1}_q$, respectively, to the first and second equations of (4), we obtain

$$\big({}^cD^{p_2}_q+\lambda_1\big)x(t)=I^{p_1}_q h_1(t)+c_0, \qquad \big({}^cD^{r_2}_q+\lambda_2\big)y(t)=I^{r_1}_q h_2(t)+d_0,$$

where $c_0$ and $d_0$ are arbitrary real constants. Now, applying the q-integral operators $I^{p_2}_q$ and $I^{r_2}_q$, respectively, to both sides of the above equations, we obtain the representations (9) and (10), where $c_1, d_1 \in \mathbb{R}$ are further arbitrary constants. By using the conditions (2), we obtain a system of equations (11) in the unknown constants $c_0, c_1, d_0$ and $d_1$, with coefficients $\delta_1, \delta_2, \delta_5, \delta_6, \delta_7, \delta_8$ given in (8) and the remaining coefficients defined analogously. Solving the system (11) for $c_0, c_1, d_0$ and $d_1$ (which is possible since $\Lambda \ne 0$, with Λ given by (7)) and substituting the resulting values in (9) and (10) yields the solution (5) and (6). By direct computation, one can obtain the converse of the lemma. This completes the proof.
In view of Lemma 5, we define an operator $G : C \times C \to C \times C$ associated with the problem (1) and (2) by replacing $h_1$ and $h_2$ in the representations (5) and (6) with the nonlinear right-hand sides of (1); the fixed points of G then coincide with the solutions of the problem.
Existence and Uniqueness Results
In the sequel, we set the notation for the constants appearing in our estimates; in particular, Υ denotes the constant defined in (15). In the following theorem, we prove the existence of a unique solution to the system (1) and (2) by applying the Banach contraction mapping principle [47], under the following assumptions:

(A1) There exist positive constants $\iota_1, \iota_2$ such that for each $t \in [0, 1]$ and $x_i, y_i \in \mathbb{R}$, $i = 1, 2$,

$$|f_i(t, x_1, y_1) - f_i(t, x_2, y_2)| \le \iota_i \big(|x_1 - x_2| + |y_1 - y_2|\big);$$

(A2) There exist positive constants $\kappa_1, \kappa_2$ such that for each $t \in [0, 1]$ and $x_i, y_i \in \mathbb{R}$, $i = 1, 2$,

$$|g_i(t, x_1, y_1) - g_i(t, x_2, y_2)| \le \kappa_i \big(|x_1 - x_2| + |y_1 - y_2|\big).$$

Then the system (1) and (2) has a unique solution on [0, 1], provided that the contraction condition involving $\iota_1, \iota_2, \kappa_1, \kappa_2$ and the constant Υ given in (15) holds. Proof. Let $N_1, N_2, M_1, M_2$ be finite numbers such that $\sup_{t \in [0,1]} |f_i(t, 0, 0)| \le N_i$ and $\sup_{t \in [0,1]} |g_i(t, 0, 0)| \le M_i$, $i = 1, 2$.
Next, we present an existence result for the problem (1) and (2) which is proved by means of the Leray-Schauder nonlinear alternative [48].
Conclusions
We have studied a new class of nonlocal multipoint boundary value problems of Langevin-type nonlinear coupled q-fractional integro-difference equations. First of all, the given problem was converted into an equivalent fixed-point problem. Then, we proved an existence and uniqueness result for the problem at hand by applying the Banach contraction mapping principle. In our second result, we presented the criteria ensuring the existence of a solution for the given problem. We also demonstrated the application of the obtained results by solving some particular problems. We emphasize that our results are new and contribute significantly to the literature on nonlocal multipoint boundary value problems of nonlinear coupled q-fractional integro-difference equations. It is imperative to note that our results correspond to the non-coupled separated boundary conditions for all a_j = 0, b_j = 0, k_j = 0, m_j = 0, j = 1, . . . , n, which are indeed new in the given configuration.
"Mathematics"
] |
Amide-Containing Bottlebrushes via Continuous-Flow Photoiniferter Reversible Addition–Fragmentation Chain Transfer Polymerization: Micellization Behavior
Herein, a series of ternary amphiphilic amide-containing bottlebrushes were synthesized by photoiniferter (PI-RAFT) polymerization of macromonomers in continuous-flow mode using a trithiocarbonate as a chain transfer agent. Visible light-mediated polymerization of macromonomers under mild conditions enabled the preparation of thermoresponsive copolymers with low dispersity and high yields in a very short time, which is not typical for the classical reversible addition–fragmentation chain transfer process. Methoxy oligo(ethylene glycol) methacrylate and alkoxy(C12–C14) oligo(ethylene glycol) methacrylate were used as the basic monomers providing amphiphilic and thermoresponsive properties. The study investigated how the modifying comonomers acrylamide (AAm), methacrylamide (MAAm), and N-methylacrylamide (N-MeAAm) affect the features of bottlebrush micelle formation, their critical micelle concentration, and their loading capacity for pyrene, a hydrophobic drug model. The results showed that the process is scalable and can produce tens of grams of pure copolymer per day. The unmodified copolymer formed unimolecular micelles at temperatures below the LCST in aqueous solutions, as revealed by DLS and SLS data. The incorporation of AAm, MAAm, and N-MeAAm units resulted in an increase in micelle aggregation numbers. The resulting bottlebrushes formed uni- or bimolecular micelles at extremely low concentrations. These micelles possess a high capacity for loading pyrene, making them a promising choice for targeted drug delivery.
Introduction
In recent decades, polymer science has concentrated on achieving the accurate synthesis of well-defined macromolecules with a specific architecture and molecular weight characteristics. The discovery of reversible-deactivation radical polymerization techniques marked a significant advancement in this field. Visible light-mediated photo-induced reversible addition-fragmentation chain transfer (RAFT) polymerization, which is gaining popularity due to a number of benefits over the classical RAFT process, is a further development of this approach. These advantages include [1][2][3][4][5][6][7][8][9][10][11] the use of low-cost LED light sources, polymerization under ambient conditions, and the use of eco-friendly solvents, e.g., water. Achieving spatiotemporal control and high chain-end fidelity enables easy and efficient production of block copolymers. Photo-induced RAFT polymerization can occur through three different methods. The first method is conventional RAFT polymerization, which requires a chain transfer agent and a photoinitiator to act as a radical source.

Molecular oxygen is known to inhibit radical polymerization. Photoinduced polymerization is extremely sensitive to the presence of oxygen. Together with the strong attenuation of light within the reaction medium due to absorption, this can lead to a complete stoppage of the polymerization, as well as the impossibility of scaling the process. In some cases, common methods of oxygen removal (freeze-pump-thaw cycles, nitrogen purging) may yield unsatisfactory outcomes.

The oxygen-tolerant versions of PET-RAFT polymerization have significantly improved the situation. In addition, enzyme-assisted polymerization variants, for example in the presence of glucose oxidase and glucose, have gained popularity in recent years [19][20][21][22][23][24][25][26]. Glucose oxidase can scavenge oxygen and has been used in RDRP processes for its removal. The enzyme oxidizes glucose to δ-gluconolactone (which then spontaneously hydrolyzes to gluconic acid) in the presence of molecular oxygen, producing hydrogen peroxide. However, the use of expensive and sometimes toxic photocatalysts or enzymes requires further careful purification of the polymers if they are to be used for medical purposes.

Photoiniferter polymerization, unlike PET-RAFT, is not oxygen-tolerant; however, the use of a continuous-flow photoreactor together with a suitable solvent makes it possible to achieve very high conversions in a short time (2-3 h) while maintaining satisfactory control. In this case, the scalability of the process is also an additional important advantage.

Several studies have demonstrated the benefits of utilizing continuous-flow reactors over the traditional periodic mode in photomediated atom transfer radical polymerization processes. The limitations of the periodic mode include light attenuation in the reaction mixture and the inability to scale the process. On the contrary, a scalable process [27][28][29], improved heat and mass transfer properties [28,30], increased light irradiation efficiency [31], higher conversions in less time [13,17,29,32][33][34][35][36], narrow molecular weight distribution [17,32], and simplified multi-step syntheses are some of the advantages of continuous-flow reactors [14,28,36][37][38][39][40]. These reactors allow for the sequential or parallel connection of multiple reactors, which simplifies complex syntheses. Overall, the utilization of continuous-flow reactors in photoRAFT processes has demonstrated encouraging outcomes and presents a plausible solution to the limitations of the periodic mode.

Thermoresponsive polymers with a lower critical solution temperature (LCST) in aqueous solutions have gained attention for their potential use as smart materials in various applications [41][42][43][44][45]. Methoxy oligo(ethylene glycol) methacrylate (MOEGM) copolymers are promising candidates in this field because of their desirable properties such as biocompatibility, low toxicity, and biodegradability [46,47]. The LCST of these copolymers may be precisely tuned by modifying the quantity of ethoxylated moieties and selecting suitable hydrophobic comonomers. They demonstrate a sharp and reversible LCST transition with minimal hysteresis [48][49][50][51][52][53][54]. The recent literature [55][56][57][58] has emphasized the supramolecular assembly, thermoresponsive property control, and conformational management of PEG-based brushes. Studies have tested different derivatives of hydrophobically modified MOEGM polymers, revealing their capacity to form micellar structures in aqueous solutions. These polymers show promise as polymeric nanocontainers for delivering hydrophobic drugs. Furthermore, PEG-based brushes have an advantage in that they create unimolecular micelles [59][60][61][62], which offer greater stability in changing environmental conditions. Overall, these recent studies shed light on the fascinating characteristics and promising uses of hydrophobically modified MOEGM polymers.

Here, we report on the synthesis of ternary amphiphilic copolymers. The base that provided these copolymers with thermoresponsive properties consisted of methoxy oligo(ethylene glycol) methacrylate and alkoxy(C12-C14) oligo(ethylene glycol) methacrylate units. Additionally, units of modifying amide comonomers including acrylamide, methacrylamide, and N-methylacrylamide were incorporated. The aim of this study was to investigate the synthesis and self-assembly of these copolymers into micelle-like structures and to reveal the role of the amide comonomers in these processes.
Photoiniferter RAFT Polymerization
A continuous-flow reactor was used for the PI-RAFT polymerization. The reactor was composed of an aluminum cylinder with a diameter of 12 cm and a height of 8 cm that contained an LED strip. Inside the aluminum cylinder, a glass cylinder with a diameter of 7.5 cm was positioned coaxially, with a PTFE tube of internal/external diameter of 2/3 mm wrapped around it. The LED strip was positioned 1.6 cm away from the surface of the tube. With an operating volume of 18.5 cubic centimeters, the irradiated part of the tubular reactor was 5.9 m long. The 5050 SMD LEDs (Wenzhou Rockgrand Trade Co., Ltd., Wenzhou, China) were utilized as light sources, with 60 LEDs per meter and a maximum power output of 14.4 W/m at 12 V. These LEDs emitted blue light with a wavelength of 470 nm. The light intensity was regulated by a switching power supply model PS3005N manufactured by QJE (Xinyujie Electronics Co., Ltd., Shenzhen, China). Its value was assessed using an OHSP-350C spectral analyzer from Hangzhou Hopoo Light&Color Technology Co., Ltd., Hangzhou, China, and adjusted to 5 mW/cm2 near the surface of the tube.
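As a back-of-the-envelope sketch (our arithmetic, not from the paper), the residence time in such a tubular reactor is simply the irradiated volume divided by the volumetric flow rate, which is how the syringe pump setting translates into reaction time:

```python
def flow_rate_for_residence(volume_ml: float, residence_min: float) -> float:
    """Volumetric flow rate (mL/min) giving the desired residence time."""
    return volume_ml / residence_min

# Irradiated reactor volume from the text: 18.5 cm^3 (= 18.5 mL).
# For a 2 h (120 min) residence time:
print(flow_rate_for_residence(18.5, 120.0))  # ~0.154 mL/min
```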
PI-RAFT polymerization was performed as described below. CDTPA (14.7 mg, 35.3 µmol, 1.0 eq), MOEGM (1.2993 g, 2.81 mmol, 80 eq), AOEGM (1.7191 g, 2.81 mmol, 80 eq), and AAm (0.1008 g, 1.42 mmol, 40 eq) were mixed with DMSO or THF (4.23 g) and stirred until fully dissolved. The total monomer concentration in the resulting mixture was 50 wt%. The experimental setup was assembled according to Figure 2. The reaction mixture in the feed vial was purged with nitrogen for 10 min. The tubular reactor and the collecting vial were then purged for five minutes. A 50 mL syringe prefilled with nitrogen was placed in the syringe pump, and the reaction mixture was transferred to the tubular reactor by pumping nitrogen at the desired rate. To protect the collecting vial from light, it was covered with foil. The syringe pump regulated the residence time of the reagents in the reactor. A product aliquot was mixed with acetonitrile to analyze the monomer conversion using HPLC. The polymerization was stopped by exposing the mixture to air and cooling it down in the dark. The copolymers were further diluted with tenfold ethyl alcohol and purified by dialysis (MWCO 8-14k) in ethyl alcohol for three days in the dark and then dried under vacuum. Their structures (Figure 3) were confirmed by 1H NMR (Figures S1-S7) and IR spectroscopy.
Characterization Techniques
1H NMR spectra were recorded at 25 °C in CDCl3 or DMSO-d6 using an Agilent 400 MHz DD2 spectrometer (Agilent Technologies, Santa Clara, CA, USA). The dn/dc values for the copolymers were determined within a concentration range of 1-15 mg/mL at 27-30 °C using a BI-DNDC differential refractometer from Brookhaven Instr. Corp., Holtsville, NY, USA. The monomer concentrations in the reaction mixtures were measured using an HPLC system (Shimadzu Prominence, Tokyo, Japan) that included a Kromasil 100-5-C18 4.6 × 250 mm column, refractometric and matrix UV detectors, and a thermostat. Acetonitrile was used as the eluent, with a flow rate of 0.9 mL/min. The thermostat temperature was 55 °C. Polymer molecular weights and molecular weight distributions were determined through GPC analysis, utilizing a Chromos LC-301 instrument (Chromos, Dzerzhinsk, Russia) equipped with an Alpha-10 isocratic pump and a Waters 410 refractometric detector, along with two exclusion columns, Phenogel 5 µm 500A and Phenogel 5 µm 10E5A, from Phenomenex (with a measuring range of 1k to 1000k); tetrahydrofuran was used as the eluent. Calibration was performed using polystyrene standards.
Differential scanning calorimetry (DSC) was conducted on polymer specimens (approximately 10-15 mg in an aluminum pan) under a dry argon flow utilizing a DSC 204 F1 Phoenix calorimeter (Netzsch, Selb, Germany) furnished with a CC 200 controller for liquid nitrogen cooling. The heating and cooling rates were set at 10 °C/min and −10 °C/min, respectively, between −80 °C and 80 °C.
Laser light scattering (LLS) experiments were conducted using a Photocor Complex multi-angle light-scattering device (Photocor Ltd., Moscow, Russia) equipped with a thermostabilized diode laser (λ = 659 nm, 35 mW) and a thermoelectric Peltier temperature controller (temperature range from 5 to 100 °C, accuracy of 0.1 °C). LLS was employed to measure the hydrodynamic radii (Rh) of polymer molecules and micelles (DLS), and the weight-averaged molecular weights (Mw), second virial coefficients (A2), and aggregation numbers (Nagg) of micelles (SLS).
After preparation, the polymer solutions were equilibrated at room temperature for 24 h and filtered using CHROMAFIL PET syringe filters (0.20 µm) before conducting the measurements. At least three measurements were conducted per sample, and the average hydrodynamic radius Rh is reported in nanometers. The single-angle Debye plot method was utilized to determine Mw and A2.
The scattering geometry employed vertically polarized incident light and detection without a polarizer (VU geometry, Rv). According to [65], the Rayleigh ratio for toluene at the incident wavelength of 659 nm and the measurement temperature was calculated to be 1.142 × 10−5 cm−1 at 25 °C.
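A minimal sketch of the single-angle Debye analysis mentioned above (illustrative names; K denotes the usual optical constant built from dn/dc and the Rayleigh-ratio calibration): fitting Kc/R against concentration yields 1/Mw as the intercept and 2A2 as the slope.

```python
import numpy as np

def debye_fit(c, R, K):
    """c: concentrations (g/mL); R: excess Rayleigh ratios (1/cm);
    K: optical constant (mol*cm^2/g^2). Returns (Mw, A2) from the
    single-angle Debye relation K*c/R = 1/Mw + 2*A2*c."""
    y = K * np.asarray(c) / np.asarray(R)
    slope, intercept = np.polyfit(c, y, 1)
    return 1.0 / intercept, slope / 2.0
```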
Critical micelle concentrations (CMCs) of the copolymers were determined using pyrene as a fluorescent probe via fluorimetry. To obtain the copolymer solutions, 10 different concentrations ranging from 1 × 10−6 to 0.5 mg/mL were prepared by dissolving the polymers in an aqueous pyrene solution (6 × 10−7 M). The resulting mixtures were sonicated for 5 min and then incubated at room temperature for 24 h prior to the measurements. Steady-state fluorescence spectra were measured on a Shimadzu RF-6000 spectrofluorometer (Shimadzu, Tokyo, Japan) under the following conditions: excitation slit width of 3 nm, emission slit width of 3 nm, scanning speed of 200 nm/min, excitation wavelength of 335.0 nm, and emission wavelength range of 350.0-500.0 nm. The ratio of the intensities of the first (I1, 373 nm) and third (I3, 384 nm) vibronic pyrene emission bands, denoted as I1/I3, was measured as a function of copolymer concentration. The concentration corresponding to the inflection point at which I1/I3 begins to decrease was defined as the CMC.
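One simple way to extract the CMC from such I1/I3 data (a sketch, not necessarily the authors' exact procedure) is a two-segment linear fit in log concentration, taking the breakpoint at which I1/I3 starts to decrease:

```python
import numpy as np

def cmc_from_ratio(conc, ratio):
    """conc: copolymer concentrations (mg/mL); ratio: pyrene I1/I3 values.
    Try every breakpoint, fit a line on each side in log10(conc), and
    return the concentration at the breakpoint minimizing total residuals.
    Requires at least 5 data points (the 10 used in the text suffice)."""
    x, y = np.log10(conc), np.asarray(ratio)
    best, best_err = None, np.inf
    for k in range(2, len(x) - 2):
        err = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            p = np.polyfit(xs, ys, 1)
            err += np.sum((np.polyval(p, xs) - ys) ** 2)
        if err < best_err:
            best, best_err = k, err
    return 10 ** x[best]
```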
Additionally, pyrene, a hydrophobic drug model, was employed to assess the loading capacity of the micelles. The loading capacity was evaluated through UV spectroscopy using the following procedure. An amount of 5 milligrams of dry pyrene was added to 10 milliliters of a 0.1% aqueous polymer solution, and the resulting solution was sonicated at 25 °C for 30 min. The aqueous polymer solution containing pyrene was then filtered using a syringe filter with a pore size of 0.45 µm. The filtrate was diluted 40-fold with acetonitrile. The concentration of pyrene was measured using a Shimadzu UV-1800 spectrophotometer (Shimadzu, Tokyo, Japan) at an absorption wavelength of 334 nm, with a standard calibration curve experimentally determined for pyrene solutions in acetonitrile.
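A hedged sketch of the Beer–Lambert arithmetic behind this loading measurement (the calibration slope and all names are placeholders; only the 40-fold dilution and the 334 nm wavelength come from the text):

```python
def pyrene_loading(abs_334: float, slope: float, dilution: float,
                   polymer_mg_per_ml: float) -> float:
    """Return mg pyrene solubilized per mg polymer.

    abs_334:  absorbance of the diluted filtrate at 334 nm;
    slope:    calibration slope, absorbance per (mg/mL) pyrene in MeCN;
    dilution: dilution factor of the filtrate (40 in the text)."""
    pyrene_mg_per_ml = (abs_334 / slope) * dilution
    return pyrene_mg_per_ml / polymer_mg_per_ml

# Example with placeholder numbers: A = 0.35, slope = 300 /(mg/mL),
# 40-fold dilution, 1 mg/mL (0.1%) polymer solution
print(pyrene_loading(0.35, 300.0, 40.0, 1.0))  # ~0.047 mg pyrene / mg polymer
```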
Photoiniferter RAFT Copolymerization
The objective of this study was to develop a simple and efficient method for creating thermoresponsive nonionic copolymers that can form micelles in aqueous solutions and effectively release hydrophobic drugs at a controlled rate. MOEGM homopolymers with side chains of various lengths are known to exhibit thermoresponsive properties [50,51,66]. We used MOEGM and a hydrophobic comonomer with higher alkyl moieties (AOEGM) as the main polymer backbone to enhance the amphiphilic properties. Additionally, we introduced modifying comonomer units (acrylamide, methacrylamide and N-methylacrylamide) to assess their impact on the self-assembly behavior, CMC, and drug loading capacity of the micelles. The copolymers were prepared using the grafting-through method. Previous research had determined the optimal conditions for obtaining similar copolymers, including the composition, molecular weight, and compositional homogeneity [62].

Through light-scattering techniques, it was determined that the copolymers could form unimolecular micelles in aqueous solutions, but only above a certain molecular weight at which chain flexibility was sufficient. This was crucial for maintaining micelle stability upon dilution. Below this threshold, the copolymers formed multimolecular micelles even below the lower critical solution temperature (LCST). At temperatures above the LCST, the assemblies transformed into larger aggregates.

A methodology was developed for producing self-assembled copolymers with high chain-end fidelity by PI-RAFT polymerization in periodic mode. However, we encountered difficulties in scaling up the synthesis process. When using reaction vessels larger than 5-10 mL, polymerization either did not occur or had a significant induction period. This resulted in low product yields. To address this issue, we employed a flow-type reactor, which allowed obtaining high-molecular-weight copolymers with low dispersity and high conversions in a relatively short time.
The copolymers were synthesized using a reversible addition-fragmentation chain transfer (RAFT) agent, specifically 4-cyano-4-[(dodecylsulfanylthiocarbonyl)sulfanyl]pentanoic acid (CDTPA). Its structure, along with the mechanism of visible light-induced PI-RAFT polymerization, is depicted in Figure 4. RAFT agents can be activated by UV or visible light. The choice of wavelength for irradiation depends on the type of chain transfer agent (CTA) used. The PI-RAFT polymerization is initiated when the C-S bond undergoes homolytic dissociation, leading to the cleavage of the leaving group. Photoexcitation can occur through either π-π* or n-π* transitions. Although the n-π* transition is weaker, it is preferred for initiation because it causes fewer side reactions. The wavelength associated with the n-π* transition differs depending on the type of CTA, falling in the UV region for xanthates and dithiocarbamates and in the visible region for trithiocarbonates and dithiobenzoates [67]. The CDTPA used in this study can also be activated by UV light through the π-π* transition (Figure 5), and green light is also acceptable. However, blue light provides an optimum balance between rate and control, making it the preferred choice.
Table 1 shows the polymerization results. The primary objective was to achieve the highest possible monomer conversion while maintaining acceptable dispersity (Đ < 1.3). The fastest reaction rate was found with DMSO as the solvent, which resulted in 91% conversion. As AAm and MAAm are less soluble in DMSO, the ternary copolymers were obtained in THF. In general, DMSO stands out as the most effective solvent for photo-mediated processes. This may be due to several reasons. It is believed that in PET-RAFT processes, fast deactivation of the reactive ground-state oxygen to the singlet species may be due to triplet-triplet annihilation with the excited-state photocatalyst, followed by a reaction with DMSO to form dimethylsulfone [4,8]. However, the reasons for the efficiency of DMSO in PI-RAFT processes are not completely clear.
The discrepancy between the theoretical molecular masses and the GPC data is noteworthy, with the latter being significantly underestimated. This has been observed multiple times for bottlebrushes based on macromonomers containing oligo(ethylene glycol) moieties [59,68][69][70][71], except for low-molecular-weight brushes [7,14,72]. The primary cause of this discrepancy might be the nonlinear relationship between retention time and molecular weight in GPC, resulting from changes in the shape and hydrodynamic volume of the macromolecules with an increasing degree of polymerization (transition from a spherical to a worm-like structure). At the same time, the differences in hydrodynamics increase significantly compared to the polystyrene standards.
a Determined by 1H NMR (P1, P6, P7) and HPLC (P2-P5). b Theoretical molecular weight, calculated from the monomer-to-CTA feed ratio and conversion via the standard relation $M_{n,th} = \sum_j \big([M_j]_0/[\mathrm{CTA}]_0\big)\,\mathrm{conv}_j\, M_{M_j} + M_{\mathrm{CTA}}$.

The process is substantially slowed down by adding amide comonomers, particularly with the use of acrylamide instead of methacrylamide. The involvement of amide units in the copolymer is confirmed by IR spectroscopy data (Figure 6). The spectrum of the MOEGM-AOEGM copolymer contains characteristic bands corresponding to the stretching vibration of the carbonyl group (C=O, 1729 cm−1) and the asymmetric stretching vibration of ether bonds (C-O-C, 1115 cm−1). The peaks at 2861 cm−1 and 2925 cm−1 are the stretching vibrational bands of methylene (-CH2) and methyl (-CH3) groups. A broad band at 3300-3700 cm−1 with a maximum at 3518 cm−1 is attributed to the OH vibrations of hydrogen-bonded water; the band at 1642 cm−1 is the bending mode (v2) of absorbed liquid water. The spectra of the ternary amide-containing copolymers additionally contain bands at ~3360 and 3208 cm−1 attributed to NH and NH2 stretching vibrations. The absorption bands at 1678, 1536 and 1293 cm−1 are attributed to C=O (amide) stretching, NH bending and C-N stretching vibrations, respectively.

Figure 6. IR spectra of MOEGM-AOEGM-amide ternary copolymers. Sample designations are in Table 1.
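As a worked illustration of footnote b (our arithmetic, assuming the standard RAFT expression above; the monomer molar masses are back-calculated from the masses and moles quoted in the synthesis procedure, and the 91% conversion reported for DMSO is used for illustration):

```python
# Feed from the synthesis: equivalents per CTA and molar masses (g/mol)
feed = {
    "MOEGM": (80, 1.2993 / 2.81e-3),   # ~462 g/mol
    "AOEGM": (80, 1.7191 / 2.81e-3),   # ~612 g/mol
    "AAm":   (40, 0.1008 / 1.42e-3),   # ~71 g/mol
}
M_CTA = 14.7e-3 / 35.3e-6              # CDTPA, ~416 g/mol from quoted amounts
conversion = 0.91                      # highest reported conversion (DMSO)

M_n_theor = M_CTA + conversion * sum(eq * mm for eq, mm in feed.values())
print(f"{M_n_theor / 1000:.0f} kg/mol")  # ~81 kg/mol
```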
The copolymers underwent thermal analysis through differential scanning calorimetry (DSC), displaying two types of transitions in the DSC thermograms, namely glass transition and melting. All numerical values are presented in Table 2. The glass transition temperature of the base copolymer was around −69 °C. The addition of up to 20% amide units resulted in an increase in Tg by 7-9 °C. All copolymers exhibited broadened peaks corresponding to the melting point (Tm) due to the presence of MOEGM units that are hard to crystallize (Figure 7). It is noteworthy that copolymers with acrylamide units displayed a distinct trend: the melting peak broadened and then split upon the introduction of 10% and 20% AAm, shifting simultaneously to the higher-temperature region. Conversely, the melting peak became narrower when methacrylamide units were introduced.
Thermoresponsive Properties, Hydrodynamic and Molecular Weight Characteristics of Bottlebrushes
It is known that MOEGM-AOEGM copolymers exhibit thermoresponsive properties, with the possibility of fine-tuning the LCST over a wide range by varying the ratio of hydrophilic (MOEGM) and hydrophobic (AOEGM) units. It was interesting to evaluate how the introduction of hydrophilic amide units would affect the LCST. Cp values were determined using turbidimetry (Table 3). It was surprising to find that in most cases the addition of up to 20% amide units had little or no effect on the thermoresponsive properties, shifting Cp by 1-3 °C in one direction or the other.
The critical micelle concentrations (CMCs) of the copolymers were determined using pyrene as a fluorescent probe. Pyrene was also used as a model hydrophobic drug when evaluating the drug-loading capacity of the micelles. In general, all copolymers demonstrate low critical micelle concentrations, owing to their amphiphilic nature, and high loading-capacity values, regardless of the structure or content of the introduced amide groups. This can be attributed to the lack of centers capable of hydrogen bonding with amide groups in the pyrene molecule; the retention of pyrene in the micelles is mainly due to hydrophobic interactions.
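As an aside on methodology, the pyrene-probe CMC is typically extracted from the break in a fluorescence intensity ratio plotted against the logarithm of concentration. The sketch below, which fits a Boltzmann sigmoid to synthetic I1/I3 data, is one common way to do this; the source does not specify the fitting procedure, so the function and all numbers here are assumptions.

```python
# Hedged sketch: locating the CMC from pyrene fluorescence data by fitting a
# Boltzmann sigmoid to the I1/I3 ratio versus log10(concentration).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(logc, top, bottom, logcmc, width):
    return bottom + (top - bottom) / (1.0 + np.exp((logc - logcmc) / width))

logc = np.linspace(-4.0, -1.0, 12)             # log10 of copolymer conc. (g/L)
ratio = boltzmann(logc, 1.8, 1.2, -2.5, 0.15)  # synthetic I1/I3 data
ratio += np.random.default_rng(0).normal(0.0, 0.01, logc.size)

popt, _ = curve_fit(boltzmann, logc, ratio, p0=(1.8, 1.2, -2.5, 0.2))
print(f"CMC ~ {10 ** popt[2]:.2e} g/L")        # midpoint of the transition
```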
Adjusting the hydrophilic-hydrophobic balance through modifications in the monomer structure and composition allows the amphiphilic properties of the resulting molecular brushes to be tuned, up to the point where the formation of unimolecular micelles in aqueous solutions is ensured. The hydrophobic core of the unimolecular micelles consists of the hydrophobic main chain and hydrophobic side chains, while the hydrophilic side chains (or hydrophilic blocks of the side chains) constitute the outer shell. The flexibility of the macromolecules and their capacity to form monomolecular core-shell micellar structures depend on the length and composition of the hydrophobic main chain and of the hydrophilic or amphiphilic side chains. Molecular brushes can form unimolecular micelles in water through self-folding once they reach a certain degree of polymerization.
Copolymers of similar composition but low molecular weight have limited flexibility and are unable to fold; they are therefore compelled to form multimolecular micelles in order to reduce the contact surface of the hydrophobic units with water.
Compositional homogeneity and a narrow molecular weight distribution are crucial factors for the formation of unimolecular micelles, which in turn promote the unimodality of the micelles. Compositional homogeneity was achieved by utilizing similarly reactive oxyalkylated methacrylates with hydrophilic (MOEGM) and hydrophobic (AOEGM) moieties, providing the amphiphilic nature. The RDRP polymerization methodology was employed to ensure optimal MW and dispersity.
In the present work, we tried to evaluate the effect of introducing a small amount of hydrophilic amide units capable of hydrogen bonding on the micelle aggregation number and the possibility of obtaining unimolecular micelles.
The micelle aggregation number in aqueous solutions was determined by analyzing copolymer solutions using static and dynamic light scattering. To calculate the number of macromolecules per micelle in aqueous solution, Mw in water was divided by Mw in acetonitrile, taking the latter as the true molecular weight.
Nagg = MW(SLS in water)/MW(SLS in a good or θ-solvent)

Table 4 summarizes the data on the molecular weight characteristics of the bottlebrushes. The base copolymer P1, which contained no amide units, formed unimolecular micelles in water by self-folding, with an aggregation number of about unity. The DLS data also confirm the existence of narrowly dispersed particles comparable in size to individual macromolecules in aqueous solutions. Comparing the Rh values in water and acetonitrile indicates that all copolymers formed fairly dense micelles in water.
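The aggregation-number calculation above is a one-line ratio; a minimal sketch, with illustrative inputs only:

```python
# Aggregation number from static light scattering, as defined above: Mw
# measured in water (micelles) divided by Mw in a good or theta-solvent
# (individual macromolecules).
def aggregation_number(mw_water: float, mw_good_solvent: float) -> float:
    return mw_water / mw_good_solvent

# Illustrative values: a brush with Mw = 1.1e6 g/mol in water and
# 5.5e5 g/mol in acetonitrile gives N_agg = 2.0.
print(aggregation_number(1.1e6, 5.5e5))
```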
All amide-containing copolymers formed micelles with aggregation numbers approaching two, with the highest values observed for acrylamide (AAm). This is likely a result of its ability to form a greater number of hydrogen bonds in comparison with the N-substituted amide, as well as of the significant differences in the reactivity of the monomers (when comparing AAm and MAAm), resulting in greater compositional heterogeneity in copolymers P4 and P5 compared with P2 and P3.
Conclusions
Amphiphilic thermoresponsive bottlebrushes composed of oligo(ethylene glycol)-containing macromonomers and up to 20% modifying amide comonomers were synthesized through photoiniferter polymerization in a continuous-flow photoreactor. The reaction, carried out at 40 °C under blue light for 1-2 h in DMSO and THF, produced high-molecular-weight copolymers with satisfactory dispersity (Ð < 1.3) in high yields. Considering the maximum achieved yield (91% for P1), this study demonstrates the potential to scale the photopolymerization using the proposed setup to produce approximately 50 g of pure polymer over an 8 h day. The copolymers were characterized by various methods, including DSC, DLS, SLS, and turbidimetry. Thermal analysis indicated that the copolymers possessed glass transition temperatures ranging from approximately −60 to −70 °C, with an increase of 7-9 °C upon the introduction of 20% amide units compared with the base copolymer. The melting temperature increased only upon the introduction of AAm units, remaining almost constant in the other cases. The unmodified copolymer formed unimolecular micelles at temperatures below the LCST in aqueous solutions, as shown by the DLS and SLS data. The addition of AAm, MAAm, and N-MeAAm units led to an increase in the micelle aggregation numbers, with values ranging from 2 to 3 and with AAm exhibiting the highest values. This is probably due to the increased susceptibility of AAm to hydrogen bonding and to the compositional heterogeneity of such copolymers.
The resulting bottlebrushes possess an easily modulated LCST and are capable of forming uni- or bimolecular micelles at very low concentrations. These micelles have a sufficiently high loading capacity with respect to pyrene (a model hydrophobic drug) and therefore hold potential for targeted drug delivery.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author.
Figure 4. Structure of the RAFT agent and the mechanism of PI-RAFT polymerization under visible light.
Figure 5. Absorption spectra of CDTPA measured using UV-Vis spectroscopy in acetonitrile (0.2 wt%), and emission spectra of the blue LEDs. The peak of the blue-LED emission spectrum is at 470 nm.
[M1]0, [M2]0, [RAFT]0, MW1, MW2, X1, X2, and MW_RAFT correspond to the initial concentrations of the monomers and of the RAFT agent, the molar weights of the monomers, their conversions, and the molar weight of the RAFT agent. c Determined by GPC in THF with PSt standard calibration.
Table 3. CMC and loading capacity of copolymers.
Table 4. Hydrodynamic and molecular weight characteristics of the copolymers. a Hydrodynamic radii were determined by DLS at 25 and 27 °C in water and acetonitrile, respectively. b Absolute weight-average molecular weights and second virial coefficients were determined by SLS in acetonitrile and water. | 8,065.4 | 2023-12-31T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Liquid Crystal-Based Hydrophone Arrays
We describe a fiber optic hydrophone array system that could be used for underwater acoustic surveillance applications (e.g. by military, counter-terrorist, and customs authorities in protecting ports and harbors), for offshore production facilities or coastal approaches, as well as for various marine applications. In this paper, we propose a new approach to underwater sonar systems using voltage-controlled liquid crystals and a simple multiplexing method. The proposed method permits the measurement of sound under water at multiple points along an optical fiber using low-cost components and standard single-mode fiber, without complex interferometric measurement techniques, electronics, or demodulation software.
Introduction
Today's underwater sound detection and mapping technology is based on a complex, large, and expensive electrical approach [1]. The search for an optical version of the electrical sensors used today has been pursued with increasing vigor over the past decade, owing to the emergence of a viable fiber-based technology and the accompanying low-cost optical components.
The idea of using an optical fiber as a hydrophone was first published by Bucaro [2]; since then, various fiber-based sensors have been developed [3-7]. These rely either on (1) some interferometric approach, including the use of fiber coils and distributed feedback (DFB) fiber lasers, or (2) one of a number of intensity-modulation mechanisms, the latter involving physical motion or mechanical effects such as micro-bending. However, each of these approaches must overcome its own set of problems, including complexity, cost, or sensitivity, before truly practical sensors can be produced.
In this paper, we propose an approach based on liquid crystal cells acting as variable, broadband reflectors, in combination with ceramic hydrophones, standard single-mode fiber, and fiber Bragg gratings (FBGs). The proposed method relies neither on complex interferometric measurement techniques nor on dedicated electronics or software for demodulating the output signals. The system uses simple demultiplexing of the output light signal based on both the amplitude and frequency of the light in predefined channels, which are directly linked to the amplitude and frequency of the sound signals measured under water.
Sensing system
The proposed sensing system, illustrated in Fig. 1, consists of two major parts: a sensor head (SH), monitoring pressure changes under water, and a multiplexing system, allowing measurement at distributed points in space by sharing a single broadband source (BBS) with multiple sensor heads. The source provides a broadband unpolarized optical signal to multiple modules, each routing part of the light to the corresponding sensor head. Each one in turn reflects light, modulating its intensity according to the measured pressure, and an FBG ensures that only a specific wavelength (channel) is transmitted to the demultiplexer (DM) from each module. The demultiplexer gathers all the channels reflected from the SHs and directs each channel into the monitoring devices (photo-detectors).
Sensor head
The purpose of the sensor head is to monitor pressure changes under water and to transduce them linearly into a variation of the intensity of the optical signal. The sensor head is a hybrid device based on the combination of well-established technologies, namely fiber optics, piezoelectric materials (PZT), and liquid crystals (LC). It utilizes the excellent waveguiding properties of optical fibers to leverage the high electro-optic and piezoelectric constants of LCs and PZTs. The head is composed of a liquid crystal cell, a polarization-maintaining (PM) fiber, an in-line polarizer, a standard single-mode fiber (e.g. SMF 28), and a PZT hydrophone equipped with an optically powered amplifier (Fig. 2).

The liquid crystal cell itself consists of two sandwiched glass substrates, with their inner surfaces coated with indium tin oxide (ITO) and gold, respectively. Both coatings are used as electrodes, thus enabling the creation of electrical fields between the glass substrates. The gold coating is also used as a broadband reflector, allowing the operation of the cell in reflection. The gap between the substrates is filled with a short-pitch ferroelectric liquid crystal (FLC) with its helix axis lying in the plane defined by the substrate surface and perpendicular to the electric field (Fig. 3). In this configuration, the cell operates in the deformed helix ferroelectric (DHF) mode [8], featuring a linear dependence of the cell reflectivity on the applied electrical field under crossed or parallel polarizers [9,10]. The behavior of the DHF structure in an electric field has been fully characterized and will be discussed in greater detail below.

The principle of operation of the sensor head is as follows. Unpolarized light coming from the broadband source is guided by a standard single-mode fiber, polarized by an in-line polarizer at an angle to the liquid crystal helix axis, and supplied to the cell using a polarization-maintaining (PM) fiber. The polarized light impinges at normal incidence on the first substrate, passes through the liquid crystal, reflects off the gold-coated mirror back into the same layer, and is collected by the fiber. The presence of an electric field directed along the light propagation direction induces a change in the birefringence of the liquid crystal and therefore affects the measured reflectance. The proposed configuration corresponds to a parallel polarizer/analyzer geometry and allows the applied voltage to be transduced into a modulation of the intensity of the reflected light, which is then detected by the photo-diode.
The electric field required to switch the cell corresponds to a few volts (typically 5 V to 10 V) per mm (5 kV/m to 10 kV/m) and is thus much higher than the field directly created by the PZT actuator (ceramic hydrophone). The sensitivity of modern ceramic actuators is −164 dB re 1 V/μPa. The response of this ceramic hydrophone to sea state zero (SS0) and to various underwater noises ranges between 0.1 μV and 0.6 V. Thus, some field amplification is required. This amplification can be accomplished using an optically powered amplifier. The optical signal transmitted through an FBG at wavelengths not satisfying the Bragg condition can be used for powering the amplifier.
Multiplexing
The multiplexing system presented in this work was developed using modules consisting of a 3-dB 2×2 bi-directional coupler, a circulator incorporating an FBG, and isolators, in the configuration shown in Fig. 1(a). Port 4 of the coupler is connected to the sensor head, and port 1 is connected to the broadband light source via an isolator. Any change in reflectivity at the sensing end then results in a variation of the signal intensity reflected back to port 2. Port 2 of the coupler is connected to the circulator via its port 1, thus enabling the reflected sensing signal to reach the FBG connected to port 2 of the circulator. The FBG is used to selectively reflect back to port 3 of the circulator (the signal detection end) the part of the broadband signal matching the FBG's narrow spectral band (λFBG). As a result, information about the value of the electrical field or voltage can be retrieved by monitoring the relative intensity of the signal reflected from the sensor head to the FBG and then to the detector or the optical spectrum analyser (OSA). The broadband signal transmitted through the FBG is routed to the photo-detector via the isolator; this part of the optical signal is used to harvest energy to power the amplifier connected to the PZT hydrophone. The sensing module provides multiplexing capability: a similar module can be directly connected to port 3 of the coupler, as depicted in Fig. 1(b). An optical isolator is used to avoid crosstalk between the modules.
Modeling of the liquid crystal cell
In this section, we describe in more detail the behavior of the DHF mode when the cell is operated in reflection. Adopting a polarization-gratings approach, Kiselev et al. [11] have shown that when the helix pitch P is smaller than the incident wavelength λ, the liquid crystal can be described as a uniform biaxial material. In our case P = 200 nm and λ = 633 nm, so we feel justified in adopting the same approach. In the geometry illustrated in Fig. 3, where the helix pitch is along x, the smectic layers are in the y-z plane, and the electric field is along z, the effective dielectric tensor at small fields can be written as in Eqs. (1) and (2). The effective dielectric tensor depends on the electric field through a field parameter that is proportional to the ratio of the applied and the critical electric field, E/E_C. Since both the model and the device only work for fields much smaller than the critical field, E << E_C, each term in (1) and (2) is expanded to second order in this parameter, following [11]. All the coefficients appearing in (1) and (2) can be calculated as a function of the liquid crystal parameters, as described in [11]. In the x'y'z' reference system of the liquid crystal principal axes, which rotates with the electric field, the effective dielectric tensor can be written in diagonal form, the rotation being given by the angle between the principal axis with the highest refractive index and the helix axis. In order to calculate the structure reflectance, we place ourselves in the optical-axis frame of reference, where the liquid crystal effective dielectric tensor is diagonal, and assume that gold is a perfect mirror, i.e. that the modulus of its refractive index is much larger than all the liquid crystal refractive indices. Using a transfer matrix approach, we can calculate the reflection coefficient for the structure in Fig. 3 in the case of normally incident light along z. Let us assume that the incident light is linearly polarized at an angle β with respect to the helix axis x and that the LC refractive indices are much larger than the in-plane refractive index difference. We then obtain the reflectance measured with a crossed polarizer/analyzer and the reflectance with a parallel polarizer/analyzer. With Eqs. (5)-(8), we can calculate the reflectance with crossed or parallel polarizer/analyzer for each value of the applied electrical field. In the experiments, the voltage V across the LC cell is known, so the field parameter can be written as proportional to V through a proportionality constant, which is usually treated as a parameter of the theory and can be adjusted to fit the experimental data.
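Because the closed-form expressions (5)-(8) are not reproduced in this extraction, the following Jones-calculus sketch illustrates the same physics under simplifying assumptions: the double pass through the cell is treated as a single retarder of doubled retardance in front of an ideal mirror, and the optic-axis rotation is taken to vary linearly with the applied voltage. The rotation coefficient and the sweep values are illustrative, not the fitted parameters of this work.

```python
# Minimal Jones-calculus sketch of the reflective DHF cell. Assumptions: the
# double pass is equivalent to one retarder with doubled retardance, and the
# optic axis rotates linearly with the applied voltage (theta = a * V).
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def retarder(theta, gamma):
    # Retarder of retardance gamma with fast axis at angle theta.
    core = np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)])
    return rot(theta) @ core @ rot(-theta)

def reflectance_parallel(beta, delta_n, d, wavelength, theta_axis):
    gamma = 2 * (2 * np.pi * delta_n * d / wavelength)  # double-pass retardance
    e_in = np.array([np.cos(beta), np.sin(beta)])       # input polarization
    e_out = retarder(theta_axis, gamma) @ e_in
    return abs(np.dot(e_in.conj(), e_out)) ** 2         # parallel analyzer

# Illustrative sweep: n_e - n_o = 0.19, d = 5 um, lambda = 633 nm, beta = 12
# degrees, and an assumed axis-rotation coefficient of 0.05 rad per volt.
for v in (0.0, 0.1, 0.2, 0.3):
    r = reflectance_parallel(np.deg2rad(12), 0.19, 5e-6, 633e-9, 0.05 * v)
    print(f"V = {v:.1f} V  ->  R_parallel = {r:.3f}")
```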
Fabrication
The liquid crystal cells consist of two parallel glass substrates separated by a distance d using spacers (beads) of controlled size: 5 μm for the bulk approach and 10 μm for the fiber optic one. One substrate is coated with ITO and the other with gold, thus forming two electrodes, one of which also acts as a broadband reflector, allowing the cell to be used in reflection. The DHF liquid crystal used in the cell (tagged as FLC-576), with a helix pitch P0 = 200 nm, was developed at the P. N. Lebedev Physical Institute of the Russian Academy of Sciences [11]. This particular liquid crystal mixture was chosen due to its short helix pitch, yielding low scattering of light and allowing the DHF mode of operation since P0 << d [4]. Importantly, the planar alignment of the liquid crystal was achieved using a photo-alignment method [12]. In this approach, both inner surfaces of the substrates were spin-coated with a photo-aligning substance (the azo-benzene sulfuric dye SD-1, dried at 155 °C), and polarized ultraviolet (UV) irradiation (6 mW/cm² at 365 nm) of the azo-dye films at normal incidence was used to induce anisotropy.
Experimental setup
Two experimental setups were built to test the theoretical model and characterize the sensor head [Figs. 4(a) and 4(b)]. The purpose of the first measurement was to test the model and to adjust the parameters of the liquid crystal cells used in our experiments. Linearly polarized light from a He-Ne laser at 632 nm at normal incidence was used; the light reflected back from the golden inner surface of the cell was transmitted through a non-polarizing beam-splitter, which directed 50% of the light onto the photo-detector. The analyzer was installed in front of the photo-detector and aligned orthogonally to the polarization of the incident light. An aperture was used to block light scattered from the cell, and the cell was placed on a rotation stage, allowing us to vary the angle between the polarization of the incident light and the helix axis.

The principle of operation of the cell as a voltage sensor was tested using the fiber optic setup presented in Fig. 4(b). The unpolarized broadband optical signal (BBS15/16, AFC Technologies Inc.) was launched into port 1 of the optical circulator (6015-3, Thorlabs Inc.). The circulator directed the light into port 2, which was connected to the in-line polarizer, transforming the unpolarized light into linearly polarized light. The output of the in-line polarizer, containing the collimated polarization-maintaining fiber (C-PM-15, AFW Technology Pty. Ltd.), was directly butted onto the glass substrate of the cell and delivered highly polarized light. The cell was operated in reflection and could be described as a variable reflector. The angle between the polarization of the incident light and the helix axis was adjusted by rotating the collimator placed inside the rotation stage. The polarized light propagating through the birefringent liquid crystal split into two components propagating at different speeds along the ordinary and extraordinary optical axes. At the output of the cell, the two components interfered, resulting in a rotated polarization state. The polarization-maintaining fiber, together with the in-line polarizer at the output of the cell, transformed this output polarization state into a variation of the intensity. The optical circulator directed the optical signal from the cell into port 3, which was connected to a variable-gain photo-detector used to monitor the variation of the intensity of the reflected light as a function of the voltage. A signal generator (DS340, Stanford Research Systems Inc.) was used to simulate the output of the piezoelectric hydrophone and generate the variable voltage applied to the cell. The applied voltage had a sinusoidal form with frequency variable from a few Hz to 10 kHz. A computer-controlled signal generator and photo-detector were used, allowing the cell's response to be logged.
Results and discussion
The cell was first characterized to show that the theoretical model could describe its operation, using the setup presented in Fig. 4(a). Experimental data are shown in Fig. 5 and compared to the simulation results. Calculations were performed as described in Section 2.3 using the formulas of [11] and the following parameters: ordinary and extraordinary refractive indices n_o = 1.5 and n_e = 1.69, respectively, a tilt angle of 32°, and a field-parameter scaling constant of 0.44. We also scaled the final reflectance by a factor of 0.52, in order to take into account the losses, mainly due to scattering of light inside the LC. We see that the model correctly describes the general behavior of the reflectance for different polarizations. In particular, the crossed-polarizer reflectance at V = 0 is predicted to be maximum for β = 45° and to vanish for β = 0°. In both cases, the dependence of the reflectance on the electric field is quadratic at small fields. We also point out that the reflectance is a periodic function of β with a period of 90°, so it is sufficient to consider polarizations in the range −45° < β < 45°. It is now interesting to discuss under which conditions the reflectance is a linear function of the applied voltage, because in this regime the cell could be used to convert a time-dependent electrical signal into an optical signal without distortion. Let us assume that the applied voltage has a sinusoidal time dependence, V(t) = V0 sin(ωt), and that the frequency is low enough that the cell response can be considered within the static limit. A useful quantity for determining the optimal linear response is the total harmonic distortion (THD), which can be expressed in decibels as THD = 10 log10[(P_N + Σ_{n≥2} P_n)/P_1], where P_N is the noise and P_n is the power of the nth harmonic, calculated as the square of the sin(nωt) coefficient in the Fourier series of the reflectance. The lower the THD, the more linear the response.
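A numerical estimate of the THD defined above can be obtained by driving a static response curve with a sinusoid and Fourier-analysing the output. The sketch below does this for a toy, mostly linear response; the quadratic coefficient is illustrative only, not a fitted property of the cell.

```python
# Hedged numerical sketch of the THD estimate: drive a static response curve
# with a sinusoidal voltage, Fourier-analyse the output, and compare the power
# in the harmonics (n >= 2) to that of the fundamental.
import numpy as np

def thd_db(response, v0, n_periods=32, samples=4096):
    t = np.linspace(0.0, n_periods, samples, endpoint=False)  # time in periods
    out = response(v0 * np.sin(2 * np.pi * t))
    spec = np.abs(np.fft.rfft(out - out.mean())) ** 2
    p1 = spec[n_periods]                        # fundamental sits in this bin
    harmonics = spec[2 * n_periods::n_periods]  # bins of n = 2, 3, ...
    return 10.0 * np.log10(harmonics.sum() / p1)

toy = lambda v: 0.5 + 0.8 * v + 0.05 * v**2     # mostly linear toy response
print(f"THD ~ {thd_db(toy, 0.25):.1f} dB")
```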
We report in Fig. 6 the corresponding THD calculations. Experimental data for the static response of the DHF cell in the bulk optic approach are presented in Fig. 7 for β = 12°, corresponding to the most linear response observed experimentally.
The Fourier analysis of the cell response operated at 1 kHz is presented in Fig. 8. The first (signal) and second (ghost) harmonic intensities are plotted as a function of the applied voltage in Fig. 8(a). The ghost intensity stays at or below the −100 dB level up to 0.3 V, while the signal amplitude increases up to −56 dB in the same range. This explains the initial drop in the THD illustrated in Fig. 8(b). Past this point, the ghost signal starts to increase faster than the signal itself, causing the THD to increase. As a consequence, the THD goes through a minimum, which in our case is at a voltage of 0.3 V. This behavior is well described by the theory when a noise P_N > 0 is introduced [Fig. 8(b)].
The bandwidth of the cell was also investigated. The signal-to-noise ratio (SNR) as a function of frequency is presented in Fig. 9 for an applied voltage amplitude of 0.25 V, which corresponded to a THD of approximately −45 dB. At that operating voltage, the SNR dropped by 3 dB, from 56 dB to 53 dB, over the range of 0 to 5 kHz, which defined the bandwidth of the cell. A sensor head based on the proposed fiber optic approach was also characterized. The static response of the sensor head is presented in Fig. 10, where the angle between the polarization of the incident light and the helix axis was adjusted to minimize the THD. The stability of the device was also estimated: the optical signal reflected off the cell fluctuated by 0.44% for a measured applied voltage fluctuation of ±0.03% (peak-to-peak amplitude of 19.6 V at a frequency of 50 Hz).
Conclusions
A new low-cost method for the optical measurement of sound under water at distributed, localized points was proposed. This method uses a deformed helix ferroelectric liquid crystal (DHF-LC) cell combined with a PZT ceramic hydrophone as a sensor head for the optical measurement of sound waves. The use of the ferroelectric liquid crystal in the DHF mode exploited the linear electro-optic response of the liquid crystal cell to variations of the external electrical field. Such cells were built and fully characterized. The optimal parameters for the cell's operation in sensing applications were determined and experimentally measured, and the performance of the SH using these parameters was quantified. The comparison of the theory with the experimental results revealed excellent agreement. | 4,337.2 | 2012-07-10T00:00:00.000 | [
"Physics"
] |
CONTRAST ENHANCEMENT FOR COLOR IMAGES USING AN ADJUSTABLE CONTRAST STRETCHING TECHNIQUE
With the growing demand for high-quality color images, efficient yet low-complexity methods are increasingly needed for better visualization. Unfortunately, low contrast is one prevalent effect that degrades color images due to various unavoidable limitations. Hence, a new adjustable contrast stretching technique is proposed in this article to improve the contrast of color images. The processing scheme of the proposed technique is relatively simple. It starts by converting the input color image to grayscale. Then, it automatically computes two contrast tuning parameters depending on the pre-determined grayscale image. Finally, it improves the contrast of the degraded color image using an amended version of an existing contrast stretching technique. Accordingly, its input is a color image and a contrast adjustment parameter δ, while its output is a contrast-adjusted color image. The proposed technique is tested by conducting intensive experiments on real-degraded images, and it is compared with four well-known contrast enhancement techniques. In addition, the proposed and comparative techniques are evaluated based on three eminent no-reference image quality assessment metrics. From the performance analysis of the achieved experiments and comparisons, the proposed technique provided satisfying performances and outperformed the comparative techniques in terms of recorded accuracy and perceived quality.
INTRODUCTION
With the growing demand for high-quality color images, efficient yet low-complexity methods are increasingly needed for better visualization. However, the existence of the low-contrast effect can lead to a reduction in the ability of the observer to analyze and interpret important information in digital images [5]. Many reasons can lead to the occurrence of this effect, including a low-light environment, a faulty imaging device, a lack of operator skills, poor environmental conditions, and so forth [6]. Such an effect can be dealt with using various types of contrast enhancement techniques.
The foremost aim of such techniques is to improve the perceived quality and reveal the latent information of a given degraded image so that it becomes visually better for further analysis and interpretation [7]. Currently, most of the available enhancement techniques involve either high-complexity or histogram-based operations. In various situations, histogram modification-based techniques can cause undesirable contrast enhancement, which eventually gives the processed image an unnatural appearance with visual artifacts [8].
Other than histogram equalization [18], there exist various contrast enhancement concepts, including the sigmoid function [19], homomorphic filtering [20], log and power-law transformations [21], contrast stretching [22], retinex theory [23], fuzzy sets [24], artificial bee colony [25], and so forth. Despite that, there is still wide scope for developing low-complexity techniques that are not histogram-based and can produce satisfactory results. Such techniques can be highly beneficial, especially for systems with limited resources.
Hence, the proposed technique has been developed with the intention of providing an efficient yet fast process for contrast enhancement of color images. The processing scheme of the proposed technique is relatively simple. It starts by converting the input color image to grayscale. Then, it automatically computes two contrast tuning parameters depending on the pre-determined grayscale image. Finally, it improves the contrast of the degraded color image using an amended version of an existing contrast stretching technique. Accordingly, its input is a color image and a contrast adjustment parameter δ, while its output is a contrast-adjusted color image.
To achieve this study, three factors have been taken into consideration. The first factor is the utilized image dataset, in which only real-degraded color images are used for experimental and comparison purposes. The reasons for this are that many types of imaging devices with different hardware and software settings are available, and that each image is captured under different environmental conditions. Thus, using only real-degraded images provides a good opportunity to assess the true processing ability of the proposed technique.
The second factor is the employed comparison techniques, in which four advanced techniques were used for comparison purposes: scaling in the discrete cosine transform domain by a quadratic mapping function with blocking artifact removal (TW-CES-BLK) [9], non-parametric modified histogram equalization (NMHE) [10], exposure-based sub-image histogram equalization (ESIHE) [11], and recursive exposure-based sub-image histogram equalization (RESIHE) [12]. The third factor is the utilized image quality assessment (IQA) metrics.
Generally, there exist three types of IQA metrics, namely full-reference, reduced-reference, and no-reference [13]. Accordingly, no-reference metrics require only one image as input, while full-reference metrics usually require two images as input: the degraded or processed image and a reference image. In between these two extremes lie the reduced-reference metrics, which require certain information about the reference image, but not the actual reference image itself, in addition to the degraded or processed image [14].
Since only real-degraded images are utilized, only no-reference IQA metrics were used to measure the quality of the obtained results. Accordingly, three eminent metrics of average local contrast (ALC) [15], colorfulness (CFN) [16], and measure of enhancement (EME) [17] were used for such purpose. The organization of this article is as follows. After introducing an adequate overview in Section 1, the proposed technique is fully explained in Section 2. Afterwards, the experimental results with their related discussions are presented in Section 3. Finally, a succinct conclusion is given in Section 4.
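For concreteness, a sketch of one of these metrics is given below. It follows the widely used Hasler-Süsstrunk definition of colorfulness; whether reference [16] uses exactly this form is an assumption on our part, and the example image is random data used purely for illustration.

```python
# Hedged sketch of a no-reference colorfulness score following the widely
# used Hasler-Susstrunk definition (opponent channels rg and yb).
import numpy as np

def colorfulness(img_rgb: np.ndarray) -> float:
    r, g, b = (img_rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return std + 0.3 * mean

# Example on a random 8-bit image (illustrative only).
rng = np.random.default_rng(1)
print(colorfulness(rng.integers(0, 256, (64, 64, 3))))
```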
ADJUSTABLE CONTRAST STRETCHING TECHNIQUE
The main idea of the proposed technique is based on the concept of linear contrast stretching (LCS), which has been used in various image processing applications [1]. Many techniques can achieve LCS, and their intricacy varies depending on the utilized concept. One such LCS technique can be expressed as in Eq. (1) [2], where W is a given grayscale image, ϒ and Г are contrast tuning parameters, and Ŵ is the contrast-enhanced image. Although this technique can adjust the dynamic range of an image, it does not always yield satisfactory results, especially for images with major spatial variation in contrast [1]. Moreover, the values of the parameters ϒ and Г are not easily determined, since their determination procedures vary from one application to another.
In addition, this technique has been used basically for grayscale images, but occasionally for color images. Hence, a new adjustable contrast stretching technique is introduced which exploits a modified version of the above LCS technique. In brief, the parameters ϒ and Г are computed automatically in the proposed technique based on the spatial and statistical information of an estimated grayscale version of the input image.
Furthermore, an additional parameter is added to control the amount of contrast enhancement in the resulting image. Finally, the input image is processed using an amended version of Eq. (1). Going into details, the proposed technique starts by computing the relative luminance, which represents the grayscale version of the input color image, as in Eq. (2) [3], where E is the obtained relative luminance image and IR, IG, IB are the red, green, and blue channels of the input color image IRGB. For the sake of contrast improvement, the two contrast tuning parameters are determined automatically, to be used later in the enhancement process. The tuning parameters Г and ϒ are computed as in Eqs. (3) and (4), where δ is a parameter that controls the amount of contrast enhancement and should fulfill δ > 0, with a higher value leading to stronger contrast enhancement; Ei is a vector version of image E; Ē is the mean of Ei; n is the number of elements along the longest dimension of Ei; and Λ is a regularization parameter that helps to avoid the increase of unnecessary whiteness, natural whiteness being produced when Λ = 1.4. This value was determined from intensive experiments on various real low-contrast color images. Parameter Г is computed by multiplying δ by the unbiased sample variance of E [4], while parameter ϒ is determined automatically based on the values of Λ and Г.
The reason behind that is to reduce the number of calculations involved in the proposed technique. Lastly, the contrast of the input color image is processed using the pre-determined tuning parameters with an amended version of Eq. (1), as given in Eq. (5), where T is the final output of the proposed technique. In Eq. (1), the reason behind changing the (+) operator to (−) is to provide better contrast in the resulting image. Finally, the following pseudo-code gives a precise description of the execution specifics of the proposed technique.
The pseudo-code of the proposed technique:
Input: low-contrast color image I
Input: parameter δ
Set: parameter Λ (default = 1.4)
Compute the relative luminance using Eq. (2)
Compute parameters Г and ϒ using Eqs. (3) and (4)
Process the contrast of image I using Eq. (5)
Output: contrast-improved color image T
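A partial sketch of this pipeline is given below. It implements only the steps the text fully specifies: the relative luminance of Eq. (2), for which the BT.709 weights are assumed, and the parameter Г of Eq. (3) as δ times the unbiased sample variance of E. Equations (4) and (5) are not reproduced in this extraction and are therefore omitted.

```python
# Partial sketch of the proposed pipeline. The BT.709 luminance weights are
# an assumption for Eq. (2); the mapping from (Lambda, Gamma) to the second
# tuning parameter (Eq. (4)) and the final stretching step (Eq. (5)) are not
# reproduced in this extraction and are omitted here.
import numpy as np

def relative_luminance(img_rgb):
    r, g, b = (img_rgb[..., i].astype(np.float64) for i in range(3))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # assumed BT.709 weights

def gamma_parameter(E, delta):
    """Eq. (3): delta times the unbiased sample variance of E."""
    return delta * E.ravel().var(ddof=1)

# Illustrative run on a random 8-bit image.
E = relative_luminance(np.random.default_rng(2).integers(0, 256, (32, 32, 3)))
print(gamma_parameter(E, delta=1.0))
```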
RESULTS AND DISCUSSION
In this section, the experimental preparations, attained results, comparisons, and their related discussions are presented to show the true ability of the proposed technique in processing various degraded images, as well as to compare its processing ability against several advanced contrast enhancement techniques. Thus, the proposed technique is evaluated using a dataset of different real low-contrast color images collected from various digital repositories across the internet. Furthermore, comparisons are made with four specialized enhancement techniques, TW-CES-BLK, NMHE, ESIHE, and RESIHE, and the results of these comparisons are evaluated using three eminent no-reference IQA metrics: CFN, EME, and ALC.
These metrics can provide valuable information regarding the actual measure of contrast and colors before and after the application of the enhancement process. Accordingly, the ALC and EME metrics measure the local and global contrast, while the CFN metric measures the lucidity of the colors of the assessed image. For all the used metrics, higher values indicate better results in terms of contrast and colors. The results of processing various real-degraded images by the proposed technique are displayed in Fig. 1 and Fig. 2. Furthermore, the results of the conducted comparison between the proposed and the comparative techniques are shown in Fig. 3 and Fig. 4. Likewise, Table 1 reports the scored accuracies of the achieved comparisons, while Fig. 5 shows an analytical graph of the average scores of Table 1. From the obtained experimental results in Fig. 1 and Fig. 2, it can be seen that the proposed technique delivered satisfactory results, as it improved the drab appearance of the degraded images and provided adequate contrast and acceptable colors, with no brightness amplification or visible flaws. In addition, the colors came out conspicuously, giving the resulting images a more natural appearance. Thus, the processed images became clearer and more suitable for real-life usage. From the comparison results shown in Fig. 3-Fig. 5 and Table 1, it can be seen that the comparative techniques performed differently, which can be justified by the variation in the nature of the used images. Regarding the NMHE technique, it gave a moderate performance, as the processed images show only minor enhancement when compared to the degraded images. In the resulting images, the brightness is increased in certain regions, the contrast is slightly improved, and the colors are marginally enhanced.
Regarding the TW-CES-BLK technique, it gave a low performance, as the resulting images appeared only somewhat different from the degraded images. Accordingly, the brightness increased globally, the contrast was amended, and the colors were slightly enhanced. In addition, this technique improved the sharpness of the resulting images, yet introduced the blocking effect, which is considered undesirable in many image processing applications. Regarding the ESIHE and RESIHE techniques, they gave relatively similar performances, slightly in favor of the RESIHE technique. Accordingly, they both improved the local and global contrast, while the colors increased to an above-average level.
Regarding the proposed technique, it performed the best in terms of recorded accuracy and perceived quality, since it provided visually pleasing results with the highest IQA scores. Accordingly, it did not amplify the brightness while improving the contrast and produced ameliorated colors in the resulting images. These facts can be observed by comparing the IQA results of the degraded and the processed images, where there are noticeable differences in favor of the proposed technique. However, like many available image processing techniques, the proposed technique contains one parameter (in this case δ) whose value must be entered manually. Such practice is followed to provide the user with more control over the processing ability of the used technique. As future work, a suitable method can be developed to calculate the value of δ automatically. Improving the contrast of color images using a low-complexity technique is a challenging task; however, this task is successfully achieved in this study by providing a new technique that can improve the contrast using simple calculations. Finally, it is expected that the use of this technique will be extended to other existing image processing applications.
CONCLUSION
A new adjustable contrast stretching technique is introduced in this article, developed based on the concept of linear contrast stretching to improve the contrast of color images using few calculations. Accordingly, it was tested with various real low-contrast color images and compared with four advanced contrast enhancement techniques, and the quality of the obtained results was evaluated using three eminent IQA metrics. From the obtained results, it is obvious that the proposed technique provided satisfactory results, as it produced natural-contrast images with no visible artifacts and outperformed the comparative techniques by scoring the highest in terms of recorded accuracy. Thus, it is confirmed that the proposed technique is well suited for contrast enhancement of color images and can be further used in many image processing applications. | 3,148.6 | 2018-06-30T00:00:00.000 | [
"Computer Science"
] |
A QCT View of the Interplay between Hydrogen Bonds and Aromaticity in Small CHON Derivatives
The somewhat elusive concept of aromaticity plays an undeniable role in the chemical narrative, often being considered the principal cause of the unusual properties and stability exhibited by certain π skeletons. More recently, the concept of aromaticity has also been utilised to explain the modulation of the strength of non-covalent interactions (NCIs), such as hydrogen bonding (HB), paving the way towards the in silico prediction and design of tailor-made interacting systems. In this work, we try to shed light on this area by exploiting real space techniques, such as the Quantum Theory of Atoms in Molecules (QTAIM) and the Interacting Quantum Atoms (IQA) approaches, along with the electron delocalisation indicators Aromatic Fluctuation (FLU) and Multicenter (MCI) indices. The QTAIM and IQA methods have been proven capable of providing an unbiased and rigorous picture of NCIs in a wide variety of scenarios, whereas the FLU and MCI descriptors have been successfully exploited in the study of diverse aromatic and antiaromatic systems. We used a collection of simple archetypal examples of aromatic, non-aromatic and antiaromatic moieties within organic molecules to examine the changes in π delocalisation and aromaticity induced by the Aromaticity and Antiaromaticity Modulated Hydrogen Bonds (AMHB). We observed fundamental differences in the behaviour of systems containing the HB acceptor within and outside the ring, e.g., a destabilisation of the rings in the former as opposed to a stabilisation of the latter upon the formation of the corresponding molecular clusters. The results of this work provide a physically sound basis to rationalise the strengthening and weakening of AMHBs with respect to suitable non-cyclic non-aromatic references. We also found significant differences in the chemical bonding scenarios of aromatic and antiaromatic systems in the formation of AMHB. Altogether, our investigation provides novel, valuable insights about the complex mutual influence between hydrogen bonds and π systems.
Introduction
The hydrogen bond (HB) is one of the most important non-covalent interactions (NCIs) in nature. Since its first appearance in chemical parlance, back in the second decade of the twentieth century [1], HB interactions have been recognised as key factors determining the properties and structure of a wide variety of molecules and materials. Indeed, the role of HBs is known to affect countless systems, from simple molecular liquids and solids, such as water or hydrogen fluoride, to complex and intricate biomolecules. Furthermore, in recent years, a renewed interest in HB interactions has arisen within the scientific community owing to the importance of these contacts in emerging technologies such as (i) CO2 capture [2-5], (ii) rechargeable aqueous zinc [6,7] and aprotic Li-O2 batteries [8], (iii) photovoltaic cells [9,10], (iv) asymmetric catalysis [11], or (v) hydrogen production [12], among others.
As often happens in the context of inter-molecular bonding scenarios, the complex interplay between different kinds of interactions drives the global properties of supramolecular systems. Therefore, the combination of HBs with other similar or drastically different NCIs is of particular importance. Consider, for instance, the prototypical example of water clusters. The existence of single HB donors and acceptors in H2O clusters has been associated with the mutual strengthening (cooperativity) of HBs (Figure 1a), whereas the occurrence of double HB donors and acceptors has been related to the reciprocal weakening (anticooperativity) of HBs (Figure 1b) [13-17]. Additionally, there are other instances of non-additive hydrogen bonding effects reported in the literature, e.g., charge-assisted HBs [18,19] and ion-dipole contacts [20].

Figure 1. Hydrogen-bonding motifs in (H2O)6 clusters. These two motifs are, respectively, related to cooperative and anticooperative hydrogen bonding effects.
As a general result, the above-mentioned cooperative and anticooperative effects stem from subtle electron fluctuations that accompany the formation of non-covalently bonded systems [21]. Some of these electron redistributions take place through σ bonds, and it is thus common to refer to them as σ-cooperative or σ-anticooperative HB effects. However, such charge transfers might also occur throughout π systems, particularly those found in conjugated moieties [14,22-27]. Well-known examples of the interplay between H-bonds and conjugated π systems are Resonance-Assisted Hydrogen Bonds (RAHB), as originally proposed by Gilli et al. [28,29]. RAHBs are usually understood as the result of π-cooperative effects, which considerably strengthen HBs coupled with π bonds. On the other hand, conjugated systems and hydrogen bonds can also reveal anticooperative effects, such as those found, for instance, in the bicyclic fused rings of malondialdehyde [23,30] or in Resonance-Inhibited Hydrogen Bonds (RIHB) [25-27].
Another particularly relevant interplay between H-bonds and π systems can be found in the case of the more recently proposed Aromaticity and Antiaromaticity Modulated Hydrogen Bonds (AMHB) [31,32]. The concept of AMHB was first introduced to rationalise the apparent strengthening or weakening of HB interactions modulated by changes in the aromaticity and antiaromaticity of the involved systems. Although clearly intuitive and useful, the ideas of aromaticity and antiaromaticity are built upon elusive and ill-defined chemical concepts, which hinders a quantitative and rigorous analysis. Fortunately, state-of-the-art wave function analysis methods have proved very useful in the study of electron delocalisation, which is a critical aspect in the study of aromaticity and antiaromaticity. In particular, and in the context of Quantum Chemical Topology (QCT), the Quantum Theory of Atoms in Molecules (QTAIM) [33] and the Interacting Quantum Atoms (IQA) [34] methods have been successfully exploited to investigate the mutual influence of HBs and π systems [22-25].
In this work, we make use of the QTAIM and IQA approaches, as well as electronic delocalisation indices developed within the conceptual framework of QCT, to provide a detailed real-space-based picture of AMHB. For this purpose, we compared the energetics and studied the chemical bonding scenario, using QCT, in the formation of the different AMHB molecular clusters shown in Figure 2. We emphasise the effects of the formation of different molecular clusters on pairwise inter-atomic interactions. For the sake of convenience, and considering the large computational cost of some QCT analyses, derivatives of the simple but representative azete and pyridine molecules are used as model systems in this work. It should be noticed that these molecules have already been successfully employed in the literature [31,32] as minimal models to study hydrogen bond driven dimerisation phenomena. The manuscript is organised as follows. First, we provide a brief background of the QTAIM and IQA approaches. Then, we discuss the electronic and energetic changes accompanying the dimerisation of a collection of organic scaffolds. Later, we consider the interplay between the above-mentioned changes and the aromatic character of the monomers. Lastly, we examine somewhat atypical systems, to finally gather the main conclusions of this work.
Real Space Wavefunction Analyses
The QTAIM theory, as originally formulated by Bader [33], is a method of wave function analysis based on the topology of the electron density ρ(r), in which the real space is fragmented in a collection of attraction basins (Ω) induced by the topology of ρ(r). In QTAIM, traditional chemical ideas, such as the concept of chemical groups or fragments, atomic charges or bond orders, emerge naturally without the need of any reference. Moreover, the QTAIM partition can be performed starting either from theoretical (electronic structure calculations) or experimental (high-resolution X-ray diffraction data [35]) determinations of the electron density of the system. This combination of robustness and practicality has made QTAIM to be widely employed to shed light into a large variety of phenomena including catalysis [36][37][38], electrical conductivity [39][40][41] and aromaticity [42][43][44], to name a few.
Based on a 3D partition such as that defined by QTAIM, the IQA methodology [34] divides a fully interacting, non-separable quantum mechanical system into chemically meaningful interacting entities. The total electronic energy in IQA can be written as a sum of one-body (intra-atomic) and two-body (inter-atomic) terms [34,45], as

E = Σ_A E^A_self + (1/2) Σ_A Σ_{B≠A} E^AB_int,   (1)

where E^A_self is the energy of atom A, which includes the electron-nucleus attraction, the inter-electronic repulsion, and the kinetic energy within atom A. Additionally, E^AB_int is the total interaction energy between atoms A and B; this term encompasses all the available interaction terms between the nuclei and electrons within atoms A and B. The constituent terms of the total inter-atomic interaction between two atoms, E^AB_int, can be regrouped to express the latter as a sum of purely covalent (i.e., exchange-correlation, V^AB_xc) and ionic (i.e., classical, V^AB_cl) components:

E^AB_int = V^AB_cl + V^AB_xc.   (2)

Indeed, the IQA energy decomposition provides a particularly convenient way to study and characterise the chemical nature of the interaction among atoms in an electronic system.
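The bookkeeping implied by Eqs. (1) and (2) is easy to make explicit. In the minimal sketch below, all numerical values are placeholders rather than computed IQA terms.

```python
# Minimal sketch of the IQA bookkeeping in Eqs. (1) and (2): the total energy
# is the sum of atomic self-energies plus all pairwise interaction energies,
# and each pairwise interaction splits into classical and exchange-correlation
# parts. All numbers below are placeholders, not computed IQA values.
def iqa_total(e_self: dict, v_cl: dict, v_xc: dict) -> float:
    e_int = {pair: v_cl[pair] + v_xc[pair] for pair in v_cl}  # Eq. (2)
    return sum(e_self.values()) + sum(e_int.values())         # Eq. (1)

e_self = {"O": -75.20, "H1": -0.42, "H2": -0.42}
v_cl = {("O", "H1"): -0.35, ("O", "H2"): -0.35, ("H1", "H2"): 0.05}
v_xc = {("O", "H1"): -0.21, ("O", "H2"): -0.21, ("H1", "H2"): -0.01}
print(iqa_total(e_self, v_cl, v_xc))
```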
Aromaticity
Aromaticity is a multi-factorial concept which is thought to modify and even to determine the structural, energetic, electronic, and magnetic properties of some molecules. Due to lack of an inherent Dirac observable defining it, aromaticity is usually described in terms of its effects on conjugated systems, such as enhanced thermodynamic stability or structural rigidity. Although the idea of aromaticity was conceived solely upon the interpretation of experimental results, the birth of quantum and computational chemistry motivated the development of multiple tools and techniques aiming at its quantitative analysis. One of the most common approaches to study and measure aromaticity is the nucleus-independent chemical shifts (NICS) [46,47] method, as originally proposed in 1996 by Schleyer and coworkers [46]. The NICS approach has been used for several decades to study numerous π skeletons in a variety of fields. Nevertheless, some results obtained through the NICS descriptor have turned to be highly questionable [48][49][50], even contradicting, in some cases, other aromaticity measures based on reactivity [51]. Among many other criteria exploited to quantify aromaticity we can find (i) structural indices, which evaluate bond equalisation [52], (ii) energy decomposition analyses which require a reference molecule [53], and (iii) electronic descriptors which evaluate the amount of electron delocalisation among the atoms forming a cyclic structure. The last-mentioned set includes a number of methods that have been developed relying on the partition of the electronic density offered by QTAIM, such as the Para Delocalisation Index (PDI) [54], the Aromatic Fluctuation Index (FLU) [55], or the Multicenter Index (MCI) [56]. The PDI and FLU approaches provide an estimate of the aromaticity of a system in terms of the electron delocalisation within the cyclic skeleton, whereas the MCI method arises from a generalised population analysis leading to a many-atoms bond index. In the present work, we have made use of some electronic delocalisation-based descriptors, such as the MCI or the FLU indices, to quantify the changes in aromaticity and antiaromaticity of each molecule upon the formation of the corresponding dimer. We have chosen the FLU and MCI indicators to account for the changes in aromaticity and antiaromaticity upon interaction of the monomers under consideration given their proven accuracy and reliability [57]. Furthermore, these methods are fully compatible with the rest of the QCT analyses performed in this report. Unfortunately, the PDI method is not applicable to some of the examined systems herein, because it can only be used for six-membered rings. Finally, we also used the IQA partitioning to study in detail the energetic changes accompanying the generation of the molecular clusters shown in Figure 2, with a particular emphasis on their role in HB formation.
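As an illustration of the electronic-delocalisation route to aromaticity mentioned above, the sketch below evaluates the PDI of a six-membered ring as the average of the three para-related delocalisation indices; the DI matrix is a placeholder with a benzene-like magnitude, not a computed QTAIM result.

```python
# Hedged sketch of the Para Delocalisation Index (PDI) for a six-membered
# ring: the average of the three para-related delocalisation indices
# delta(1,4), delta(2,5), delta(3,6). The DI matrix is a placeholder.
import numpy as np

def pdi(di: np.ndarray) -> float:
    """di: 6x6 symmetric matrix of delocalisation indices, ring order 0..5."""
    return (di[0, 3] + di[1, 4] + di[2, 5]) / 3.0

di = np.zeros((6, 6))
for i in range(6):
    di[i, (i + 3) % 6] = 0.10   # para pairs, benzene-like magnitude
print(pdi(di))
```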
Computational Details
The structures of the hydrogen-bonded dimers in Figure 2 were optimised in the gas phase and the resultant approximate wavefunctions and electron densities were afterwards dumped for further analysis. All geometry optimisations were performed using the ORCA quantum chemistry package version 5.0.3 [58] using the PBE0 hybrid functional [59] along with the Def2-TZVP basis set [60] and the atom-pairwise dispersion correction with the Becke-Johnson damping scheme [61,62]. For the sake of computational efficiency, the Resolution of Identity (RI) approximation was used for the Coulomb integrals with the default COSX grid for HF exchange, as implemented in ORCA [58]. On the other hand, the auxiliary Def2 basis set was used for the RI-J approximation. The combination of such an exchange-correlation functional and basis set has proven [32] suitable for the characterisation of the systems under study. Moreover, DNLPO-CCSD(T)/def2-QZVPP single point calculations were performed on the optimised geometries of the monomers and dimers in order to test the accuracy of our DFT results. An extrapolation to the complete basis set limit was performed through the def2-SVP/def2-TZVP scheme as implemented in ORCA [58] so as to ameliorate the Basis Set Superposition Error (BSSE). The nature of the stationary points (corresponding to local minima of the potential energy surface) was characterised through the computation of the corresponding harmonic frequencies.
QTAIM and IQA calculations were performed using the AIMALL [63] and PROMOLDEN [64] codes. The exchange-correlation energy was partitioned as indicated in reference [65]. Finally, all aromaticity indices discussed along this manuscript were computed using the ESI-3D code [66]. We have denoted the dimers with (i) the hydrogen bond Acceptor Contained within the Ring and (ii) the hydrogen bond Donor Contained within the Ring as ACR and DCR, respectively. For the ACR azet-2(1H)-one (AZH) dimer, we performed a geometry-constrained optimisation to ensure the attainment of the right tautomer of the constituting monomers. On the other hand, we observed for the DCR-AZH dimer a structure deviating considerably from planarity, as opposed to the rest of the systems. Additional information, such as optimised structures, electronic energies, and a more complete survey of IQA and QTAIM descriptors (e.g., IQA energies of different groups as well as other QTAIM indicators), can be found in the electronic supporting information.
General Energetic Changes Induced by HB Formation
We consider first the differences in dimerisation energies of the different systems shown in Figure 2. Table 1 reports the values of ∆∆E, in which ∆E(Y₂) is the energy change associated with the dimerisation process and R is the corresponding reference system used for ACR and DCR, namely the dimers of formamide (NCO) and formamidine (NCN) in the corresponding ACR and DCR forms. A negative/positive value of ∆∆E(X₂) indicates a stronger/weaker interaction in X···X with respect to the reference complex R···R.
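Based on this description, Equation (3), which is not displayed here, presumably takes the form

∆∆E(X₂) = ∆E(X₂) − ∆E(R₂), with ∆E(Y₂) = E(Y₂) − 2E(Y),

where E(Y₂) and E(Y) denote the energies of the dimer and of the isolated monomer, respectively, and R is NCO for the ACR systems and NCN for the DCR systems.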
The straightforward comparison of the DFT and CC values for ∆∆E reveals that both levels of theory are in good agreement concerning the sign and magnitude of ∆∆E. These observations indicate that our DFT results offer a reliable picture of the energetics of the binding phenomena under study. As the footnote of Table 1 reports, all the values of ∆E are negative, pointing, as expected, to stabilising dimerisation contacts in all the investigated dimers. Furthermore, the ease of complexation seems to be driven, as expected, by the hydrogen bond formation, as reflected by the correlation of the binding energies with the electron density ρ at the bond critical point of the HB contacts (see SI Figure S3). We also note that the N-C=N bonding pattern leads to lower binding energies, due to the larger acidity of H atoms bonded to oxygen. The AMHB [32] interpretation of the sign of ∆∆E in Table 1 states that the ACR dimers AZH and AZA display an increase in aromaticity or a decrease in antiaromaticity, respectively, as a consequence of the formation of the investigated H-bonds. Ditto for the DCR clusters 2HP and 2AP. On the other hand, the ACR complexes 2HP and AZA along with the DCR systems AZH and AZA exhibit the opposite behaviour. We consider now QCT analyses to further dissect these energetic trends. Table 1. Values of ∆∆E, as defined in Equation (3), computed in the DFT and CC approximations described in the main text. NCO and NCN denote formamide and formamidine, respectively, the reference systems shown in Figure 2c. All values are reported in kcal/mol.
Quantum Chemical Topology Analyses
In order to further probe the origin of the observed trends in the evolution of the binding energies reported in Table 1, we examined the non-covalent interactions established between the monomers using QCT techniques. For the sake of convenience, the nomenclature shown in Figure 3 will be used to refer to the atoms involved in the intermolecular bonding pattern of these dimers. We first consider the redistribution of electron charge due to the formation of the investigated H-bonds. We point out that the formation of an HB is associated with a reorganisation of the electronic density of the moieties involved in this interaction. There is, indeed, a transfer of electron charge from the HB acceptor to the HB donor, with the proton acting as a bridge. For small HB dimers such as (H₂O)₂ or (HF)₂, two of the simplest HBs, such charge displacement makes the HB acceptor a better proton donor. Ditto for the HB donor becoming a better proton acceptor. Notwithstanding, the present work deals with dimers where each molecule acts simultaneously as an HB donor and an HB acceptor and hence there is no effective charge transfer between the monomers. However, the presence of an HB induces a rearrangement of the electron density that interacts with the π clouds in each molecule of the studied systems. Let us start by examining the major changes undergone upon the dimerisation of the non-aromatic reference systems: formamide and formamidine. The complexation process is accompanied by a significant electron redistribution, as reflected by the change in the QTAIM atomic charges collected in Table 2. The formation of the non-covalent interactions leads, in both cases, to a noticeable electron enrichment of the D and A atoms (between 0.03 and 0.09 electrons) at the cost of decreasing the electron population of the H atom by ≈0.04-0.09 electrons. On the other hand, the central C atom undergoes a noticeable change in its average electron number of −0.09 to 0.01 a.u., depending on the nature of the acceptor moiety. Table 2. Change in the QTAIM electron populations of the atoms involved in the HB contacts upon the formation of the dimers for (i) the hydrogen bond Acceptor Contained within the Ring (ACR) and (ii) the hydrogen bond Donor Contained within the Ring (DCR) cases. The labelling of the atoms is shown in Figure 3. All values are reported relative to the monomers, which were used as reference. Atomic units are used throughout. These observations, with the particular exception of the bridging C atom, are very similar for both the NCO and NCN bonding patterns and evidence a conspicuous rise in the polarisation of the system due to the formation of the corresponding dimers. Such an increase in the local polarisation of the terminal atoms enhances the electrostatic interaction in the HB contacts, as reflected by the large classical components of the A···H interaction reported in Figure S2.
We also considered the change in the number of electrons shared among bonded atoms, as measured by the delocalisation index (DI) (see SI Figure S5 for more details) and gathered in Table 3. The D-H bond order decreases significantly (≈0.16-0.25) upon dimerisation, thus weakening the covalent component of the D-H interaction, as evidenced by the prominent destabilisation of ≈30-40 kcal/mol found for the exchange-correlation energy V_xc(D-H). A similar, yet more subtle, weakening of the covalent component can also be observed for the C-A bond. On the other hand, the DI(D-C) is increased by 0.07-0.10 electron pairs, going from a single D-C bond to a slightly higher bond order (≈1.1 in the general case). These results point out that hydrogen bonding reinforces the D-C double bond character at the expense of decreasing that of the C-A interaction. We observed a similar effect in our analysis of RAHB, in which the DIs corresponding to double bonds decrease while those of single bonds show the opposite behaviour after the formation of the RAHB [22]. This last observation holds for all the systems and suggests that the formation of the dimers may trigger two opposing effects. Because the A-C bond is contained within the ring in ACR dimers, and ∆DI(C-A) < 0 as indicated in Table 3, we would expect the formation of the H-bond to decrease the number of π electrons in the associated ACR cyclic structures, as represented in Figure 4. Conversely, since the D-C bond is included in the cyclic structures of DCR dimers and ∆DI(D-C) > 0 (Table 3), the number of π electrons must increase in the DCR dimers due to the formation of the H-bond. Accordingly, Table S8 indicates that the group energies of the ACR/DCR rings increase/decrease upon the formation of the corresponding dimers. These changes in electron delocalisation affect the aromaticity and antiaromaticity of the investigated systems, as discussed below. Table 3. Change in the electron delocalisation index of the atoms involved in the HB contacts (Figure 3) upon the formation of the dimers with (i) the hydrogen bond acceptor contained within the ring (ACR) and (ii) the hydrogen bond donor contained within the ring (DCR). These changes are computed with respect to the values of the monomers, which were used as references. Atomic units are used throughout.
Perturbation of the Aromaticity of the π Skeleton
We consider now the interplay of aromaticity and antiaromaticity with the inter-molecular HB contacts of Figure 2. Table 4 gathers the change in the aromaticity indices of the intra-molecular π skeleton upon dimerisation, as measured by the MCI and FLU indices (further details about these indices can be found in Section 1 of the SI). Table 4. Change in the MCI and FLU aromaticity indices, along with the change in aromatic/antiaromatic character (Γ), induced by the formation of the dimers in Figure 2. If ∆Γ > 0, there is either (i) an increase of aromaticity or (ii) a reduction of antiaromaticity; vice versa when ∆Γ < 0. Before discussing in detail the changes in the aromatic character of the spectator groups, it may be enlightening to briefly recall the meaning of the FLU and MCI aromaticity indices. The former measures the electron sharing between neighbouring atoms in a ring as well as its similarity across the constituents of the cyclic structure. Thus, a FLU value of zero corresponds to an "ideal" aromatic system, while positive values evidence a deviation from aromaticity. On the other hand, the MCI index measures the collective electron delocalisation along a collection of M centres. As opposed to the FLU, large MCI values suggest a high aromatic character, whereas any other situation usually results in vanishing MCI indices. Although these metrics were specifically designed to measure aromaticity, they have been successfully used to study antiaromaticity as well [67]. We used the Hückel rule to assign the aromatic or antiaromatic character of the examined monomers, as shown in Figure 5. The aromaticity metrics of the monomers (see Table S1) are in agreement with the aromaticity or antiaromaticity labels determined by the Hückel rule. Indeed, the DCR forms of AZH and AZA are more aromatic than their ACR counterparts. Likewise, the ACR tautomers of 2HP and 2AP are more aromatic than the corresponding DCR structures. We discuss now how the aromaticity and antiaromaticity of the corresponding monomers change due to the formation of the HB interactions. We mention that, apart from the DCR-AZH scaffold, all dimers adopt a nearly fully planar disposition, which is key for the delocalisation of the π electrons and optimal for the formation of the AMHB. Except for a slight discrepancy between the results of (i) the FLU on one hand and (ii) the MCI and NICS [32] on the other for the change in the aromatic character of the DCR systems AZH and AZA, there is a good agreement between the computed sign of ∆∆E and the changes of aromaticity and antiaromaticity in the examined systems, as shown in Figure 6. In short, we observe that the condition ∆∆E < 0, i.e., a more favourable formation of the HB with respect to the reference, is related to a reduction in the antiaromatic character of the monomers. Correspondingly, the condition ∆∆E > 0, i.e., a less favourable HB with respect to the reference, is accompanied by a reduction of the aromaticity of the monomers. These observations based on QCT are consistent with those of Wu and coworkers [32]. We note some flexibility in the interpretation of these results. Namely, the decrease in aromaticity upon HB formation of the AZA and AZH dimers in the DCR configuration was interpreted in Reference [32] as an intensification of antiaromaticity (Figure 7). Similarly, the decrease in antiaromaticity of the DCR systems 2HP and 2AP after the generation of the corresponding dimers was interpreted as an HB reinforced by an increase in aromaticity [32].
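To make the FLU recipe above concrete, the following minimal sketch evaluates the conventional FLU expression from a table of QTAIM delocalisation indices. The DI values and the C-C reference are illustrative placeholders rather than data from this work, and the atomic valences are computed only from the supplied pairs as a simplification.

```python
# Illustrative FLU evaluation from QTAIM delocalisation indices (DIs).
DELTA_REF_CC = 1.4  # hypothetical reference DI for an aromatic C-C bond

def valence(atom, di_table):
    """V(A): sum of DIs delta(A, B) over the pairs present in the table.
    In a full QTAIM treatment the sum runs over all atoms of the molecule;
    here it is restricted to the supplied pairs as a simplification."""
    return sum(di for pair, di in di_table.items() if atom in pair)

def flu(ring, di_table, delta_ref=DELTA_REF_CC):
    """FLU = (1/n) * sum over adjacent ring pairs (A, B) of
    [ (V(B)/V(A))**alpha * (delta(A,B) - delta_ref)/delta_ref ]**2,
    with alpha = +1 if V(B) > V(A) and -1 otherwise.
    FLU ~ 0 signals an 'ideal' aromatic ring."""
    n = len(ring)
    total = 0.0
    for i in range(n):
        a, b = ring[i], ring[(i + 1) % n]
        delta = di_table[frozenset((a, b))]
        va, vb = valence(a, di_table), valence(b, di_table)
        alpha = 1.0 if vb > va else -1.0
        total += ((vb / va) ** alpha * (delta - delta_ref) / delta_ref) ** 2
    return total / n

# A benzene-like six-membered ring with equal adjacent DIs gives FLU = 0.
ring = list(range(6))
dis = {frozenset((i, (i + 1) % 6)): 1.4 for i in range(6)}
print(flu(ring, dis))  # -> 0.0
```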
This observation suggests that aromaticity and antiaromaticity can be put on a common scale using electron delocalisation tools within the conceptual framework of QCT.
We consider now a further QCT description of the aromatic and antiaromatic moieties considered herein (Figure 5). We observed important differences concerning the atoms directly involved in the H-bond depending on the aromatic or antiaromatic character of the interacting monomers. For the sake of clarity, we will generally refer to the changes of QTAIM and IQA properties upon dimerisation, with respect to the NCN or NCO reference systems.
As expected, the relative changes in the atomic charges and delocalisation indices reported in Tables 2 and 3 have a notable impact on the covalent and ionic components of the total IQA interaction energies of the atoms directly involved in the H-bond. Figure 8 collects the changes in the classical and exchange-correlation components of E_int for the atoms entailed directly in the inter-molecular contact, reported relative to the NCO or NCN reference systems. The aromatic scaffolds (see Figure 5) and the antiaromatic systems show a different behaviour concerning the weakened and strengthened interactions in the inter-molecular region due to the formation of the examined H-bonds. The antiaromatic monomers (Figure 5) generally strengthen both the classical and exchange-correlation components of the H···A contact further than the reference systems. This fortifying of the HB interaction is accompanied by a noticeable destabilisation of the covalent component of the C-A bond, which, as previously discussed, is more strengthened in the aromatic compounds than in the reference compounds.
The Peculiar Case of the AZH (DCR) Dimer
As previously mentioned, all of the dimers with the exception of AZH (DCR) exhibit a planar or quasi-planar structure. However, the lowest local energy minimum found for the last-mentioned compound adopts a distorted conformation. Based on the aforementioned observations, one might conjecture at first glance that such a geometrical distortion could be understood as a way to alleviate the reduction of aromaticity induced by the formation of the dimers. To explore this idea, two additional conformational isomers were studied, as represented in Figure 9. Both the bent-trans and bent-cis structures are bona fide local minima with a non-planar geometry. We also performed a constrained optimisation in order to obtain the corresponding planar AZH (DCR) isomer. As shown in Table 5, the bent isomers are almost degenerate in terms of energy, with the bent-cis structure slightly more stable by ≈0.5 kcal/mol. On the other hand, the planar structure is ≈7 kcal/mol less stable than the latter. The aromaticity metrics reported in Table 5 indicate that restricting the 4-membered rings in AZH to remain in a plane would lead to a further reduction of the aromaticity of the dimer. Moreover, the distortion of the low energy (bent) conformations has a dramatic impact on the QTAIM descriptors. Indeed, the changes in the atomic charges and the delocalisation indices are drastically increased, as reflected by the trends in the ∆Q and ∆DI values gathered in Table 6. As can be seen from the changes in the delocalisation indices in the same table, the planar isomer leads to a very prominent decrease of the D-H and C-A bond orders while promoting the delocalisation of electrons involved in the D-C and H-A interactions. Further information can be obtained through the analysis of the IQA interaction energies, gathered at the bottom of Table 6. The trends in the E_int energies reveal that flattening the more stable bent geometry stabilises all the pairwise interactions between the terminal atoms participating in the binding. This observation is particularly prominent for the D-C and H-A bonds. The interplay between the exchange-correlation and electrostatic contributions also leads to a moderate stabilisation of the D-H and C-A bonds, despite the already mentioned decrease in the DI index. Thus, and in agreement with the aforementioned trends, forcing the planarity of the system further boosts the HB contacts as well as the π cloud of electrons through the promotion of the "in-ring" resonant structure. Such an effect is consistent with the enhancement of the anti-aromatic character of the system (top of Figure 7) along with the decrease of the net binding affinities, despite the more favourable hydrogen bonding established between the monomers.
Conclusions
We presented an analysis of aromaticity- and antiaromaticity-modulated hydrogen bonds using quantum chemical topology tools, namely the QTAIM, the IQA energy partition, and the electronic delocalisation indicators FLU and MCI. For this purpose, we considered rings containing either the H-bond acceptor (ACR) or the H-bond donor (DCR). Our results show how the formation of the investigated H-bonds can trigger subtle electronic rearrangements with a quite significant impact on the stability and properties of the interacting systems. We described large changes in the QTAIM charges and electron delocalisation indices, along with the accompanying classical and exchange-correlation components of the IQA interaction energies, related to the formation of these HB clusters. We also found fundamental differences between the ACR and DCR systems, for example, the weakening and strengthening of double bonds within the cyclic structures of ACR and DCR, a condition which leads to the destabilisation and stabilisation of the rings in these systems. Additionally, we related the enhancement and impairment of the examined H-bonds with respect to non-aromatic (i.e., non-cyclic) structures to changes in the aromatic and antiaromatic character of the system. We observe that reductions in aromaticity can be interpreted as increases in antiaromaticity and vice versa. Therefore, our results indicate that aromaticity and antiaromaticity can be considered on a common scale using QCT tools. Our results also point out that the deviation from planarity of specific AMHB clusters could be related to a tendency of the system to ameliorate a reduction in aromaticity. Overall, we expect the results of our investigation to provide novel and useful insights about the intricate interplay between H-bonds and π systems.
"Chemistry"
] |
Financial Impact Analysis of Carbon Pricing on Geothermal Power Plant Project Investment at PT PLN (Persero)
Climate change is a significant global challenge, mainly driven by greenhouse gas (GHG) emissions. The energy sector is a major contributor to GHG emissions, accounting for approximately 73% of global emissions in 2022. Within the energy sector, electricity generation emitted 13 GtCO2, contributing approximately 35% of global energy-related emissions. To address this challenge, PLN, a state-owned electrical utility in Indonesia, has declared a roadmap to achieve Net Zero Emissions by 2060 and has implemented several strategic initiatives to reach that goal. Carbon pricing is one of the key efforts that enable PLN to receive incentives for reducing GHG emissions while also enhancing financial performance. This study examines the effects of implementing a carbon trading mechanism on the financial metrics of a 110 MW geothermal power plant project investment. The results demonstrate a 13.58% increase in NPV, a faster payback period from 8.37 to 7.67 years, and a 0.31% rise in the MIRR. These results indicate the potential improvement in project investment financial performance that PLN can achieve while still aligning with global environmental objectives.
INTRODUCTION
Climate change is a substantial worldwide problem that poses threats to the environment, human health, social welfare, and economic development. According to the Intergovernmental Panel on Climate Change (IPCC), a United Nations body for assessing the science related to climate change, rising temperatures lead to extreme heatwaves, changed precipitation patterns, and disturbances to ecosystems, which intensify issues such as deforestation and the loss of biodiversity (IPCC, 2018). These changes increase health hazards, including heat-related illnesses and diseases transmitted by vectors, specifically affecting vulnerable populations (Watts et al., 2021). Extreme weather events have a significant impact on social welfare, leading to relocation, food shortages, and resource conflicts (Adger et al., 2018). The energy sector is a major contributor to greenhouse gas (GHG) emissions on a global scale, accounting for about 73% of emissions in 2022 (International Energy Agency, 2022). These emissions are primarily generated by electricity generation, which contributes 35% of them. This emphasizes the need for a transition to renewable energy sources with lower carbon emissions. Carbon trading is a key mechanism to encourage this transition, providing incentives for reducing GHG emissions through the establishment of emission limits and allowing organizations to trade carbon credits. Various carbon pricing mechanisms have been implemented internationally. The European Union's Emissions Trading System (EU-ETS) and China's National Emissions Trading System are widely recognized examples, although countries like South Korea, Thailand, and Singapore have also implemented other strategies customized to their contexts. Indonesia is now working on implementing carbon pricing programs. The government is exploring potential mechanisms through the Carbon Pricing Working Group (CPWG) and integrating these concerns into national development plans (DNPI, 2020; Government of Indonesia, 2016). PT PLN (Persero), Indonesia's state-owned electricity company, plays a vital role in facilitating this transformation. PLN, as a major electricity provider, makes a significant contribution to the country's GHG emissions. The company's objective is to achieve Net Zero Emissions by 2060, with a primary focus on developing a green ecosystem and reducing reliance on fossil fuels. However, PLN faces challenges such as dependency on conventional energy sources, financial risks caused by fossil fuel market fluctuations, and limited funding. To bridge the gap between its current operations and sustainability goals, PLN is implementing strategies such as decarbonizing coal and gas plants and expanding renewable capacity and its supporting systems. This transformation requires substantial investment in renewable energy.
LITERATURE REVIEW
2.1 Energy Transition
Energy transition is broadly defined as the process of shifting from one energy system to another, which usually involves a change in the primary fuel source or energy technology. This transition often includes shifting away from fossil fuels like coal, oil, and natural gas, and towards renewable energy sources such as wind, solar, hydropower, and geothermal. The process of energy transition is complex, often covering technological, economic, and social changes that may span several decades. According to Sovacool (2016), energy transitions are usually prolonged affairs, requiring significant shifts not only in technology but also in political regulations, tariffs, pricing regimes, and user behaviors. The global energy transition is gaining momentum as countries and regions increasingly prioritize the shift from fossil fuels to renewable energy sources in response to climate change and sustainability goals. This transition is marked by a growing demand for renewable energy technologies such as solar, wind, hydropower, and geothermal, together with a gradual phasing out of coal- and oil-fired power plants. Renewable energy capacity has been expanding rapidly, driven by developments in technology, decreasing costs, and supportive government regulation. In Southeast Asia, including Indonesia, the pace of the energy transition is accelerating, although it faces unique challenges. Indonesia, with its abundant coal resources, has relied heavily on fossil fuels to meet its energy needs. However, the government has recognized the importance of transitioning to a more sustainable energy mix and has set aggressive targets to increase the share of renewables in its energy portfolio. The Indonesian government's energy roadmap outlines plans to achieve 23% renewable energy in the national energy mix by 2025. This strategy primarily focuses on the development of geothermal, hydro, solar, and wind energy. The country is also exploring the potential of bioenergy and tidal energy to diversify its renewable energy sources.
Impact on Financial Performance
The energy transition has significant implications for the financial performance of energy companies. As companies shift from fossil fuels to renewable energy sources, capital expenditures (CAPEX) are increasingly directed towards the development and deployment of renewable energy infrastructure. This shift often requires substantial upfront investment in new technologies and facilities, such as solar farms, wind turbines, and energy storage systems. While the initial capital outlay can be high, the long-term operating costs associated with renewable energy are generally lower than those of fossil fuel-based systems. Revenue generation is also impacted by the energy transition. Renewable energy projects can benefit from government incentives, carbon pricing mechanisms, and increasing demand for clean energy, all of which can enhance revenue streams. However, energy companies must also navigate market volatility, changes in regulatory frameworks, and the gradual decline in demand for fossil fuels.
In Indonesia, the energy transition presents both risks and opportunities for financial performance, with the potential for enhanced profitability through strategic investments in renewables and participation in carbon trading schemes (Fischer & Newell, 2008).
Challenges and Opportunities
The adoption of renewable energy poses several obstacles for energy companies, especially in countries like Indonesia where fossil fuels have long dominated the energy mix. One of the key challenges is the existing infrastructure, which is often designed for the extraction, processing, and distribution of fossil fuels. Replacing this infrastructure to support renewable energy sources can incur significant costs and require a substantial amount of time (Mun, 2010). In addition, energy companies must contend with regulatory uncertainty, as governments continue to adjust laws and incentives regarding the transition to clean energy. This unpredictability can affect investment decisions and long-term planning.
Another challenge is the variability and intermittency of renewable energy sources such as solar and wind, which require advanced grid management and energy storage solutions to ensure a stable and reliable energy supply. In Indonesia, the geographic diversity of the archipelago presents logistical difficulties for renewable energy project development, especially in remote or underdeveloped areas.
Despite these obstacles, the energy transition offers significant opportunities for both financial and environmental benefits.
Companies that successfully manage the transition may benefit from lower operational costs, increased market competitiveness, and opportunities to generate new revenue streams through renewable energy projects and carbon pricing mechanisms (Ekins et al., 2011). Furthermore, the transition also aligns with broader environmental goals by reducing greenhouse gas emissions, improving air quality, and contributing to sustainable development.
The energy transition in Indonesia is not only a pathway to achieving climate goals but also an opportunity to enhance energy security, reduce dependency on fossil fuel imports, and create employment opportunities in the renewable energy sector. Indonesia has the capacity to become a leader in sustainable energy in Southeast Asia by utilizing its abundant renewable resources. This would not only stimulate economic growth but also address environmental issues.
Regulatory Frameworks and Policy on Carbon Pricing
Regulatory frameworks and policies on carbon pricing are crucial in facilitating the global energy transition.These frameworks offer economic incentives for the reduction of GHG emissions and encourage investments in renewable energy.
Global Overview
Carbon pricing mechanisms have become a crucial global instrument in the fight against climate change, with various governments and regions applying different approaches to restrict GHG emissions. Some of the more notable mechanisms are carbon taxes, cap-and-trade systems, and emissions trading schemes (ETS).
• Carbon Taxes: These are direct taxes enforced on the carbon content of fossil fuels, intended to incentivize businesses and individuals to reduce their carbon footprint. Countries like Sweden and Canada have successfully implemented carbon taxes, resulting in significant reductions in emissions while sustaining economic growth (World Bank, 2023).
• Cap-and-Trade Systems: Under this system, a limit or cap is set on the total amount of GHG emissions that are allowed. Companies receive or buy emission allowances, which they can trade with one another as needed. Over time, the cap is gradually decreased, increasing the cost of emissions and encouraging the use of cleaner technologies.
Indonesia's Carbon Pricing Policies
Indonesia is currently in the initial phases of implementing carbon pricing policies, a significant move toward aligning the country's economic growth with its climate goals. As a major developing economy with a heavy reliance on fossil fuels, Indonesia has started to gradually adopt carbon pricing as part of its broader climate change strategy. The country's regulatory framework includes several key initiatives aimed at mitigating emissions and promoting sustainable development:
• Nationally Determined Contribution (NDC): As part of the Paris Agreement, Indonesia has submitted a Nationally Determined Contribution (NDC) committing to reduce its GHG emissions by 29% by 2030 under a business-as-usual scenario, and by up to 41% with international assistance. The NDC is a fundamental element of Indonesia's climate policy, providing guidance for emissions reduction across various sectors, including energy (Ministry of Environment and Forestry, Indonesia, 2021).
• Low Carbon Development Initiative (LCDI): The LCDI is integrated into Indonesia's national development planning and highlights the importance of the energy transition to a low-carbon economy. It outlines strategies for emissions reduction while fostering economic development, including the potential implementation of carbon trading and carbon tax systems (Bappenas, 2019).
• Carbon Pricing Regulation: In 2021, Indonesia introduced Presidential Regulation No. 98, which lays the groundwork for carbon pricing, including carbon trading and carbon tax schemes. This regulation signals Indonesia's commitment to incorporating carbon pricing into its climate strategy, with the goal of achieving net-zero emissions by 2060.
• Business-scale initiatives: To carry out environmentally sound business activities in line with its Net Zero Emission initiatives, PLN has issued Board of Directors Regulation Number 161 of 2021 concerning the Strategic Policy for Climate Change Management as a guideline for the governance of climate change management within PLN (PLN Sustainability Report, 2023).
These policies are expected to play a crucial role in guiding investment decisions, particularly in the energy industry.
Financial Modelling
Financial modeling is an essential tool for assessing the financial implications of carbon pricing on investment decisions, especially in the energy sector. It uses quantitative methods to forecast the financial outcomes of projects, allowing decision-makers to evaluate risks, profitability, and overall feasibility.
Several key techniques are widely used in financial modelling. Discounted Cash Flow (DCF) analysis is a widely used method that involves projecting future cash flows and discounting them to their present value using a specified discount rate. The method is useful for evaluating the long-term viability of energy projects, where significant initial investment is required and returns are realized over an extended period (Damodaran, 2012). Scenario Analysis is another critical approach that enables analysts to assess the influence of various assumptions and external factors on project outcomes. By developing multiple scenarios, such as high, medium, and low carbon pricing, investors can evaluate a project's financial performance under different market or regulatory conditions. This method is crucial for building a comprehensive perspective of a project's potential risks and for preparing for different future scenarios. Sensitivity Analysis is a method that complements scenario analysis by examining how changes in specific factors, such as carbon prices, exchange rates, or energy prices, affect the financial performance of a project. This methodology helps identify the key variables that drive a project's value and the potential risks that may interfere with the realization of expected returns.
Application in Energy Projects
Financial models are essential for evaluating the economic feasibility of investments in renewable energy. DCF models are commonly used to assess the net present value (NPV) and internal rate of return (IRR) of projects, providing critical insights into their potential profitability. Applying the DCF method to renewable projects typically involves accounting for initial capital expenditures, operating and maintenance costs, revenue from energy sales, and savings or earnings from carbon credits (Brealey, Myers, & Allen, 2020). Scenario analysis is particularly applicable in energy projects because of the inherent uncertainties associated with energy prices, technological advancements, and regulatory policies. For example, a scenario may simulate several potential outcomes of carbon pricing, allowing investors to understand how stricter carbon regulations could increase the profitability of renewable energy projects by raising the cost of fossil fuels. Sensitivity analysis improves these evaluations by testing how changes in key assumptions, such as the availability of government subsidies or fluctuations in energy demand, could impact project outcomes. This approach helps investors determine which factors require close monitoring and proactive management.
Carbon Pricing Integration
Incorporating carbon prices into financial models creates more accurate predictions of the financial performance of energy projects in a carbon-constrained world. Carbon pricing, whether implemented through taxes or cap-and-trade mechanisms, has a direct impact on the cost structure of energy projects, especially those that rely heavily on fossil fuels. The model can then help in forecasting additional costs for carbon emissions or potential revenues from carbon credits, offering a more comprehensive view of the financial aspects of a project. Integrating carbon pricing requires adjustments to both revenue and cost estimates in the financial model. For example, a DCF model could include additional cash flows related to the sale of carbon credits generated by a renewable energy project. In contrast, the model for a fossil fuel project would take into account the additional expenses incurred as a result of carbon taxes. This integration allows for a more precise evaluation of a project's net profitability and of the potential risks linked to future regulatory changes.
Capital Budgeting
Capital budgeting analysis is used to review the Net Present Value (NPV), Internal Rate of Return (IRR), and Payback Period of the project under various scenarios. This includes the calculation of the cost of capital using the Capital Asset Pricing Model (CAPM) or the Dividend Discount Model (DDM).
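As a minimal illustration of this cost-of-capital step, the sketch below computes a CAPM cost of equity, r_e = r_f + β(r_m − r_f), and combines it with the after-tax cost of debt into a WACC; all input numbers are hypothetical placeholders, not PLN figures.

```python
def capm_cost_of_equity(risk_free, beta, market_return):
    """CAPM: r_e = r_f + beta * (r_m - r_f)."""
    return risk_free + beta * (market_return - risk_free)

def wacc(equity_share, debt_share, cost_of_equity, cost_of_debt, tax_rate):
    """After-tax weighted average cost of capital."""
    return (equity_share * cost_of_equity
            + debt_share * cost_of_debt * (1.0 - tax_rate))

# Hypothetical inputs for illustration only (not PLN figures).
r_e = capm_cost_of_equity(risk_free=0.065, beta=1.1, market_return=0.12)
print(wacc(equity_share=0.3, debt_share=0.7,
           cost_of_equity=r_e, cost_of_debt=0.05, tax_rate=0.22))
```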
Net Present Value (NPV)
NPV calculates the difference between the present value of the cash inflows generated by the project and the present value of the cash outflows, discounted at a specific rate. A positive NPV indicates that the projected earnings (adjusted for time and risk) exceed the initial investment, making the project financially viable. When carbon pricing is incorporated into the model, the cash flows take into account the extra expenses related to carbon emissions or the potential income from carbon credits. This modification can have a considerable impact on the NPV, as increased carbon pricing can decrease the appeal of fossil fuel projects while increasing the feasibility of investments in renewable energy. The NPV method's adaptability permits the inclusion of different scenarios, such as variations in carbon pricing, energy costs, and regulatory policies.
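A minimal sketch of this calculation, with an optional carbon-credit revenue stream added on top of the base cash flows; all amounts are hypothetical, and the 9.74% rate simply mirrors the WACC quoted later for the base case.

```python
def npv(rate, cashflows):
    """NPV of a cash-flow list; cashflows[0] is the t = 0 outlay (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 5-year project, amounts in arbitrary currency units.
base = [-1000.0, 180.0, 190.0, 200.0, 210.0, 220.0]
credits = [0.0, 15.0, 15.0, 15.0, 15.0, 15.0]  # carbon-credit revenues
with_credits = [b + c for b, c in zip(base, credits)]

print(npv(0.0974, base))          # without carbon credits
print(npv(0.0974, with_credits))  # with carbon credits
```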
Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR)
The Internal Rate of Return (IRR) is another key metric in capital budgeting; it represents the discount rate at which the NPV of all cash flows (both inflows and outflows) of a particular project equals zero. The IRR is thus the estimated annual rate of return that a project is expected to generate. In energy projects, the IRR is often used to compare the profitability of different investment options. When carbon pricing is introduced, the IRR can be significantly affected, particularly for projects with high carbon emissions. For renewable energy projects, which generally have low or zero carbon emissions, the IRR might increase as the cost savings from avoided carbon taxes or the revenues from carbon credits improve the overall financial performance of the project. However, one limitation of the IRR is that it assumes reinvestment of interim cash flows at the same rate as the IRR itself, which may not always be realistic. Additionally, the IRR can sometimes produce multiple values for projects with alternating cash flows, making it less straightforward than the NPV in certain situations. The Modified Internal Rate of Return (MIRR) is designed to address some of these limitations of the traditional IRR. The MIRR assumes that positive cash flows are reinvested at the project's cost of capital, providing a more accurate reflection of the project's profitability. It is calculated by combining the present value of the cash outflows with the future value of the cash inflows, using the cost of capital as the reinvestment rate. As a result, the MIRR offers a more realistic and robust measure of a project's financial performance, particularly in scenarios where the traditional IRR might overestimate returns (Damodaran, 2012). As for the Payback Period, it quantifies the duration needed for an investment to generate cash flows that are adequate to recover the initial expenditure, and it is useful for evaluating the liquidity risk of a project by offering a clear indication of how quickly an investment can pay for itself (Brealey, Myers, & Allen, 2020). In the context of renewable energy projects, the Payback Period tends to be shorter for projects that benefit from subsidies, tax incentives, or carbon credits, as these factors can expedite the recovery of the initial investment. However, the Payback Period fails to consider the time value of money, which restricts its capacity to accurately assess the overall profitability of a project. This restriction is especially significant for energy projects that span a long duration, as the majority of the profits are generated over an extended period.
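Returning to the MIRR definition above, the sketch below implements MIRR = (FV of inflows / −PV of outflows)^(1/n) − 1, with both the finance and reinvestment rates set to the cost of capital; the cash flows are hypothetical.

```python
def mirr(cashflows, finance_rate, reinvest_rate):
    """MIRR = (FV of inflows / -PV of outflows)**(1/n) - 1 over n periods."""
    n = len(cashflows) - 1
    pv_out = sum(cf / (1.0 + finance_rate) ** t
                 for t, cf in enumerate(cashflows) if cf < 0)
    fv_in = sum(cf * (1.0 + reinvest_rate) ** (n - t)
                for t, cf in enumerate(cashflows) if cf > 0)
    return (fv_in / -pv_out) ** (1.0 / n) - 1.0

# Both rates set to the project's cost of capital, as MIRR assumes.
print(mirr([-1000.0, 300.0, 350.0, 400.0, 450.0], 0.0974, 0.0974))
```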
The Discounted Payback Period method can be used to address this time-value limitation of the simple Payback Period. This approach calculates the time required to recover the initial investment by discounting the project's cash flows to their present value before summing them, providing a more accurate assessment of when an investment will break even in present-value terms.
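A short sketch covering both payback variants described above, with linear interpolation inside the break-even year; the cash flows are hypothetical placeholders.

```python
def payback(cashflows, rate=0.0):
    """Years until the cumulative (optionally discounted) cash flow turns
    non-negative; cashflows[0] is the initial outlay (negative). Returns
    None if the investment is never recovered. rate=0 gives the simple
    payback period; rate>0 gives the discounted payback period."""
    cumulative = cashflows[0]
    for t, cf in enumerate(cashflows[1:], start=1):
        dcf = cf / (1.0 + rate) ** t
        if cumulative + dcf >= 0.0:
            return (t - 1) + (-cumulative) / dcf  # interpolate within year t
        cumulative += dcf
    return None

flows = [-1000.0] + [180.0] * 15
print(payback(flows))          # simple payback
print(payback(flows, 0.0974))  # discounted payback at the base-case WACC
```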
METHODOLOGY
The study adopts a quantitative approach to assess the financial implications of carbon pricing mechanisms on PT PLN's investment decisions. It uses financial modelling techniques to quantify the impact of carbon pricing, ensuring that the findings are measurable and replicable.
The study uses quantitative data drawn from external sources, such as:
• Financial reports and statistical data of PT PLN: to evaluate past performance and key indicators, providing a basis for future financial projections.
• Project feasibility documents: examination of the project's feasibility study to provide the technical data needed for the financial modelling.
• Regulatory documents: review of policies and regulations related to carbon pricing, both within Indonesia and globally, to understand the regulatory environment and its potential impact on financial outcomes.
Scenario Analysis:
The scenario analysis is carried out to determine the NPV of the project under different conditions and to identify the key input variables that have a large but uncertain effect on the project's NPV. These variables include the capacity factor, the escalation of the electricity selling price, the exchange rate, the tariff of steam, and the carbon credit unit price. As part of the scenario analysis, the NPV is computed for both the worst-case and best-case scenarios to see how the project's value might change. The NPV range reflects how risky the project is; a wider range means more uncertainty and risk.
Sensitivity Analysis:
The sensitivity analysis tests how robust the financial models are by varying important factors. This helps to identify which factors have the most significant effect on financial performance and what risks might arise if these factors change.
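A hedged sketch of this procedure: each driver of a toy stand-in cash-flow model is perturbed by ±20% (the swing used later in the study) and the resulting NPV changes are printed, tornado-chart style. All numbers are hypothetical placeholders, not the study's inputs.

```python
def project_npv(p):
    """Toy stand-in for the full cash-flow model (hypothetical numbers)."""
    annual = p["capacity_factor"] * p["tariff"] - p["steam_cost"]
    return -p["capex"] + sum(
        annual * (1 + p["escalation"]) ** t / (1 + p["wacc"]) ** t
        for t in range(1, 31))

base = {"capacity_factor": 0.86, "tariff": 900.0, "steam_cost": 250.0,
        "capex": 5000.0, "escalation": 0.033, "wacc": 0.0974}
base_npv = project_npv(base)

# Perturb one driver at a time by -20% / +20% and report the NPV swing.
for key in ("capacity_factor", "tariff", "steam_cost", "escalation"):
    low = project_npv({**base, key: base[key] * 0.8})
    high = project_npv({**base, key: base[key] * 1.2})
    print(f"{key}: {low - base_npv:+.1f} / {high - base_npv:+.1f}")
```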
Monte Carlo Simulation:
To account for the fact that key inputs can vary and are uncertain, Monte Carlo simulation is built into the financial models. The analysis runs thousands of simulations with different sets of factors, giving a probabilistic picture of how the project might turn out and a fuller view of risks and returns. Through this method, investors obtain information on how uncertain factors, including the carbon price, affect the viability of the project.
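A minimal sketch of such a simulation over the same toy cash-flow model, drawing the uncertain inputs from hypothetical distributions and reporting the probability of a negative NPV; none of the distributional choices come from the study itself.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_runs = 1000  # number of simulation runs, as in the study

# Hypothetical distributions for the uncertain drivers (illustration only).
capacity_factor = rng.normal(0.86, 0.03, n_runs).clip(0.6, 0.98)
tariff = rng.normal(900.0, 60.0, n_runs)          # electricity price proxy
steam_cost = rng.normal(250.0, 25.0, n_runs)      # steam tariff proxy
escalation = rng.uniform(0.02, 0.045, n_runs)     # selling-price escalation

years = np.arange(1, 31)
wacc, capex = 0.0974, 5000.0
annual = capacity_factor[:, None] * tariff[:, None] - steam_cost[:, None]
npv = -capex + (annual * (1 + escalation[:, None]) ** years
                / (1 + wacc) ** years).sum(axis=1)

print("P(NPV < 0):", (npv < 0).mean())
print("P(NPV > 0):", (npv > 0).mean())
```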
Analysis
The study focuses on a 110 MW geothermal power plant development project located on Sumatera Island as a case study. The base case parameters used in the study are as follows:
• Project economic life: 30 years
• Depreciation: straight line to zero in 30 years
• Loan interest: 0.8%, with a 40-year period and principal payments starting after a 10-year grace period
• Tariff of steam: 7.17 cents USD/kWh
• Power plant capacity factor: 86%
• Escalation of electricity selling price: 3.30% per year
• Escalation of steam price: 1.50% per year
• Carbon credit unit price: IDR 58,800 per tCO2e
• Weighted Average Cost of Capital (WACC): 9.74%
• Income tax: 22%
Using the total initial investment of USD 1,895.61 billion and IDR 969,700.60 billion, the 40-year net cash flows were computed for the scenarios without and with carbon credit consideration. The results demonstrate a 13.58% increase in NPV, a faster payback period from 12.88 to 9.87 years, and a 0.31% rise in the MIRR, as shown in Table 2. The sensitivity analysis evaluated how variations in key variables (exchange rate, steam tariff, escalation of electricity selling price, capacity factor, and carbon credit unit price) affect the project's Net Present Value (NPV). The results, as shown in Figure 1, indicate that the exchange rate and steam tariff have the most significant impact on NPV, with a 20% swing in either direction leading to significant changes in NPV. The carbon credit unit price, while influential, has a relatively smaller effect on the overall financial outcome, suggesting that while carbon credits contribute positively to the financial metrics, they are not the primary drivers. The analysis is as follows:
• Exchange rate: The exchange rate has the most significant impact on the outcome, with a swing of approximately −140% to +140% of the base case NPV when adjusted by ±20%. This indicates that fluctuations in the exchange rate can substantially affect the financial performance of the project, possibly because the largest portion of the cost of capital is denominated in US dollars.
• Tariff of steam (cUSD/kWh): The steam tariff also shows a substantial impact, with a swing of around −120% to +125% of the base case NPV. This suggests that changes in the price paid for steam (likely a major input cost for the project) are crucial to the project's profitability.
• Escalation of electricity selling price (Rp/kWh): The escalation in the electricity selling price affects the outcome, but to a lesser extent than the exchange rate and steam tariff. The swing ranges from approximately −85% to +95% of the base case NPV, showing that while important, the effect is moderate compared to the other variables. This could mean that the project has limited control over pricing, since the electricity tariff is regulated by the government.
• Capacity factor (CF): The capacity factor, which reflects the actual output of a power plant compared to its maximum potential output, has a relatively moderate impact. The swing ranges from around −45% to +35% of the base case NPV. This indicates that the operational efficiency of the power plant is crucial, and maintaining or improving the capacity factor is vital for the project's financial viability.
• Carbon credit unit price (Rp/tCO2): The carbon credit unit price has the smallest impact among the variables analysed, with a minimal swing of around −2.5% to +2.5% of the base case NPV. This suggests that, within the range considered, fluctuations in carbon credit prices do not significantly affect the project's financial metrics, possibly because the project generates limited carbon credits, the carbon price is still low, or the revenue from carbon credits is a small part of the overall revenue.
Scenario Analysis:
Scenario analysis is conducted to determine the project's Net Present Value (NPV) under different potential scenarios. First, the key input variables that affect the project's NPV and that carry a high degree of uncertainty but significant impact are identified. In this scenario analysis, the capacity factor, the escalation of the electricity selling price, the exchange rate, the tariff of steam, and the carbon credit unit price are considered as key input variables. The scenario analysis explored the financial outcomes under different carbon credit scenarios: without carbon credit, and with carbon credit (considering best case, worst case, and base case scenarios). The results reveal the following:
• Without carbon credit: The project has a positive NPV (IDR 861.26 billion), indicating that it is financially viable. The positive NPV suggests that, even without the benefit of carbon credits, the project can generate sufficient cash flows to justify the investment.
• Base case with carbon credit: The base case scenario with carbon credits results in a positive NPV (IDR 978.24 billion), 13.58% higher than the scenario without carbon credits. This scenario assumes a similar capacity factor to the "without carbon credit" scenario but includes revenue from carbon credits at a moderate price. This positive NPV indicates that under these more likely conditions the project is financially viable, and the inclusion of carbon credits improves profitability.
• Worst case with carbon credit: The project shows a significant negative NPV (−IDR 822.86 billion). This outcome is driven by a reduced capacity factor, lower escalation in electricity selling prices, a slightly higher steam tariff, and a lower exchange rate, combined with a relatively low carbon credit unit price. The negative NPV suggests that under unfavorable conditions, even with the inclusion of carbon credits, the project would result in a financial loss.
• Best case with carbon credit: The project's NPV is extremely high (IDR 6,356,695.24 million, roughly IDR 6,357 billion). This scenario assumes the most favorable conditions: the highest capacity factor, the highest escalation in electricity selling prices, the lowest steam tariff, a favorable exchange rate, and the highest carbon credit price. The dramatically positive NPV suggests that under optimal conditions the project can be extremely profitable, with carbon credits contributing significantly to the financial success.
The wide NPV range between the worst and best cases (over IDR 7,179 billion) indicates that the project's outcome is highly uncertain and dependent on the input variables. It is therefore important to understand the potential risks and rewards associated with the project and to prepare risk mitigation strategies to ensure the project can withstand difficult conditions.
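A hedged sketch of this workflow, using the same toy stand-in cash-flow model as in the sensitivity sketch (redefined here for self-containment) and evaluating it under named parameter sets; every number is an illustrative placeholder, not one of the study's actual scenario inputs.

```python
def project_npv(p, capex=5000.0, wacc=0.0974):
    """Same toy stand-in cash-flow model used in the sensitivity sketch."""
    annual = p["capacity_factor"] * p["tariff"] - p["steam_cost"]
    return -capex + sum(annual * (1 + p["escalation"]) ** t / (1 + wacc) ** t
                        for t in range(1, 31))

scenarios = {
    "worst": {"capacity_factor": 0.75, "tariff": 800.0,
              "steam_cost": 275.0, "escalation": 0.020},
    "base":  {"capacity_factor": 0.86, "tariff": 900.0,
              "steam_cost": 250.0, "escalation": 0.033},
    "best":  {"capacity_factor": 0.95, "tariff": 1000.0,
              "steam_cost": 225.0, "escalation": 0.045},
}

for name, params in scenarios.items():
    print(name, round(project_npv(params), 1))
```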
Monte Carlo Analysis:
The Monte Carlo simulation provided a probabilistic distribution of potential NPVs based on varying inputs, showing how uncertain the project's outcomes are. The simulation was run for 1,000 draws over varying inputs of the capacity factor, the escalation of the electricity selling price, the exchange rate, the tariff of steam, and the carbon credit price, with the resulting distribution shown in Figure 2; the descriptive statistics of the project's NPV give a probability of NPV < 0 of 1.07%, a probability of NPV > 0 of 98.93%, and a probability of NPV above the average of 100.00%. The very low probability of a negative NPV (1.07%) means that the project is very likely to be financially viable. The project is a relatively safe investment because there is a very high probability that the NPV will be positive (98.93%). The finding that the NPV is virtually certain to exceed the average also shows how strong the feasibility of the project is across a wide range of situations. However, while most scenarios result in positive NPVs, there is a wide range of outcomes, with some scenarios leading to losses. It is therefore important to establish careful risk management, especially for the variables that have the most significant impact on financial outcomes.
Conclusion and Recommendation
The financial analysis demonstrates that carbon credits have a positive impact on the financial metrics of the 110 MW geothermal development project at PT PLN (Persero), which is the subject of this case study. The improvements include a 13.58% increase in NPV, a reduction in the payback period from 8.37 to 7.67 years, and a 0.31% rise in the MIRR. However, the effectiveness of carbon credits in improving profitability is highly dependent on other critical factors, such as the exchange rate and the steam tariff. The worst-case scenario shows that the project remains vulnerable to adverse market conditions, even though the base case with carbon credits improves the NPV, IRR/MIRR, and payback period.
"Environmental Science",
"Business",
"Economics"
] |
Long range vortex configurations in generalized models with the Maxwell or Chern-Simons dynamics
In this work we deal with vortices in Maxwell-Higgs or Chern-Simons-Higgs models that engender long range tails. We find first order differential equations that support minimum energy solutions which solve the equations of motion. In the Maxwell scenario, we work with generalised magnetic permeabilities that lead to vortices whose solutions, magnetic field, and energy density exhibit power-law tails that extend farther than the standard exponential ones. We also find a way to obtain a Chern-Simons model with the same scalar and magnetic field profiles as the Maxwell case. By doing so, we also find vortices with the aforementioned long range feature, which is also present in the electric field in the Chern-Simons model. The present results may motivate investigations on nonrelativistic models, in particular in the case involving Rydberg atoms, which are known to present long range interactions and relatively long lifetimes.
I. INTRODUCTION
In high energy physics, vortices are planar structures that appear under the action of a complex scalar field coupled to a gauge field under a U(1) local symmetry [1,2]. The first relativistic model investigated was the well-known Nielsen-Olesen one [3], whose gauge field is controlled by the Maxwell term. In this case, the vortex is electrically neutral and engenders quantized flux. The equations of motion that control the fields are of second order. To simplify the problem, it was shown in Ref. [4] that, by using arguments of minimal energy, one can find first order equations that are compatible with the equations of motion. Even though the analytical form of the solutions remains unknown in terms of known functions, one can estimate their behavior away from their core, which is asymptotically dominated by an exponential function.
A distinct possibility to investigate vortices is by exchanging the Maxwell term with the Chern-Simons one, as first investigated by Jackiw and Weinberg [5], and by Hong, Kim and Pac [6]. In this scenario, the vortex becomes electrically charged, with quantized charge. A first order formalism may also be developed here and, as in the Maxwell case, only numerical solutions are found. In a way similar to the Maxwell-Higgs vortices, one can show that the Chern-Simons vortices present an asymptotic behavior that is also ruled by an exponential falloff.
In the above standard models, the first order formalism requires the potential to engender a fourth-order power of the scalar field in the Maxwell case of Ref. [4] and a sixth-order power in the Chern-Simons model described in Refs. [5,6]. This means that one does not have the freedom to choose a potential that leads to distinct features. A possibility to circumvent this issue is to include extra functions that depend on the scalar field, in addition to the potential. For instance, in the Maxwell model, one may consider a generalized magnetic permeability. In the Chern-Simons scenario, the magnetic permeability cannot be modified, since it would break gauge invariance, so one can make use of a function that drives the dynamical term of the scalar field. Over the years, several papers dealing with vortices in generalized models appeared in the literature making use of other types of generalizations, such as the Born-Infeld dynamics and powers of the dynamical term of the scalar field; see, e.g., Refs. [7][8][9][10][11][12][13][14][15][16][17][18]. This brings to light distinct features, such as a uniform magnetic field inside the structure, compact vortices, and the existence of twinlike models, which are models that support the very same localized solution with the same energy density.
In the study of kinks in (1, 1) dimensions, the standard solutions, such as the ones of the φ⁴ and sine-Gordon models, engender exponential tails. For potentials with null classical mass at the minima, the asymptotic behavior is controlled by polynomial functions; see Refs. [19][20][21][22][23][24]. Since the tail of the structure extends farther than the ones of the standard case, they are called long range kinks. Long range structures may also arise in the study of non topological solitons, whose standard model only supports power law tails [25][26][27]. A similar behavior also arises in the study of both topological and non topological vortices in models with non minimal coupling [28].
In this work, we seek vortex configurations that exhibit long range tails in both the Maxwell-Higgs and Chern-Simons-Higgs scenarios. We first consider the Maxwell-Higgs model in Sec. II and then, by using a procedure that we will introduce in Sec. III, we show how to obtain a Chern-Simons-Higgs model that supports the same scalar and magnetic field configurations as a Maxwell-Higgs one. To illustrate the method, we take a model that supports analytical solutions with polynomial tails in the Maxwell-Higgs scenario, found in Ref. [29]. In this case, the Chern-Simons model with the same scalar and magnetic fields requires the addition of awkward functions in the Lagrange density, so we also include a novel model that engenders the long range behavior in both scenarios. We conclude the work in Sec. IV.
Before starting the investigation, we emphasize that the presence of vortices with long range tails in high energy physics may trigger further interest in this kind of configuration, since the distinct tail may ultimately modify the way they interact with one another, leading to novel collective behavior. This is the main motivation of this work, and we think it can also attract interest to nonrelativistic models, in particular to the case of the Gross-Pitaevskii equation, which is appropriate to describe vortex excitations in Bose-Einstein condensates [30,31]. An interesting possibility relies on the use of Rydberg atoms, which engender very large principal quantum numbers, long range interactions, and relatively long lifetimes [32,33]. Another possibility concerns the study of cold and ultracold hybrid ion-atom systems [34].
II. MAXWELL-HIGGS MODEL
We consider a gauge field and a complex scalar field coupled through a U(1) local symmetry in (2, 1) flat spacetime dimensions, with metric η_{αβ} = diag(+, −, −) and action S = ∫ d³x L, where the Lagrange density L is taken with dimensionless fields and coordinates. Here, we have D_α = ∂_α + iA_α and F_{αβ} = ∂_α A_β − ∂_β A_α, and the overline stands for complex conjugation. In this case, µ(|ϕ|) denotes a generalized magnetic permeability. The equations of motion of the fields ϕ and A_α associated with the Lagrange density (1) are given in Eqs. (2), in which ℑ(z) represents the imaginary part of z, and we have used the notation µ_{|ϕ|} = dµ/d|ϕ| and V_{|ϕ|} = ∂V/∂|ϕ|. Invariance of the Lagrange density (1) under spacetime translations leads to the energy-momentum tensor (3), in which ℜ(z) denotes the real part of z. In the case of static configurations, we take A₀ = 0, knowing that the Gauss law for our model, given by the temporal component of Eq. (2b), is compatible with this condition. This makes the vortex electrically neutral. We proceed with the investigation by taking ϕ = g(r)e^{inθ} and A = θ̂ (n − a(r))/r (4), where r and θ are polar coordinates and n is the vortex winding number. The functions g(r) and a(r) are monotonic and must obey the boundary conditions g(0) = 0, a(0) = n, lim_{r→∞} g(r) = 1, and lim_{r→∞} a(r) = 0. (5)
With this, the terms associated to the dynamics of each field take the form of Eq. (6), where the prime denotes the derivative with respect to r. Furthermore, the magnetic field takes the form given in Eq. (7), B = −a′/r, and the magnetic flux Φ = 2π∫₀^∞ r dr B(r) is quantized as in Eq. (8): Φ = 2πn. The equations of motion (2) with (4) become the pair of radial equations (9). The energy density ρ ≡ T⁰⁰ can be calculated from the energy-momentum tensor (3) and Eq. (4); it takes the form of Eq. (10), where a(r) and g(r) are the solutions of the equations of motion (9). These solutions, however, are not easy to obtain, since one must solve second order differential equations that are coupled with one another. To simplify the problem, we make use of the first order formalism developed in Ref. [18], which arises from the stressless condition T_ij = 0. In this case, we get the first order equations (11). The pairs of equations for the upper and lower signs are related by a → −a. Here, the potential must be written as in Eq. (12) to ensure the first order equations (11) are compatible with the equations of motion (9). We may also take advantage of this formalism to use an auxiliary function W(a, g) such that the energy density in Eq. (10) can be expressed in terms of a total derivative, as in Eq. (13). After integrating the above energy density, we get E = 2π|n|, which is the same as for the standard Nielsen-Olesen vortex [3]. For simplicity, from now on we only consider unit vorticity, n = 1. Thus, one must use the positive sign in the first order equations (11).
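For concreteness, the first order equations can be integrated with a simple shooting procedure. The sketch below assumes µ = 1 and n = 1 and takes the first order pair in the form g′ = ag/r and −a′/r = 1 − g², consistent with the structure f(g) = µ(g)(1 − g²) quoted in Sec. III; the precise normalization of Eqs. (11) may differ, so this is an illustration rather than the exact system used in the paper.

```python
# Minimal sketch: shooting solution of the first order (BPS) equations for the
# standard vortex, assuming g' = a g / r and -a'/r = 1 - g^2, with n = 1.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    g, a = y
    return [a * g / r, -r * (1.0 - g**2)]    # g' = a g / r,  a' = -r (1 - g^2)

def overshoot(r, y):                         # stop if g runs past the vacuum value
    return y[0] - 1.5
overshoot.terminal = True

def shoot(c, r_max=12.0):
    r0 = 1e-4
    y0 = [c * r0, 1.0 - 0.5 * r0**2]         # small-r behavior: g ~ c r, a ~ 1 - r^2/2
    return solve_ivp(rhs, (r0, r_max), y0, events=overshoot,
                     rtol=1e-10, atol=1e-12)

lo, hi = 0.1, 2.0                            # bisection on the core slope c
for _ in range(60):
    c = 0.5 * (lo + hi)
    if np.any(shoot(c).y[0] > 1.0):
        hi = c                               # overshoot: slope too large
    else:
        lo = c
sol = shoot(lo)
print("g(r_max) =", sol.y[0][-1], " a(r_max) =", sol.y[1][-1])   # -> ~1 and ~0
```

The bisection tunes the core slope of g(r) onto the separatrix connecting the boundary conditions (5); trajectories that overshoot g = 1 signal a slope that is too large.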
In this paper we are interested in finding vortices with polynomial tails, which we call long range vortices. However, before going further, we review the asymptotic behavior of the standard vortex (µ = 1), described by the first order equations (14). To see how the solutions behave far from the origin, we look at the boundary conditions (5) and write a(r) = 0 + a_asy(r) and g(r) = 1 − g_asy(r). By substituting these functions in the above first order equations and linearizing them, one can show that both asymptotic functions decay as in Eq. (15), where λ is a constant that can be adjusted to fit the numerical simulations. We then see that these expressions rapidly vanish as r increases, due to the exponential factor. The generalized magnetic permeability, however, has allowed for the presence of different vortex configurations, such as the compact vortices that we found in Ref. [15].
Since we are interested in long range vortices, we first reproduce the analytical solutions found in Refs. [29,35] in our model (1), with the magnetic permeability given by Eq. (16), where s is a real parameter such that s ≥ 1. In this case, the potential in Eq. (12) takes the form of Eq. (17), and we must solve the equations in Eq. (11), which become Eq. (18). They support the analytical solutions a(r) = 1/(1 + r^{2s}) and g(r) = r(1 + r^{2s})^{−1/(2s)}. Notice that the tails of these solutions are controlled by a(r) ∝ r^{−2s} and 1 − g(r) ∝ r^{−2s}, a behavior distinct from the exponential one found in Eq. (15). The polynomial tail approaches the boundary value more slowly than the standard one, which shows the long range character of the vortex. The magnetic field (7) and energy density (13) take the explicit forms in Eq. (20); they can be integrated to give flux Φ = 2π and energy E = 2π. Notice that both E and Φ do not depend on s, as anticipated.
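The solution pair above can be checked symbolically. In the sketch below, a(r) follows Eq. (19), while the profile g(r) = r(1 + r^{2s})^{−1/(2s)} is our reconstruction, fixed uniquely by the first order equation g′ = ag/r and the boundary conditions (5); we also verify the s-independent flux Φ = 2π and the r^{−2s} tail.

```python
# Minimal sketch: verify the first order equation, the flux Phi = 2*pi, and the
# polynomial tail for the analytical pair of Eq. (19); s = 2 keeps sympy fast,
# but any s >= 1 works the same way.
import sympy as sp

r = sp.symbols('r', positive=True)
s = 2
a = 1/(1 + r**(2*s))
g = r*(1 + r**(2*s))**sp.Rational(-1, 2*s)   # reconstructed profile
print(sp.simplify(sp.diff(g, r) - a*g/r))    # -> 0: g' = a g / r holds identically
B = -sp.diff(a, r)/r                          # magnetic field B = -a'/r
print(2*sp.pi*sp.integrate(r*B, (r, 0, sp.oo)))   # -> 2*pi, independent of s
print(sp.limit(r**(2*s)*(1 - g), r, sp.oo))       # -> 1/(2s): tail 1 - g ~ r^{-2s}
```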
We now introduce a novel model that supports vortex configurations with long range tails. It is given by the magnetic permeability in Eq. (21), where l is a real parameter such that l ≥ 1. The case l = 1 recovers the model investigated in Ref. [36], which reproduces, through a generalized magnetic permeability, the standard Chern-Simons solutions a(r) and g(r) [5,6], whose tails are dominated by an exponential function, similarly to the behavior in Eq. (15). So, for a general l, the potential in Eq. (12) becomes Eq. (22). This potential is displayed in Fig. 1 for some values of l.
In the expression of the potential, ⌈·⌉ denotes the ceiling function. In this model, the first order equations (11) take the form of Eq. (23), which admits the asymptotic behavior of Eq. (24). These expressions show that the solutions exhibit a polynomial tail that approaches the boundary values more slowly as l increases. One may also verify that the magnetic field (7) and the energy density (13) behave asymptotically as power laws. Thus, similarly to the solutions, both the magnetic field and the energy density engender polynomial tails. As in the previous model, the quantities that describe the vortex present a power-law asymptotic behavior, which shows the long range character of the structure.
Differently from the previous model, here we were not able to find analytical solutions of the first order equations (23). So, we must use numerical procedures to solve them for each l. In Fig. 2, we display the solutions and the magnetic field B(r) for some values of l. We also calculate the energy density numerically and show it in Fig. 3. One can see that, for l > 1, all the quantities that describe the vortex configuration require larger distances to attain their boundary values when compared to the standard case with exponential tails. Moreover, as l increases, the tails get larger and larger.
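A minimal numerical sketch of this procedure is given below. It assumes the first order pair g′ = ag/r and −a′/r = f(g) with f(g) = 2g²(1 − g²)^l, the form quoted in Sec. III for this model (equivalently, µ(g) = 2g²(1 − g²)^{l−1} in Eq. (21), which for l = 1 reproduces the standard Chern-Simons-like profile); the exact normalization of Eqs. (21)-(23) may differ.

```python
# Minimal sketch: shooting solution of the first order equations for the new
# model, assuming -a'/r = 2 g^2 (1 - g^2)^l together with g' = a g / r.
import numpy as np
from scipy.integrate import solve_ivp

def solve_model(l, r_max=200.0):
    def rhs(r, y):
        g, a = y
        return [a*g/r, -2.0*r*g**2*(1.0 - g**2)**l]
    def overshoot(r, y):
        return y[0] - 1.2
    overshoot.terminal = True
    lo, hi = 0.0, 5.0
    for _ in range(70):                      # bisection on the core slope g ~ c r
        c = 0.5*(lo + hi)
        sol = solve_ivp(rhs, (1e-5, r_max), [c*1e-5, 1.0], events=overshoot,
                        rtol=1e-11, atol=1e-13)
        if np.any(sol.y[0] > 1.0):
            hi = c
        else:
            lo = c
    return solve_ivp(rhs, (1e-5, r_max), [lo*1e-5, 1.0],
                     rtol=1e-11, atol=1e-13, dense_output=True)

sol = solve_model(l=3)
r1, r2 = 50.0, 150.0
g1, g2 = sol.sol(r1)[0], sol.sol(r2)[0]
slope = (np.log(1 - g2) - np.log(1 - g1))/(np.log(r2) - np.log(r1))
print("local log-log slope of 1 - g:", slope)   # roughly constant: power-law tail
```

The measured log-log slope of 1 − g(r) settles to a finite constant at large r, in contrast with the ever-steepening slope an exponential tail would give.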
III. CHERN-SIMONS MODELS
We now exchange the Maxwell term for the Chern-Simons one in the Lagrange density (1). In this case, we cannot include a factor depending on the scalar field multiplying the Chern-Simons term, because it would break gauge invariance. Nevertheless, as we have shown in Ref. [18], we need generalized models to find vortices with features that differ from the standard ones [5,6]. Then, we consider the generalized class introduced in Ref. [37], which includes a factor K(|ϕ|) that modifies the dynamical term of the scalar field in the Lagrange density. The equations of motion associated to this Lagrange density are given in Eqs. (27), and the energy-momentum tensor has the form of Eq. (28). Here, we cannot take A₀ = 0 as in the Maxwell case, because this is not compatible with the equations of motion. So, we take the fields in the form of Eq. (4) with their usual boundary conditions, and A₀ = A₀(r). One can show that the magnetic field is B = −a′/r and its associated flux is given by Eq. (8). Here, we have an additional feature that arises due to the presence of the temporal component of the gauge field: the electric field, whose intensity is |E| = |A₀′|. In this case, the vortex is electrically charged, with charge Q = −Φ. For simplicity, we take unit vorticity, n = 1. From Eqs. (27), we get the equations of motion (29). Also, the energy density is calculated from the component T⁰⁰ in Eq. (28) and takes the form of Eq. (30). The equations of motion (29) are of second order. To simplify the problem, we follow the first order formalism developed in Ref. [18] to obtain the first order equations (31). The potential, however, cannot have an arbitrary form, because the above equations must be compatible with the equations of motion (29). One can show that the functions K(g) and V(g) are constrained to obey a first order differential relation, which allows us to write the potential in integral form, in which an integration constant always arises, since we are dealing with an indefinite integration. For a general K(g), the first order equations (31) become Eq. (34), such that one must choose K(g) and the integration constant to get solutions compatible with the boundary conditions (5). In this case, the energy density is given by Eq. (35). By integrating this energy density, we get the energy in Eq. (36). So, the function K(g) and the integration constant that arises in the process also modify the energy of the vortex.
The simplest example is the standard case, K(g) = 1, investigated in Refs. [5,6]. In this situation, to develop the Bogomol'nyi procedure [4], one must take the |ϕ|⁶ potential given in Eq. (37), where v is the symmetry breaking parameter. To ensure that the first order equations (34) support solutions compatible with the boundary conditions in Eq. (5), we set v = 1. By doing so, the model is governed by the first order equations (38). As one knows, the analytical solutions of these equations remain unknown. The energy density is written as in Eq. (35) with W = a(g² − 1). So, from Eq. (36), one has the energy E = 2π. We note here that the first order equation for a(r) in both the Maxwell-Higgs and Chern-Simons-Higgs models has the form −a′/r = f(g), Eq. (39), which is always solved together with g′ = ag/r. Notice that B = −a′/r only depends on f(g); this allows us to find models in both scenarios with the same solutions and magnetic field. For f(g) = µ(g)(1 − g²), we get the Maxwell-Higgs model with the generalized magnetic permeability µ(g). On the other hand, comparing the above equation with the one in Eq. (34b), we see that when f(g) is written in terms of K(g) and its integral we obtain the Chern-Simons case. This means that one may relate the two models. In particular, for a known f(g), one can show that the Chern-Simons-Higgs model is obtained through Eq. (40). One must be careful with this integration, because the integration constant must be properly chosen to make the above function non negative in the interval where the solution g(r) exists, i.e., g ∈ [0, 1], as stated in the boundary conditions (5). Moreover, it must also lead to a non-negative finite energy. In this case, the potential can be calculated from the right equation in (31) and Eq. (39); it is simply given by Eq. (41), and the function W(a, g) in Eq. (35), involved in the energy, is calculated in terms of f(g) as in Eq. (42). Let us consider the standard case, K(g) = 1, investigated in Refs. [5,6]. As we have commented before, in this case one gets the potential in Eq. (37) with v = 1 to match the boundary conditions (5). We can substitute this in Eq. (31) or use Eq. (34) to obtain the first order equations (38). Comparing this with Eq. (39), one can show that f(g) = 2g²(1 − g²). By using Eq. (40), we get K(g) = |1 − g²|/√(C − 2g² + g⁴) and the potential V(g) = g²(1 − g²)√(C − 2g² + g⁴). Notice there is an integration constant, C, in these expressions. Nevertheless, K(g) has singularities for C < 1, which we avoid here, and the potential is V-shaped for C > 1. So, we take C = 1, which recovers the standard case, K(g) = 1, and is the only choice that leads to a smooth potential. Now, we use the procedure to obtain a Chern-Simons model that engenders the same analytical solutions in Eq. (19). For s = 1, the functions involved in the model lead to infinite energy, so we only consider s > 1, for which K(g) and V(g) can be expressed in terms of ₂F₁(α, β; λ; z), the hypergeometric function of parameters α, β and λ, and argument z. Also, we have chosen the integration constant C = 2s/(s − 1) to obtain a simpler expression. The function W(a, g) in the energy density that appears in Eq. (35) is given by Eq. (42), which leads to an explicit expression for W. So, for s > 1, the energy is given by Eq. (36) and has the form E = 2π√(2s/(s − 1)). Notice that, even though the procedure works, it leads to exotic potentials, with the presence of a hypergeometric function. We carry on with the investigation and use the same method to get a Chern-Simons model that supports the same solutions a(r) and g(r) of the Maxwell-Higgs model that we have introduced with the magnetic permeability (21).
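Since the typography of Eqs. (39)-(41) was partly lost, the sketch below records our reconstruction of the map between the two scenarios: assuming the Chern-Simons branch is fixed by f(g) = 2g√(K(g)V(g)), the potential follows as V = f²/(4g²K); together with the quoted K(g) = |1 − g²|/√(C − 2g² + g⁴), these relations reproduce the standard case at C = 1.

```python
# Minimal sketch: consistency check of the reconstructed relations between
# f(g), K(g) and V(g) for the worked example f = 2 g^2 (1 - g^2).
import sympy as sp

g, C = sp.symbols('g C', positive=True)
f = 2*g**2*(1 - g**2)
K = (1 - g**2)/sp.sqrt(C - 2*g**2 + g**4)     # quoted form, valid for 0 <= g <= 1
V = f**2/(4*g**2*K)                           # our reconstruction of Eq. (41)
print(sp.simplify(V - g**2*(1 - g**2)*sp.sqrt(C - 2*g**2 + g**4)))   # -> 0
# C = 1 recovers the standard case: K = 1 and V = g^2 (1 - g^2)^2
print(K.subs({C: 1, g: sp.Rational(1, 2)}))   # -> 1
print(V.subs({C: 1, g: sp.Rational(1, 2)}))   # -> 9/64, i.e. g^2 (1-g^2)^2 at g = 1/2
```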
In this case, these solutions obey the first order equations (23) and have their associated magnetic field, all of them displayed in Fig. 2. Thus, we have f(g) = 2g²(1 − g²)^l.
As we remarked below Eq. (21), the case l = 1 leads to solutions with exponential tails of a form similar to the one found in Eq. (15), since both a(r) and g(r) are exactly the same as in the standard Chern-Simons model [5,6], for any well-defined K(g). Note, however, that here, differently from the Maxwell-Higgs model described by the magnetic permeability (21), we have the presence of an electric field due to a nonvanishing temporal gauge component, and both A₀ and the electric field depend on the form of the function K(g). Since the purpose of our paper is to deal with long range vortices, we do not discuss the case l = 1 in detail, using it only to compare the new solutions to the standard ones.
To obtain the function K(g), which controls the dynamical term of the scalar field in the Lagrange density, one must use Eq. (40). It leads to Eq. (45). Notice there is an integration constant α that appears in the process. It must be non-negative to ensure the above function is real. As we have shown in Eq. (41), K(g) determines the potential, which is given by Eq. (46). The function W(a, g) in Eq. (42) associated to the energy has the form of Eq. (47), which makes the energy in Eq. (36) be given by Eq. (48). The potential in Eq. (46) has a set of minima located at g = 1 and at the origin, regardless of the value of α. Nevertheless, as we will show in this paper, the case α = 0 is special, so we deal with it later. First, we take α > 0.
As we have used the method in Eq. (40) to find the Chern-Simons model, the solutions a(r) and g(r) and the magnetic field are the same as in the Maxwell case; see Fig. 2. However, we are now dealing with a vortex in the Chern-Simons scenario, so we also have the presence of A₀, which gives rise to an electric field, E(r). It also modifies the energy density, which now depends on K(g), as one can see in Eq. (35). Since we only know the numerical solutions, we estimate the asymptotic behavior of these quantities using the results for the tail of a(r) and g(r) in Eq. (24) substituted in Eqs. (29b), (29c) and (35); the outcome is given in Eq. (49). Thus, all of these quantities present a polynomial tail that is controlled by l, with l ∈ (1, ∞). An interesting feature is that A₀ tends to a non-null constant, with A₀ → √(2α/(l + 1)) for r → ∞. An interesting case for positive α is α = l, as it leads to vortices with fixed energy in Eq. (48), E = 2√2 π, regardless of the value of l. The function K(g), the potential V(g) and the other involved quantities can be calculated straightforwardly by taking α = l in Eqs. (45)-(49). The potential can be seen in Fig. 4 for some values of l. We then turn our attention to A₀, which gives rise to an electric field, E(r). They can be calculated from Eqs. (29b) and (29c) and are displayed in Fig. 5. From the graphic of A₀, one can see that, for l → ∞ and r → ∞, A₀ → √2. We also plot the energy density (35) in Fig. 6.
We now deal with the special case α = 0. In this case, we get from Eqs. (45) and (46) the expressions (50a) and (50b). The above potential is displayed in Fig. 7 for some values of l. One can show that d^mV/dg^m|_{g=1} = 0 for m = 0, . . . , ⌈(3l − 1)/2⌉. We note here that l = 1 recovers the standard Chern-Simons model [5,6], which arises for K(g) = 1 and V(g) = g²(1 − g²)² and engenders solutions with exponential tails of a form similar to Eq. (15). To calculate the energy for a general l, one can show that the auxiliary function W(a, g) in Eq. (42) has the form of Eq. (51). The energy of the solutions in this scenario is given by Eq. (36), which leads to E = 2π√(2/(l + 1)). Notice this result is different from the energy obtained in the Maxwell-Higgs model, which is constant, as one can find below Eq. (13). As stated before, even though the solutions and magnetic field are the same as in Fig. 2, here we have novel features. The function A₀ can be calculated from Eq. (29b) and the intensity of the electric field |E| = |A₀′| from (29c). The energy density can be calculated from Eq. (35). Since we only know the numerical solutions, we estimate the asymptotic behavior of these quantities using the results for the tail of a(r) and g(r) in Eq. (24); see Eq. (52). So, as l increases, these quantities get a larger tail, which shows the long range behavior of the vortex. We then use the numerical solutions of (23) and plot A₀ and the intensity of the electric field |E| in Fig. 8. The energy density from Eq. (35) is shown in Fig. 9. Notice that the behavior of this case (α = 0) is different from the one in Eq. (49): the tails of the aforementioned quantities are larger here, as the powers of r are smaller than in the case α > 0.
IV. CONCLUSION
In this work, we have investigated the presence of vortices with long range behavior in Maxwell-Higgs and Chern-Simons-Higgs models. We have used the formalism developed in Ref. [18] to calculate the energy without knowing the explicit solutions, and to find first order equations that are compatible with the equations of motion that dictate the form of the fields.
Considering a pair of solutions a(r) and g(r), associated to a vortex with magnetic field B(r), that obey a specific class of first order differential equations, we have developed a method to find Maxwell and Chern-Simons models that support them. This allows us to make a connection between the two aforementioned scenarios. One must be careful, though, since the Chern-Simons model brings an extra degree of freedom, the electric field, when compared to the Maxwell case. By using the above procedure, we have found novel vortex configurations that support polynomial tails. As one knows, the standard vortex considered in each scenario in Refs. [3,5,6] engenders a tail that dies out exponentially. Since our vortices approach their boundary values more slowly, we called them long range vortices. The presence of long range vortices has specific interest: they describe localized excitations that attain distinct collective behavior when compared to standard vortices. In this sense, they lead to scenarios that are different from the standard situation, and may foster the study of long range vortices in nonrelativistic systems like Bose-Einstein condensates, which are known to support vortex excitations. Another issue of interest concerns the problem examined in Ref. [38], connecting conformal quantum mechanics models and equations of the KdV hierarchy. It suggests inquiring about the possibility of relating vortices with long range tails to models that admit analytic solutions in the form of vortices with exponentially dying tails. Moreover, the above results motivate us to investigate other systems, with relativistic or nonrelativistic matter, to find new systems and solutions that engender the novel long range behavior found in the present work. In the nonrelativistic case of Bose-Einstein condensates with Rydberg atoms, for instance, one knows that atoms with very large principal quantum number engender long range interactions and relatively long lifetimes; this can be used to process quantum information and may induce the presence of vortices with long range tails. Since experimental and theoretical studies are now bringing these possibilities into play, the search for models that support long range excitations is a topic of current interest [32][33][34][39][40][41][42][43][44].
"Physics"
] |
Scars in Dirac fermion systems: the influence of an Aharonov--Bohm flux
Time-reversal ($\mathcal{T}$-) symmetry is fundamental to many physical processes. Typically, $\mathcal{T}$-breaking for microscopic processes requires the presence of a magnetic field. However, for 2D massless Dirac billiards, $\mathcal{T}$-symmetry is broken automatically by the mass confinement, leading to chiral quantum scars. In this paper, we investigate the mechanism of $\mathcal{T}$-breaking by analyzing the local current of the scarring eigenstates and their magnetic response to an Aharonov--Bohm flux. Our results provide a complete understanding of the subtle $\mathcal{T}$-breaking phenomena, from both the semiclassical formula of chiral scars and the microscopic current and spin reflection at the boundaries, leading to a control scheme for changing the chirality of the relativistic quantum scars. Our findings not only have significant implications for the transport behavior and spin textures of the relativistic pseudoparticles, but also add basic knowledge to relativistic quantum chaos.
Introduction
Time-reversal (T-) symmetry is fundamental and has substantial implications in physical systems [1][2][3][4]. In general, to break the T-symmetry for a microscopic process one needs to involve magnetism [5]. Without loss of generality, we consider a prototype model that is widely used in both classical dynamics and quantum chaos: the billiard system [6][7][8][9][10]. For example, a classical picture for a system to break the T-symmetry is a charged particle moving in a magnetic field, whose time-reversed orbit is no longer a solution of the system [11,12]. In quantum physics, T-symmetry breaking can be more subtle, in that the time-reversed trajectory can be the same while the phase of the action integral differs, as in the Aharonov-Bohm (A-B) effect [13,14]. A ferromagnetic perturber in the electromagnetic-wave analog of the Schrödinger equation provides a mechanism to break T-symmetry in microwave billiards [15,16].
Mathematically, this novel T-symmetry breaking occurs because the Hamiltonian with the confinement potential, which has to be a scalar 4-potential energy [35], does not commute with the time reversal operator. Consequently, the boundary condition imposed by the confinement potential does not commute with the time reversal operator either. Besides this, Berry and Mondragon provided a semiclassical understanding by considering the phase difference of the plane waves traveling in one direction of the periodic orbit and its time-reversed counterpart [35]. They found that for orbits with an even number of bounces, the accumulated phase difference between the clockwise and counterclockwise orbits is an integer multiple of 2π, which does not break the time reversal symmetry; only the orbits with an odd number of bounces have an additional π in the accumulated phase difference, therefore distinguishing the counterclockwise motion from the clockwise motion and breaking the T-symmetry. The quantum counterparts of the classical orbits are the quantum scars, which show unusual concentration of the quantum wavefunction on the unstable classical periodic orbits [49][50][51]. Following this picture, Xu et al. investigated the quantum scars in this system and found an intriguing difference between quantum scars with odd numbers of reflections at the boundary and those with even numbers, in accordance with the above rationale [41]. These odd-period scars for the Dirac billiard were then named chiral scars. The chiral property is closely related to the overall phase change difference of scars. Although the results show a distinct difference between the even and odd scars, the T-breaking mechanism, from either the semiclassical or the microscopic perspective, is not fully understood. It has been noted in Ref. [52] that, by considering reflection of the planar Dirac spinor wave at the boundary interface of a straight potential jump, there will be a nonvanishing probability current density along the boundary even when the scalar 4-potential energy goes to infinity. Furthermore, the current flow is oriented, i.e., it is fixed to the positive y direction, independent of whether the incident wave comes downward or upward, although the magnitude of the current will be affected. Thus the time-reversed orbit of the planar spinor wave will result in an asymmetric current at the boundary, which breaks the T-symmetry, in accordance with the non-commuting relation between the T-operator and the boundary condition [35].
Here, in this paper, we revisit this system from both the semiclassical and the microscopic aspects to investigate the mechanisms of T-symmetry breaking by scar current analysis and magnetic response, which complements the rationale of Berry and Mondragon [35] with further physical understanding. Furthermore, it provides a controlling scheme which can switch the chiral scars with the non-chiral scars, and also an exact semiclassical formula for the phase accumulation that can be used for level prediction of the relativistic scars, which agrees well with the numerical calculations. In particular, we consider the chaotic Dirac A-B billiard with a vanishing inner radius. Therefore, we introduce an additional phase caused by the magnetic flux, while in the meantime the orbits, and thus the scars, are not perturbed. An experimentally feasible setup would require a finite inner radius. However, insofar as the inner region for the magnetic flux does not intersect the orbit of the scar, it has little influence on the scar.
Model and methods
To be concrete, the chaotic Dirac A-B billiard is as follows. The system consists of a single massless spin-half particle with charge q confined by hard walls (infinite mass confinement) in a heart-shaped or Africa domain (w plane) whose classical dynamics is chaotic, threaded by a single line of magnetic flux Φ at the origin. The position of the line of magnetic flux is a singular point. Therefore, we exclude this point by considering an inner disk of infinite mass potential with a vanishing inner radius centered at this point. Thus the flux can introduce a modulating phase while, being just a single point on the 2D billiard, it does not exert much spatial perturbation on the scarring states. The billiards in the w = u + iv plane can be conformally transformed from a unit disk on the complex z = x + iy plane, where for the heart-shaped billiard b = 0.49, c = δ = 0, and for the Africa billiard b = c = 0.2, δ = π/3. Please note that with the above parameters these two billiards have chaotic classical dynamics [53,54]. For the magnetic flux, we choose a non-divergent gauge in which the lines of the vector potential A are the contours of a scalar function F(u, v), with F satisfying ∇²_{uv}F = −2πδ(u)δ(v) [13]. The Hamiltonian for the confined Dirac particle is Ĥ = v_F σ̂·(p̂ − qA/c) + V(u, v)σ̂_z, where v_F is the Fermi velocity, σ̂ = (σ̂_x, σ̂_y) and σ̂_z are the Pauli matrices, V(u, v) = 0 within the billiard, and V(u, v) = ∞ outside the confinement region. The Dirac equation in the billiard can be written as ĤΨ = EΨ, where Ψ = [Ψ₁, Ψ₂]^T is the spinor wavefunction, and the boundary condition is [35] Ψ₂/Ψ₁ = ie^{iθ(s)}, where s is the coordinate that describes the arc length of the boundary, starting from the cross point of the boundary with the positive u-axis, and θ(s) is the angle to the positive u-axis of the normal vector at s. When acted upon by the Hamilton operator Ĥ again, the Dirac equation becomes a second order equation, where α = qΦ/(hc) and k = E/(ħv_F). Note that the term iασ̂_xσ̂_y∇²_{uv}F is particular to the Dirac A-B billiard and is not present in the Schrödinger A-B billiard [13]. However, since ∇²_{uv}F = −2πδ(u)δ(v), it is singular at the origin and zero otherwise. Practically, by setting an inner disk with radius ξ ≪ 1 of infinite mass potential, the billiard region that we are interested in excludes this singular point. Note that the inclusion of the A-B flux can come with two different types of boundary conditions around the singular point. Apart from introducing an infinite mass boundary for the inner disk and letting the radius go to zero, which is relevant for our case where the A-B flux only contributes a global phase, there is a different setup for the boundary condition of the A-B flux in quantum field theory, where further interactions need to be considered to calculate the vacuum energy [55,56]. In the ξ → 0 limit, this term has little effect on the wavefunctions. Therefore, in the following treatment to solve the eigenvalues and eigenfunctions, this term has been omitted.
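A minimal sketch of the billiard geometry is given below. It assumes the cubic conformal map w(z) = z + bz² + ce^{iδ}z³ that is commonly used for this family of billiards; the overall normalization adopted in the paper may differ.

```python
# Minimal sketch: boundary of the chaotic billiards as the image of the unit
# circle under an assumed cubic conformal map w(z) = z + b z^2 + c e^{i delta} z^3.
import numpy as np

def boundary(b, c, delta, num=400):
    t = np.linspace(0.0, 2*np.pi, num)
    z = np.exp(1j*t)                                  # unit circle in the z plane
    return z + b*z**2 + c*np.exp(1j*delta)*z**3       # image in the w = u + iv plane

heart  = boundary(b=0.49, c=0.0, delta=0.0)           # heart-shaped billiard
africa = boundary(b=0.2,  c=0.2, delta=np.pi/3)       # Africa billiard
print(heart[:3])
```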
Changing back to the disc region in the z plane, r = (x, y), is a straightforward procedure based on w(z). We obtain Equation (6), in which the last term includes the nonuniform factor |w′(z)|² originating from the chaotic boundary in the w plane. In particular, F can be chosen as F(r) = −ln|r| in the z plane, so the above equation can be written in polar coordinates. To solve it, we expand Ψ in terms of the eigenfunctions ψ_lm(r, θ) of the circular Dirac A-B billiard of the unit disc with a vanishing inner radius (Appendix A), whose corresponding eigenvalues are µ_lm, with l and m the relevant quantum numbers. We have Ψ(r, θ) = Σ_lm c_lm ψ_lm(r, θ), where c_lm are the expansion coefficients. Substituting Equation (7) into Equation (6) leads to a linear system for ν_lm, where ν_lm = µ_lm c_lm. The angular integration in Equation (9) can be calculated analytically, which yields the quantity I. Substituting I into (9) and integrating over the variable r (we use the simplified form of the radial functions in Appendix A instead of that in Equation (9)), we can obtain the M matrix. Equation (8) can be written in the form of an eigen-equation, MV_n = λ_nV_n, where k_n = 1/√λ_n and c_{n,lm} = V_{n,lm}/µ_lm. Correspondingly, we can get the eigen-energy as E_n = ħv_F k_n of the original chaotic Dirac A-B billiard, and the eigenstate in the w plane can be obtained from that in the z plane: Ψ_n(u, v) = Ψ_n(x(u, v), y(u, v)), with Ψ_n(r, θ) = Σ_lm c_{n,lm}ψ_lm(r, θ).
Results
Once the eigenstates are obtained, we plot each of them and identify those localized on classical orbits, the scarring states. As proposed in Ref. [41], we use η to characterize the wavevector difference between the repetitive scars on the same orbit, defined as η = (k_n − k_0)/δk − [(k_n − k_0)/δk], where [x] denotes the largest integer less than x, k_0 is the wavevector for a scar set as the reference point, k_n is the wavevector for repetitive scars on the same orbit, δk = 2π/L, and L is the orbital length. Typically, η takes values either close to 0 or close to 1. However, for scars on odd orbits (chiral scars), the feature is that η can take values around 0.5 [41]. This 0.5 value of η has been attributed to the time-reversal symmetry breaking of the scars on odd orbits [41], which semiclassically was proposed by Berry and Mondragon [35]: the spinor plane waves with an odd number of bounces have an additional π in the phase difference between counterclockwise and clockwise orbits, while the plane waves with an even number of bounces do not. Note that the phase change here is caused by the spin-boundary interaction. During each collision, the phase difference between the counterclockwise reflection and its time-reversed counterpart has an additional π contribution. This π phase leads to the spin polarization at the boundary. Also, we can see that for a scar on an orbit with an even number of reflections, the spin-boundary interaction contributes an integer multiple of 2π to the phase difference between the counterclockwise orbit and its clockwise counterpart. Thus, for these orbits, the time-reversal symmetry is preserved. However, for the scars with an odd number of reflections, the boundary phases contribute an additional π, leading to the T-symmetry breaking and also a chiral signature of the scar (details about the local and global phase changes are discussed in Appendix B).
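A minimal sketch of the statistic η is given below, assuming the fractional-part form implied by the definition above, η = x − [x] with x = (k_n − k_0)/δk and δk = 2π/L.

```python
# Minimal sketch: the scar statistic eta as the fractional part of the
# wavevector difference in units of delta_k = 2*pi/L.
import numpy as np

def eta(k_n, k_0, L):
    x = (k_n - k_0)/(2*np.pi/L)
    return x - np.floor(x)

# toy usage: scars separated by half of delta_k give eta = 0.5 (chiral signature)
L, k0 = 3.0, 10.0
print(eta(k0 + np.pi/L, k0, L))     # -> 0.5
print(eta(k0 + 2*np.pi/L, k0, L))   # -> 0.0
```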
Current analysis of scars
To investigate the phase of the scarring eigenstates, we examine their local current flows. The current operator is given by û = v_Fσ̂, and the local current for state Ψ(w) can be defined as the expectation value of û [35]: u(w) = v_FΨ†(w)σ̂Ψ(w). A systematic investigation of the local current flow for scarred states indicates that the current of most scars has a definitive orientation, either clockwise or counterclockwise, as shown in figure 1 (a) and (d) for the period-3 orbit and the period-4-II orbit, respectively. We examined the relation between the scar wavevector difference η and the scar orientation defined by its current flow. In figure 1, the scarring states with counterclockwise flow are marked as orange up triangles and those with clockwise flow are marked as blue down triangles. It is found that for even bounce scars, the wavevector difference η is always 0 or 1, regardless of relative current orientation [figure 1 (e)]; while for odd bounce orbits, when two scars have the same current orientation, η = 0 or 1, whereas if two scars have opposite current orientation, then η = 1/2, as shown in figure 1 (b), indicating T-symmetry breaking from the semiclassical point of view. This current orientation analysis confirms that η = 1/2 results from the π phase difference of the opposite current orientations of odd bounce scars.
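The local current can be evaluated on a grid directly from the spinor components. The sketch below assumes u = v_FΨ†σ̂Ψ as written above, with v_F set to 1; the boundary spinor (1, i) discussed in Appendix B is used as a toy input.

```python
# Minimal sketch: local current of a spinor state, u = Psi^dagger sigma Psi
# componentwise (v_F = 1), with Psi = [psi1, psi2]^T on a grid.
import numpy as np

def local_current(psi1, psi2):
    jx = 2.0*np.real(np.conj(psi1)*psi2)   # Psi^dagger sigma_x Psi
    jy = 2.0*np.imag(np.conj(psi1)*psi2)   # Psi^dagger sigma_y Psi
    return jx, jy

# toy usage: the boundary spinor (1, i) carries current along +y only
jx, jy = local_current(np.array([1.0+0j]), np.array([1j]))
print(jx, jy)   # -> [0.] [2.]
```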
Scar chirality change by magnetic flux
A natural question is whether this phase can be compensated by the magnetic flux. In particular, we consider a magnetic flux α (in units of the magnetic flux quantum φ₀ ≡ hc/q) and a winding number W of a certain orbit around this flux; the phase gain caused by the magnetic flux is 2πWα. For a time-reversed orbit, W changes sign, thus the phase difference between these two orbits with opposite orientation is 4πWα. Therefore, for the case of W = 1, if α = 1/4, it will introduce a π phase difference. If the phase exerted by the boundary-spin interaction is equivalent to that caused by the magnetic flux, then in the case of W = 1 and α = 1/4, the odd orbit scars will lose their chiral character, while the even orbit scars will become chiral. As shown in figure 1, when there is no magnetic flux, η attains the 0.5 value for the period-3 scar, indicating the chirality of this scar. However, when α = 1/4, the data points of η ∼ 0.5 disappear, leading to a superficial time-reversal preservation. For the period-4-II scar, by contrast, the data points of η ∼ 0.5 are not present for α = 0 but emerge for α = 1/4. This indicates that, although it originates from a different mechanism, the boundary-spin interaction induced phase is equivalent to that of the magnetic flux. It is noticed that for scars without a chiral nature, the two flow orientations are mixed, while for scars with a chiral nature, i.e., period-3 scars with α = 0 and period-4-II scars with α = 1/4, the scars with different orientations are well separated: one set of the scars attains a 0.5 value for η, while the other set attains values of 0 or 1. Figure 2 plots the same quantities as in figure 1 but for two period-5 scars. Surprisingly, η for α = 0 and α = 1/4 appear the same. A more detailed examination reveals that, for the period-5-I orbit, the flux is outside and not circulated by the orbit, therefore the flux has no effect on this scar. However, the period-5-II orbit circulates the flux twice, i.e., W = 2, thus when α = 1/4 the phase difference between the counterclockwise orbit and the clockwise orbit is 4πWα = 2π, which does not change the chirality of the scars.
Semiclassical theory of scars
Phenomenologically, as the phase caused by the boundary-spin interaction is equivalent to that caused by the magnetic flux, we can include it in the phase shift formula [9,57-59], where the action is S = ∮p · dq = ħ∮k · dq + (q/c)∮A · dq [13], W is the winding number of the orbit enclosing the flux, and σ is the Maslov index, which is related to the conjugate points along the orbit and is a canonical invariant [60]. Here, in the heart-shaped billiard, σ equals the number of reflections along the complete orbit [61]. The infinite mass (or hard wall) reflection only contributes a phase in the spin term, thus it has no contribution to the Maslov index, and 2πβ represents the phase accumulation of spin reflection at the boundary, whose value depends on the particular orbit and current orientation. Note that, because of the chiral effect caused by the spin-boundary interaction, there is a π difference in the term 2πβ between the reversed odd orbits (Appendix B). For semiclassically allowed orbits, the phase accumulation around one cycle should be an integer multiple of 2π, i.e., ∆Φ = 2πn, n = 1, 2, · · ·, to ensure that the wavefunction is single-valued. In the case of zero magnetic flux (α = 0), we define Γ = mod(kL/2π, 1) = mod(σ/4 − β, 1), which relates the semiclassical quantity σ (the number of conjugate points on the orbit) and β from the relativistic quantum dynamics. We list the values of the parameters σ, β and Γ (via mod(σ/4 − β, 1)) in Table 1 for different orbits. Alternatively, the values of Γ can be obtained numerically through mod(kL/(2π), 1) from the eigenwavevectors of the corresponding scars. The results are shown in figure 3. We can see that the Γ values obtained from the numerical calculations agree well with the semiclassical theory.
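Because the displayed quantization condition was lost in extraction, the following is a minimal LaTeX reconstruction, assuming the total phase collects the dynamical term kL, the flux term 2πWα, the Maslov term −σπ/2 and the spin term 2πβ; it reproduces both Γ = mod(σ/4 − β, 1) at α = 0 and the linear k-α dependence used in the next subsection.

```latex
% minimal reconstruction (our assumption, fixed by the limits quoted in the text)
\Delta\Phi \,=\, kL \,+\, 2\pi W\alpha \,-\, \frac{\sigma\pi}{2} \,+\, 2\pi\beta \,=\, 2\pi n,
\qquad n = 1, 2, \dots
% which gives the scar wavevectors and the quantity Gamma:
\frac{kL}{2\pi} \,=\, n + \frac{\sigma}{4} - \beta - W\alpha,
\qquad
\Gamma \equiv \operatorname{mod}\!\Big(\frac{kL}{2\pi},\,1\Big)
        = \operatorname{mod}\!\Big(\frac{\sigma}{4} - \beta - W\alpha,\;1\Big).
```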
Magnetic control of scars
Now we examine the wavevector changes of scars tuned by a magnetic flux at the origin. The wavevector difference of reversed scars of the same type is given by Eq. (15), where n is an integer and ∆β = 1/2 for odd orbits. Thus, whenever |2Wα| = 1/2 for an orbit, the corresponding scars will interchange between chiral and non-chiral characters, as demonstrated in figure 1. From Equation (15), for a scar with wavevector k₀ at α = 0, as the magnetic flux α is increased, the same scar would appear if the wavevector approximately follows Eq. (17), as β depends only on the orbit and is fixed to a particular value for a given orbit. The numerical data in figure 4 follow Equation (17) well. Note that Equation (17) holds for both odd periodic and even periodic orbits. The difference, however, comes from the initial k₀ value. From figure 4 it is clear that for the scars on any orbit there are actually two sets of scars, one with counterclockwise flow, i.e., W = 1, where k decreases linearly with increasing α, and the other with clockwise flow, W = −1, where k increases with increasing α. For each set, if one fixes the magnetic flux and examines the eigenstates, the scar repeats itself when ∆k = 2π/L approximately holds. However, when there is no magnetic flux, the two sets of odd periodic scars intersect each other, leading to ∆k = π/L if the flow orientation is not distinguished. But if we regard the two sets as different scars, then for each set we recover ∆k = 2π/L. For the even period scars, the two sets appear parallel to each other, i.e., they may appear at the same set of k₀ values with 2π/L intervals, although at each k₀ typically only one scar can be found. The wavevector k for the scar goes down as α increases for W = 1, while it goes up for W = −1. Therefore, the two lines cross each other at certain points. For the period-3 scar, the cross points are α = 0.25 (corresponding to a π phase difference) and α = 0.75. It is noted that at the cross points, for some of the scars it is difficult to identify the flow orientation. For the period-4-II scar, in contrast, the cross points are at α = 0 and α = 0.5. For the period-3 scar, if α is shifted by 0.25, then the k-α relation will behave similarly to that of the period-4-II scar. Thus the behavior of period-3 scars at α = 0.25 is similar to that of the period-4-II scars at α = 0, and vice versa. In this sense, the magnetic flux interchanges the chiral and nonchiral nature of the period-3 scar and the period-4-II scar by exerting a flux of α = 0.25. Now the effect of the boundary induced phase β is quite clear: e.g., compared to the period-4-II scar, it shifts the overall pattern of the period-3 scar leftwards from α = 1/4 to α = 0, with all other features kept, except that k₀ and L take different values. Figure 5 shows the k-α relation for another four typical states: the period-5-I scar, the period-5-II scar, a period-2 bouncing ball scar, and an edge state. Since the period-5-I scar [figure 5(a)] and the period-2 bouncing ball scar [figure 5(c)] do not circulate the flux, i.e., W = 0, k does not change with α, which agrees with the data. For the period-5-I scar, the states with counterclockwise flow and those with clockwise flow succeed each other, i.e., one row with counterclockwise flow (orange up-triangles), then the next row with clockwise flow (blue down-triangles) at a wavevector interval ∆k = π/L, and vice versa.
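The k-α pattern described above can be sketched as follows, assuming the linear form k = k₀ + (2π/L)(n − Wα) as our reconstruction of Eq. (17); for an odd (chiral) orbit the two orientation sets are offset by π/L, which places the crossings at α = 0.25 and 0.75.

```python
# Minimal sketch: k(alpha) lines for the two flow orientations and their
# crossing point, using the reconstructed linear form k = k0 + (2*pi/L)(n - W*alpha).
import numpy as np

def k_line(alpha, k0, L, W, n=0):
    return k0 + (2*np.pi/L)*(n - W*alpha)

L, k0 = 3.0, 10.0
alphas = np.linspace(0.0, 1.0, 5)               # alpha = 0, 0.25, 0.5, 0.75, 1
up   = k_line(alphas, k0 + np.pi/L, L, W=+1)    # counterclockwise set, offset by pi/L
down = k_line(alphas, k0,           L, W=-1)    # clockwise set
print(alphas[np.isclose(up, down)])             # -> [0.25]; the 0.75 crossing uses the next n
```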
For the period-2 bouncing ball scar, since there is no specific orientation of the flow, the scars are represented by gray squares and the wavevector difference between neighboring rows is ∆k = 2π/L. For the period-5-II scar [figure 5(b)], as W = ±2, the slope is larger, and the cross points are at α = 1/8, 3/8, 5/8, 7/8, i.e., four cross points instead of two as in the W = ±1 cases. Therefore, the period-5-II scar will lose chirality at α = 1/8 rather than at α = 1/4 as for the period-3 scars. For the edge state [figure 5(d)], since it always has a counterclockwise flow at the boundary, the time-reversed state is no longer a solution of the system. Therefore, W can only take the value of 1, and consequently, in the figure of the k-α relation, there is only one set of lines, in which k decreases with α and the wavevector difference of neighboring lines is about ∆k = 2π/L. Similar results are also obtained in the Africa billiard, which has no reflection symmetry (Appendix C).
Experimental realization
Experimentally, such a novel T-breaking effect can be investigated using topological insulators (TIs). In particular, consider a 2D surface supporting the surface states of a 3D topological insulator, whose quasiparticles can be described by the 2D massless Dirac equation (with a 90-degree rotation of the spins). The mass confinement can be realized by depositing a ferromagnetic insulator cap layer on top of the TI outside the billiard (or quantum dot) region [62][63][64], where the exchange coupling Vσ_z induced by the ferromagnetic insulator can serve as the mass confinement. Although for simplicity the theoretical treatment requires the mass potential to go to infinity, in realistic cases, as long as the energy of the concerned states is much smaller than the gap, the phenomenon would be basically the same. For applying the magnetic flux, in general, the area of the flux threading the surface can be finite, insofar as it is not on the orbit of the scar. For typical scars such as the period-3 and period-4-II scars shown in figure 1, as they have a large interior, they are less likely to be affected by opening a hole in the middle to exert the magnetic flux.
Discussions and conclusion
Through extensive computations and physical analysis of the chaotic Dirac A-B billiard, the whole picture of the mechanism of T-symmetry breaking emerges. To be specific, for the Dirac billiard confined by the infinite scalar 4-potential, or mass potential, the Hamiltonian does not commute with the T-operator, as the confinement mass potential acquires a sign change under the T-operation; this can be corroborated by the fact that the boundary condition derived from the mass potential confinement does not commute with the T-operator either. From the local physical interaction point of view, each reflection at the boundary breaks the time-reversal symmetry, as it contributes an oriented flow at the boundary whose direction is independent of the incident angle. Furthermore, as the spin of a free Dirac particle is polarized along its momentum, the reflection at the boundary induces the boundary-spin interaction; thus each reflection is accompanied by an additional phase φ in the action integral of the particle. The reversed orbit will acquire another phase at this point, and the phase difference between the counterclockwise reflection and its time-reversed reflection at the same boundary point has a π contribution. Therefore, for a scar on an orbit with an even number of reflections, the total effect of these phases contributes an integer multiple of 2π to the phase difference between the counterclockwise orbit and its clockwise counterpart. Thus, for these orbits, the time-reversal symmetry is preserved. However, for the scars with an odd number of reflections, the boundary phases contribute an additional π, leading to the T-symmetry breaking and also a chiral signature of the scar. A natural question is whether this boundary-spin interaction induced phase can be compensated by a magnetic flux. The answer is yes. As we have demonstrated, the π phase difference between the counterclockwise and clockwise orbits with an odd number of reflections can be annihilated completely by a properly added magnetic flux, i.e., the chiral scar loses its chirality, while the non-chiral scars can attain chirality in certain cases. However, depending on the location of the flux threading the billiard, the winding number of an orbit around this flux can be highly nontrivial. As we show, for a given A-B billiard, the winding numbers can be zero, one, two, and so on, which has significant implications for their response to the flux. The underlying rationale is that, phenomenologically, the boundary induced phase can be included in the action integral. Once it is in the action integral, it loses the complexity of its origin and is equivalent to the phase terms caused by the path integral of the momentum, and thus to the phase from the magnetic flux. Note that besides the scars on the periodic orbits, there is another class of states, the edge states, that always have a counterclockwise flow localized at the boundary, which breaks the time-reversal symmetry as their time-reversed states are no longer solutions of the system. These states have nonzero wavefunctions at the boundary, in contrast to the zero boundary wavefunctions of the Schrödinger billiard with infinite confinement potential.
For the Dirac billiard system, the chirality is fundamentally related to the time-reversal symmetry. The time-reversal operator changes the sign of the confinement potential V and the direction of the local flow of the scarring states. The parity operation is effectively the combination of the time-reversal operation and a mirror reflection. From the semiclassical point of view, for a particular scar, if the billiard has a reflection symmetry, e.g., the heart-shaped billiard, then since the mirror reflection becomes the identity operation, the parity operation becomes equivalent to the time-reversal operation. Thus if the system or the state is invariant under the parity operation, it will also be invariant under the time-reversal operation, as for the even period scars, for which at a given energy level the flow orientation can be either clockwise or counterclockwise. For odd period scars, both the parity symmetry and the time-reversal symmetry are broken, giving rise to a chiral signature for these scars, and at a given energy level only one orientation is allowed. For billiards without a reflection symmetry, for instance the Africa billiard, one can consider the billiard of its mirror image, and for scars on one given orbit, the corresponding scar under the parity operation has the reverse orientation. Note that our results can be generalized to more diverse physical settings, e.g., particle-hole symmetry, negative potential, mirror reflection and their combinations, where the chirality still exists, although the spin behavior can be different. For the details of the system's behavior under symmetry operations, please refer to Appendix D.
Our complete understanding of the T-breaking of the system leads to a control mechanism for the chiral scars, which can interchange chiral scars and non-chiral scars, although the applied magnetic flux for different scarring orbits can be different. These subtle T-breaking phenomena induced by the odd periodic orbits and the edge states can have significant implications for the transport behavior and spin textures of the relativistic pseudoparticles [62], or a distinct magnetic response that could be applicable in quantum information devices, e.g., relativistic qubits [64]. Our findings thus provide concrete grounds both for novel applications of the newly discovered 2D relativistic materials and for the basic knowledge of relativistic quantum chaos.

Appendix A. Eigenstates of the circular Dirac A-B billiard

To solve the chaotic Dirac A-B billiard with a vanishing inner radius, we need to solve the eigenstates of the circular A-B billiard used as the basis for the conformal mapping. In particular, the system we shall study contains a single massless spin-half particle with charge q confined by hard walls (infinite mass confinement) in a circular ring domain with inner radius ξ → 0. The billiard system is threaded by a single line of magnetic flux Φ at the origin. We choose a non-divergent gauge in which the lines of the vector potential A are the contours of the scalar function F(r) = −ln(|r|). Note that ∇ · A = 0 and ∇ × A = n̂Φδ(r), where n̂ is the unit vector normal to the z plane. The Dirac equation can be written as Eq. (A.2), where ψ = [ψ₁, ψ₂]^T, and the boundary condition is imposed at the walls, where s is the arc length of the boundary, starting from the cross point of the boundary with the positive x-axis, and θ(s) is the angle to the positive x-axis of the normal vector at s. For a circularly symmetric ring boundary, Ĥ commutes with the total angular momentum operator Ĵ_z = −iħ∂_θ + (ħ/2)σ̂_z. We can therefore choose the simultaneous eigenstates of Ĥ and Ĵ_z, so the solutions of (A.2) have a general form labeled by l = 0, ±1, ±2, · · ·, with N a normalization factor. Writing the Dirac equation in polar coordinates and canceling χ in Equation (A.6), we get a Bessel differential equation, where R = µr and ν = l − α. φ(R) can be written as a linear combination of the Bessel function of the first kind, J_ν(R), and the Bessel function of the second kind, N_ν(R), with a coefficient β that can be determined by the boundary conditions. χ(R) satisfies a companion equation; employing the recursion relations of the Bessel functions, we can express it through J_ν and N_ν as well. The inner and outer boundary conditions then lead to a pair of equations, and by solving them, β is given by Eq. (A.11), while the eigenvalue µ (and thus the energy E = ħv_F µ) can be obtained by solving Equation (A.12). Equation (A.12) can be simplified by the special properties of the Bessel functions, as follows. For ν an integer, the right hand side of Equation (A.12) is finite. Since both N_{ν+1}(µξ) and N_ν(µξ) diverge as ξ goes to zero, (J_{ν+1}(µ) − J_ν(µ)) must be zero. So we have: (1) for ν an integer, the quantization condition J_{ν+1}(µ) = J_ν(µ). For ν not an integer, N_ν can be expressed as a linear combination of J_ν and J_{−ν}, so Equation (A.12) can be simplified accordingly; in the ξ → 0 limit, we get (2) the corresponding quantization condition for non-integer ν.
For the simplified equations (A.13)-(A.18), we can get the eigenvalues µ_lm(α), where the magnetic flux α can be regarded as a control parameter, l and ν are related by ν = l − α, and m represents the m-th solution for a given l.
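For the integer-ν branch, the quoted condition J_{ν+1}(µ) = J_ν(µ) can be solved numerically; the sketch below scans for sign changes and polishes the roots, giving the eigenvalues µ_lm in the ξ → 0 limit.

```python
# Minimal sketch: eigenvalues mu_lm of the circular Dirac A-B billiard for
# integer nu = l - alpha, from the quantization condition J_{nu+1}(mu) = J_nu(mu).
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def mu_roots(nu, n_roots=5, mu_max=30.0):
    f = lambda mu: jv(nu + 1, mu) - jv(nu, mu)
    grid = np.linspace(1e-3, mu_max, 3000)
    vals = f(grid)
    roots = []
    for x0, x1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if v0*v1 < 0:                      # bracketed sign change -> refine
            roots.append(brentq(f, x0, x1))
    return roots[:n_roots]

print(mu_roots(nu=0))   # lowest eigenvalues mu_0m for alpha = 0, l = 0
```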
Once µ_lm(α) is obtained, substituting it back into Equation (A.11), we can get the corresponding β_lm(α). Substituting these two quantities back into equations (A.4), (A.8) and (A.9), we obtain the corresponding eigenfunction ψ_lm(α). In particular, we can get simplified expressions for the eigenfunctions by appropriate approximations, as follows.
For ν not an integer, the asymptotic behavior of the Bessel function of the first kind for small argument is known; based on it, we can get the corresponding approximations.
Appendix B. Physical process of each local reflection
In this section, by employing the model of plane wave reflection at a straight potential step, we shall show that the wave at the boundary is an eigenfunction of Ŝ_y with eigenvalue ħ/2, regardless of the incident angle. That is, whether the incident wave comes upwards or downwards, the spin always points up (or counterclockwise), indicating chirality. Therefore, a time-reversed wave will not result in a time-reversed spin polarization at the boundary, leading to T-breaking. The origin of the spin polarization can be understood by analyzing the phase change at each reflection. We find that, for each local reflection, the phase change during the reflection and that of its time-reversed counterpart differ by an additional π contribution, which is also an indication of T-breaking. Therefore, each reflection at the boundary breaks the time-reversal symmetry.
With these results, we further provide a complementary understanding of the difference in the global phase change of even and odd closed orbits discussed by Berry and Mondragon [35].
To gain insight into the boundary effect on the spin, we employ the model of plane-wave reflection at a straight boundary, which has been discussed in detail in [35,52]; the schematic diagram is shown in figure B1 (for generality we take V > E in area 2). Here we briefly list the relevant results as Equations (B.1)-(B.7). The wave (incident plus reflected) in the potential-free area can be written as Eq. (B.1), and the transmitted wave in the potential area as Eq. (B.2), where R and T are the reflection and transmission coefficients, respectively, with the incident wave vector k₀ = (k cos θ₀, k sin θ₀) and the reflected wave vector k₁ = (k cos θ₁, k sin θ₁). Matching the two waves at x = 0 and using the convention to relate the incident and reflected directions by specularity, Eq. (B.3) [35], where θ(s) = 0 for the special case we consider here, we can obtain R, with the parameters γ and λ defined through Eq. (B.5); the transmission coefficient is given by Eq. (B.7). Note that the above convention in Equation (B.3) actually implies the change from θ₀ to θ₁ by rotating the angle counterclockwise (figure B1). If the change of the angle is made by rotating clockwise, there will be an additional 2π on the right side of Equation (B.3), and an additional phase π in the plane wave in the second term of Equation (B.1), because there is a prefactor 1/2; but the final results are unchanged. Another important property of the reflection coefficient R is that R(θ₀) = R(−θ₀), i.e., it is the same for forward or backward incidence. For finite V > E, R is not a constant but a position dependent function, and as V goes to infinity, R becomes 1. In the following, unless otherwise specified, we assume E > 0, V → ∞ and R = 1. Spin orientation. The wave-function on the boundary in figure B1(a) is given by Eq. (B.8).
Figure B1. Incident and reflected plane waves (black arrows) and the spin (red arrow) corresponding to the superposition of waves at the boundary.
The y-direction spin operator is Ŝ_y = (ħ/2)σ̂_y. It is straightforward to verify that both Ψ₁ and Ψ₂ are eigenfunctions of Ŝ_y with the same eigenvalue ħ/2. That is, the two opposite incident cases have the same spin orientation on the boundary! We can see that the spin always points in the counterclockwise direction regardless of the incident angle (this can also be seen from the boundary condition). This indicates that each local collision breaks the T-symmetry, due to the interaction between the spin and the infinite mass boundary.
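This statement can be verified in one line, assuming the infinite-mass boundary condition Ψ₂/Ψ₁ = ie^{iθ(s)} with θ(s) = 0, which forces the boundary spinor to be proportional to (1, i):

```python
# Minimal sketch: the boundary spinor (1, i), fixed by Psi_2 = i Psi_1, is the
# sigma_y eigenvector with eigenvalue +1, i.e. S_y = +hbar/2, for any incidence.
import sympy as sp

sigma_y = sp.Matrix([[0, -sp.I], [sp.I, 0]])
spinor = sp.Matrix([1, sp.I])                 # Psi_2 = i Psi_1 on the boundary
print(sp.simplify(sigma_y*spinor - spinor))   # -> zero vector: eigenvalue +1
```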
Note that although in configuration space the probability density current on the boundary has the same orientation for the two opposite incident directions, the magnitudes are typically different [52].
Additional π phase for reversed reflection. Now we can carefully study the phase change of the spinor wavefunction during one reflection under the special condition V → +∞ and the corresponding R = 1. Suppose the incident angle is θ₀ and the reflected angle is θ₁, as shown in figure B1(a); these two angles are related by Equation (B.3). So, according to Equation (B.1), the phase difference between these two directions can be written as Eq. (B.9). If we reverse the reflection direction, the incident and reflected angles are labeled θ′₀ and θ′₁, where θ′₀ = −θ₀ + 2θ(s) + 2nπ, n an integer, as shown in figure B1(b); the phase difference between these two directions is then Eq. (B.10). Note that θ(s) = 0, and the additional 2nπ has no observable effect here, so it can be ignored. For each collision, the phase change of a pair of opposite incident directions acquires a minus sign as well as an additional phase π, which ensures the spin polarization at the boundary. Global phase change. Now let us consider the global phase changes based on the local phase change relation for each reflection. For a closed orbit with initial incident angle θ₀, Equation (B.9) implies that closure requires the whole phase change to take the form of Eq. (B.11), where m is an integer. If we reverse the initial direction of the same orbit, then based on Equation (B.10) the global phase change satisfies Eq. (B.12), where N is the total number of reflections. So the phase difference between the two reversed orbits caused by the boundary is given by Eq. (B.13). We can see that for odd bounces there will be a π difference in the boundary-induced phase between the reversed orbits, while for even bounces there is no phase difference (ignoring multiples of 2π). This is in agreement with the analysis of Berry and Mondragon [35].
Appendix C. Africa A-B Dirac billiard.
To confirm our understanding of the mechanism of T-breaking and the magnetic response of relativistic scars, we also analyzed the scars in the Africa billiard, a chaotic billiard without geometric symmetry. To obtain the eigenvalues and eigenstates, we performed the same calculations (Equations (1)-(10)) as in the heart-shaped billiard. Once the eigenstates are obtained, we plot each of them and identify those localized on classical orbits, the scarring states. Then we plot the current flows of the scars and find that for most scars the current has a definitive orientation, as illustrated in figure C1 (a), (d), (g) for odd period scars and figure C1 (j), (m) for even period scars. We use η (defined in Equation (11)) to characterize the wavevector difference between the repetitive scars on the same orbit. Figure C1 shows η for the scars with counterclockwise flow, marked as orange up triangles, and those with clockwise flow, marked as blue down triangles. First, we consider the zero magnetic flux case. It is found that for even bounce scars, the wavevector difference η is always 0 or 1, regardless of relative current orientation [figure C1 (k,n)]; while for odd bounce scars, when two scars have the same current orientation, η = 0 or 1, and if two scars have opposite current orientation, then η = 1/2, as shown in figure C1 (b,e,h). This current orientation analysis confirms that η = 1/2 results from the π phase difference of the opposite current orientations of odd bounce scars.
Can the magnetic flux change the scar chirality in the Africa billiard? The answer is yes! By adding a single line of magnetic flux with α = 1/4 at the origin, the data points of η ∼ 0.5 disappear for odd period scars [figure C1 (c,f,i)], leading to the loss of chirality and a superficial time-reversal preservation.
Can magnetic flux change the scar chirality in the Africa billiard? The answer is yes! By adding a single line of magnetic flux with α = 1/4 at the origin, the data points with η ∼ 0.5 disappear for odd-period scars [figure C1(c, f, i)], leading to the loss of chirality and a superficial preservation of time reversal. For the even (period-4) scars, by contrast, the data points with η ∼ 0.5 are absent for α = 0 but emerge for α = 1/4 [figure C1(l, o)]. The interchange of chirality between even- and odd-period scars indicates that, although it originates from a different mechanism, the phase induced by the boundary-spin interaction is equivalent to that of a magnetic flux. We note that for scars without a chiral nature the two flow orientations are mixed, while for scars with a chiral nature, i.e., odd-period orbit scars at α = 0 and even-period scars at α = 1/4, the scars with different orientations are well separated: one set of scars attains η = 0.5, while the other set attains values of 0 or 1.

Figure C1. The current of scars (a, d, g, j, m), and the corresponding η values at α = 0 (b, e, h, k, n) and α = 1/4 (c, f, i, l, o). The first to fifth rows are for the period-3-I scar, period-3-II scar, period-5 scar, period-4-I scar, and period-4-II scar, respectively. The orange up-triangles are for scars with counterclockwise flow, the blue down-triangles are for scars with clockwise flow, and the gray squares are for scars whose current orientation is hard to distinguish. The reference state is chosen (arbitrarily) from the scars with clockwise flow.

In order to have a complete understanding of the chirality and the associated phase, we investigate the magnetic response of scars in the flux interval 0 ≤ α ≤ 1 (the system is periodic as the magnetic flux varies from 0 to 1). From Equation (15), for a scar with wavevector k0 at α = 0, as the magnetic flux α is increased, the same scar reappears if the wavevectors approximately follow Equation (C.1); β does not enter because it is fixed to a particular value for a given oriented orbit. We have varied the magnetic flux systematically and, for each case, identified the scars on the same orbit within a certain wavevector (energy) range together with their flow orientation. The corresponding wavevector versus magnetic flux for the same types of scars as in figure C1 is plotted in figure C2. The dashed lines are from Equation (C.1); the numerics follow the theory well. From figure C2 it is clear that for the scars on any orbit enclosing a nonzero area there are actually two sets of scars: one with counterclockwise flow, i.e., W = 1, for which k decreases linearly with increasing α, and the other with clockwise flow, W = −1, for which k increases with increasing α. The two lines therefore cross at a certain point, depending on the initial wavevector value at α = 0. For the period-3-I and period-3-II scars [figure C2(a, b)], the crossing points are α = 0.25 (corresponding to a π phase difference) and α = 0.75, where the chirality is completely missing; while for the period-4-I and period-4-II scars [figure C2(d, e)], the crossing points are at α = 0 and α = 0.5. For the period-3 scar, if α is shifted by 0.25, the k-α relation behaves similarly to that of the period-4 scar, which indicates that the accumulated phase difference is π for scars traveling along a complete period with opposite orientations. We note that for the period-5 scars [figure C2(c)] the k-α relation is similar to that of the period-3 scars, as the orbit effectively encircles the flux only once per complete period. By comparing the different magnetic responses of the period-5 scar in the heart-shaped and Africa billiards, we can see that the topological position of the magnetic flux is of vital importance. For the period-2 bouncing-ball scar [figure C2(f)], which does not encircle the flux (W = 0), k does not change with α, in agreement with the data.
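The crossing of the two chirality branches can be seen in a toy linear model of Equation (C.1), in which k shifts linearly with α with opposite signs for the two winding numbers; the slope 2π/L and the orbit length L are illustrative placeholders, not values from the paper:

```python
import numpy as np

L = 3.0                  # hypothetical orbit length (placeholder)
slope = 2 * np.pi / L

def k_branch(alpha: float, k0: float, winding: int) -> float:
    # Toy version of Eq. (C.1): counterclockwise (W=+1) scars shift down in k,
    # clockwise (W=-1) scars shift up, linearly in the flux alpha.
    return k0 - winding * slope * alpha

# Two scars on the same orbit with opposite winding numbers, offset by half a
# wavevector spacing at alpha = 0 (as for the period-3 scars in the text):
k0_ccw, k0_cw = 10.0, 10.0 - slope * 0.5
alpha_cross = (k0_ccw - k0_cw) / (2 * slope)   # where the two branches meet
print(f"branches cross at alpha = {alpha_cross:.3f}")   # 0.250
```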
For the Africa billiard, the effect of the boundary-induced phase β and of the magnetic phase on scars is similar to that in the heart-shaped billiard. This indicates that our understanding of the T-breaking mechanism and of the origin of the chiral signature in the infinite-mass-confined billiard is independent of the particular shape of the billiard, although the chirality of the scars can be affected by the number of reflections and by the position and magnitude of the magnetic flux.
Appendix D. Negative energy, negative potential, mirror symmetry and chiral symmetry

Here we provide a comprehensive description of the spin behavior in three cases (and their combinations): negative energy, negative potential, and mirror symmetry.
Negative energy (−E), positive potential (V), V > E > 0. Consider the action of the antiunitary operator Ĉ = σ̂x K̂ on Ĥ, which gives Ĉ Ĥ Ĉ⁻¹ = −Ĥ; therefore, if Ψ is an eigenstate (in particular a scar state) of Ĥ with energy E, it transforms to Ψ′ = ĈΨ, which is also an eigenstate of Ĥ, with energy −E [35]. For the states corresponding to E and −E with the same potential V, the probability density distribution is the same: P = ψ1*ψ1 + ψ2*ψ2. Also, the in-plane probability density current is given by u = vF⟨σ⟩ = 2vF[ℜ(ψ1*(r)ψ2(r)), ℑ(ψ1*(r)ψ2(r))], which indicates that the probability density current, as well as the in-plane spin behavior, at −E is the same as that at E [Equation (13)]. Thus, if Ψ is a scar state, the currents of the scars in these two cases are the same. In particular, according to Equation (B.8), if Ŝy Ψ = (ℏ/2)Ψ at the boundary (V → ∞), then also Ŝy Ψ′ = (ℏ/2)Ψ′.

Furthermore, we can obtain the local expectation value of σ̂z: it is given by Equation (D.2) in the positive-energy (E) case and by Equation (D.3) in the negative-energy (−E) case. By comparison, the values of ⟨σz⟩ are opposite in the E and −E cases. Note that for E > 0 the explicit local average of σz at a boundary interface with potential V follows from Equations (B.2) and (B.7); one can show that ⟨σz⟩ ≥ 0, and in particular ⟨σz⟩ = 0 when V → +∞. Similarly, for E < 0 we have ⟨σz⟩ ≤ 0 and ⟨σz⟩ = 0 when V → +∞.

We now investigate the spin behavior at the boundary and, most importantly, compare the accumulated phase difference of the scar orbits for E and −E by employing the plane-wave model of Equations (B.1)-(B.4). Note that the helicity is σ·p/|p| = −1 at −E, compared with σ·p/|p| = 1 for the free particle at positive energy; thus, although the current orientation is the same for E and −E, the momentum of the free particle is antiparallel to the current in the −E case. The wavefunction in the free region at −E is as illustrated in figure D1(a), where we adopt θ0 and θ1 as the spin directions of the free particle and θ0 + π and θ1 + π as its wavevector directions. The transmitted wave in the potential region takes the analogous form, with wavevectors −k0 = (−k cos θ0, −k sin θ0) and −k1 = (−k cos θ1, −k sin θ1), and with K and q defined as in Appendix B. Using the convention (B.3) and matching the wavefunctions Ψ1 and Ψ2 at the boundary, we obtain the reflection and transmission coefficients R and T; in particular, in the V → +∞ limit, R = 1. Verifying the spin orientation at the boundary using the convention of Equation (B.3) and R = 1, we find Ŝy Ψ = (ℏ/2)Ψ: the spin points along the positive y-axis at the boundary in the −E case with V → ∞. Furthermore, the phase change for the scar state with counterclockwise current is given by Equation (D.8), where m is an integer. By comparing Equation (D.8) with Equation (B.10), we can see that the accumulated phases of the two orbits with the same current orientation at E and −E are the same. For the reversed orbit with clockwise flow, the incident and reflected angles are defined as θ′0 and θ′1, the same as in figure B1. Using the relations in Equations (B.9) and (B.10), the phase change follows, where N is the number of reflections along the orbit; this is the same as Equation (B.11). The accumulated phase difference between the reversed orbits caused by the boundary in the −E case is therefore unchanged: for an odd orbit (N odd), there is an additional π difference between the counterclockwise and the clockwise states, and thus the chiral scars still exist.
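The action of the antiunitary operator can be verified numerically. The sketch below assumes the confinement enters as a σz mass term (consistent with the infinite-mass boundary used here) and checks in k-space that σx combined with complex conjugation maps H(k) to −H(k), so every eigenstate at E has a partner at −E:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx: float, ky: float, m: float, vF: float = 1.0) -> np.ndarray:
    # Dirac Hamiltonian with an infinite-mass-boundary-type term m*sz.
    return vF * (kx * sx + ky * sy) + m * sz

rng = np.random.default_rng(0)
for _ in range(5):
    kx, ky, m = rng.normal(size=3)
    # C = sx K in k-space: conjugation sends k -> -k, then sandwich with sx.
    lhs = sx @ H(-kx, -ky, m).conj() @ sx
    assert np.allclose(lhs, -H(kx, ky, m))
print("C H C^-1 = -H confirmed: the spectrum comes in (E, -E) pairs")
```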
Positive energy (E), negative potential (−V), V > E > 0. The time-reversal operator is defined as T̂ = iσ̂y K̂. Under the action of T̂, Ĥ transforms into the Hamiltonian with the sign of the confinement potential reversed, and the eigenstate Ψ of Ĥ transforms to Ψ′ = T̂Ψ. The probability distribution is the same for Ψ and Ψ′. We can also obtain the local expectation value of σz, which is the same as in the (−E, V) case (Equation (D.3)) and opposite to the (E, V) case (Equation (D.2)).

Now let us examine the spin orientation at the boundary in the plane-wave framework and then calculate the accumulated phase along a periodic orbit in the negative-potential billiard. The wavefunction in the free region can be written as illustrated in figure D1(b), and the transmitted wave in the potential region has the form of Equation (D.16), where the incident wavevector is −k1 = (k cos θ′0, k sin θ′0), the reflected wavevector is −k0 = (k cos θ′1, k sin θ′1), K = k sin θ′0, and q is defined as in Appendix B. Matching the waves at the boundary and using the specularity condition (B.3), we obtain the reflection and transmission coefficients; in particular, taking V → −∞ gives R = −1. From Appendix B we know that a π phase in the reflected wave reverses the spin orientation. So at the boundary the state is still an eigenfunction of Ŝy, but with eigenvalue −ℏ/2, regardless of the incident angle. According to Equation (B.11), the whole phase change caused by the boundary along the clockwise orientation, and likewise along the counterclockwise direction, then follows.

Negative energy (−E), negative potential (−V). Under the combination of the two operations above, the eigenstate Ψ of Ĥ changes into Ψ′, which is the eigenstate of Ĥ′ = vF σ̂·p̂ − V with negative energy −E. The probability is still the same as in the (E, V) case, i.e., P = ψ1*ψ1 + ψ2*ψ2, while the current orientation is opposite. Also, the local expectation value of σ̂z is the same as in the (E, V) case.
To confirm the spin behavior at the boundary and to obtain the global phase change of the spin along a complete periodic orbit, we again use the plane-wave model, with the wave in the free region written as illustrated in figure D1(b) and the transmitted wave in the potential region of the corresponding form, with the wavevectors defined as in the previous cases.

Mirror symmetry. Under the mirror reflection with respect to the y axis, the eigenstate Ψ of Ĥ transforms to Ψ′(x, y) = (ψ1*(−x, y), ψ2*(−x, y))ᵀ, where Ĥ′Ψ′ = EΨ′. The probability distribution is P = ψ1*(−x, y)ψ1(−x, y) + ψ2*(−x, y)ψ2(−x, y), which is again symmetric about the y axis. The local current can be written as u′ = 2vF[ℜ(ψ1(−x, y)ψ2*(−x, y)), ℑ(ψ1(−x, y)ψ2*(−x, y))] = 2vF[ℜ(ψ1*(−x, y)ψ2(−x, y)), −ℑ(ψ1*(−x, y)ψ2(−x, y))]. (D.29) Thus u′x(x, y) = ux(−x, y) and u′y(x, y) = −uy(−x, y). This indicates that the currents of the scars with the same energy in these two systems have the same winding orientation, as shown in figure D2.

The accumulated phase difference around a complete periodic orbit in these two systems with the same winding direction can be obtained as follows (see the schematic diagram in figure D2). First, for the odd orbits of the system with Hamiltonian Ĥ, as shown in figure D2(a), the accumulated phase is given in [35]; for the system Ĥ′, as illustrated in figure D2(b), the accumulated phase along the complete orbit follows from the angle relations θ′0 = −θ0 and θ2j−1 = π − θ2(M−j+1)−1 + 2nπ. Closure means Δo = Kπ (K an integer), from which we obtain the accumulated phase difference of the two orbits (D.32), showing that it is an integer multiple of 2π. For the even orbits, the accumulated phase for the system with Hamiltonian Ĥ takes the analogous form, while for the system Ĥ′, using θ2j−1 = −θ2(M−j+1) + π + 2nπ, the accumulated phase difference is still an integer multiple of 2π. However, if in figure D2(b) the current flow has the opposite orientation and the scar lies on an odd orbit, it acquires an additional π phase difference compared with the scar in figure D2(a); for even orbits this π phase difference does not appear.

Parity operation. Here we consider the parity operation with respect to the x axis. The parity operator is P̂ = R̂y σ̂x. Under its action, the Hamiltonian Ĥ transforms to Ĥ′, and the eigenstate Ψ of Ĥ transforms to Ψ′ = P̂(ψ1(x, y), ψ2(x, y))ᵀ = (ψ2*(x, −y), ψ1*(x, −y))ᵀ, (D.36) where Ĥ′Ψ′ = EΨ′. Note that after the parity operation, besides the mirror reflection with respect to the x axis, the confinement potential changes sign, indicating that parity symmetry is broken. Effectively, the parity operation is equivalent to the mirror reflection together with the time-reversal operation, which changes the sign of V. The probability distribution is P = ψ1*(x, −y)ψ1(x, −y) + ψ2*(x, −y)ψ2(x, −y), which is symmetric about the x axis. The local current can be written as u′ = 2vF[ℜ(ψ1(x, −y)ψ2*(x, −y)), ℑ(ψ1(x, −y)ψ2*(x, −y))] = 2vF[ℜ(ψ1*(x, −y)ψ2(x, −y)), −ℑ(ψ1*(x, −y)ψ2(x, −y))]. (D.37) Thus u′x(x, y) = ux(x, −y) and u′y(x, y) = −uy(x, −y). The schematic diagram of the scar current is shown in figure D3. For the heart-shaped billiard, V(x, y) = V(x, −y), so the Hamiltonian Ĥ′ is the same as that under the T operation (Equation (D.11)). This indicates that the T and parity operations have the same effect for the heart-shaped billiard, and the system is invariant under the combination of the P and T operations (P̂T̂ = R̂y σ̂x · iσ̂y K̂ = −R̂y σ̂z K̂).
For the Africa billiard, the scar current orientation of Ĥ′ is opposite to that of the original system. If we rotate the Africa billiard in figure D3(d) by π, we obtain the same geometric shape as the billiard in figure D2(b); the difference is the sign of the potential, and thus the current direction is opposite in these two cases. Note that we can also use the parity operator P̂′ = R̂x σ̂y, which gives the parity operation with respect to the y axis; the action of P̂′ equals the combination of P̂ and a rotation by π. The mirror operator is in fact the combination of the parity and time-reversal operators, i.e., M̂ = R̂x K̂ = R̂x σ̂y · σ̂y K̂. Furthermore, we have considered all the combinations of ±E and ±V, with or without M; the results are summarized in Table D1.

Table D1. The spin properties of different combinations of the three operations. E is the energy of the system, V is the potential, and M is the mirror reflection with respect to the y axis; I denotes the case without the M operation. R is the reflection coefficient in the plane-wave model with the convention of Equation (B.3); the helicity is defined as σ̂·p̂/|p| in the free billiard domain (V = 0), and a helicity of +1 (−1) means the direction of the wavevector is the same as (opposite to) the current orientation. Scar current denotes the current orientation in the billiard domain, with ± indicating the same (opposite) orientation as in the (E, V, I) case. Spin orientation is the direction of the spin at the boundary interface, with + representing the positive y direction.
"Physics"
] |
Enhanced Subkiloparsec-scale Star Formation: Results from a JWST Size Analysis of 341 Galaxies at 5 < z < 14
We present a comprehensive search and analysis of high-redshift galaxies in a suite of nine public JWST extragalactic fields taken in Cycle 1, covering a total effective search area of ∼358 arcmin². Through conservative (8σ) photometric selection, we identify 341 galaxies at 5 < z < 14, with 109 having spectroscopic redshift measurements from the literature, including recent JWST NIRSpec observations. Our regression analysis reveals that the rest-frame UV size-stellar mass relation follows R_eff ∝ M*^(0.19±0.03), similar to that of star-forming galaxies at z ∼ 3, but scaled down in size by ∼0.7 dex. We find a much slower rate for the average size evolution over the redshift range, R_eff ∝ (1 + z)^(−0.4±0.2), than derived in the literature. A fraction (∼13%) of our sample galaxies are only marginally resolved even in the NIRCam imaging (≲100 pc) and are located ≳1.5σ below the derived size-mass slope. These compact sources exhibit a high star formation surface density, Σ_SFR > 10 M_⊙ yr⁻¹ kpc⁻², a range in which only <0.01% of the local star-forming galaxy sample is found. For those with available NIRSpec data, no evidence of ongoing supermassive black hole accretion is observed. A potential explanation for the observed high [O III]-to-Hβ ratios could be high shock velocities, likely originating within intense star-forming regions characterized by high Σ_SFR. Lastly, we find that the rest-frame UV and optical sizes of our sample are comparable. Our results are consistent with these early galaxies building up their structures inside out and being yet to exhibit the strong color gradients seen at lower redshift.
Introduction
In a hierarchical Universe, dark matter starts collapsing at initial density peaks, giving rise to the underlying structure. Baryons then start accreting in the dark matter potential wells and forming stars. Depending on the initial conditions of the gas and dark matter halos, the appearance of the resulting system may differ dramatically (e.g., Mo et al. 1998; Bullock et al. 2001b).
Within this context, the size of galaxies is a fundamental and essential proxy for understanding galaxy formation and evolution. Galaxies occupy a relatively narrow portion of the size and stellar mass/luminosity plane. The distribution of galaxies within this so-called size-stellar mass/luminosity relation, the average size growth rate across cosmic time, and the distribution of other structural parameters, such as the Sérsic index and axis ratio, are key diagnostics of early galaxy formation.
In the local Universe, the large statistics enabled by the Sloan Digital Sky Survey have revealed a fundamental relation of galaxy structures and sizes with mass (e.g., Kauffmann et al. 2003). For example, it has been found that local early-type and late-type galaxies follow different slopes in the size-mass plane (e.g., Shen et al. 2003; Guo et al. 2009; Simard et al. 2011; Cappellari 2013). Such differences are believed to arise from a combination of initial conditions, evolutionary paths, and environmental influences.
In contrast, observations of galaxies at higher redshifts have revealed a variety of galaxy morphologies (Conselice et al. 2004; Wuyts et al. 2011; Guo et al. 2012; Szomoru et al. 2012). These observations indicate active physical processes within and between galaxies, establishing the structural sequences seen in the local Universe. Prior to JWST, the Hubble Space Telescope (HST) pushed the frontier of the fundamental galaxy size-mass relation to z ∼ 7 (Bruce et al. 2012; Mosleh et al. 2012; van der Wel et al. 2012, 2014; Morishita et al. 2014; Allen et al. 2017; Yang et al. 2021). Beyond that redshift, however, the investigation has been severely limited by spatial resolution (∼1 kpc), as well as by the limited number of infrared filters at >2 μm, which are critical for robustly inferring galaxy stellar masses. As such, the effort beyond that redshift has been largely limited to small samples of relatively luminous galaxies (Oesch et al. 2010; Ono et al. 2013; Holwerda et al. 2015) or to small volumes through strong gravitational lenses (Kawamata et al. 2015, 2018; Yang et al. 2022a).
These observable properties are believed to reflect the initial conditions of the gas and dark matter halos from which galaxies form. However, over time, such fundamental properties are susceptible to contamination through a sequence of stochastic, nonlinear physical processes, including mergers. Therefore, a detailed characterization of galaxy size becomes imperative, as this may offer insights into not only the physical mechanisms at work, but also their interplay with the interstellar medium (ISM; e.g., Marshall et al. 2022; Roper et al. 2022) and the nature of dark matter (e.g., Shen et al. 2024).
Early results from Cycle 1 have already demonstrated the remarkable capabilities of JWST, with its red sensitivity and resolution, revealing early galaxy morphologies down to scales of 100 pc throughout rest-frame UV to optical wavelengths (e.g., Finkelstein et al. 2022; Naidu et al. 2022; Yang et al. 2022b; Huertas-Company et al. 2023; Morishita & Stiavelli 2023; Robertson et al. 2023; Tacchella et al. 2023; Treu et al. 2023). Given the substantial number of observations completed in Cycle 1, this study aims to undertake a comprehensive analysis of galaxy sizes, offering the first large-scale systematic study of the galaxy size-mass relation at z > 5. To achieve this, we perform an analysis based on consistently reduced data from several publicly available extragalactic fields observed during Cycle 1, encompassing a total effective area of ∼358 arcmin². This extensive coverage enables us to construct a robust sample comprising 341 galaxies within the redshift range 5 < z < 14.
The paper is structured as follows: we present our data reduction in Section 2, followed by our photometric analyses in Section 3. We then characterize the structure of the identified galaxies and infer the distributions of their structural parameters in Section 4. We investigate the inferred physical properties and discuss the origin of early galaxies in comparison with lower-z galaxies in Section 5. We summarize our key conclusions in Section 6. Where relevant, we adopt the AB magnitude system (Oke & Gunn 1983; Fukugita et al. 1996), cosmological parameters of Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s⁻¹ Mpc⁻¹, and the Chabrier (2003) initial mass function. Distances are in proper units, unless otherwise stated.
Data
We base our analysis on nine public deep fields from JWST Cycle 1. For all fields, except for the GLASS/Ultradeep NIRSpec and NIRCam ObserVations before the Epoch of Reionization (UNCOVER) and JWST Advanced Deep Extragalactic Survey (JADES)-GOODS (GDS) fields, where the final mosaic images are made publicly available by the teams, we retrieve the raw-level images from the MAST archive and reduce them with the official JWST pipeline, with several customized steps as detailed below. We then apply our photometric pipeline, borgpipe (Morishita 2021), to all mosaic images to consistently extract sources and measure fluxes. Our final high-z source candidates are selected by applying the Lyman-break dropout technique (Steidel et al. 1998), supplemented by photometric redshift selection, as implemented by Morishita & Stiavelli (2023).
Uniform Data Reduction of NIRCam Images
In each field, we start with the raw (uncal.fits) images retrieved from MAST. We use Grizli (ver 1.8.3) to reduce the raw images and generate calibrated images (cal.fits). In this step, in addition to the official pipeline's DETECTOR1 step, Grizli includes additional processes for flagging artifacts, such as snowball halos, claws, and wisps.
We then apply bbpn to the calibrated images to subtract 1/f noise. The tool follows the procedure proposed by Schlawin et al. (2021). Briefly, bbpn first creates object masks; it then calculates the background level in each of the four detector segments (each corresponding to a detector amplifier) and subtracts the estimated background; it then runs through the detector in the vertical direction and again subtracts the background estimated in each row (which consists of 2048 pixels minus masked pixels); last, to compensate for any local oversubtraction of sky near, e.g., bright stars or large foreground galaxies, bbpn estimates a spatially varying background and subtracts it from the entire image.
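As a rough illustration of the amplifier- and row-wise steps described above, the following is a minimal sketch (not the actual bbpn implementation; the amplifier geometry, masking, and the final spatially varying background step are simplified away):

```python
import numpy as np

def subtract_1f(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Toy version of the per-amplifier pedestal and row-wise 1/f subtraction.

    image : (2048, 2048) detector frame
    mask  : boolean array, True where sources are masked out
    """
    out = image.astype(float)
    data = np.where(mask, np.nan, out)
    # Per-amplifier pedestal: four 512-column segments of the detector.
    for j in range(4):
        seg = slice(512 * j, 512 * (j + 1))
        out[:, seg] -= np.nanmedian(data[:, seg])
    # Row-by-row 1/f stripes, estimated from the unmasked pixels in each row.
    data = np.where(mask, np.nan, out)
    out -= np.nanmedian(data, axis=1, keepdims=True)
    return out
```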
After the 1/f-noise-subtraction step, we align the calibrated images using the tweakreg function of the JWST pipeline. For large-mosaic fields (i.e., those of the Public Release Imaging for Extragalactic Research, or PRIMER, and the Cosmic Evolution Early Release Science, or CEERS, surveys), we divide the images into subgroups beforehand and process each of them separately, to optimize computing speed and memory usage. In those fields, images are split into subgroups based on the distance of each image to the other images. We here set a maximum distance of 6′ for images to be in the same subgroup. We ensure that the images taken in the same visit (i.e., eight detector images for the blue channel and two detector images for the red channel of NIRCam) are grouped together, as their relative distance should remain consistent in the following alignment step.
The images in each subgroup are aligned on a filter-by-filter basis. We provide tweakreg with a set of images associated with the source catalogs, generated by running SExtractor (Bertin & Arnouts 1996) on each image. This enables us to eliminate potential artifacts (such as stellar spikes and saturated stars), which are often included by the automated algorithm in tweakreg, and to secure the alignment calculation by using only reliable sources. For all subgroups, each image is aligned, when available, to the WCS reference of a single, contiguous source catalog taken from large-field-of-view ground-based imaging (see below). This is to avoid alignment issues in some fields and/or subregions, e.g., misalignment in overlapping regions caused by insufficient reference stars. It is noted that tweakreg estimates a single alignment solution for images that are taken in the same visit and applies it coherently to those images, such that the distance between the imaging detectors remains the same for all visits.
Once the images are aligned to the global WCS reference, we drizzle and combine the images into a (sub)mosaic using the pipeline step IMAGE3. The pixel scale and pixel fraction for drizzling are set to 0″.0315 and 0.7 for all filters. For the fields that have multiple subgroups, we then create a single mosaic using the Python function reproject. Last, to eliminate any residual shifts, we once again apply tweakreg to the set of multiband mosaics, but based on the source catalog generated from the F444W mosaic. The images are resampled after the final alignment onto the same pixel grid using reproject.
JWST NIRCam Extragalactic Fields
To ensure our selection of high-redshift sources is as consistent as possible, we consider fields that have images in at least six filters (F115W, F150W, F200W, F277W, F356W, and F444W). Some fields have additional blue filters (F070W and F090W) and several medium bands (F300M, F335M, F410M, F430M, and F480M), which extend the search range toward lower redshift and improve photometric redshift estimates. When spectroscopic redshift measurements are available (from either the ground or recent JWST observations), we include and use them for photometric flux calibration (Section 3.1), as well as for sample selection (spec-z measurements supersede the dropout and photo-z selections).
PAR1199
The PAR1199 field (11:49:47.31, +22:29:32.1) was taken as part of a Cycle 1 Guaranteed Time Observations (GTO) program (PID 1199; Stiavelli et al. 2023) in 2023 May and June, attached as a coordinated parallel to the NIRSpec primary, and released immediately without a proprietary period. The NIRCam imaging consists of eight filters, including F090W and F410M, with a total science exposure of ∼16 hr with the MEDIUM8 readout mode. Due to scheduling constraints, the parallel field falls in a region where only shallow (1-2 orbits) HST Advanced Camera for Surveys F606W and F814W images are available. In addition, because of a bright star located near the edge of Module A, the sensitivity limit of the blue channel is slightly shallower in this module compared to the other module, due to increased scattered light and artifacts. The impact is found to be smaller in the red filters. Galactic reddening is E(B − V) = 0.024 mag (Schlegel et al. 1998). The images are aligned to bright sources in the Pan-STARRS Data Release 2 (DR2) catalog (Chambers et al. 2016). The effective area in the detection image is 10.1 arcmin².
J1235
The J1235 field (12:35:54.4631, +04:56:8.50) is one of the low-ecliptic-latitude fields that were observed as part of a commissioning program used for the NIRCam flat field (PID 1063; PI: Sunnquist). Among the fields available from this program, the J1235 field offers deep NIRCam coverage in 10 filters, including F070W, F090W, F300M, and F480M. Two separate visits were made in 2022 March-April and May. During the first visit, the telescope mirror alignment was not complete, and thus we exclude the data taken during that visit. With the second visit, the exposure time goes as deep as 5.8 hr in a single filter (with 50.9 hr in total for the entire field), making it one of the deepest NIRCam multiband fields. Galactic reddening is E(B − V) = 0.024 mag.
The total field coverage extends to ∼34 arcmin² in the effective detection area, about ∼3.7 NIRCam footprints; however, some filter images are shifted from others, making the effective area for dropout selection dependent on the target redshift. The NIRCam images were aligned to bright sources in the Pan-STARRS DR2 catalog.
North Ecliptic Pole Time-domain Field
The NIRCam imaging in the North Ecliptic Pole Time-domain field (17:22:47.896, +65:49:21.54; Jansen & Windhorst 2018) was taken as part of a GTO program (PID 2738; Windhorst et al. 2023). The imaging data used here were taken and immediately released after the first epoch of the visit, consisting of eight filters, including F090W and F410M. For the WCS alignment of our JWST data, we use a publicly available catalog that consists of sources observed with the Subaru/Hyper Suprime-Cam instrument (Miyazaki et al. 2018) as part of the Hawaii EROsita Ecliptic pole Survey (PIs: G. Hasinger & E. Hu). Galactic reddening is E(B − V) ∼ 0.028 mag. The effective area in the detection image is 16.9 arcmin².
PRIMER-UDS and COSMOS
Two large NIRCam mosaics are scheduled in a Cycle 1 General Observer program, PRIMER (PID 1837; PI: Dunlop). PRIMER observed two extragalactic fields, the CANDELS UKIDSS Ultra-Deep Survey (UDS) and COSMOS fields (Grogin et al. 2011; Koekemoer et al. 2011). The visits are configured with a consistent set of filters (eight filters, including F090W and F410M) and exposure times. In this study, we use the data taken during the first visit in both fields, which cover most of the planned areas. However, several images were identified as failed guide star acquisitions and were removed from our reduction.
The UDS mosaics are aligned to the SPLASH-SXDF catalog (Mehta et al. 2018). We include the spec-z measurements available in the same catalog. The COSMOS mosaics are aligned to the COSMOS2020 catalog (Weaver et al. 2023). We include spec-z measurements made with the Keck/DEIMOS instrument and published in Hasinger et al. (2018). Galactic reddening is E(B − V) ∼ 0.023 and 0.017 mag, respectively. The effective areas in the detection images are 128.2 arcmin² and 93.9 arcmin².
Next Generation Deep Field
The Next Generation Deep Extragalactic Exploratory Public (NGDEEP) survey (PID 2079; PI: Finkelstein; Bagley et al. 2023b) is a deep spectroscopic + imaging program using the NIRISS WFSS as the primary mode, with the NIRCam imaging attached as a coordinated parallel in the HUDF-Par2 field (Stiavelli 2005; Oesch et al. 2007; Illingworth 2009). In this study, we use the epoch 1 NIRCam imaging data, whereas the epoch 2 imaging is currently scheduled for early 2024.
The NIRCam field consists of six filters. Due to the use of the DEEP8 readout mode for many of the NIRCam exposures, a small portion of the final images is severely contaminated by cosmic-ray hits, which moderately reduces the effective field area and increases the contamination fraction in the high-z source selection. We include spec-z measurements made by the VANDELS collaboration (Pentericci et al. 2018). Galactic reddening is E(B − V) ∼ 0.007 mag. The effective area in the detection image is 10.2 arcmin².
CEERS
The CEERS survey (Bagley et al. 2023a; Finkelstein et al. 2023) is an Early Release Science program (PID 1345; PI: Finkelstein). The data set consists of eight NIRCam filters in the Extended Groth Strip field, previously studied with HST, including as part of CANDELS. The images are aligned to the WCS of the HST F606W image released by the CEERS team (HDR1), which was originally aligned to the Gaia Early Data Release 3 WCS.
We include spec-z measurements from multiple studies (Skelton et al. 2014; Momcheva et al. 2016; Roberts-Borsani et al. 2016; Larson et al. 2022) as well as recent JWST spectroscopy studies (Arrabal Haro et al. 2023a, 2023b; Fujimoto et al. 2023; Harikane et al. 2023, 2024; Kocevski et al. 2023; Larson et al. 2023; Nakajima et al. 2023; Sanders et al. 2023; Tang et al. 2023). The effective area in the detection image is 117.5 arcmin².

A2744

In this study, we utilize the public imaging data of the Abell 2744 field made available by the GLASS-JWST team (Merlin et al. 2022; Paris et al. 2023). Their reduction processes include several customized steps to eliminate detector artifacts. The data set consists of eight NIRCam filters, including F090W and F410M. We use the public lens model by Bergamini et al. (2023a) to correct for the lensing magnification of the background sources. We include spectroscopic measurements made available in the literature (Braglia et al. 2009; Owers et al. 2011; Schmidt et al. 2014; Richard et al. 2021; Bergamini et al. 2023b), including those from recent JWST observations (Roberts-Borsani et al. 2022b, 2022c; Jones et al. 2023; Mascia et al. 2023; Morishita et al. 2023). Galactic reddening is E(B − V) ∼ 0.013 mag. The effective area in the detection image is 48.4 arcmin². We hereafter refer to the field as A2744, for the sake of simplicity.
JADES-GDS
We include a deep field from JADES (Eisenstein et al. 2023; Robertson et al. 2023; Tacchella et al. 2023). As of the time of writing, NIRCam imaging data in one of the deep fields in the GOODS-South field (3:32:39.3, −27:46:59) are publicly available (Hainline et al. 2023; Rieke 2023). We retrieve the fully processed images and spectroscopic catalogs made available by the team. The data set consists of nine NIRCam filters, including F090W, F335M, and F410M. We include spectroscopic sources listed in the microshutter assembly (MSA) spectroscopic catalog (Bunker et al. 2023), as well as those from the VANDELS survey (Pentericci et al. 2018). Galactic reddening is E(B − V) ∼ 0.007 mag. The effective area in the detection image is 26.7 arcmin².
Photometry
The photometric catalog in each field is constructed following Morishita & Stiavelli (2023), using borgpipe (Morishita 2021). Briefly, we first prepare a detection image for each field by stacking the F277W, F356W, and F444W filters weighted by their respective rms maps. Source identification is performed on the detection image using SExtractor (Bertin & Arnouts 1996). Fluxes are then estimated for the detected sources within an r = 0″.16 aperture. For the aperture flux measurement, images are beforehand point-spread function (PSF)-matched to the PSF size of F444W. The image convolution kernels are generated by using pypher (Boucaud et al. 2016) on the PSF images generated by webbpsf (Perrin et al. 2014).
We follow the standard procedure used in the literature (Trenti et al. 2012a; Bradley et al. 2012; Calvi et al. 2016; Morishita et al. 2018; Morishita & Stiavelli 2023), including correction for Galactic extinction and an rms scaling that accounts for correlated noise in drizzled images. The limiting magnitudes of the images are measured using the same aperture size and reported in Table 1. The aperture fluxes of individual sources are then corrected by applying the correction factor C = f_auto,F444W / f_aper,F444W universally to all filters, where f_auto,F444W is the FLUX_AUTO of SExtractor measured for each source. With this approach, the colors remain those measured in apertures, whereas the total measurements derived in the following analyses (such as stellar mass and star formation rate) represent total fluxes (see also Section 3.3). Last, since several fields have a number of spectroscopic objects across a wide redshift range, we run eazypy, a Python wrapper of the photometric redshift code eazy (Brammer et al. 2008), to fine-tune the fluxes across all filters. A set of correction factors for all filters is derived in each field from the redshift fitting results for sources with spectroscopic redshifts. The derived correction factors are found to be <2% relative to the pivot filter, here set to F150W in all fields, requiring only minor correction.
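A minimal sketch of the aperture-to-total correction described above (the dictionary keys and flux values are illustrative):

```python
def total_fluxes(f_aper: dict, f_auto_f444w: float) -> dict:
    """Scale the aperture fluxes in all filters by C = f_auto(F444W)/f_aper(F444W),
    preserving the aperture colors while recovering total fluxes."""
    C = f_auto_f444w / f_aper["F444W"]
    return {band: C * flux for band, flux in f_aper.items()}

fluxes = {"F150W": 0.12, "F277W": 0.30, "F444W": 0.25}  # illustrative values
print(total_fluxes(fluxes, f_auto_f444w=0.31))
```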
Selection of High-redshift Galaxy Candidates
In what follows, we present our selection of high-redshift galaxies and galaxy candidates. To construct a robust photometric sample, we adopt a twofold selection method, which has been established in our previous studies (Morishita et al. 2018; Ishikawa et al. 2022; Roberts-Borsani et al. 2022a) and is described in the following two subsections.
Lyman-break Dropout Selection
Here we identify dropout sources in four redshift ranges. For those detected at a signal-to-noise ratio (S/N) > 4 in the detection band, we apply one of the following criteria.
F070W-dropouts (5.0 < z < 7.2): 2 .0 , 090, 070 < z 6; set = F115W-dropouts (9.7 < z < 13.0): 2 .0 , 150, 115, 090, 070 < z 10; set = where the S/N is measured in an r = 0 16 aperture.In each redshift range, we ensure secure selection by requiring an S/N > 8 detection in a filter that covers the rest-frame UV (∼1600 Å, but not including the blue side of the Lyman break).This stringent requirement ensures high completeness (>50%) and reliable size measurements (Appendix C).For the source to be selected as dropout, we require the nondetection of fluxes (S/N < 2) in all available dropout filters (listed as subscripts above).Furthermore, to secure the nondetection, we repeat the nondetection step with a smaller aperture, r = 0 08 (∼2.5 pixels).Note that a photometric selection is not attempted for fields where no dropout filter is available (but see Section 3.2.3).
A major difference from the conventional Lyman-break technique in the literature (e.g., Bouwens et al. 2023) is that our selection method does not cut the sample based on the color of the rest-frame UV, but only on the strength of the Lyman break. This choice is made to preserve as many potential sources as possible and make the selection comprehensive; for example, in a conventional color-cut selection, sources may be dismissed for a color that barely misses the selection window, even when that color is consistent with the window within the photometric uncertainty. This also means that the fraction of low-z interlopers misidentified as high-z sources is likely increased relative to the standard technique. Therefore, we further secure the sample in the following step.
Photometric Redshift Selection
We here secure the dropout sample by applying a photometric redshift selection to the dropout sources selected above. This is to minimize the fraction of low-z interlopers, such as galaxies with relatively old stellar populations (e.g., Oesch et al. 2016) or with dust extinction, and foreground dwarfs (e.g., Morishita et al. 2020). Such interlopers are often distinguished by their distinctive red color, readily discernible in our wavelength coverage with NIRCam.
To estimate photometric redshifts, we run eazy with the default magnitude prior (Figure 4 in Brammer et al. 2008). The fitting redshift range is set to 0 < z < 30, with a step size of Δ log(1 + z) = 0.01. By comparing photometric redshifts with spectroscopic ones, we find that the template library provided by Hainline et al. (2023) offers improved photometric redshift accuracy over the default (v1.3) template library, and thus in this work we adopt the former.
Following Morishita et al. (2018), we exclude sources that satisfy p(z < z_set) > 0.2, i.e., whose total redshift probability at z < z_set is greater than 20%, where z_set is set separately for each redshift selection range (see Section 3.2.1). To eliminate potential contamination by cool (T-/L-/M-type) stars (i.e., brown dwarfs), we follow Morishita (2021) and repeat the phot-z analysis with dwarf templates. A set of dwarf templates taken from the IRTF spectral library (Rayner et al. 2003) is provided to eazy and fit to the data with the redshift fixed to 0. The fitting result is inspected for every photometric source that is unresolved (see Section 4.4), and the source is removed if the χ²/ν value is smaller than the one from the galaxy template fitting above. We have excluded 60 sources in this step.
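A sketch of the integrated-probability cut, assuming the eazy redshift grid and posterior are available as arrays (the toy posterior is illustrative):

```python
import numpy as np

def passes_pz_cut(z_grid: np.ndarray, p_z: np.ndarray, z_set: float,
                  max_low_z_prob: float = 0.2) -> bool:
    """Reject a candidate if p(z < z_set) > 20% (posterior on a uniform grid)."""
    p = p_z / p_z.sum()                 # normalize the posterior
    return p[z_grid < z_set].sum() <= max_low_z_prob

z = np.linspace(0, 30, 3001)
pz = np.exp(-0.5 * ((z - 10.5) / 0.4) ** 2)   # toy posterior peaked at z ~ 10.5
print(passes_pz_cut(z, pz, z_set=10))          # True: p(z < 10) ~ 11% < 20%
```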
Last, we visually inspect all sources that pass the two selections above. In this step, we exclude any suspicious sources whose flux measurements may be significantly affected, including those with cosmic-ray residuals, those close to or overlapping with a brighter galaxy (caused by deblending), misidentified stellar spikes, and those near the detector edge where part of the source is truncated. We have discarded 342 sources.
Spectroscopic Sample
In addition to the photometric sample constructed above, we include sources with spectroscopic redshifts confirmed by previous spectroscopic observations, as described in Section 2.2. We add sources when their spectroscopic redshift is within the redshift range defined for each selection window and when they satisfy the detection criteria in the detection (S/N > 4) and rest-frame UV (S/N > 8) bands.
The addition of spec-z sources aids in particular the F070W-dropout sample, which requires F070W as a nondetection filter. All fields except J1235 lack coverage in this filter, leaving the sample size relatively small without spectroscopically confirmed objects. On the other hand, this could introduce a potential bias toward strong line emitters. However, in Section 4.2, we investigate this in our size-mass analysis and find that the addition of spec-z sources does not impact any of our final conclusions.
Size Measurements
Our primary analysis is based on the size measurement of galaxies. Following standard practice, we adopt the Sérsic profile (Sérsic 1963), I(R) = I_e exp{−b_n [(R/R_e)^(1/n) − 1]}, where the size is characterized by the effective radius, R_e, which encloses half of the total light of the galaxy, n is the Sérsic index, and b_n is an n-dependent normalization parameter. We model the 2D light profile of each galaxy using galfit (Peng et al. 2002, 2010). We follow Morishita et al. (2014, 2017) for the detailed procedures, with a few modifications to accommodate efficiency and accuracy. Briefly, for each galaxy, we first generate image cutouts (here set to 151 × 151 pixels in size, equivalent to 4″.8 × 4″.8) of the original (i.e., pre-PSF-matched) science map, rms map, and segmentation map. We fix the Sérsic index n to 1, a value that is found to offer a reasonable fit to high-z Lyman-break galaxies in the literature (Shibuya et al. 2015; Ono et al. 2023a; Yang et al. 2022a). As a test, we repeated the analysis with n as a free parameter and indeed found its distribution to be centered around ∼1. However, this led to an increased fraction (∼13%) of unsuccessful fits, where the solutions either did not converge or converged to unrealistic parameters (i.e., n < 0.2 or n > 10). To mitigate potential bias from this constraint, we reevaluate the uncertainty of each size measurement by adding the difference in R_e resulting from the two procedures (n fixed and n free) in quadrature. Consequently, those with n deviating significantly from 1 incur a larger uncertainty in the size measurement.
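For reference, b_n is fixed by requiring that R_e enclose half of the total light, which gives the standard relation Γ(2n) = 2γ(2n, b_n); a short solver for this relation (standard textbook math, not code from the paper):

```python
from scipy.optimize import brentq
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def b_n(n: float) -> float:
    """Solve P(2n, b) = 1/2 so that R_e encloses half of the total light."""
    return brentq(lambda b: gammainc(2 * n, b) - 0.5, 1e-6, 100)

print(b_n(1.0))   # ~1.678 for the exponential (n = 1) profile used here
print(b_n(4.0))   # ~7.669 for a de Vaucouleurs profile
```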
The PSF image generated by webbpsf for the corresponding filter is fed to galfit for convolution of the model profile at each iteration. The PSF image is generated for each field by retrieving the Optical Path Difference files of the observed date. As reported in several studies (Ono et al. 2023a; Ito et al. 2023; Tacchella et al. 2023), we find that the default output of webbpsf exhibits a narrower PSF profile compared to observed stars in our reduced images. This is potentially caused by, e.g., drizzling/resampling of the actual images, as well as pointing jitter, which could have a more significant effect in longer exposures. We therefore find an optimal PSF that describes the actual PSF size of our images by tweaking the jitter_sigma parameter of webbpsf. To do this, we visually identify unsaturated, bright stars that do not have any companion within <50 pixels. We then fit these stars with various PSF models using galfit and select the model that offers the minimum χ² value. While it would be ideal to repeat the analysis and determine the jitter value in each field, some fields do not have sufficient numbers of stars for this. We thus adopt, for each filter, the median value determined from the stars in all fields. Figure 1 shows an example radial flux profile of an actual star, compared with those of webbpsf models generated with different σ_jitter values. The final jitter_sigma value is set to ∼0″.022 for the blue channel filters and ∼0″.034 for the red channel.
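The jitter tuning can be reproduced along these lines. This is a sketch: webbpsf's options dictionary does accept 'jitter' and 'jitter_sigma', but the paper's galfit-based fit is abbreviated here to a simple chi-square comparison, and the filter and jitter grid are placeholders:

```python
import numpy as np
import webbpsf

def best_jitter(star_cutout: np.ndarray, filt: str = "F150W",
                grid=(0.0, 0.011, 0.022, 0.034)) -> float:
    """Pick the Gaussian jitter sigma (arcsec) whose PSF best matches a star."""
    nc = webbpsf.NIRCam()
    nc.filter = filt
    best = None
    for sig in grid:
        nc.options["jitter"] = "gaussian"
        nc.options["jitter_sigma"] = sig          # arcsec
        hdul = nc.calc_psf(fov_pixels=star_cutout.shape[0])
        psf = hdul["DET_SAMP"].data               # detector-sampled extension
        model = psf / psf.sum() * star_cutout.sum()
        chi2 = np.sum((star_cutout - model) ** 2)
        if best is None or chi2 < best[1]:
            best = (sig, chi2)
    return best[0]
```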
Neighboring sources that are close to, and relatively bright compared to, the main galaxy are fit simultaneously, while the rest of the sources in the stamp are masked using the segmentation map generated by SExtractor above. We include any neighboring source at distance d from the main galaxy when its flux is above a limiting flux, f_nei,lim, which scales with the flux of the main galaxy and decays with distance, with parameters γ = 0.8, α_nei = 2.0, and d_s = 60 pixels.
We run galfit on the target sources in order of their magnitudes. The fitting results for the primary galaxy are continuously stored, so that the parameters of already fitted galaxies are fixed to the previously determined values when they appear in the cutout of a fainter galaxy later in the fitting session.
For each galaxy, we repeat the fit in the two filters that correspond to the rest-frame UV and optical wavelengths. We then inspect all fitting results to validate the measurements. We have flagged 24 sources that show significant residuals, e.g., from multiple clumps within the defined segment region and/or clear features of interaction with nearby sources. These flagged sources are excluded from the statistical analyses in the following sections. The measured sizes are presented in Appendix B.
Physical Properties Inferred by Spectral Energy Distribution Analysis
We infer the spectral energy distribution (SED) of the individual galaxies through SED fitting using photometric data that cover 0.6-5 μm. We use the SED fitting code gsf (ver 1.8; Morishita et al. 2019), which allows flexible determination of the SED by adopting binned star formation histories, also known as a nonparametric approach. gsf determines an optimal combination of stellar and ISM templates from the template library. For this study, we generate templates of different ages, [10, 30, 100, 300, 1000, 3000] Myr, and metallicities in increments of 0.1, by using fsps (Conroy et al. 2009; Foreman-Mackey et al. 2014). A nebular component (emission lines and continuum), characterized by an ionization parameter log U ∈ [−3, −1], is also generated by fsps (see also Byler et al. 2017) and added to the template after multiplication by an amplitude parameter. The dust attenuation and metallicity of the stellar templates are treated as free parameters during the fit, whereas the metallicity of the nebular component is synchronized with that of the stellar component during the fitting process.
The posterior distribution of the parameters is sampled by using emcee for 10⁴ iterations, with the number of walkers set to 100. The final posterior is collected after excluding the first half of the realizations (known as burn-in). The resulting physical parameters (such as stellar mass, star formation rate, rest-frame UV slope β_UV, metallicity, dust attenuation A_V, and mass-weighted age) are quoted as the median of the posterior distribution, along with uncertainties measured from the 16th to 84th percentile range. The star formation rate of each galaxy is calculated from the rest-frame UV luminosity (∼1600 Å) using the posterior SED. The UV luminosity is corrected for dust attenuation using the β_UV slope, measured from the posterior SED, as in Smit et al. (2016). The attenuation-corrected UV luminosity is then converted to a star formation rate via the relation in Kennicutt (1998). Last, we correct both the stellar mass and star formation rate measurements to the total model magnitude derived by galfit, as in Morishita et al. (2014), by multiplying by the corresponding correction factor.
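The dust correction and SFR conversion can be written compactly. The sketch below assumes the Meurer-type attenuation law commonly used with Smit et al. (2016), A_UV = 4.43 + 1.99 β, and the Kennicutt (1998) calibration; the exact coefficients adopted in the paper may differ in detail:

```python
def sfr_uv(L_nu_1600: float, beta_uv: float) -> float:
    """SFR (Msun/yr) from rest-1600A luminosity L_nu [erg/s/Hz] and UV slope.

    Assumes A_UV = 4.43 + 1.99*beta (Meurer-type, floored at zero) and
    SFR = 1.4e-28 * L_nu (Kennicutt 1998; note this calibration assumes a
    Salpeter IMF, whereas the paper adopts Chabrier).
    """
    A_uv = max(0.0, 4.43 + 1.99 * beta_uv)
    L_corr = L_nu_1600 * 10 ** (0.4 * A_uv)   # undo the dust attenuation
    return 1.4e-28 * L_corr

print(f"{sfr_uv(1e28, beta_uv=-2.3):.2f} Msun/yr")
```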
Overview of the Final Sample
In Table 2, we report the number of final sources selected in each field and dropout selection window. From all fields, we identify 101 F070W-dropout sources (81 of which are spec-z-confirmed), 196 F090W-dropout sources (22), 42 F115W-dropout sources (5), and 2 F150W-dropout sources (1). As we present in the following sections, the sample spans a wide range of stellar mass (6.8 ≲ log M*/M_⊙ ≲ 10.4) and absolute UV magnitude (−23 ≲ M_UV/mag ≲ −17). The final sources are identical when a larger aperture (0″.32) is adopted for the source selection in Section 3.2.1.
In Figure 2, we show the distribution of the final sample in the stellar mass-star formation rate plane. Despite the wide mass (more than 3 orders of magnitude) and redshift (5 < z < 14) ranges, our galaxies are found to lie along a sequence, suggesting that most of our sample galaxies belong to a typical star-forming population. To compare the locations of our galaxies with those at lower redshift, we derive a linear regression with the slope fixed to 0.81, i.e., the value for z = 5.
Table 2
Numbers of the Final Sources in Fields
While the full details of the selected sources will be presented in a forthcoming paper, we highlight two F150W-dropout galaxies (z ≳ 13). One, JADESGDS-30934, is spectroscopically confirmed to be at z = 13.2 (Curtis-Lake et al. 2023). The other object, PRIMERCOS-38203, is a newly identified photometric candidate source at z ∼ 13.8 in the PRIMER-COSMOS field (Figure 3). Despite the relatively shallow depth of the field, the source exhibits clean nondetections in F090W, F115W, and F150W and high-S/N detections in F200W (S/N = 9.2) and in the IR detection band (S/N = 16.1). The observed UV magnitude (M_UV = −20 mag) and the derived stellar mass are both moderate and comparable to other galaxy candidates at these redshifts (e.g., Bouwens et al. 2023; Finkelstein et al. 2023; Morishita & Stiavelli 2023). While the F200W-dropout selection covers up to z ∼ 18.6, no source is identified at z > 14 in our selection. The number density estimates of the identified sources will be presented in a forthcoming paper. Another potentially interesting source is PRIMERUDS-121885 at z ∼ 10.9. This object has a relatively red F356W − F444W color (0.76 mag), which implies the presence of old populations. However, we caution that the source is identified in PRIMER-UDS, which is relatively shallow among our fields.
In addition, at the redshift of the source, the F444W flux could also be attributed to strong Hβ+[O III] emission, which would lead to a smaller stellar mass.
Last, we observe a mild enhancement of sources at z ≈ 7-7.6. This is partially attributed to our selection being more sensitive to strong Hβ+[O III] emitters, thanks to the two medium-band filters (F410M and F430M), which make the photometric redshift relatively better constrained. In addition, there is an overdensity of emitters identified in the same redshift range in one of the fields (K. Daikuhara et al. 2023, in preparation).
Size-Stellar Mass Distribution of Galaxies at 5 < z < 14
In Figure 4, we show the distribution of galaxies in the size-mass plane for the four redshift ranges. In each redshift panel, we show the size measured in the filter that corresponds to rest-frame ∼1600 Å, i.e., F115W for the F070W-dropout, F150W for the F090W-dropout, F200W for the F115W-dropout, and F277W for the F200W-dropout selection. We adopt the effective radius measured along the major axis, to mitigate the effect of inclination.
The measured sizes span a broad range, log R_e/kpc ∼ −2 to 0.3. Remarkably, at log M*/M_⊙ < 9, many galaxies are characterized by R_e < 0.3 kpc (<0″.07), which is below the resolution limit afforded by HST/WFC3-IR. Thus, the spatial resolution of NIRCam, ∼0.11-0.14 kpc, is essential to study typical star-forming galaxies at these redshifts (see also Section 4.4).
We investigate their distribution in the size-mass plane by linear regression analyses. Following Shen et al. (2003; also Ferguson et al. 2004; van der Wel et al. 2014), we parameterize size by a log-normal distribution as a function of stellar mass and redshift, with median log R_e = α log(M*/M_0) + B(z), where we describe the intercept as B(z) = β_z + α_z log(1 + z). We adopt a pivot mass M_0 = 10⁸ M_⊙. The model distribution prescribes the probability of observing log R_e for a galaxy with stellar mass M* with an intrinsic scatter of σ_logRe. By making the intercept B(z) a function of redshift, we are able to evaluate the size distribution with a consistent slope determined by the entire sample. Although the redshift evolution of the slope is of interest, previous studies have reported that the slope changes little over a much broader range of cosmic time than is explored in this study (e.g., van der Wel et al. 2014; Shibuya et al. 2015). As we demonstrate below, our findings also reveal no significant evolution in slope from these studies, supporting our use of a single slope across the entire redshift range.
The regression is determined by using emcee (Foreman-Mackey et al. 2014), with the number of walkers set to N = 50 and the number of iterations to 10⁴. We only include sources not flagged in the structural fitting analysis of Section 3.3. Measurement uncertainties in size, stellar mass, and redshift are used in the calculation of the likelihood. The derived regression is shown in Figure 4, along with the measured sizes in the four redshift panels. In Figure 5, we also show the redshift-corrected size, R_e (1 + z)^{−α_z}, as this represents the actual variable evaluated in the fitting. We report the determined parameters in Table 3.
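The regression model can be sketched as a log-normal likelihood sampled with emcee. This is a schematic with simplified error handling: parameter names follow the text, but the priors and the marginalization over measurement uncertainties are omitted, and the data are synthetic stand-ins:

```python
import numpy as np
import emcee

logM0 = 8.0  # pivot mass, log(M0/Msun)

def log_like(theta, logM, z, logRe):
    alpha, beta_z, alpha_z, sig = theta
    if sig <= 0:
        return -np.inf
    mu = alpha * (logM - logM0) + beta_z + alpha_z * np.log10(1 + z)
    return -0.5 * np.sum(((logRe - mu) / sig) ** 2 + np.log(2 * np.pi * sig**2))

# Toy data standing in for the measured sample
rng = np.random.default_rng(1)
logM = rng.uniform(7, 10, 200)
z = rng.uniform(5, 12, 200)
logRe = 0.2 * (logM - logM0) - 0.3 - 0.4 * np.log10(1 + z) + rng.normal(0, 0.3, 200)

sampler = emcee.EnsembleSampler(50, 4, log_like, args=(logM, z, logRe))
p0 = np.array([0.2, -0.3, -0.4, 0.3]) + 1e-3 * rng.normal(size=(50, 4))
sampler.run_mcmc(p0, 2000)
print(np.median(sampler.get_chain(discard=1000, flat=True), axis=0))
```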
The derived slope, ∼0.20 ± 0.03, is similar to what was found by Mosleh et al. (2011; ∼0.3) for Lyman-break galaxies at z ∼ 3.5 and by van der Wel et al. (2014; 0.18-0.25) for late-type galaxies at z ∼ 0.3-3, despite the latter being observed at different rest-frame wavelengths (but see the following and Section 5.3). The slope of the size-mass relation should reflect the interplay between the intrinsic compactness and the concentrated dust attenuation in the cores of massive star-forming galaxies at high redshift (Roper et al. 2022). In fact, negative slopes of the size-mass and size-luminosity relations have been found in cosmological simulations, e.g., the BlueTides (Marshall et al. 2022), IllustrisTNG (Popping et al. 2022; Costantin et al. 2023), FLARES (Roper et al. 2022), and THESAN (X. Shen et al. 2023, in preparation) simulations. Our finding of a positive slope is consistent with the idea that some massive galaxies in our sample may already possess a moderate amount of dust in their cores. We note that the derived intrinsic scatter, σ_logRe, is relatively large (∼0.3) compared to values at lower redshift in the literature (∼0.2). The scatter in the size distribution ought to reflect the initial conditions of the dark matter halos, such as the distribution of the spin parameter (e.g., Bullock et al. 2001a). Thus, the observed larger scatter would imply the presence of galaxies that experienced nonlinear processes, such as mergers and other dissipative processes, and deviate from the distribution predicted by a simple galactic disk formation model (e.g., Mo et al. 1998). In fact, we observe a number of galaxies that are barely resolved in the NIRCam images, some of which are located far below (>1.5σ) the derived regression (Section 4.4). We repeat the regression analysis excluding these unresolved sources, with the slope fixed to the one derived above. The derived scatter from this regression is ∼0.25, moderately reduced from that of the full sample (Table 3).
Our sample includes both spectroscopic and photometric sources. In particular, the majority (∼90%) of the F070W-dropout sample is spectroscopic, due to the lack of F070W filter coverage. To investigate the impact of the spectroscopic sources, we repeat the regression analysis using only photometric sources. We find a consistent result (α = 0.23 ± 0.04, β_z = −0.10 ± 0.32, and α_z = −0.56 ± 0.33) within the uncertainty range. The slight decrease in α_z (by 0.14, i.e., stronger z-evolution) can be attributed to the fact that the spectroscopic sources are smaller in size than the photometric sources and dominate the lowest-redshift bin. These differences in the regression result do not change our conclusions.
Redshift Evolution of Galaxy Sizes
In Figure 6, we show the redshift trend of the UV sizes derived through the regression analysis above. We note that the derived redshift evolution applies, by design, to the mass-corrected size. We find that the derived redshift evolution, R_e ∝ (1 + z)^{α_z} with α_z ∼ −0.4 ± 0.2, is much less significant than the one derived for the rest-frame UV (∼2100 Å) size of Lyman-break galaxies at 0 < z < 7, with α_z = −1.2 ± 0.1 (Mosleh et al. 2012; see also Oesch et al. 2010; Shibuya et al. 2015, who found similar values).
The primary cause of the discrepancy can be attributed to the redshift range of the samples. The aforementioned studies derived the redshift evolution by including lower-redshift galaxies (z ≳ 1 in Mosleh et al. 2012 and Shibuya et al. 2015; 2 < z < 8 in Oesch et al. 2010). Indeed, Curtis-Lake et al. (2016) found a much slower evolution of α_z for the Lyman-break galaxy (LBG) sample at 4 < z < 8 and argued that the discrepancy is partially attributed to a stronger evolution in UV sizes at z < 5 (see also Oesch et al. 2010, who found little size evolution from z ∼ 7 to 6).
To investigate this, we derive the redshift evolution by combining the median sizes presented in Figure 6 with those of Mosleh et al. (2012; for 1 < z < 7), and we indeed find a stronger evolution (α_z ∼ −1.2). This supports the idea that the redshift evolution of the average UV size is much slower at z > 5 than at lower redshifts. However, we note that the outcome is likely subject to a range of systematic factors, such as the weighting of each size measurement, the inclusion or exclusion of specific data points, and potential sample mismatches.
In Figure 6, we also show the evolution of the rest-frame optical sizes derived for low-mass ($\log M_*/M_\odot \sim 9$) late-type galaxies at 0 < z < 2 (van der Wel et al. 2014). Interestingly, the sizes extrapolated from their fit to our redshift range exhibit a significant offset, even after accounting for the mass difference between the two samples. Although the offset might be attributed to differences in the bands used for the size measurements (i.e., rest-frame 5000 Å in van der Wel et al. 2014), we find that our galaxies, on average, do not show a significant offset between the two bands. This reinforces our earlier interpretation that size evolution could be more pronounced at z ≲ 5, primarily driven by the buildup of massive bulges and/or outskirts, which would enhance color gradients, as seen at lower redshifts. We discuss this in more detail in Section 5.3.
Identifying Blue Compact Sources
In our size analysis above, we have identified a number of compact sources that are near the resolution limit of NIRCam. Previous studies using NIRCam data also reported a few such compact sources (Castellano et al. 2022; Naidu et al. 2022; Ono et al. 2023a; Yang et al. 2022a). We define as compact those sources that satisfy either of the following criteria: an apparent size below the NIRCam resolution limit (FWHM/2; Figure 4), or a measured size lying more than 1.5σ below the size inferred from the regression, i.e., $\Delta R < -1.5\,\sigma_{\log R_{\rm e}}$, where ΔR is the difference of the measured size from the size inferred by the linear regression for the stellar mass and redshift of the source. Note that the R_e used here is the apparent size.
With these criteria, we find 44 compact sources in our sample (∼13%), including 15 that are spectroscopically confirmed. We then exclude five of the classified compact sources, whose SEDs are better fitted with a brown dwarf template than with galaxy templates (Section 3.2). The classified compact sources are marked in Figure 4. Individual cutout images are presented in Appendix D.
In Figure 5, we show the distribution of the normalized size, $\Delta R/\sigma_{\log R_{\rm e}}$. The distribution clearly shows an excess at small sizes when compared to the distribution of the remaining noncompact sources. We note that the fraction of identified compact sources is well above the number expected at 1.5σ for a normal distribution (i.e., 6.7%). The compact sources follow a distribution similar to that of the extended sources in physical properties such as stellar mass, star formation rate (see also Figure 2), and β_UV, except for the star formation surface density (Section 5.1). The ISM properties of the compact sources are further investigated in Section 5.2.
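The 6.7% figure quoted above can be verified directly: for a normal distribution, the fraction of sources scattering more than 1.5σ below the relation is the Gaussian tail probability Φ(−1.5), as in the short check below.

```python
# Quick check of the 1.5-sigma expectation: the fraction below -1.5 sigma
# for a normal distribution, compared to the ~13% of compact sources found.
from scipy.stats import norm

expected = norm.cdf(-1.5)
print(f"Expected tail fraction below -1.5 sigma: {expected:.1%}")  # ~6.7%
```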
High Star Formation Efficiency Revealed by NIRCam Imaging
The star formation (rate) surface density, $\Sigma_{\rm SFR}$, is known to be an excellent proxy for the current mode of star formation. In Figure 7, we show the distribution of Σ_SFR for our sample. Overall, the median values of our galaxies are consistent with previous studies with HST (Oesch et al. 2010; Shibuya et al. 2015). However, we observe a large scatter along the vertical axis. In fact, we find a large fraction of galaxies with $\log \Sigma_{\rm SFR} \gtrsim 1.5$, comparable to local ULIRGs/starbursts (Scoville et al. 2000; Dopita et al. 2002; Kennicutt & Evans 2012) and high-z submillimeter galaxies (e.g., Daddi et al. 2010). Such high-Σ_SFR galaxies were not reported in previous HST studies at similar redshift (Oesch et al. 2010; Ono et al. 2013; Holwerda et al. 2015), except for a few cases in cluster lensing fields (e.g., Kawamata et al. 2015; Bouwens et al. 2022). Only ∼6% of the low-z sample is found at $\Sigma_{\rm SFR} > 1\,M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$; the majority of our sample is located above this value.
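As an illustration, the snippet below evaluates Σ_SFR under the commonly used convention $\Sigma_{\rm SFR} = {\rm SFR}/(2\pi R_{\rm e}^2)$ (half of the star formation within the effective radius). This convention, and the example numbers, are assumptions for illustration; the paper's exact definition is not reproduced here.

```python
# Illustrative Sigma_SFR computation, assuming the common half-light
# convention Sigma_SFR = SFR / (2 * pi * Re^2); numbers are placeholders.
import numpy as np

def sigma_sfr(sfr_msun_yr, re_kpc):
    return sfr_msun_yr / (2.0 * np.pi * re_kpc**2)

# e.g., SFR = 10 Msun/yr within Re = 0.3 kpc
print(f"{sigma_sfr(10.0, 0.3):.1f} Msun/yr/kpc^2")  # ~17.7, i.e., log ~ 1.25
```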
Obviously, Σ_SFR estimates are limited by the imaging resolution, and thus previous estimates for smaller galaxies identified in HST data often remained lower limits (e.g., Morishita 2021; Fujimoto et al. 2022; Ishikawa et al. 2022).
The observations in Figure 7 also demonstrate the potential of JWST NIRCam observations to probe star formation activity at low stellar masses ($M_* \lesssim 10^8\,M_\odot$). This has so far been limited to indirect probes, such as studies based on follow-ups of gamma-ray burst afterglows to quantify the fraction of detected host galaxies (e.g., Tanvir et al. 2012; Trenti et al. 2012b; McGuire et al. 2016). Those studies hinted at the existence of the population of sources now directly probed by NIRCam.
Implications for Star Formation Efficiency
The compact sources identified in Section 4.4 dominate the upper range, $\log \Sigma_{\rm SFR} \gtrsim 1.5$. We note that this does not stem from their star formation rates, which are, on average, comparable to those of more extended sources; rather, it is attributed to their compact nature. The physical characteristics of these compact sources are particularly intriguing, as negative feedback is likely more effective within confined systems. The observed high values thus imply efficient gas fueling within these compact sources, potentially facilitated by processes such as the loss of angular momentum through mergers (see also Section 5.2).
In addition, the mass dependence of Σ_SFR could hint at the efficiency of star formation, as the mass of the system is tightly linked to the regulation of star formation. In the right panel of Figure 7, we show the distribution of Σ_SFR as a function of stellar mass. Of particular interest is the high-mass range, $\gtrsim 10^9\,M_\odot$. In this mass range, the shocked gas remains hot (e.g., Birnboim & Dekel 2003; Dekel & Birnboim 2006; Stern et al. 2021), resulting in a reduced star formation efficiency, as seen at lower redshifts. The observed high values for our sample thus imply that our sources still sustain a high efficiency within this mass regime. We calculate the median mass-doubling time as $t_2 = M_*/{\rm SFR}$ and find ∼15–90 Myr for our sample. These considerably small mass-doubling times suggest that some galaxies in our sample could evolve to $\log M_*/M_\odot \sim 11$ by z ∼ 5, provided that the efficiency remains similar over the following ∼0.5–1 Gyr.
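A back-of-the-envelope check of this extrapolation, assuming purely exponential growth at a constant doubling time, is sketched below. The ∼0.7 Gyr elapsed time between z ∼ 6 and z ∼ 5 is a rough, cosmology-dependent figure, and constant efficiency over that span is an extreme assumption.

```python
# Exponential-growth check: M(t) = M0 * 2**(dt / t2), with t2 = M*/SFR.
# All numbers are illustrative, not the paper's measurements.
import numpy as np

def final_mass(m0, t2_myr, dt_myr):
    return m0 * 2.0 ** (dt_myr / t2_myr)

m0 = 1e9       # Msun, a representative starting mass
dt = 700.0     # ~0.7 Gyr between z ~ 6 and z ~ 5 (approximate)
for t2 in (15.0, 90.0):   # range of doubling times quoted above
    print(f"t2 = {t2:>4.0f} Myr -> log M = {np.log10(final_mass(m0, t2, dt)):.1f}")
```

Even the slower doubling time (90 Myr) reaches log M ∼ 11.3 over 0.7 Gyr, consistent with the statement above; the faster one overshoots wildly, underscoring that the high efficiency cannot be sustained indefinitely.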
Implications for the Growth of Galaxies
At these early cosmic times, comparing the radius–mass relationship of galaxies as a function of the age of the stellar population can reveal how galaxies grow (Figure 8). In hierarchical structure formation, galaxies grow through the merging of smaller-mass galaxies. The merger process results in the infall of stars, but also allows new star formation to occur through the shocking of the infalling gas. If galaxies grow predominantly by cold gas accretion (e.g., Dekel & Birnboim 2006), one would expect their size to evolve predominantly through secular evolution, driven by dynamical friction between the stars and the accreted gas; i.e., galaxies would become smaller as they built up their stellar mass. We find two clear trends in our sample of high-z galaxies:
1. Both radius and age increase with stellar mass. High-mass galaxies harbor older (∼several hundred Myr) stellar populations, compared to lower-mass galaxies, which harbor populations of age ∼10–100 Myr.
2. The scatter in radii at fixed mass is larger by 0.2 dex for low-mass galaxies than for high-mass galaxies.
Since there is no selection effect that prevents the selection of small, high-mass galaxies, the implication of these two observational trends is that galaxies form inside out. Small, low-mass galaxies have a large scatter in their sizes, likely due to a combination of gas accretion and merger events. As they form stars, the stars get scattered by three-body interactions, resulting in growth in both size and stellar mass. This is consistent both with the size evolution with cosmic time and with the evolution of radius and stellar age with mass. Although the sensitivity of the data at the present time is not adequate to constrain minor-merger rates at these redshifts, future deeper surveys will help develop this hypothesis further.
Comparison with Local Lyman-break Analogs
It is illustrative to compare the luminosity surface density of these galaxies with that of similar objects in the local Universe (Hoopes et al. 2007; Overzier et al. 2009; Shim & Chary 2013). Although objects with such high surface densities of star formation exist at z ∼ 0, less than 0.01% of galaxies in the Sloan sample have $\Sigma_{\rm SFR} > 1\,M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$. Even at higher redshifts, the fraction remains small (∼6% at 0.3 < z < 3.5; Skelton et al. 2014). In contrast, the fraction is ∼100% in our sample. The z ∼ 0 objects have a median mass of $10^{8.9}\,M_\odot$ and span the full range of metallicity, from subsolar to supersolar. They are not particularly biased toward young ages, as determined from the strength of the Balmer break. Morphologically, we see some evidence of mergers in the high-z sample, with ∼20% of galaxies having nearby companions (besides the 24 galaxies flagged in the size analysis). Taken together, the implication is that the high-surface-density objects are gas-rich galaxies, with the gas relatively concentrated toward the nucleus, indicating that the late stages of the merging process are driving the inflow of gas toward the nuclei of these galaxies.
Nature of the Compact Sources: NIRSpec Spectroscopic Analysis
We here investigate the nature of the compact sources identified in Section 4.4, specifically aiming to determine whether they exhibit any evidence of active galactic nuclei (AGNs). Traditionally, point-source-like morphologies of UV-bright sources have been considered evidence of AGNs. However, it has been shown that compactness does not always indicate the presence of AGNs (e.g., Morishita et al. 2020), and vice versa (see, e.g., Matthee et al. 2023, for faint AGNs with extended features). As such, a comprehensive approach is required to answer the question.
A subset of our sample galaxies have spectroscopic coverage by NIRSpec/MSA, taken as part of CEERS, GLASS-ERS, and JADES. Following Morishita et al. (2023), we reduce the MSA spectra using msaexp. For the extracted 1D spectrum of each source, we fit the line profiles of Hβ and the [O III] doublet with Gaussians, after subtracting the underlying continuum spectrum inferred by gsf in Section 3.4. The total flux of each line is estimated by integrating the flux over a wavelength range of 2 × FWHM derived from the Gaussian fit. In the following analysis, we include sources with measured line S/N above 3 for the [O III] λ5007 line (N = 51); when Hβ is not detected above the same significance, we quote the 3σ flux limit measured at the wavelength of the line over the same line width derived for [O III] λ5007.
In Figure 9, we show the sample in the mass-excitation (MEx) diagram, a conventional diagnostic separating AGNs from star-forming galaxies (Juneau et al. 2011). Among the 51 objects, we find eight sources located within the MEx AGN region defined by Juneau et al. (2014). One of the sources, CEERS7-18822, was previously reported to have a tentative (∼2.5σ) broad component in Hβ (Larson et al. 2023), agreeing with the classification here. CEERS7-18822 exhibits extended structures in F115W, which confirms our classification of this source as not compact.
None of our compact sources are classified as MEx-AGNs. While JADESGDS-18784 is located near the MEx border, with log [O III]/Hβ = 0.83 ± 0.34 (classified as a MEx-AGN within the uncertainty range), we do not confirm any features that immediately support the presence of an AGN (i.e., broadline features or high-ionization lines). None of the other compact sources show AGN signatures either, except for CEERS6-7832, which was reported to have a broad (∼2000 km s⁻¹) component in its Hα line (Kocevski et al. 2023; as CEERS_1670). However, this does not completely rule out the presence of AGNs. First, the discriminating power of the MEx diagram could be lower near the transition region. As shown by Juneau et al. (2014), for the range of log [O III]/Hβ line ratios probed here (≳0.4), there could be 10%–30% of AGNs present even inside the MEx star-forming region at $\log M_*/M_\odot \sim 10$. For example, the aforementioned CEERS_7832 is found in the MEx star-forming region. The accuracy of the AGN classification in the lower-mass range is not known, due to the lack of data in the previous study. We also note that the observed ratios for our sample (log [O III]/Hβ ≳ 0.6) are relatively high compared to those of star-forming galaxies of comparable mass in Juneau et al. (2014; see also Shapley et al. 2015). Such a high ratio can still be achieved by a stellar-only configuration, but requires, e.g., high electron density (Reddy et al. 2023).
We note that, given the evolving ISM properties at these redshifts, there is likely a shift of the MEx boundary toward a higher line ratio, as is the case from z ∼ 0 to z ∼ 2. As such, it is still possible that any of our MEx star-forming sources near the border may host an AGN, if not a broadline AGN. Furthermore, as has been demonstrated in the local Universe (Ho et al. 1997), it is possible to bury a low-luminosity Seyfert or LINER-type nucleus in a galaxy without detecting a broadline emission component. Vice versa, some lower-mass MEx-AGN sources here could turn out to be MEx star-forming, due to the potential redshift evolution of the boundary.
Nonetheless, the absence of clear AGN evidence implies that the observed high ratios are driven by the high surface density of star formation. When we fit the high values of [O III]/Hβ > 3 with radiative shock models (e.g., MAPPINGS III; Allen et al. 2008), we find that shock velocities with a median of 500 km s⁻¹ are required. Ratios of ∼10 can only be achieved with high shock velocities, which could be correlated with the high surface density of star formation (or strong AGN activity). Achieving the observed high star formation surface densities ($>100\,M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$) is considered challenging, due to the presence of negative feedback. Such a high density is expected only in an extreme environment, where an abundance of gas is available, and/or in (post-)merging systems, where gas can rapidly fall in. By comparing with a numerical simulation, Ono et al. (2023a) found that such a compact galaxy is in a temporary compact star-forming phase triggered by recent major mergers. Roper et al. (2022) found a large fraction of blue compact (∼100–300 pc) galaxies in the FLARES simulations and also found these galaxies to have little contribution from AGNs (see also Marshall et al. 2022; X. Shen 2023, in preparation). Future work will compare the physical parameters derived from the emission lines with the measured star formation rate density.
Comparison of Sizes in Rest-frame UV and Optical
In this study, we have analyzed the sizes of galaxies at a rest-frame wavelength of ∼1600 Å. In Section 4.2, we found that the average sizes of our galaxies are much smaller than those predicted from the extrapolation of van der Wel et al. (2014) at the corresponding redshift. A possible explanation for the discrepancy could be the rest-frame wavelength at which size was measured in the previous study (i.e., ∼5000 Å). We investigate this by repeating our size analysis in a filter that corresponds to ∼5000 Å, i.e., F356W for the F070W-dropouts and F444W for the F090W-dropouts. For the other two ranges at higher redshift, we use the reddest filter available (F444W), which corresponds to ∼4000 Å and ∼3000 Å, respectively. In Figure 10, we show the distribution of the size difference between these two wavelengths. We find that the difference in size between the two filters is negligible on average and thus conclude that the wavelength difference cannot explain the offset from the extrapolated sizes of van der Wel et al. (2014) observed in Figure 6. Instead, we speculate that the extrapolation of their relation may not hold in a redshift range far beyond their probed range of z < 3.
In fact, we saw in Section 4.2 that the size evolution is much slower (α_z = −0.4) than that found in previous studies of rest-frame UV sizes (∼ −1.2; Mosleh et al. 2012; Shibuya et al. 2015). While a comprehensive analysis covering a wide redshift range would be necessary, we attribute the observed conflict to the buildup of complex structures in galaxies, such as a central massive bulge and a young star-forming disk. At lower redshifts, we observe a more pronounced difference in sizes between different wavelengths, driven by radial color gradients within the systems (e.g., Vulcani et al. 2014). Similarly, Shibuya et al. (2015) found that at z = 1.2–2.1, the average UV size is smaller than the optical size by ∼20% in the low-mass regime (∼10⁹ M_⊙), while the trend is reversed at the high-mass end ($\gtrsim 10^{11}\,M_\odot$; see also Szomoru et al. 2012; van der Wel et al. 2014).
Figure 10. Distributions of the rest-frame UV-to-optical size ratio in three redshift bins. The first two panels compare the rest-frame UV to optical (∼5000 Å) sizes, whereas the last panel is at a shorter wavelength (∼3700 Å), due to the filter availability. Only galaxies resolved in both filters are included. The F150W-dropouts are not shown, as neither of their sizes is resolved in F444W. Each distribution is fit with a Gaussian (red line), with the mean position indicated by a dashed vertical line. The offset from the extrapolated rest-frame optical size of van der Wel et al. (2014; observed in Figure 6) is shown in each panel (dotted vertical line); the difference is measured to be 3.2σ, 2.7σ, and 3.7σ, respectively.
On the other hand, the resemblance of the UV and optical sizes of high-redshift galaxies is not unexpected, given the rapid assembly of their stellar content (see also Yang et al. 2022b; Treu et al. 2023). For our galaxies, we estimated the mass-doubling time to be ≲100 Myr in Section 5.1. This timescale is comparable to, or even smaller than, the star formation timescale to which the UV tracer is sensitive (Murphy et al. 2011; Flores Velázquez et al. 2021). Consequently, the stellar content detected in the UV predominates over the total content integrated over the full star formation history, which is probed by observations at optical wavelengths. The majority of our galaxies are actively building their structures inside out, developing rapidly enough to maintain coherence.
Last, we investigate the rest-frame optical sizes of the compact sources identified in this work. Of the 44, we find that 13 (∼29%) show resolved morphology in the optical filters. This is opposite to the general trend discussed above and implies that a fraction of these compact sources are likely experiencing a secondary burst, in a relatively small area, after the initial buildup of their stellar structure. In particular, this is relevant to the stochastic nature of star formation in early galaxies, which may have a non-negligible effect on the observed UV magnitude measurements and consequently on UV luminosity functions.
Summary
In this study, we have identified 341 galaxies at 5 < z < 14 in legacy fields of JWST and analyzed their rest-frame UV and optical sizes through JWST NIRCam images. The imaging data used here were collected from several public programs in Cycle 1, resulting in a combined effective area of 358 arcmin². With a robust (8σ) selection of 341 galaxies, made possible by the unprecedented area coverage provided by JWST, we have conducted the first systematic exploration of the size–mass relation of galaxies in the first billion years. The key findings are as follows:
1. The slope of the size–mass relation was derived via linear regression analyses and found to be α ∼ 0.2, similar to those of star-forming galaxies at z < 3, but scaled down in size by 0.4 dex. The derived intercept was found to evolve as ∝(1 + z)^{−0.4}, a much slower evolution than those found in previous studies of rest-frame UV sizes of galaxies at lower redshifts.
2. Using the results of our linear regression analysis, we identified 44 compact sources that are only marginally resolved in NIRCam imaging. These compact sources account for ∼13% of the full sample presented here.
3. We found that our sources overall have a high star formation surface density (≳1 M_⊙ yr⁻¹ kpc⁻²), with the newly identified compact sources reaching as high as ∼300 M_⊙ yr⁻¹ kpc⁻². We demonstrated that the absence of a clear declining trend indicates that the star formation efficiency may remain high even in the high-mass range; if the observed high efficiency remains similar over the following ∼0.5–1 Gyr, some of our sources would evolve to ∼10¹¹ M_⊙ by z ∼ 5.
4. For 51 sources with available NIRSpec/MSA data, we investigated their ISM via the [O III]-to-Hβ line ratio. None of the compact sources are confidently classified as AGNs in the MEx diagram; however, the nature of the compact sources remains to be conclusively elucidated in a future study. A potential explanation for the observed high line ratios is high shock velocities, driven by intense star formation characterized by high Σ_SFR.
5. We found that the sizes in the rest-frame UV and optical wavelengths are on average consistent. We attributed this to the short mass-doubling time (i.e., 1/sSFR < 100 Myr) of our sources, implying that they are actively building their structure coherently and are thus dominated by young stars.
With the unprecedented resolution and sensitivity provided by JWST, this work has demonstrated a comprehensive size analysis of galaxies in the first billion years. Of particular interest are the newly discovered compact populations. Specifically, the physical mechanisms that maintain the observed high star formation rates in such compact systems, which are likely under the strong influence of negative feedback, remain an open question. Recent JWST observations have identified a number of faint AGNs (Matthee et al. 2023; Onoue et al. 2023) and complexes of dusty AGNs + young stellar populations (Akins et al. 2023; Furtak et al. 2023; Labbe et al. 2023), suggesting a greater prevalence of AGNs in high-z galaxies than previously thought. These emerging findings raise the caveat that the star formation rates estimated from our SED analysis may not accurately represent the intrinsic values, even though our spectroscopic analysis of the subsample did not find any immediate signatures of AGNs. Future spectroscopic follow-ups of the compact sources will provide further insights into their nature and their potential impacts in a broad cosmological context. While NGDEEP has a depth comparable to JADESGDS, it covers a relatively small area (a single NIRCam pointing). In addition, the NGDEEP field does not allow photometric selections at z < 9.7, the redshift range where the majority of the compact sources are identified.
Figure 1 .
Figure 1. Radial flux profiles of webbpsf with various σ_jitter values (dashed lines), generated for the JADESGDS F115W image; the adopted profile (σ_jitter = 0.022) is color-coded in cyan. The profile of an example bright star taken from the image is shown for comparison (red solid line). The inset shows the distribution of χ²/ν for the same star fitted by webbpsf for different σ_jitter values.
where $m_{\rm galfit}$ is the best-fit total magnitude derived by galfit and $m_{\rm total}$ is the total magnitude derived in Section 3.1, both measured in the rest-frame UV filter of interest for the target redshift range. Sources flagged in the galfit results are assigned $C_{\rm galfit} = 1$. The inferred physical properties are presented in Appendix B.
Figure 2 .
Figure 2. Sample distribution in the star formation rate–stellar mass plane (circles). Those with spectroscopic redshifts (blue squares) and those flagged as compact (magenta; Section 4.4) are marked accordingly. Those flagged in the galfit results (Section 3.3) are shown by open symbols. The SFMS slope at z = 5 (black dashed lines; Speagle et al. 2014), the linear regression derived for our entire sample, $\log {\rm SFR} = 0.81\,\log(M_*/10^{8}\,M_\odot) + 0.31$ (with the slope fixed to that of Speagle et al. 2014 at z = 5), and the derived range of the intrinsic scatter (0.37 dex; shaded regions) are shown.
Figure 3 .
Figure 3. Examples of galfit fitting results as 2D images (left: original; middle: model; right: residual). Top: PRIMERCOS-38203, one of the highest-redshift sources in our sample. Middle: JADESGDS-30934 (spectroscopically confirmed at z = 13.2; Curtis-Lake et al. 2023), an example of those classified as compact (Section 4.4). Bottom: an F090W-dropout source that is flagged in the galfit fitting analysis. Flagged objects are not included in our statistical discussion.
Figure 4 .
Figure 4. Distribution of our sample galaxies at 5 < z < 14 in the stellar mass–size plane (gray circles), in four redshift panels. The linear slope derived by the regression analysis for the full sample (orange solid lines, with the shaded regions covering the $1.5\,\sigma_{\log R_{\rm e}}$ range around the median slope; Table 3) is shown. For comparison, the slope derived for late-type galaxies at z ∼ 2.7 (van der Wel et al. 2014; black dashed lines) is shown. Those classified as compact (Section 4.4) are shown by magenta symbols. Those with spectroscopic redshifts are marked by open blue squares. The two horizontal hatched regions show the physical size of FWHM/2 for NIRCam (dark gray) and HST/WFC3-IR F160W (light gray) at the median redshift of the sources in each redshift window.
Figure 5 .
Figure 5. The same as Figure 4, but for the redshift-corrected size, $R_{\rm eff,maj}\,(1+z)^{-\alpha_z}$.
Figure 6 .
Figure 6. Redshift evolution of the rest-frame UV sizes of our galaxies (gray circles). The sizes of the individual galaxies shown here are corrected to the pivot mass (Section 4.2). The redshift evolution of the intercept, $\propto(1+z)^{\alpha_z}$, is shown (orange dashed line), along with those derived in two previous studies (the green line is for Mosleh et al. 2012 and the black line is for van der Wel et al. 2014). We note that the size trends of both previous studies are also scaled to the same pivot mass. Median sizes (large symbols) are derived in each redshift window. Those identified as compact (Section 4.4) are shown by magenta symbols.
Figure 7 .
Figure 7. Left: star formation surface density (Σ_SFR) of our final sample (gray circles) as a function of redshift. The median values in each dropout redshift window are shown by larger symbols (squares). The upper limits on Σ_SFR set by the spatial resolution limit, assuming a star formation rate of 10 M_⊙ yr⁻¹, are shown for the corresponding JWST NIRCam filters (solid curved lines) and HST WFC3-IR F160W (black dotted line). The fit derived for LBGs in HST data (Shibuya et al. 2015; cyan dashed line) is shown. Right: Σ_SFR as a function of stellar mass. In the background, we show the density contours for the distribution of galaxies at 0.3 < z < 3.5, taken from the 3DHST catalog (Skelton et al. 2014; van der Wel et al. 2014). Those outside the lowest contour level are shown individually (crosses). Only ∼6% of the low-z sample is found at $\Sigma_{\rm SFR} > 1\,M_\odot\,{\rm yr}^{-1}\,{\rm kpc}^{-2}$; the majority of our sample is located above this value.
Figure 8 .
Figure 8. The same as Figure 5, but with the symbols color-coded by the age ($\log t$) derived in our SED analysis. The median age in each grid cell is shown by the larger symbols (squares), with the symbol size scaled by the number of galaxies in the cell. In the inset, the scatter in size measured in each mass bin is shown. The error of each scatter measurement is estimated by bootstrapping. Clearly, the radius and the age of the stellar population increase, while the scatter in size decreases, with increasing stellar mass.
Figure 9 .
Figure 9. Left: distribution of the 51 galaxies at 5 < z < 9.5 that have robust [O III] λ5007/Hβ measurements from NIRSpec MSA, in the MEx diagram (circles). For those with Hβ detected at S/N < 3, lower limits on the ratio are shown (triangles). Compact galaxies (as defined in Section 4.4; magenta) and two sources from Kocevski et al. (2023) and Larson et al. (2023; cyan stars) are marked separately. The solid lines indicate the regions dominated by star-forming galaxies (bottom left) and AGNs (top right) at z ∼ 2 (Juneau et al. 2014). Right: distribution of the normalized size ($\Delta \log R_{\rm e}/\sigma_{\log R_{\rm e}}$, the indicator used to define compactness in Section 4.4) as a function of the distance to the MEx border line.
Note. Limiting magnitudes measured in empty regions of the image with r = 0.″16 apertures. a Mosaic images have been created using the first-epoch data available as of 2023 June.
Note. "-d" represents dropout.The numbers of spectroscopically confirmed sources (see Section 3.2.3)are shown in brackets.
Table 3
Size–Mass Relations of Galaxies at 5 < z < 14: Linear Regression Best-fit Coefficients
"Physics"
] |
Surface Magneto-Optical Kerr Effect Study of Magnetization Reversal in Epitaxial Fe(100) Thin Films
INTRODUCTION
During the last years, the magneto-optical properties of ferromagnetic thin films and multilayers have attracted much attention due to their applications in a variety of technological devices [1-3]: magneto-optical sensors, high-efficiency magnetic recording heads, magnetic random access memories, etc. One of the most interesting phenomena occurring at the sub-nanometric level is the surface magneto-optic Kerr effect (SMOKE). Originally discovered by Kerr in 1876 [4], this phenomenon originates from the circular birefringence induced by a magnetic field when linearly polarized light interacts with a reflecting surface. In a ferromagnet (FM), the intensity of the magneto-optical (MO) reflected signal is proportional to the sample magnetization. In the case of a thin film, the SMOKE technique can be used as a standard probe in the study of hysteresis curves [5], magnetic domains, and magnetic anisotropies [6,7], and in applications in magnetic recording and high-density storage [8]. Although the magnetization reversal in FM thin films has been widely studied, it is still an object of interest, since it is the key to understanding the micromagnetic processes in matter, while also permitting fundamental studies in magnetism. Besides this, an FM material in contact with a semiconductor gives rise to hybrid structures offering possibilities for a range of new applications.
Experimentally, magneto-optic techniques can be used in three basic configurations, depending on the orientation of the applied field with respect to the plane of incidence: longitudinal, polar, and transversal. In the longitudinal configuration (LMOKE), the magnetic field is applied parallel to the plane of incidence and in the plane of the sample. Here, the reflected signal is directly proportional to the component of the magnetization along the magnetic field. In the polar Kerr effect, or PMOKE, the field is parallel to the plane of incidence and perpendicular to the plane of the film, and the signal is proportional to the out-of-plane magnetization. In the transversal configuration (TMOKE), the magnetic field is perpendicular to the plane of incidence and in the plane of the film. In this case, the dependence of the MO signal on the magnetization is more complicated, since it is a nonlinear function of both in-plane magnetization components. In practice, MO methods are very simple and easy to implement in comparison with other magnetometric techniques, such as VSM or SQUID, and in contrast with these, they allow us to measure all magnetization components independently.
EXPERIMENTAL PROCEDURE
Fe films with thicknesses in the range from 70 Å to 250 Å were grown by dc magnetron sputtering onto commercial electronic-grade MgO(001) wafers, with cleavage edge direction [110]. The magnetrons provide a continuous magnetic field of the order of 10 Oe throughout the growth process. Before deposition, the substrates were cleaned in ultrasound baths of acetone and ethanol for 10 min, and then dried in a nitrogen gas flow. Neither a buffer layer on the substrate nor a cover layer on the magnetic film was used. The base pressure of the system prior to deposition was 2.0 × 10⁻⁷ Torr.
The films were deposited in a 3.4 × 10⁻³ Torr argon atmosphere in the sputter-up configuration, with the substrate at a distance of 9 cm from the target. The purities of the Ar gas and the Fe target were 99.999% and 99.9%, respectively. The substrate temperature was maintained at 130 °C, with a supplied electric power of 20 W. The film thickness was controlled using a calibrated quartz crystal, with a deposition rate of the order of ∼1 Å/s. The crystallographic quality of the Fe films was checked by X-ray diffraction (XRD) in a Siemens D5000 diffractometer, with Cu Kα radiation in the Bragg–Brentano configuration.
The hysteresis curves of the Fe(t_Fe)/MgO(001) films were measured by SMOKE in the longitudinal configuration. In this configuration, the MO signal is directly proportional to the component of the magnetization parallel to the applied field. The measurements were performed at room temperature with magnetic fields up to 1 kOe. The samples were irradiated with He-Ne (632.8 nm) radiation linearly polarized at 45° with respect to the plane of incidence (θ_p = 45°), and modulated at an angle θ_m = 0° with a photoelastic modulator at a frequency ω = 50 kHz. The angle of incidence was fixed at 60°, where the MO absorption of Fe is maximum. Before detection, the signal passes through an analyzer in the 2ω mode in order to select the corresponding magnetization component. The hysteresis loops were taken at several orientations of the applied magnetic field with the help of a goniometer, which allowed us to rotate the sample in its plane.
RESULTS AND DISCUSSION
Figure 1 shows a high-angle θ-2θ X-ray diffraction pattern for a 100 Å thick Fe film. Besides the peak from the substrate, the only observed Bragg peak from the Fe film is at 2θ = 65.38°, which is associated with a reflection from the Fe(200) plane. The measured lattice parameter was a₀ = 2.852 Å. Compared to the bulk Fe lattice parameter of 2.866 Å, the compressive strain is −0.49%. These observations indicate the high crystalline quality of the films, with a well-defined growth orientation in the (100) plane. The in-plane epitaxial relation was inferred by comparing the substrate cleavage direction [110] with the magnetic easy and hard axes obtained by MOKE, and assuming that the Fe film behaves as in the bulk. The following epitaxial relations were obtained: [100]Fe || [110]MgO and [010]Fe || [1̄10]MgO. In the case when the magnetic field H || [100], the magnetization curve is asymmetric and characterized by two irreversible transition fields, with a remanence M_R/M < 1.0. In films with thicknesses t_Fe > 100 Å, the hysteresis loops show a symmetric profile at all field orientations, with equivalent easy and hard axes. To understand the magnetization reversal of the iron thin films, we use a model in which the magnetization rotates coherently in a magnetic free energy E(θ, ϕ) containing Zeeman, cubic magnetocrystalline, in-plane uniaxial, and effective demagnetizing contributions, where θ and ϕ are the polar and azimuthal angles of the magnetization vector, respectively; M is the saturation magnetization; K₁ is the first-order cubic magnetocrystalline anisotropy constant; K_u is the uniaxial in-plane anisotropy constant; ϕ_H is the azimuthal angle of the applied field with respect to the [100] direction; M_eff is the effective magnetization defined by 4πM_eff = 4πM − 2K_N/M; and K_N is the perpendicular anisotropy constant. The origin of the uniaxial anisotropy in these films is the compressive strain observed in the XRD spectra. The theoretical hysteresis loops are calculated by taking the component of the magnetization parallel to the applied field, M_H = M cos(ϕ₀ − ϕ_H), where ϕ₀ is the equilibrium position of the magnetization in the film plane, calculated numerically from the minimum of the magnetic free energy for each field orientation. The best match between the experimental curves and the model is obtained using the parameters listed in Table 1. The agreement between theory and experiment is reasonable for samples with thicknesses t_Fe > 100 Å, where only a single transition is observed. However, for thinner films the theoretical curve departs from the experimental hysteresis loop at H || [100], where two irreversible transitions are observed. This indicates that in these films the magnetization rotation is non-coherent. Other studies of epitaxial Fe/GaAs films suggest that the switching of the magnetization is determined by the rotation of Néel and Bloch domain walls, under the action of the uniaxial and magnetocrystalline anisotropies [9]. The variation of the anisotropy fields with respect to the film thickness is shown in Figure 4. The solid lines in Figures 4a and 4b are functions following a 1/t_Fe dependence, as expected for interface and surface effects. The inset in Figure 4b shows the ratio of the in-plane uniaxial and magnetocrystalline constants, K_u/K₁.
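As an illustration of this coherent-rotation picture, the sketch below numerically minimizes a standard in-plane Stoner–Wohlfarth-type free energy, E(ϕ) = −MH cos(ϕ − ϕ_H) + (K₁/4) sin²(2ϕ) + K_u sin²ϕ, which is the θ = 90° limit in which the demagnetizing term drops out. Whether this matches the paper's full expression term for term is an assumption, and the material constants are placeholders rather than the fitted values of Table 1.

```python
# Sketch of coherent-rotation hysteresis loops via local energy
# minimization; the free-energy form and all constants are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

M, K1, Ku = 1700.0, 4.5e5, 5.0e4   # emu/cm^3 and erg/cm^3 (placeholders)
phi_H = 0.0                         # field applied along [100]

def energy(phi, H):
    return (-M * H * np.cos(phi - phi_H)
            + 0.25 * K1 * np.sin(2 * phi) ** 2
            + Ku * np.sin(phi) ** 2)

def loop(H_values):
    phi = phi_H                     # start saturated along the field
    mh = []
    for H in H_values:
        # minimizing in a window around the previous state keeps the
        # magnetization in its metastable well, producing hysteresis
        res = minimize_scalar(energy, args=(H,), method="bounded",
                              bounds=(phi - 0.5 * np.pi, phi + 0.5 * np.pi))
        phi = res.x
        mh.append(np.cos(phi - phi_H))
    return np.array(mh)

H = np.linspace(1000, -1000, 401)   # Oe, descending branch
m_down = loop(H)
m_up = loop(H[::-1])                # ascending branch
```

In this schematic picture, the intermediate cubic easy axis near ϕ = ±π/2 can trap the magnetization when K_u is not too large, which mimics the two-step (two-transition) switching discussed above for H || [100].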
The ratio K_u/K₁ allows us to quantify the effect of the magnetic anisotropies on the magnetization reversal process in the Fe thin films. As the film thickness is decreased below a critical thickness of about 100 Å, the value of K_u/K₁ increases rapidly from ∼0.025 up to ∼0.26. This establishes a competition between the uniaxial and cubic anisotropies, and as a result a non-coherent rotation of the magnetization occurs, due to a transition from Néel- to Bloch-domain-wall motion, as explained in [9,10]. This behavior is of interest since it provides a tunable switching property that can be of importance in micromagnetic device applications. On the other hand, in films thicker than 100 Å, where the magnetocrystalline anisotropy overwhelms the uniaxial anisotropy, the magnetization switching is coherent and mainly determined by the rotation of Bloch domain walls.
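The 1/t_Fe scaling shown in Figure 4 can be checked with a simple linear fit in 1/t, K(t) = K_v + K_s/t, where K_v and K_s would represent volume and surface/interface contributions. The data values below are placeholders; the actual fitted constants are those of Figure 4 and Table 1.

```python
# Sketch of the 1/t_Fe trend check: linear least squares in 1/t.
# The thicknesses match the growth range, but the K values are
# placeholders, not the measured anisotropy constants.
import numpy as np

t = np.array([70.0, 100.0, 150.0, 250.0])    # Fe thickness (Angstrom)
K = np.array([1.3e5, 9.0e4, 6.5e4, 4.5e4])   # placeholder anisotropy data

A = np.vstack([np.ones_like(t), 1.0 / t]).T  # model K = K_v + K_s / t
(Kv, Ks), *_ = np.linalg.lstsq(A, K, rcond=None)
print(f"volume term K_v = {Kv:.3g}, surface term K_s = {Ks:.3g}")
```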
CONCLUSION
We have studied the magnetization reversal in single-crystalline Fe/MgO(001) thin films using in-plane SMOKE magnetometry. A critical thickness of about 100 Å separating two switching regimes is observed. In films with thicknesses t_Fe < 100 Å, the cubic anisotropy is superposed on the uniaxial anisotropy. Due to this anisotropy superposition, the reversal process is determined by a non-coherent rotation of the magnetization, with Néel-to-Bloch domain-wall motion. For t_Fe > 100 Å, the magnetization reversal is coherent and controlled by the cubic magnetocrystalline anisotropy, with Bloch domain-wall motion. The anisotropy constants of the films are determined from the SMOKE loops, using a phenomenological model of coherent rotation of the magnetization. All anisotropies follow a 1/t_Fe trend.
Figure 4 .
Figure 4. Thickness dependence of the anisotropy constants of the Fe/MgO(001) thin films. The solid lines represent 1/t_Fe functions. The inset in (b) shows the ratio K_u/K₁; the dashed line is a guide for the eye.
Table 1 .
Anisotropy constants obtained from the SMOKE loops.
"Physics",
"Materials Science"
] |
Finite Simple Graphs and Their Associated Graph Lattices
In his 2005 dissertation, Antoine Vella explored combinatorial aspects of finite graphs utilizing a topological space whose open sets are intimately tied to the structure of the graph. In this paper, we go a step further and examine some aspects of the open set lattices induced by these topological spaces. In particular, we will characterize all lattices isomorphic to the open set lattices of finite simple graphs endowed with this topology, explore the structure of these lattices, and show that these lattices contain the information necessary to reconstruct the graph and its complement in several ways.
Introduction
The content of this paper developed from Brian Frazier's initial exploration in [5]. Antoine Vella [12] explored the so-called classical topology on a finite graph G = (G, E); the open sets of this topology are those subsets of G ∪ E which contain only edges or are unions of edge-balls associated with vertices. The goal of this paper will be to establish a novel relationship between graphs and lattices using this topology. We utilize tools from graph theory, topology, and order theory and will assume the reader is for the most part conversant in these topics. There are many excellent texts available for those wishing greater information; we recommend Diestel [3] for graph theory and Munkres [8] for topology. Excellent general resources for order theory include Gratzer [7] and Davey and Priestley [2].
Before embarking on this project, it may be helpful to provide a summary of its main conclusions. The following paragraphs accomplish this, leaving precise definitions, technical details, context, and relevant concepts to be addressed in subsequent sections. The central idea is relatively straightforward: Given any finite graph G = (G, E), the lattice Ω(G) of open sets for the classical topology, partially ordered by subset inclusion, contains a wealth of information about the graph -encoded in an order-theoretic format.
The lattice Ω(G), when viewed in its entirety, is an intimidating structure; however, its structural complexity belies the fact that the lattice is built from very simple substructures. For example, the lattice Ω(G) is order-generated by its subposet of join-prime elements. This is not surprising, since Ω(G) is a finite distributive lattice (see Gratzer [7] for example); however, it is also true that the subposet of join-prime elements is order-isomorphic to the (order-dual of the) incidence poset for the graph G. (See Propositions 2.2 and 2.4 and Corollary 2.5.) Consequently, the fact that the posets of join-prime and meet-prime elements of Ω(G) are order-isomorphic tells us that Ω(G) contains two subposets which can be used to reconstruct (a graph-isomorphic copy of) the graph G. Moreover, by focusing attention on those finite lattices order-generated by their join-prime elements, it is possible to characterize the class of finite lattices which are (order-isomorphic to) the lattice Ω(G) for some graph G: the requirement is simply that the join-prime elements form a graph poset. (See Definitions 2.1 and 2.6 along with Theorem 2.7.)
Graphs and Graph Lattices
For our purposes, a (simple) graph is an ordered pair G = (G, E) where G is a finite nonempty set whose elements are called vertices and E is a set of two-element subsets of G whose elements are called edges. Note that we do not consider edges to be directed, and we do not allow "loops" (one-element subsets of G) to be edges. Two vertices u, v ∈ G are adjacent provided {u, v} ∈ E. Vertices cannot be self-adjacent; a vertex that is not adjacent to any other vertex is called an isolated vertex.
A vertex v is said to be incident to an edge e if v ∈ e. In this case, we say v is an end-vertex (or simply an "end") of e and say that e joins its end-vertices. It is common to let uv denote an edge {u, v}. (Of course, it is understood that uv = vu in this notation.) If G = (G, E) and G′ = (G′, E′) are graphs, then a function f : (G ∪ E) −→ (G′ ∪ E′) is a graph-homomorphism provided the following conditions are met: (1) f(G) ⊆ G′; and (2) if xy ∈ E, then f(xy) = f(x)f(y) ∈ E′.
We point out that our definition of graph-homomorphism varies slightly from the standard (see Diestel [3] for example) in that it explicitly requires graph-homomorphisms to preserve edges. Readers familiar with graph-homomorphisms can easily see that this divergence from the norm is of no consequence; it does, however, make working with graph-homomorphisms in a topological context more convenient.
Let G = (G, E) and G′ = (G′, E′) be graphs. A bijection f : (G ∪ E) −→ (G′ ∪ E′) is a graph-isomorphism provided f is a graph-homomorphism with the property that xy ∈ E if and only if f(x)f(y) ∈ E′. As is typical with isomorphisms, if two graphs are isomorphic, then they have the same structure (i.e., they can be drawn to look identical to each other).
In a graph G = (G, E), the set of all edges incident to a vertex v is called the edge neighborhood of v and will be denoted by E(v); the edge-ball of v is the set B(v) = {v} ∪ E(v). It is easy to see that the family {B(v) : v ∈ G} ∪ {{e} : e ∈ E} constitutes a basis for a topology on the set G ∪ E. In graph-theory circles, this space is known as the classical topology for G. It is worth noting that U ⊆ G ∪ E is open in the classical topology if and only if U satisfies one of the following conditions: (1) U ⊆ E; or (2) B(v) ⊆ U for every vertex v ∈ U.
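A small executable sketch of the classical topology may help fix ideas. Encoding edges as frozensets of their two end-vertices is an implementation choice, not part of the definition; the example graph is the path on three vertices.

```python
# Sketch of edge-balls and the graph-open predicate for the classical
# topology; edges are represented as frozensets of their end-vertices.
G = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3})}

def B(v):
    """Edge-ball of v: the vertex together with its incident edges."""
    return {v} | {e for e in E if v in e}

def is_graph_open(U):
    """U is graph-open iff every vertex in U brings its whole edge-ball."""
    return all(B(v) <= U for v in U if v in G)

print(is_graph_open(B(2)))                 # True: edge-balls are open
print(is_graph_open({1}))                  # False: a vertex without its edges
print(is_graph_open({frozenset({1, 2})}))  # True: edge-only sets are open
```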
Vella [12] provides an extensive exploration of the combinatorial relationships between the graph G and its family of open sets under the classical topology on G ∪ E. Aside from notable exceptions provided by Thomassen and Vella [11] and Richter and Vella [9], these topological spaces have received little attention since.
One reason these spaces have received little attention may stem from the fact that their structure is best understood in order-theoretic terms; therefore, we pause briefly to introduce some key ideas from the realm of order theory.
Suppose P = (P, ≤) is a poset (partially ordered set). We say P is lower bounded provided there exists some ⊥ ∈ P such that ⊥ ≤ x for all x ∈ P . The notion of upper bounded poset is defined dually; and, of course, a poset is bounded provided it is both lower-bounded and upper-bounded. A poset P is called a lattice provided every pair of elements in P has a least upper bound and a greatest lower bound in P. If P is a lattice, then it is common to let x ∨ y and x ∧ y denote the least upper bound and greatest lower bound, respectively, for x, y ∈ P .
Let G = (G, E) be a graph. In the work to follow, we will let Ω(G) denote the poset of all subsets of G ∪ E which are open in the classical topology on G, partially ordered by subset inclusion. For clarity, we will refer to the members of Ω(G) as the graph-open subsets of G.
The poset Ω(G) clearly forms a bounded lattice. Indeed, the greatest lower bound and least upper bound of any family of open sets is simply its intersection and union, respectively. The following diagrams show the graph-open set lattices for several graphs.
If P = (P, ≤) and Q = (Q, ⊑) are posets, then a function f : P −→ Q is called an order-homomorphism provided a ≤ b implies f(a) ⊑ f(b). (It is common to say that such a function is order-preserving.) We should note that a bijection f : P −→ Q is an order-isomorphism if and only if both f and its inverse function are order-homomorphisms. Let P = (P, ≤) be any poset. A subset L of P is called a lowerset (or order ideal) of P provided x ∈ L and y ≤ x together imply that y ∈ L. It is commonplace to let ↓x = {y ∈ P : y ≤ x} denote the principal lowerset generated by x. We will let Low(P) denote the poset of all lowersets of P, partially ordered by subset inclusion. It is easy to see that Low(P) is a complete lattice (closed under arbitrary set-unions and set-intersections) in which every member is the union of a family of principal lowersets. Furthermore, it is a routine exercise to prove that P may be order-embedded in Low(P) via the assignment p ↦ ↓p. (See Davey and Priestley [2] for example.) We say that X is an upperset (or order filter) of a poset P = (P, ≤) provided X is a lowerset in the order-dual of P. It is common to let ↑x denote the principal upperset of P generated by x.
In a poset P = (P, ≤), we say that x ∈ P is maximal provided ↑ x = {x}. Minimal elements in P are defined to be maximal elements in the order-dual of P. We say that x covers y ∈ P provided x and y are distinct, and ↑ y ∩ ↓ x = {y, x}. For x ∈ P , we will let Cov(x) denote the set of covers for x in P. Note that Cov(x) will be empty if x is a maximal member of P.
In a poset P = (P, ≤), we say A ⊆ P is an antichain provided the elements of A are pairwise incomparable. To be more precise, A is an antichain provided x, y ∈ A and x ≤ y together imply x = y. We will say that a finite poset P = (P, ≤) is bipartite provided there exist disjoint nonempty antichains V P and E P such that P = V P ∪ E P , and each member of E P is covered by at least one member of V P .
We caution that our definition of "bipartite poset" is somewhat different from the one commonly found in the literature. (See Erdös [4] for example.) However, the difference is primarily one of grouping in that we collect all maximal poset members into the antichain V P . The antichain E P may be a proper subset of the minimal poset members. If P = (V P ∪E P , ≤) is a bipartite poset, note that ↑ e contains at least two elements for every e ∈ E P . If x ∈ V P is such that ↓ x = {x}, then we will say that x is isolated in P. The isolated members of P are, of course, precisely those members of V P that are also minimal in P.
Definition 2.1. We will say that a bipartite poset P = (V_P ∪ E_P, ≤) is a graph poset provided the following conditions are met:
1. Every member of E_P is covered by exactly two members of V_P.
2. Distinct members of E_P have distinct sets of covers.
If G = (G, E) is any graph, then there is a graph poset naturally associated with G, namely the set P_G = G ∪ E endowed with the partial order ⊑ defined by u ⊑ v if and only if u = v, or u ∈ E, v ∈ G, and v is an end of u. Graph theorists commonly refer to the order-dual of the poset P_G = (P_G, ⊑) as the incidence poset for the graph G.
On the other hand, if P = (V_P ∪ E_P, ≤) is any graph poset, then there is a natural way to associate a graph with P. To begin, note that the conditions in Definition 2.1 guarantee there is a unique two-element subset of V_P, namely Cov(e), associated with every member e of E_P. With this in mind, let E_{G_P} = {Cov(e) : e ∈ E_P} and consider the structure G_P = (V_P, E_{G_P}). (In other words, two distinct vertices x and y are adjacent in G_P if and only if x ∧ y exists in P.) It is not difficult to see that the graph poset P_{G_P} = (V_P ∪ E_{G_P}, ⊑) is order-isomorphic to P. Indeed, simply consider the mapping g : V_P ∪ E_P −→ V_P ∪ E_{G_P} defined by g(x) = x for x ∈ V_P and g(e) = Cov(e) for e ∈ E_P. It is worth noting that, aside from adjusting notation to fit the codomain structure, the functions f and g defined above are the same. We may therefore consider graphs and finite graph posets to be essentially interchangeable structures.
The following proposition provides a simple but convenient alternative way to view the open set lattice for any graph. Proposition 2.2. If G = (G, E) is a graph, then Ω(G) = Low(P_G); that is, the graph-open subsets of G are precisely the lowersets of its graph poset P_G.
Proof. Suppose that U ∈ Ω(G). If U ⊆ E, then U is an antichain (of minimal elements) in P_G and is therefore a lowerset of P_G. Otherwise, suppose x ∈ U ∩ G and suppose y ∈ P_G is such that y ⊑ x. It follows that y = x or that y is an edge in B(x). In either case, y ∈ U; and we may conclude that U is a lowerset of P_G. Conversely, if L is a lowerset of P_G and x ∈ L ∩ G, then every edge incident to x lies below x in P_G and hence belongs to L; thus B(x) ⊆ L, and L is graph-open.
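This equivalence is easy to verify by brute force on the running example, as sketched below; the snippet reuses G, E, B, and is_graph_open from the earlier sketch and enumerates every subset of G ∪ E.

```python
# Brute-force check of Proposition 2.2 on the path graph: the graph-open
# sets coincide with the lowersets of the graph poset P_G.
from itertools import combinations

P = G | E   # the underlying set of the graph poset P_G

def leq(x, y):
    """Order on P_G: x <= y iff x == y, or x is an edge and y an end of x."""
    return x == y or (x in E and y in G and y in x)

def is_lowerset(U):
    return all(x in U for y in U for x in P if leq(x, y))

items = list(P)
all_subsets = [set(c) for r in range(len(items) + 1)
               for c in combinations(items, r)]
assert all(is_graph_open(U) == is_lowerset(U) for U in all_subsets)
opens = [U for U in all_subsets if is_graph_open(U)]
print(len(opens), "graph-open sets in Omega(G)")   # 13 for this path graph
```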
A member a of a lower-bounded poset P = (P, ≤) is called an atom of P provided ↓a contains exactly two elements. We say a member c of an upper-bounded poset P is a co-atom (or dual atom) of P provided ↑c contains exactly two elements. A lower-bounded poset P is atomic provided ↓p contains an atom for every p ∈ P − {⊥}. Co-atomic posets are defined dually.
If G = (G, E) is a graph, then its open set lattice Ω(G) is finite and contains at least two members; therefore Ω(G) is both atomic and co-atomic. There is a particularly simple characterization of both the atoms and the co-atoms of Ω(G).
1. The atoms of Ω(G) are the singleton edges and the singleton isolated vertices.
2. The co-atoms of Ω(G) are the sets of the form (G ∪ E) − {x}, where x ∈ G.
Proof. Singleton edges and singleton isolated vertices are the only graph-open sets that can cover the empty set, which is the smallest member of Ω(G); hence, Claim (1) is trivial.
To verify Claim (2), let x ∈ G and consider the set θ = (G ∪ E) − {x}. This set is clearly a member of Ω(G) and is covered by the set G ∪ E, which is the largest element of the lattice. Hence, each such θ is a co-atom. On the other hand, suppose that θ is a co-atom of Ω(G). Suppose that x, y are distinct vertices missing from θ, and consider the graph-open set χ = θ ∪ B(x). It is clear that y ∉ χ. Consequently, we have θ ⊂ χ ⊂ G ∪ E, contrary to assumption. We must conclude that θ is missing at most one vertex. Now suppose that θ is missing an edge, and let e = xy be one edge missing from θ. Since θ is graph-open, we know that x ∉ θ and y ∉ θ. We have shown this situation to be impossible; therefore we must conclude that θ is missing no edges and exactly one vertex. Thus, we know that θ = (G ∪ E) − {x} for some x ∈ G. An element j of a lattice L = (L, ≤) is join-prime provided whenever F ⊆ L is finite and j ≤ ⋁F, then j ≤ x for some x ∈ F. It is a routine exercise to prove a poset has a least element ⊥ if and only if ⋁∅ exists in the poset. (Indeed, one can show ⋁∅ = ⊥ when either is assumed to exist.) With this in mind, the least element of a lattice (if it exists) cannot be join-prime. We will let JP(L) denote the subposet of join-prime elements of L. Note that any atom of a lower-bounded lattice is also join-prime in that lattice. In a finite distributive lattice, an element j is join-prime if and only if ↓j − {j} contains a unique maximal element; in this sense, join-prime elements generalize the notion of "atom" in finite lattices. The following result is easily proven but will play a crucial role in much of the work to follow. Proposition 2.4. If P = (P, ≤) is a finite poset, then the join-prime members of Low(P) are precisely the principal lowersets of P.
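Proposition 2.4 can also be checked computationally on the Ω(G) enumerated earlier: since unions are joins in this lattice, a nonempty open set is join-prime (equivalently, join-irreducible, by distributivity) exactly when it is not the union of its proper open subsets. The sketch below reuses `opens`, B, G, and E from the previous snippets.

```python
# Check of Proposition 2.4: the join-primes of Omega(G) are exactly the
# singleton edges and the edge-balls of the running example graph.
def is_join_prime(U):
    # nonempty and not the union (join) of the open sets strictly below it
    if not U:
        return False
    proper = [V for V in opens if V < U]
    return set().union(*proper) != U if proper else True

jp = [U for U in opens if is_join_prime(U)]
expected = [{e} for e in E] + [B(v) for v in G]
assert len(jp) == len(expected) and all(U in expected for U in jp)
print(len(jp), "join-primes: the singleton edges and edge-balls")
```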
If G = (G, E) is any graph, then the singleton edge sets and edge-balls of G are precisely the principal lowersets of P_G. With this in mind, we have the following result. Corollary 2.5. For any graph G, the subposet JP(Ω(G)) of join-prime elements is order-isomorphic to the graph poset P_G. A lattice L = (L, ≤) is distributive provided x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) for all x, y, z ∈ L. It is worth noting that Ω(G) is distributive, since set-unions distribute over set-intersections. Join-prime elements play a key role in understanding the structure of distributive lattices. In the parlance of order theory, a subposet X of a finite lattice L = (L, ≤) is join-dense in that lattice provided every member of the lattice is the join of a finite (possibly empty) subset of X. It is well known that a finite lattice L is distributive if and only if JP(L) is join-dense in L. (See Gratzer [7] pages 102 and 112 for example.) Every finite distributive lattice L is order-isomorphic to the lowerset lattice of its poset of join-prime elements; the order-isomorphism is provided by the function f : L −→ Low(JP(L)) defined by f(x) = ↓x ∩ JP(L). The proof is straightforward; see Gratzer [7] or Davey and Priestley [2] page 171 for details. Definition 2.6. We will say that a finite lattice L = (L, ≤) is a graph lattice provided the following conditions are met: (1) JP(L) is join-dense in L (equivalently, L is distributive); and (2) JP(L) is a graph poset. Let G = (G, E) be any graph and let P_G = (G ∪ E, ⊑) be its graph poset. We know that P_G is order-isomorphic to JP(Low(P_G)); hence, Low(P_G) is a graph lattice. Consequently, Proposition 2.2 tells us that Ω(G) is a graph lattice for any graph G. It should come as no surprise that every graph lattice arises in this fashion.
Theorem 2.7. If L = (L, ≤) is a graph lattice, then there exists a graph G_L such that L is order-isomorphic to Ω(G_L).

Proof. Since L is a graph lattice, JP(L) is a graph poset P_L = (V_L ∪ E_L, ≤), in which each e ∈ E_L is covered by exactly two members of V_L and distinct members of E_L have distinct covering sets. Let G_L be the graph whose vertex set is V_L and whose edges are the pairs {x, y} ⊆ V_L such that some e ∈ E_L is covered by x and y; and let f be the function that carries each x ∈ V_L to the edge-ball B(x) of G_L and each e ∈ E_L to the singleton containing the corresponding edge of G_L. Write B_{G_L} for the collection of singleton edge sets and edge-balls of G_L. Since we do not allow multiple edges between vertices, it is easy to see that f is a bijection. Now, if x, y ∈ V_L or x, y ∈ E_L, then it is clear that f(x) ⊆ f(y) if and only if x = y. On the other hand, if x ∈ V_L and y ∈ E_L, then it is clear that f(y) ⊆ f(x) if and only if y ≤ x. We may conclude that the graph posets (B_{G_L}, ⊆) and P_L are order-isomorphic. In light of this fact, it is easy to show that L is order-isomorphic to Ω(G_L). Indeed, the order-isomorphism is accomplished via the mapping ϕ : L → Ω(G_L) defined by ϕ(a) = ⋃{f(j) : j ∈ JP(L) and j ≤ a}. If L is a graph lattice, then we will call the graph G_L the graph induced by L. Note that there is a bijection between the edges and isolated vertices of the induced graph G_L and the atoms of the lattice L. Figures 5 and 6 illustrate the process of passing from a graph lattice L to the open set lattice for the induced graph G_L.
Corollary 2.8. If G and H are graphs whose open set lattices are order-isomorphic, then G and H are graph-isomorphic. Indeed, an order-isomorphism between Ω(G) and Ω(H) restricts to an order-isomorphism of their subposets of join-prime elements; the induced assignment of the vertices of G to the vertices of H is the graph-isomorphism we seek.
An element p of a lattice L is meet-prime provided it is join-prime in the order-dual of L. In other words, p is meet-prime provided whenever F ⊆ L is finite and ⋀F ≤ p, then there exists x ∈ F such that x ≤ p. It is a routine exercise to prove a poset has a greatest element ⊤ if and only if ⋀∅ exists in the poset. (Indeed, one can show ⋀∅ = ⊤ when either is assumed to exist.) With this in mind, the greatest element of a lattice (if it exists) cannot be meet-prime. Note that the concept of meet-prime element is order-dual to that of join-prime element. We will let MP(L) denote the subposet of meet-prime elements for the lattice L.
In Theorem 2.7, we demonstrated that the subposet of join-prime elements for any graph lattice L induces a graph G_L whose open set lattice is in turn order-isomorphic to L. We now introduce a special case of a result appearing in Snodgrass and Tsinakis [10] which proves that MP(L) can also be used to create the graph G_L. We provide its proof for completeness, noting that we have merely adapted arguments appearing in the aforementioned paper.

Lemma 2.9. Let L = (L, ≤) be a graph lattice. For each a ∈ MP(L), let φ(a) = ⋀{x ∈ L : x ≰ a}; and for each b ∈ JP(L), let ζ(b) = ⋁{x ∈ L : b ≰ x}. Then φ and ζ are mutually inverse order-isomorphisms between MP(L) and JP(L).

Proof. Let a, b ∈ L. We say that the ordered pair (a, b) ∈ L × L splits the lattice L provided ↓a ∩ ↑b = ∅ and ↓a ∪ ↑b = L. If (a, b) splits L, then it is easy to see that a is meet-prime and b is join-prime in L. Indeed, to see why a is meet-prime, suppose x, y ∈ L are such that x ∧ y ≤ a. If neither x ≤ a nor y ≤ a, then, since ↓a ∪ ↑b = L, it would be the case that {x, y} ⊆ ↑b, and we would know x ∧ y ∈ ↑b as well. However, this is impossible, since x ∧ y ∈ ↓a and ↓a ∩ ↑b = ∅. Thus, we must conclude that x ≤ a or y ≤ a. The proof that b is join-prime is similar.
Suppose a ∈ MP(L), and consider the element φ(a). Since a is meet-prime, it follows that φ(a) ≰ a; and, for each x ∈ L, we must have x ≰ a if and only if φ(a) ≤ x. In light of this observation, the pair (a, φ(a)) splits L; and we must conclude that φ(a) is join-prime in L. If we instead assume that b is join-prime in L, then similar reasoning demonstrates that the pair (ζ(b), b) splits L; and we must conclude that ζ(b) is meet-prime in L.
Finally, suppose that u, v ∈ MP(L) and suppose that u ≤ v. If x ≰ v, then it is certainly the case that x ≰ u. Consequently, we know that φ(u) ≤ φ(v); and we may conclude that φ is an order-homomorphism. The proof that ζ is also an order-homomorphism is similar. Lemma 2.9 tells us that if L = (L, ≤) is any graph lattice, then MP(L) is a graph poset which is order-isomorphic to the graph poset for G_L. Consequently, we may also construct (a graph-isomorphic copy of) the graph G_L from MP(L).
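Lemma 2.9 can likewise be checked mechanically. The sketch below (our illustration on the same hypothetical five-element lattice as above) computes φ and ζ directly from their defining meets and joins.

```python
# Sketch: verify that phi(a) = meet of {x : x not <= a} carries meet-primes
# to join-primes, and that zeta(b) = join of {x : b not <= x} inverts it.
# Lattice: lowersets of the incidence poset of one edge e = xy (meet = ∩,
# join = ∪).
from functools import reduce

Low = [frozenset(S) for S in [set(), {"e"}, {"e", "x"}, {"e", "y"}, {"e", "x", "y"}]]
top, bot = frozenset({"e", "x", "y"}), frozenset()

def is_join_prime(j):
    return j != bot and all(j <= A or j <= B for A in Low for B in Low if j <= A | B)

def is_meet_prime(m):
    return m != top and all(A <= m or B <= m for A in Low for B in Low if A & B <= m)

def phi(a):   # meet of everything not below a
    return reduce(lambda s, t: s & t, [x for x in Low if not x <= a], top)

def zeta(b):  # join of everything not above b
    return reduce(lambda s, t: s | t, [x for x in Low if not b <= x], bot)

MP = [m for m in Low if is_meet_prime(m)]
JP = [j for j in Low if is_join_prime(j)]
assert {phi(m) for m in MP} == set(JP)      # phi maps MP onto JP
assert all(zeta(phi(m)) == m for m in MP)   # zeta inverts phi
```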
It is worth noting that the graph lattice Ω(G) for a graph G = (G, E) also contains information sufficient to construct the graph complement of G. The graph complement G^c = (G, E^c) is defined by {x, y} ∈ E^c if and only if x, y ∈ G are distinct and {x, y} ∉ E. We know {x, y} ∈ E if and only if B(x) ∩ B(y) ≠ ∅. For each x, y ∈ G, let B(x, y) = B(x) ∪ B(y), and consider the following sets: V_G = {B(x) : x ∈ G}, E^c_G = {B(x, y) : {x, y} ∈ E^c}, and P_c = V_G ∪ E^c_G.

Theorem 2.10. The graph G_{P_c} = (V_G, E^c_G) is (graph-isomorphic to) the graph complement of G. Furthermore, (P_c, ⊆) is the incidence poset for G_{P_c} = (V_G, E^c_G).

Proof. To see that G_{P_c} serves as the graph complement for G, suppose B(x), B(y) ∈ V_G and observe that B(x, y) ∈ E^c_G if and only if B(x) ∩ B(y) = ∅, if and only if {x, y} ∈ E^c; hence the assignment x ↦ B(x) is the required graph-isomorphism. It is clear that the elements B(x, y) are pairwise incomparable; hence, P_c is the union of two disjoint antichains, since a graph must contain at least two vertices. It is possible that E^c_G is empty; this will occur if and only if G is a complete graph, that is, if and only if E = {{x, y} : x, y ∈ G and x ≠ y}.
Suppose that U ∈ E^c_G. We know U = B(x, y) for some x, y ∈ G; and it is clear that B(x) ⊂ U and B(y) ⊂ U. Since every member of V_G is join-prime in Ω(G), it also follows that B(z) ⊂ U implies B(z) = B(x) or B(z) = B(y). Consequently, U covers exactly two members of V_G. Suppose V ∈ E^c_G is distinct from U. There exist a, b ∈ G such that V = B(a, b), and we may assume x ∉ {a, b}. Of course, this implies ↓U ∩ V_G ≠ ↓V ∩ V_G; and we may conclude that P_c = (P_c, ⊆) is the order-dual of a graph poset whenever E^c_G is nonempty. Figures 7 and 8 together demonstrate how the graph complement of a graph G may be constructed from members of Ω(G) using Theorem 2.10.
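Theorem 2.10 suggests a simple computation. The following sketch (our illustration on a hypothetical four-vertex graph) recovers the complement graph purely from edge-balls.

```python
# Sketch: recover the complement graph from edge-balls, following
# Theorem 2.10 -- {x, y} is in E^c iff B(x) ∩ B(y) = ∅ for distinct x, y.
from itertools import combinations

V = {"a", "b", "c", "d"}
E = {frozenset(p) for p in [("a", "b"), ("b", "c")]}

def ball(x):                       # B(x) = {x} ∪ E(x)
    return frozenset({x}) | frozenset(e for e in E if x in e)

Ec = {frozenset({x, y}) for x, y in combinations(V, 2)
      if not (ball(x) & ball(y))}

assert Ec == {frozenset(p) for p in [("a", "c"), ("a", "d"), ("b", "d"), ("c", "d")]}
```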
The Structure of Graph Lattices
In this section, we explore some of the structural properties of graph lattices and tie these properties to their corresponding graphs. In light of the previous section, we can adopt the perspective that a graph lattice is (isomorphic to) the lowerset lattice of some graph poset, or we can adopt the perspective that a graph lattice is (isomorphic to) the open set lattice for a graph endowed with the classical topology. We will move freely between these perspectives in the work to follow.
If P = (V P ∪ E P , ≤) is any graph poset, then we know that the join-prime members of Low(P) are simply the principal lowersets of P. Let us consider what Lemma 2.9 tells us about meet-prime elements in Low(P). Suppose that U is meet-prime in Low(P). This means that U = ζ(↓ p) for some p ∈ V P ∪ E P since the join-prime members of Low(P) are precisely the principal lowersets of P.
First, suppose that U = ζ(↓x) for some x ∈ E_P. Of course, we know ↓x = {x}; hence we also know U = ζ(↓x) = ⋃{W ∈ Low(P) : x ∉ W} = P − ↑x = P − ({x} ∪ Cov(x)). For simplicity, let (Cov(x)) = P − ({x} ∪ Cov(x)). Now, suppose that U = ζ(↓y) for some y ∈ V_P. This tells us that U = P − ↑y = P − {y}; that is, U is a co-atom of Low(P). We have now proven the following result.
Theorem 3.1. Let P = (V_P ∪ E_P, ≤) be a graph poset. A member U of Low(P) is meet-prime if and only if U is a co-atom of Low(P) or U = (Cov(x)) for some x ∈ E_P.
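Theorem 3.1 can be verified directly on a tiny example. The sketch below (hypothetical data, the incidence poset of a single edge) checks it computationally.

```python
# Sketch: for the incidence poset of a single edge e = xy, the meet-primes
# of Low(P) are the two co-atoms together with (Cov(e)) = P − ({e} ∪ Cov(e)),
# which here is the empty set.
Low = [frozenset(S) for S in [set(), {"e"}, {"e", "x"}, {"e", "y"}, {"e", "x", "y"}]]
top = frozenset({"e", "x", "y"})

def is_meet_prime(m):
    return m != top and all(A <= m or B <= m for A in Low for B in Low if A & B <= m)

coatoms = [U for U in Low if len(top - U) == 1]
assert {m for m in Low if is_meet_prime(m)} == set(coatoms) | {frozenset()}
```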
An element x of a bounded lattice L is complemented provided there exists y ∈ L such that x ∧ y = ⊥ and x ∨ y = ⊤, where ⊥ and ⊤ denote the least and greatest elements, respectively, for L. Complements in distributive lattices are necessarily unique. A bounded, distributive lattice in which every element has a complement is called a Boolean lattice. Some authors require Boolean lattices to contain at least two elements; we shall not use that requirement. Boolean lattices comprise one of the most important classes of lattices; we recommend Givant [6] as an excellent resource on this topic.
Of course, every finite Boolean lattice containing at least two elements is atomic. It is well-known that every finite Boolean lattice is order-isomorphic to the powerset lattice of its set of atoms, partially ordered by subset inclusion. (This includes the one-element Boolean lattice as well, since the powerset of the empty set contains exactly one element.)

Definition 3.2. Let G = (G, E) be a graph. In the work to follow, we will let B⊥ represent the powerset of E, and we will let B⊤ = {E ∪ X : X ⊆ G}.
Let L = (L, ≤) be a lattice and let I ⊆ L be nonempty. Recall that I ∈ Low(L) is an ideal of L provided I contains an upper bound for each of its finite (possibly empty) subsets. A subset F of L is a filter of L provided F is an ideal in the order-dual of L.
Proposition 3.3. Let G = (G, E) be a graph. Then the following claims are true.

1. Both B⊥ and B⊤ are sublattices of Ω(G).

2. The sublattices (B⊥, ⊆) and (B⊤, ⊆) are Boolean lattices.

3. The sublattice (B⊥, ⊆) is an ideal of Ω(G).

4. The sublattice (B⊤, ⊆) is a filter of Ω(G).
Proof. Claim (1) follows from Definition 3.2. It is a well-known fact that the powerset of any set is a Boolean lattice; see Givant and Halmos [6]. By construction, (B⊥, ⊆) is the powerset of E; and (B⊤, ⊆) is order-isomorphic to the powerset of G. It is worth noting that (B⊥, ⊆) is atomic if and only if E is nonempty; in this case, the atoms of (B⊥, ⊆) are the singleton edge sets. Since G is assumed nonempty, the poset (B⊤, ⊆) is always atomic, and the atoms of (B⊤, ⊆) are the sets of the form E ∪ {x} such that x ∈ G.
It is easy to see that (B⊥, ⊆) is an ideal of Ω(G). Since (B⊥, ⊆) is a sublattice, we need only prove it is a lowerset of Ω(G). To this end, suppose that χ ∈ Ω(G) is such that χ ⊆ α for some α ∈ B⊥. Of course, this tells us that χ is an edge-only set and is therefore a member of B⊥ by construction.
It is also easy to see that (B⊤, ⊆) is a filter of Ω(G). Again, since (B⊤, ⊆) is a sublattice, we need only prove it is an upperset of Ω(G). To this end, suppose that χ ∈ Ω(G) is such that β ⊆ χ for some β ∈ B⊤. By construction, we know that E ⊆ β, so we know that χ = E ∪ X for some X ⊆ G. Therefore, χ ∈ B⊤ by construction.
For ease of reading and in a common abuse of notation, we will usually identify the posets (B⊥, ⊆) and (B⊤, ⊆) with their underlying sets.
Let G = (G, E) be a graph. We will call the lattice B⊥ ∪ B⊤ the Boolean cone of Ω(G). We will refer to the poset Sus(Ω(G)) = Ω(G) − (B⊥ ∪ B⊤) (partially ordered by subset inclusion) as the collection of suspended elements for Ω(G). A graph with empty edge set will have no suspended elements; indeed, the open set lattice of such a graph is simply the powerset of its vertices.
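The decomposition into the Boolean cone and the suspended elements is straightforward to compute. The following sketch (a hypothetical example graph, not from the paper) partitions Ω(G) accordingly.

```python
# Sketch: split the open set lattice into its Boolean cone B⊥ ∪ B⊤ and
# the suspended elements, as in the definition above.
from itertools import chain, combinations

V = {"a", "b", "c", "d"}
E = {frozenset(p) for p in [("a", "b"), ("b", "c")]}
points = list(V) + list(E)

def is_open(U):
    return all(e in U for x in U if x in V for e in E if x in e)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

Omega = [frozenset(U) for U in powerset(points) if is_open(frozenset(U))]

B_bot = [U for U in Omega if U <= E]     # edge-only opens: the powerset of E
B_top = [U for U in Omega if E <= U]     # opens containing every edge
Sus   = [U for U in Omega if not (U <= E or E <= U)]

assert len(B_bot) == 2 ** len(E) and len(B_top) == 2 ** len(V)
assert Sus                               # this graph has suspended elements
```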
Figure 9 Anatomy of a Graph Lattice
In light of Theorem 3.1, every meet-prime element of Ω(G) that is not a co-atom must be a suspended element. Now, an element U ∈ Ω(G) that is not a co-atom will be meet-prime if and only if there exist x, y ∈ G such that U = (G − {x, y}) ∪ (E − {xy}) = (Cov(xy)). It follows at once that the sets (Cov(xy)) such that xy ∈ E are maximal suspended elements in Ω(G).
Proposition 3.4. Let G = (G, E) be a graph and suppose U ∈ Ω(G). The set U is maximal in the poset Sus(Ω(G)) if and only if U = (Cov(xy)) for some xy ∈ E.
Proof. Suppose that U ∈ Ω(G) is a maximal suspended element. If there exist distinct e, f ∈ E − U, then V = {e} ∪ U would be a suspended element properly containing U, contrary to assumption. Hence, we must conclude that E − U = {e}. If we let e = uv, then we must conclude that u, v ∉ U. It follows that U = (E − {e}) ∪ (G − X) for some X that contains u and v.
Suppose y ∈ X − {u, v}. Since e ∈ B(u) ∩ B(v), we know that e ∉ B(y). Therefore {y} ∪ U is a suspended member of Ω(G) properly containing U, contrary to assumption. We must conclude that X = {u, v}; that is, U = (Cov(uv)). Conversely, we have already observed that every set of the form (Cov(xy)) with xy ∈ E is a maximal suspended element.
Characterizing the minimal suspended elements requires a bit more care. Let G = (G, E) be a graph, and let x ∈ G. We will say that x is a center for G provided E(x) = E. If a graph contains no edges, then every vertex serves as a center for the graph. The graph K_2 = ({x, y}, {xy}) appearing in Figure 1 is the only graph with nonempty edge set that contains more than one center. Any graph with nonempty edge set that is not K_2 can contain at most one center, since any edge can be incident to exactly two vertices. Graphs that contain a center are sometimes called stars. (See Diestel [3] for example.) Figures 1, 3, and 6 present graphs that contain a center.

Proposition 3.5. Let L = (L, ≤) be a graph lattice. Then JP(L) ∩ B⊤ is nonempty if and only if one of the following claims is true.

1. The graph G_L has empty edge set.
2. The graph G L is a star.
Proof. If G_L has empty edge set, then Ω(G_L) is simply the powerset of the vertex set, and Ω(G_L) = B⊤. If G_L is a star, then there is a vertex x that serves as a center for G_L. The edge-ball B(x) must contain the edge set for G_L and therefore corresponds to a join-prime member of L contained in B⊤.
Conversely, suppose that there exist join-prime elements in the set B⊤. If B⊥ = {⊥}, then every join-prime member of L must correspond to a vertex in G_L; and we must conclude G_L has empty edge set. Suppose that {⊥} ⊂ B⊥, and let x ∈ JP(L) ∩ B⊤. The atoms of B⊥ correspond to the edges of the graph G_L; hence we know that x does not correspond to a subset of E_{P_L}. This tells us that x corresponds to B(u) for some vertex u ∈ V_L. Since B⊥ ⊆ ↓x, we must conclude that B(u) contains the edge set for G_L. Hence, we know that G_L is a star.

Proposition 3.6. For a graph G = (G, E) with nonempty edge set, the following claims are equivalent for a vertex x.
1. The vertex x is not a center for G .
2. The edge-ball B(x) is a minimal suspended element in Ω(G).
Proof. To prove that Claim (1) implies Claim (2), suppose x ∈ G is not a center. Since x ∈ B(x), we know B(x) ∉ B⊥. Since E(x) ≠ E, we also know that B(x) ∉ B⊤. Consequently, we may conclude that B(x) is a suspended element. Any proper open subset of B(x) contains only edges; hence, B(x) must be a minimal suspended element.
To prove that Claim (2) implies Claim (1), suppose B(x) is a minimal suspended element. Our assumption implies B(x) ∉ B⊤. Hence we know that E(x) ≠ E; and we must conclude that x is not a center.
In light of the previous result, for any graph G with nonempty edge set, the minimal suspended elements of Ω(G) are precisely those edge-balls that are not generated by a center of the graph. (Recall that any member of Sus(Ω(G)) must contain an edge-ball; hence a suspended element which is not itself an edge-ball cannot be minimal.) Lemma 3.7. Let G = (G, E) be a graph with nonempty edge set. If G contains at least four elements, then the following claims are true.
1. No edge-ball is a maximal suspended element in Ω(G).
2. The poset Sus(Ω(G)) is not an antichain.
Proof. If x is a center for G, then B(x) is not a suspended element; and there is nothing to show. Suppose x ∈ G is not a center. If B(x) is a maximal element of Sus(Ω(G)), then by Proposition 3.4, there exist y, z ∈ G such that B(x) = (G − {y, z}) ∪ (E − {yz}). Since x is the only vertex in B(x), we are forced to conclude that G = {x, y, z}, contrary to assumption.
We now establish Claim (2). Since G contains at least four members, we know G contains a vertex x that is not a center. By Proposition 3.6, B(x) is a suspended element; hence we know Sus(Ω(G)) is nonempty. By Claim (1), we also know that B(x) is not maximal in Sus(Ω(G)); consequently, we must conclude Sus(Ω(G)) is not an antichain. Figures 1, 2, 3, and 4 present all of the graphs with nonempty edge set that contain at most three elements. In each case, the suspended elements of Ω(G) form an antichain. In light of Lemma 3.7, these are the only graphs having this property.
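Propositions 3.4 and 3.6 together describe the extremes of Sus(Ω(G)); the sketch below (a hypothetical small graph with a center, our illustration) verifies both characterizations.

```python
# Sketch: maximal suspended elements are the sets (Cov(xy)), and minimal
# ones are the edge-balls of non-center vertices.  Example: path a-b-c
# plus an isolated vertex d; here b is a center.
from itertools import chain, combinations

V = {"a", "b", "c", "d"}
E = {frozenset(p) for p in [("a", "b"), ("b", "c")]}
points = frozenset(list(V) + list(E))

def is_open(U):
    return all(e in U for x in U if x in V for e in E if x in e)

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

Omega = [frozenset(U) for U in powerset(points) if is_open(frozenset(U))]
Sus = [U for U in Omega if not (U <= E or E <= U)]

max_sus = {U for U in Sus if not any(U < W for W in Sus)}
min_sus = {U for U in Sus if not any(W < U for W in Sus)}

def cov_complement(e):            # (Cov(xy)) = (G − {x, y}) ∪ (E − {xy})
    return points - e - {e}

def ball(x):
    return frozenset({x}) | frozenset(e for e in E if x in e)

assert max_sus == {cov_complement(e) for e in E}           # Proposition 3.4
centers = {x for x in V if {e for e in E if x in e} == E}
assert min_sus == {ball(x) for x in V if x not in centers} # Proposition 3.6
```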
A graph lattice is a very complex entity; yet much of its complexity has little to do with the structure of the underlying graph. Indeed, the Boolean cone of a graph lattice is entirely determined simply by the number of vertices and edges contained in the graph. Any two graphs having the same number of edges will generate isomorphic lower nappes B⊥ in their open set lattices; and any two graphs having the same number of vertices will generate isomorphic upper nappes B⊤ in their open set lattices. It therefore seems reasonable to focus attention upon the suspended elements and determine how these elements are constructed and whether they encode information sufficient to reconstruct the graph.
Let G = (G, E) be a graph. Let MaxSus(Ω(G)) represent the antichain of all maximal suspended elements in Ω(G), and let MinSus(Ω(G)) represent the antichain of all minimal suspended elements in Ω(G). Note that we will have MaxSus(Ω(G)) = MinSus(Ω(G)) precisely when G has empty edge set or contains at most three vertices.
Definition 3.8. We will say a poset P = (P, ≤) is an anti-graph poset provided the following conditions are met.
1. There exist disjoint, nonempty antichains U P and D P such that P = U P ∪ D P and D P contains at least four elements.
2. If e, f ∈ U_P are distinct, then ↓e ∩ D_P ≠ ↓f ∩ D_P.
3. If e ∈ U P , then D P − ↓ e contains exactly two elements.
4. If x ∈ D P , then U P ∩ ↑ x is nonempty.
For completeness, we note that an anti-graph poset with n minimal elements is really just the order-dual of the incidence poset for an (n − 2)-uniform hypergraph.
Of course, every anti-graph poset can be associated with a graph poset (and hence a graph) in a natural way. Suppose P = (U_P ∪ D_P, ≤) is an anti-graph poset. Define a partial ordering ⪯ on U_P ∪ D_P as follows: for all x, y ∈ U_P ∪ D_P, let x ⪯ y if and only if one of the following conditions holds:

1. We have x = y.

2. We have x ∈ U_P, y ∈ D_P, and y ≰ x.
Let P_G = (U_P ∪ D_P, ⪯). To see that P_G is a graph poset, first suppose that e ∈ U_P. By Condition (3) of Definition 3.8, we know D_P − ↓e contains exactly two elements; hence, we know that e is covered by exactly two elements in the poset P_G. Now, suppose e, f ∈ U_P are distinct. Condition (2) of Definition 3.8 guarantees that e and f have distinct covering sets in P_G.

Proposition 3.9. Let G = (G, E) be a graph with nonempty edge set that contains at least four vertices and is not a star. Then the poset MaxSus(Ω(G)) ∪ MinSus(Ω(G)), partially ordered by subset inclusion, is an anti-graph poset with D_P = MinSus(Ω(G)) and U_P = MaxSus(Ω(G)).

Proof. Let D_P = MinSus(Ω(G)) and let U_P = MaxSus(Ω(G)). We know that D_P = {B(x) : x ∈ G} contains at least four elements by Proposition 3.6, and by Proposition 3.4, we know that U ∈ U_P if and only if U = (Cov(xy)) for some xy ∈ E. We also know that U_P ∩ D_P = ∅ by Lemma 3.7. Consequently, if U = (Cov(xy)) ∈ U_P, we know that B(x) and B(y) are the only members of D_P that are not subsets of U. Suppose V ∈ U_P and suppose U ≠ V. We know that V = (G − {u, v}) ∪ (E − {uv}); and since we do not allow multiple edges incident to the same pair of vertices, we may assume x ∉ {u, v}. Consequently, we know that V covers B(x) while U does not; and it follows that U and V do not cover the same subset of D_P.
Finally, consider B(x) for any x ∈ G. We know that x is not a center; hence, E(x) ≠ E. Let e = uv ∈ E − E(x) and consider (Cov(uv)) = (G − {u, v}) ∪ (E − {uv}). Since e ∉ B(x), we know that u ≠ x and v ≠ x. Consequently, B(x) ⊂ (Cov(uv)); and we may conclude that B(x) is covered by members of U_P, as Condition (4) of Definition 3.8 requires.
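Proposition 3.9 is also easy to test. The following sketch (our illustration on a hypothetical path with four vertices) checks the four conditions of Definition 3.8 for MaxSus ∪ MinSus.

```python
# Sketch: verify that MaxSus ∪ MinSus of the path a-b-c-d satisfies the
# conditions of Definition 3.8 (anti-graph poset).
from itertools import chain, combinations

V = ["a", "b", "c", "d"]
E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]]
points = frozenset(V) | frozenset(E)

def is_open(U):
    return all(e in U for x in U if x in V for e in E if x in e)

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

Omega = [frozenset(U) for U in powerset(points) if is_open(frozenset(U))]
EdgeSet = frozenset(E)
Sus = [U for U in Omega if not (U <= EdgeSet or EdgeSet <= U)]
U_P = {X for X in Sus if not any(X < W for W in Sus)}   # maximal suspended
D_P = {X for X in Sus if not any(W < X for W in Sus)}   # minimal suspended

def below(e):                       # the down-set of e inside D_P
    return {x for x in D_P if x <= e}

assert U_P.isdisjoint(D_P) and len(D_P) >= 4                          # (1)
assert all(below(e) != below(f) for e in U_P for f in U_P if e != f)  # (2)
assert all(len(D_P - below(e)) == 2 for e in U_P)                     # (3)
assert all(any(x <= e for e in U_P) for x in D_P)                     # (4)
```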
There is a graph lattice associated with every anti-graph poset P = (U_P ∪ D_P, ≤), namely the open set lattice of the graph poset P_G = (U_P ∪ D_P, ⪯). The graph associated with P_G has nonempty edge set and contains at least four vertices; therefore, we know its poset of maximal and minimal suspended subsets forms an anti-graph poset. It should come as no surprise that this poset is order-isomorphic to the original anti-graph poset. Approaching the proof of this fact directly from the graph lattice is notationally cumbersome, so we conclude this paper by providing a proof that constructs (an isomorphic copy of) the graph lattice directly from the anti-graph poset.
Suppose that P = (U_P ∪ D_P, ≤) is an anti-graph poset, and let Pow[U_P ∪ D_P] represent the powerset of U_P ∪ D_P. For each x ∈ D_P, let A(x) = {x} ∪ (U_P − ↑x); and let L_P ⊆ Pow[U_P ∪ D_P] denote the collection of all unions of finitely many sets of the form A(x) (with x ∈ D_P) and {e} (with e ∈ U_P). If x ≠ y in D_P, then A(x) ∩ A(y) contains at most one member. To see why, suppose that e and f are distinct members of A(x) ∩ A(y). It follows that e, f ∈ U_P. However, it also follows that e, f ∉ ↑x ∪ ↑y; hence, by Condition (3) of Definition 3.8, we must have D_P − ↓e = {x, y} = D_P − ↓f, so that ↓e ∩ D_P = ↓f ∩ D_P. This is impossible by Condition (2) of Definition 3.8.
Theorem 3.10. If P = (U P ∪ D P , ≤) is an anti-graph poset, then the poset L P = (L P , ⊆) is a graph lattice whose poset of maximal and minimal suspended elements is order-isomorphic to P.
Proof. It is easy to see that L_P is a lattice in which meet and join are simply set-intersection and set-union, respectively. Consider the set B_P = {{e} : e ∈ U_P} ∪ {A(x) : x ∈ D_P}. The empty set is the least element of L_P. Hence, the singletons {e}, where e ∈ U_P, are clearly atoms (and hence join-prime) in L_P. It is also clear that each A(x) is join-prime in L_P. Indeed, suppose that B, C ∈ L_P are such that A(x) ⊆ B ∪ C. We may assume that x ∈ B, and it is clear that A(x) ⊆ B.
By construction, if U ∈ L_P and x ∈ U ∩ D_P, then A(x) ⊆ U. Therefore, if U ⊈ U_P, then there must exist x_1, ..., x_n ∈ D_P and a (possibly empty) E_U ⊆ U_P disjoint from each A(x_j) such that U = A(x_1) ∪ ⋯ ∪ A(x_n) ∪ E_U. It follows that the join-prime elements of L_P are join-dense in L_P. Furthermore, this characterization of the elements of L_P makes it clear we must have JP(L_P) = B_P.
To see that L_P is a graph lattice, it will therefore suffice to show that B_P is a graph poset. Let E_P = {{e} : e ∈ U_P} and let V_P = {A(x) : x ∈ D_P}. It is clear that E_P and V_P are antichains. Let e ∈ U_P. By assumption, e fails to lie above exactly two members of D_P; let x and y be these elements. Since e ∈ (U_P − ↑x) ∩ (U_P − ↑y) ⊆ A(x) ∩ A(y), it follows that A(x) and A(y) are the only covers for {e} in the poset B_P.
Suppose now that {e}, {f} are distinct members of E_P. Let x, y ∈ D_P be the elements e fails to lie above, and let u, v ∈ D_P be the elements f fails to lie above. Since {x, y} ≠ {u, v} by assumption, we may suppose that x ∉ {u, v}. It follows that e ∈ A(x) but f ∉ A(x). Therefore, the cover sets for {e} and {f} in B_P are not the same; and we may conclude that B_P is indeed a graph poset.
We have proven that L_P is a graph lattice. Let G_P = ({A(x) : x ∈ D_P}, E_P) be the graph associated with L_P by Theorem 2.7. We know that {A(x), A(y)} ∈ E_P if and only if A(x) ∩ A(y) is nonempty. For each {A(x), A(y)} ∈ E_P, let A(x) ∩ A(y) = {e_xy}. By assumption, we know that G_P contains at least four vertices; and since U_P is nonempty, we know the edge set for G_P is also nonempty. Moreover, Condition (4) of Definition 3.8 guarantees that no vertex of G_P is a center. Consequently, by Proposition 3.6, we know the edge-balls of G_P are the minimal suspended members of Ω(G_P). Note that the edge-ball generated by A(x) is the set B(A(x)) = {A(x)} ∪ {e ∈ E_P : A(x) ∈ e}.
The assignment B(A(x)) ↦ x is therefore a bijection from the set MinSus(Ω(G_P)) to the set D_P. Now, suppose U is a maximal suspended member of Ω(G_P). There exist x, y ∈ D_P such that U = (Cov(A(x)A(y))) = (G_P − {A(x), A(y)}) ∪ (E_P − {A(x)A(y)}). The assignment U ↦ e_xy is a bijection from the set MaxSus(Ω(G_P)) to the set U_P. Consequently, consider the function ϕ : Sus(Ω(G_P)) → P defined by ϕ(U) = x if U = B(A(x)), and ϕ(U) = e_xy if U = (Cov(A(x)A(y))).
For each e ∈ U_P, let U_e denote the pre-image of e in MaxSus(Ω(G_P)) under the function ϕ. There exists a unique pair {x, y} ⊆ D_P such that U_e = (G_P − {A(x), A(y)}) ∪ (E_P − {A(x)A(y)}); and we know e = e_xy. With this in mind, note that the edge A(u)A(v) ∈ U_e if and only if {u, v} ≠ {x, y}. Consequently, it follows that B(A(u)) ⊂ U_e if and only if e ∉ A(u). Now, suppose that U ∈ MaxSus(Ω(G_P)) and B(A(x)) ∈ MinSus(Ω(G_P)). There exists a unique e ∈ U_P such that U = U_e. Observe that B(A(x)) ⊆ U_e ⇐⇒ e ∉ A(x) ⇐⇒ e ∈ ↑_P x ⇐⇒ x < e.
The function ϕ therefore defines an order-isomorphism between the poset P and the poset of maximal and minimal suspended elements of Ω(G_P).
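To close, the reconstruction of Theorem 3.10 can be carried out concretely. The sketch below (hypothetical data encoding a four-vertex path as an abstract anti-graph poset, our illustration) recovers the path's edges from the sets A(x).

```python
# Sketch: rebuild a graph from an abstract anti-graph poset via
# A(x) = {x} ∪ (U_P − ↑x).  The poset encodes a path u1-u2-u3-u4: the
# element ei lies above every x in D except the two endpoints of the
# i-th path edge.
D = {"u1", "u2", "u3", "u4"}
U = {"e1", "e2", "e3"}
endpoints = {"e1": {"u1", "u2"}, "e2": {"u2", "u3"}, "e3": {"u3", "u4"}}
above = {x: {e for e in U if x not in endpoints[e]} for x in D}   # ↑x ∩ U_P

def A(x):
    return frozenset({x}) | (frozenset(U) - frozenset(above[x]))

# Two vertices are adjacent exactly when their A-sets share a member.
recovered = {frozenset({x, y}) for x in D for y in D if x != y and A(x) & A(y)}
assert recovered == {frozenset(endpoints[e]) for e in U}
```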
"Mathematics"
] |
Super‐Soft DNA/Dopamine‐Grafted‐Dextran Hydrogel as Dynamic Wire for Electric Circuits Switched by a Microbial Metabolism Process
Abstract Engineering dynamic systems or materials that respond to biological processes is one of the major tasks in synthetic biology and will enable a wide range of promising applications, such as robotics and smart medicine. Herein, a super-soft and dynamic DNA/dopamine-grafted-dextran hydrogel is reported, which shows super-fast volume-responsiveness with high sensitivity upon solvents with different polarities and enables the creation of electric circuits in response to microbial metabolism. Synergic permanent and dynamic double networks are integrated in this hydrogel. A series of dynamic hydrogel-based electric circuits is fabricated: 1) a circuit triggered by using water as a switch; 2) a circuit triggered by using water and petroleum ether as a switch pair; 3) a self-healing electric circuit; and 4) remarkably, an electric circuit triggered by a microbial metabolism process that produces ethanol. It is envisioned that this work provides a new strategy for the construction of dynamic materials, particularly DNA-based biomaterials, and that the electric circuits will be highly promising in applications such as soft robotics and intelligent systems.
Introduction
Dynamic hydrogels stimulated by specific stimuli or biological processes have emerged as potential candidates for a wide range of applications, such as soft robots, [1,2] next-generation bioelectronic interfaces, [3,4] switchable catalysis, [5] and targeted gene therapy. [6] Many polymer materials have been used for the construction of dynamic hydrogels with reversible volume and shape changes, due to the entropic elasticity of polymer chains. [7,8] The general strategy for designing dynamic hydrogels is to construct synergic permanent and temporary double networks. The permanent networks created by covalent bonds maintained the structural integrity; meanwhile the temporary networks created by dynamic interactions were responsible for the responsiveness upon stimuli. [7,9,10] DNA, composed of four deoxyribonucleotide monomers, was regarded as a block copolymer and showed great potential for the construction of dynamic soft DNA materials, especially dynamic soft DNA hydrogels (elastic modulus lower than 100 Pa). [11] The ultrahigh molecular weight of DNA enabled the entropic elasticity of DNA copolymers. [12] Therefore, DNA was a competitive alternative for the construction of dynamic DNA hydrogels in response to specific stimuli or biological processes. [13-15] For example, Luo and colleagues synthesized a dynamic DNA hydrogel which exhibited a solid/liquid transition in response to water. [16] Liu and colleagues prepared an enzyme-responsive DNA hydrogel possessing double networks. [17] Fan and colleagues developed a DNA hydrogel with volume changes in response to a specific DNA sequence. [18] Willner and colleagues reported a pH-responsive DNA hybrid hydrogel with multiple shape-memory properties. [19] Besides, the unique attributes of DNA molecules, such as intrinsic biological functions, molecular recognition, sequence programmability, and biocompatibility, [20-22] endowed DNA hydrogels with rationally designed molecular structures and diversified biofunctions. For instance, Tan and colleagues prepared redox-responsive DNA nanogels for targeted gene therapy. [6] Schulman and colleagues fabricated shape-changing DNA hydrogels for programmable soft devices. [1] Recently, we have demonstrated that the physical interactions of long DNA chains can result in the formation of super-soft DNA hydrogels. For example, we prepared a super-soft and super-elastic magnetic DNA hydrogel-based robot (DNA robot) by utilizing DNA chain entanglement and hybridization for the construction of combinational dynamic and permanent crosslinking networks. [2] This DNA robot showed shape-adaptive properties and thus enabled magnetically driven navigational locomotion for cell delivery in unstructured and confined spaces. We further constructed a DNA network via double rolling circle amplification, through the intertwining and self-assembly of two strands of ultralong DNA chains. [22] This special structural design endowed the DNA hydrogel with the desired functions and properties to fulfill stem cell fishing and controlled release. In general, to construct dynamic DNA hydrogels in response to specific stimuli, designing synergic permanent and temporary double networks was proposed, in which the permanent networks maintained the structural integrity, and meanwhile the temporary networks were responsible for the responsiveness. Inspired by the adhesive proteins in mussel byssus, [23] dopamine (DOPA)-functionalized polymers have served as "molecular glue" to prepare dynamic hydrogels. [24,25]
DOPA-mediated hydrogen bonds and the subsequent covalent crosslinking were responsible for the formation of the temporary and permanent networks, respectively. [26] In particular, it has been demonstrated that the strong interactions between DOPA and DNA have a significant effect on the conformation of DNA nanostructures, showing great potential for constructing DOPA-based DNA networks. [27] Herein, we report a super-soft and dynamic nanofiber-assembled DNA/dopamine-grafted-dextran hydrogel composed of natural long DNA chains and dopamine-grafted-dextran (DEX-g-DOPA), which showed super-fast volume-responsiveness with high sensitivity upon solvents with different polarities. Synergic permanent and dynamic double networks were integrated in this hydrogel. DOPA-DOPA covalent interactions provided permanent crosslinking for the hydrogel framework; meanwhile hydrophobic interactions, hydrogen bonds, and π-π stacking between DNA and DEX-g-DOPA formed dynamic interactions. By replacing solvents, the volume of the hydrogel changed within a few seconds because the hydrogel was super-soft. Although the change of solvent polarity was only 0.4, the volume change of the hydrogel was visually distinguishable, demonstrating the high sensitivity of the responsiveness. Regarding previously reported solvent-responsive hydrogels (Table S1, Supporting Information), the modulus of the hydrogels was usually higher than 1000 Pa, and such hydrogels usually needed a relatively long time to achieve volume changes, because hydrogels with high modulus and crosslinking density did not favor the rapid exchange of solvents. [28] By using this unique DNA hydrogel as dynamic connecting wires, a series of dynamic hydrogel-based electric circuits was fabricated: 1) a circuit triggered by using water as a switch; 2) a circuit triggered by using water and petroleum ether as a switch pair; 3) a self-healing electric circuit; and 4) remarkably, an electric circuit triggered by a microbial metabolism process that produced ethanol. To the best of our knowledge, this is the first example of an electric circuit switched by a dynamic microbial metabolism process without human intervention.

Scheme 1. Molecular design and synthesis route of the super-soft and dynamic DNA/DEX-g-DOPA hydrogel. A) The synthesis route of DEX-g-DOPA. B) Preparation of the nanofiber-assembled hydrogel with volumetric responsiveness upon solvent polarity. C) Electric circuit switched by a microbial metabolism process which produced ethanol, using the DNA/DEX-g-DOPA hydrogel as dynamic wires.
Results and Discussion
Scheme 1 describes the overall molecular design and synthesis route of the DNA/DEX-g-DOPA hydrogel. First, DEX-g-DOPA was synthesized by grafting DOPA onto the backbone of dextran (DEX, 6000 Da) (Scheme 1A). The hydroxyl groups of DEX reacted with 1,1'-carbonyldiimidazole (CDI) to form imidazolyl carbamates, which were further coupled with the primary amine groups of DOPA, thus yielding DEX-g-DOPA. 1H nuclear magnetic resonance (1H NMR), ultraviolet-visible (UV-vis), and Fourier transform infrared (FT-IR) spectra confirmed the structure of DEX-g-DOPA (Figure S1, Supporting Information). From the 1H NMR results, the grafting rate of DEX-g-DOPA was ≈11.1%. Then natural salmon sperm DNA was mixed with DEX-g-DOPA, resulting in the formation of the nanofiber-assembled DNA/DEX-g-DOPA hydrogel through multiple interactions (Scheme 1B). The molecular weight of the DNA used was ≈1.2 × 10⁷ Da (20 000 bp), as calculated from the agarose gel electrophoresis results (Figure S2, Supporting Information). By regulating the solvent polarity, the volume of the hydrogel could be precisely controlled. Moreover, the volume of the hydrogel showed a positive linear correlation with the solvent polarity. Owing to this special property, we noticed that a variety of microorganisms are able to produce solvents with different polarities, such as ethanol, acetic acid, butanol, and acetone. As a demonstration of application, an electric circuit triggered by a microbial metabolism process that produced ethanol was achieved using the DNA/DEX-g-DOPA hydrogel as dynamic wires (Scheme 1C). Notably, though the maximum production of ethanol during the microbial metabolism process was 10% at most, and thus the change of solvent polarity was only 0.59, our prepared hydrogel, with super-fast volume-responsiveness and high sensitivity upon the change of solvent polarity, was utilized to dynamically switch the electric circuit.

Figure 1. E) D-, N-, and A-shaped hydrogels stained with GelRed were written using a syringe, suggesting that the hydrogel was injectable. F) G′ and G″ of the hydrogel as a function of strain, demonstrating that a gel-to-liquid transition occurred when the strain was 370%. G) G′ and G″ of the hydrogel as a function of temperature, demonstrating the stability of the hydrogel over a wide range of temperatures.
When DNA and DEX-g-DOPA were mixed together at 90 °C and incubated at room temperature for 5 days, an opaque DNA/DEX-g-DOPA hydrogel formed spontaneously in a test tube (Figure 1A). The hydrogel was stained with a DNA-specific fluorescent dye, GelRed, and showed red fluorescence under ultraviolet light, suggesting that the entire hydrogel was composed of DNA. Rheology results further confirmed the formation of the hydrogel, as the storage modulus (G′) was constantly higher than the loss modulus (G″) (Figure 1B). The G′ value was ≈59 Pa, showing the super-soft mechanical strength. It was notable that the hydrogel was composed of entangled nanofibers (Figure 1C). Under a fluorescence microscope, the hydrogel showed apparent fibrous structures, wherein the DNA component of the hydrogel was specifically stained green using SYBR Green I, demonstrating the formation of DNA-based nanofibers (Figure S3, Supporting Information). Scanning electron microscope (SEM) images gave more detail: the nanofibers were relatively uniform and their diameter was ≈800 nm (Figure 1D). Because the hydrogel was composed of entangled nanofibers, it was injectable through the motions and rearrangements of the nanofibers. Therefore, we used a syringe containing the hydrogel to write arbitrary shapes, such as the letters D, N, and A, with red fluorescence (Figure 1E). Besides, due to the relatively low cost of natural DNA, the hydrogel showed high potential for scale-up industrial production.
Under strain sweep mode, when the strain was less than 370%, the G′ value was constantly higher than the G″ value, showing the gel property. When the strain was larger than 370%, the G″ value turned out to be higher than the G′ value, showing the liquid property (Figure 1F). The G″ value increased slightly when the strain changed from 100% to 370%, because the sliding of DNA/DEX-g-DOPA nanofibers needed to dissipate energy. [29] Under temperature sweep mode, the hydrogel was relatively stable and maintained its gel properties over the entire temperature range, from 25 to 75 °C (Figure 1G). The factors influencing the formation of the hydrogel were discussed in detail (Discussion S1, Supporting Information). Among DEX, oxidized DEX (aldehyde-group-grafted DEX), and DEX-g-DOPA, only DEX-g-DOPA was able to interact with DNA to form the hydrogel (Figure S4, Supporting Information). Moreover, an appropriate DOPA grafting density of DEX-g-DOPA and the ultrahigh molecular weight of DNA were critical for the hydrogel formation (Figure S5, Supporting Information). DEX-g-DOPA with different molecular weights (6k, 10k, and 40k Da) were all able to interact with DNA to form the hydrogel. The G′ value of the hydrogel increased from 60 to 220 Pa with the increase of the molecular weight of DEX-g-DOPA (Figure S5A, Supporting Information).
The molecular mechanism for the formation of the hydrogel was discussed in detail (Discussion S2, Supporting Information). Fluorescence spectra of ethidium bromide (EtBr, 20 × 10⁻³ m) in DNA/DEX-g-DOPA mixed solution at 25 °C, at 90 °C, and after cooling back to 25 °C indicated that DEX-g-DOPA interfered with the re-pairing of hydrophobic DNA bases during the annealing process (Figure S6, Supporting Information). Time-dependent UV-vis spectra of the DNA/DEX-g-DOPA mixed solution confirmed the phase separation of DNA induced by DEX-g-DOPA, as revealed by the appearance of scattering as time elapsed (Figure S7A-C, Supporting Information). [30] In addition, the increase of the absorption peak intensity at 324 nm along with time indicated the occurrence of DOPA oxidation and covalent crosslinking. [31] Differential scanning calorimetry curves gave direct evidence of the presence of hydrophobically separated DNA bases in the hydrogel (Figure S7D, Supporting Information). Density functional theory results further suggested that DEX-g-DOPA mainly interacted with the adenine (dA) and guanine (dG) groups of DNA bases, which were responsible for the phase separation of DNA (Figure S8, Supporting Information). [30] The UV-vis spectrum of the DNA/DEX-g-DOPA mixed solution, together with X-ray photoelectron spectroscopy (XPS) and FT-IR of the hydrogel, demonstrated the presence of hydrogen bonds and π-π stacking between DEX-g-DOPA and DNA (Figure S9, Supporting Information). Therefore, synergic permanent and dynamic double networks were integrated in this hydrogel. DOPA-DOPA covalent interactions among DEX-g-DOPA provided permanent crosslinking for the hydrogel framework; meanwhile, hydrophobic interactions derived from separated DNA bases, together with hydrogen bonds and π-π stacking between DNA and DEX-g-DOPA, formed the dynamic interactions.
We explored the volume- and shape-responsiveness of the hydrogel upon solvent polarity. First, the volume of the hydrogel increased gradually with the increase of solvent polarity (Figure 2A). The hydrogel was highly expanded in polar solvents, such as water and dimethyl sulphoxide, and shrank in nonpolar solvents, such as petroleum ether and cyclohexane. Remarkably, the volume expansion ratio of the hydrogel showed a linear correlation with the solvent polarity when the value of solvent polarity was higher than 2, which provided a simple way to directly distinguish different solvents (Figure 2B). Furthermore, the G′ value of the hydrogel could be precisely controlled by immersing the hydrogel into different solvents (Figure S10, Supporting Information). With the decrease of solvent polarity from 10.2 to 0.01, the G′ value of the hydrogel gradually increased from 59 to 3651 Pa (Figure S11, Supporting Information).
To further investigate the effects of solvent polarity on the volume, micromorphology, and G′ value of the hydrogel, water and petroleum ether (MSO) were utilized for the demonstration. Specifically, a square-shaped hydrogel was first prepared in water using a square-shaped mold. When soaked in MSO, the hydrogel quickly aggregated into a dense form with volume shrinkage within a few seconds. When soaked in water again, the hydrogel returned to its birth shape with volume expansion, reversibly (Figure 2C). Although the hydrogel was super-soft, it still maintained its original shape (square) even after undergoing repeated cycles of solvent replacement. Accompanying the exchange of solvents, the micromorphology of the nanofibers changed from loose structures to compact structures, reversibly (Figure 2D). In MSO, the nanofibers aggregated to drive the volume shrinkage of the hydrogel, and the hydrophobic interactions between the nanofibers and the nonpolar solvent were responsible for this process. [28] In this situation, the micromorphology of the nanofibers was coiled, aggregated, and compact. In water, the nanofibers unwrapped to drive the volume expansion of the hydrogel, and thus the micromorphology was stretched, intertwined, and loose. Hydrogen bonds between DNA/DEX-g-DOPA and polar solvents accelerated this process. Moreover, the modulus of the hydrogel changed accordingly in water and MSO. In water, the G′ value of the hydrogel was ≈59 Pa. When the hydrogel was immersed in MSO, the G′ value (3651 Pa) increased to nearly 62 times that in water. During this process, hydrophobic interactions between the nanofibers and MSO molecules were significantly enhanced, thus resulting in the formation of the compact nanofibrous structures (Figure 2E). Moreover, the hydrogen bonds existing in the hydrogel were significantly weakened in MSO, and thus the hydrophobic interactions were the main driving force for the volume shrinkage of the hydrogel.
Besides, the hydrogel showed super-fast volume- and shape-responsiveness upon water. Shapeless hydrogels with volume shrinkage recovered their birth shapes with volume expansion quickly upon the reintroduction of water. To investigate this property, we first prepared hydrogels in water using molds with the shapes of the letters A and D. After removing water from the hydrogels, the A- and D-shaped hydrogels aggregated with volume shrinkage. When water was reintroduced, the hydrogels returned to their birth shapes quickly (Figure 2F).
As a demonstration of application, a series of dynamic hydrogel-based electric circuits was fabricated. First, an electric circuit including a bulb with unilateral conduction, a variable-voltage power supply, and the DNA/DEX-g-DOPA hydrogel as dynamic wires was fabricated and was able to power the bulb, indicating the feasibility of constructing hydrogel-based electric circuits (Figure S12, Supporting Information). Notably, the hydrogel showed super-fast volume-responsiveness upon water. When the hydrogel was taken out of water, the volume of the hydrogel was only one-tenth of that in water within 1 s (Figure 3A). When soaked in water again, the hydrogel returned to its birth volume. Hydrophobic interactions were mainly responsible for this process. When the hydrogel was taken out of water, the presence of hydrophobic interactions would repel excess water from the hydrogel. [28] Based on this special property, an electric circuit that used the DNA/DEX-g-DOPA hydrogel as dynamic wires was fabricated and switched by adding/removing water (Figure 3B). When water was added into the conductive channel, the hydrogel, with a very small volume, was highly expanded and conformed to the shape of the channel, linking the two electrodes together and thus turning on the circuit. By removing water from the channel, the hydrogel shrank quickly to shut off the current.
Figure 2. B) The curve of volume expansion ratio for the hydrogel as a function of solvent polarity. C) Simultaneous volume and shape changes when the square-shaped hydrogel was immersed into different solvents repeatedly. A square-shaped hydrogel was first prepared in water. When soaked in MSO, the hydrogel was quickly aggregated into a dense form with volume shrinkage. When soaked in water again, the hydrogel returned to its birth shape with volume expansion reversibly. The hydrogel was stained with Rhodamine B. D) SEM images of the hydrogel immersed in water and MSO. Accompanying the exchange of solvents, the micromorphology of the nanofibers changed from loose structures to compact structures reversibly. E) The G′ value of the hydrogel immersed in water and MSO as a function of time. F) Water-triggered volume and shape changes of the hydrogel. Hydrogels with the shapes of letters A and D were first prepared in water. After removing water, the hydrogels were aggregated with volume shrinkage. When water was reintroduced, the hydrogels could return to their birth shapes quickly. The hydrogels were stained with GelRed.

By virtue of the volumetric responsiveness of the hydrogel upon different solvents, an electric circuit that used water and MSO as the switch pair was fabricated, in which the DNA/DEX-g-DOPA hydrogel served as dynamic wires (Figure 3C). Gold nanoparticles were doped into the hydrogel to enhance its conductivity. When MSO was added into the channel, the hydrogel aggregated with volume shrinkage, and thus the electric circuit was shut off. When MSO was replaced by water, volume expansion of the hydrogel occurred and the electric circuit turned on again. Due to the large difference in solvent polarity between water and MSO, low amounts of solvents were enough to switch the electric circuit. The robust stability of the electric circuit was demonstrated by the cyclic current-time curve (Figure 3D). The water/MSO cycle was repeated three times, and the current value remained relatively stable every time the electric circuit was triggered on/off. Other solvent pairs with different polarities could be employed as switch pairs to trigger the electric circuit on/off.
The electric circuit switched by the water/MSO switch pair was fabricated successfully using the self-healing DNA/DEX-g-DOPA hydrogel as dynamic wires. To visually demonstrate the self-healing property of the hydrogel, one integrated hydrogel was cut into two pieces with rectangle shapes, in which one piece was stained with SYBR Green I (green) and the other piece was stained with GelRed (red) (Figure 4A). When the two pieces of hydrogels were held together for 20 s, they formed an integrated hydrogel. It was inferred that motions and re-entanglements of DNA/DEX-g-DOPA nanofibers driven by dynamic interactions at the interface were responsible for the self-healing property. Manual force was applied onto the right side of the red hydrogel, resulting in the cooperative motions of the self-healed hydrogel without separation of the two pieces. Finally, the self-healed hydrogel with its original rectangle shapes could be hung in the air, visually demonstrating the self-healing performance of the hydrogel.

Figure 3. Electric circuits switched by water and the water/MSO switch pair using DNA/DEX-g-DOPA hydrogel as dynamic wires. A) Digital photos of the hydrogel with volumetric responsiveness upon water. When the hydrogel was taken out from water, the volume of the hydrogel was only one-tenth of that in water within 1 s. When soaked in water again, the hydrogel returned to its birth volume. B) Cyclic voltage-current curve of the electric circuit switched by adding/removing water. C) Cyclic voltage-current curve of the electric circuit that used water and MSO as the switch pair. D) Current-time curve of the electric circuit by replacing the solvents (MSO and water) repeatedly, demonstrating the stability of the electric circuit.
In addition, a manual tensile test was conducted to further confirm the self-healing performance of the hydrogel. When two pieces of hydrogels stained with red and green were held together, they formed an integrated hydrogel in the air (Figure 4B). Manual force was then applied to the right side of the self-healed hydrogel (Figure 4C). The initial length of the green hydrogel was 1 mm. When the manual force increased gradually, the length of the green hydrogel increased to 8 mm without fracture at the self-healing interface. Instead, an obvious fracture interface appeared inside the green hydrogel. Finally, fracture of the hydrogel occurred upon further increasing the manual force. It was apparent that the fracture interface was not the self-healing interface, further confirming that the interface between the red and green hydrogels was completely self-healed. By virtue of the self-healing performance and solvent-triggered volume responsiveness of the hydrogel, a self-healing hydrogel-based electric circuit that used water and MSO as the switch pair was designed (Figure 4D). First, two pieces of the hydrogel were placed at the two sides of the channel. The two pieces were not able to contact each other, and thus the electric circuit was cut off. When a small amount of water was added into the channel, volume expansion of the two pieces occurred and the two pieces contacted each other, connecting the two electrodes together and turning on the electric circuit. By replacing water with MSO, the self-healed hydrogel shrank quickly and was located at one side of the channel, cutting off the electric circuit.
Taking full advantage of the solvent-triggered volume responsiveness and high biocompatibility of the hydrogel, a living system that produced solvents showed great potential to dynamically trigger the volume change of the hydrogel. As a proof-of-concept demonstration, yeast continuously produced ethanol through a microbial fermentation process, which was thus employed to design a fermentation-triggered electric circuit that used the DNA/DEX-g-DOPA hydrogel as a dynamic wire. Along with the consumption of glucose, the content of ethanol gradually increased during the fermentation process. This process provided a dynamic and changeable solvent polarity to switch off the electric circuit. First, a microinjection pump was used to mimic the fermentation process of yeast, continuously injecting ethanol into yeast extract peptone dextrose medium (YPD) (Figure 5A). With the increase of ethanol content, the polarity of YPD decreased gradually, thus resulting in the volume shrinkage of the hydrogel (Figure 5B). Specifically, when the injection time increased from 0 to 40 min, the ethanol content increased from 0% to 66.67% and the solvent polarity of YPD decreased from 10.20 to 6.27 accordingly (Figure 5C). An electric circuit switched by the process of continuously injecting ethanol was fabricated. A real-time current-time curve was measured to reflect the dynamic changing process of the volume of the hydrogel (Figure 5D). At the beginning, the current values were stable; after 2.88 min of ethanol injection, the volume of the hydrogel decreased significantly and separation between the hydrogel and the electrodes occurred, resulting in the remarkable decrease of the current value. Notably, when the electric circuit was cut off at 2.88 min, the ethanol content was only 9.97% and the solvent polarity was 9.46.

Figure 4. Electric circuit switched by the water/MSO switch pair using the self-healed DNA/DEX-g-DOPA hydrogel as dynamic wires. A) Self-healing of the two pieces of rectangle-shaped hydrogels. One integrated hydrogel was cut into two pieces of rectangle-shaped hydrogels, and then the two pieces self-healed for 20 s to achieve the cooperative motions of the self-healed hydrogel. The hydrogel was able to be hung in the air. The two pieces of hydrogels were stained with SYBR Green I (green) and GelRed (red), respectively. B,C) Manual tensile test of the self-healed hydrogel. Two pieces of hydrogels stained with red and green were held together to form an integrated hydrogel. Manual force was then applied to the right side of the self-healed hydrogel. The initial length of the green hydrogel was 1 mm. When the manual force increased gradually, the length of the green hydrogel increased to 8 mm without fracture at the self-healing interface, further demonstrating the self-healing of the hydrogel. D) Cyclic voltage-current curve of the electric circuit that used the self-healed hydrogel as dynamic wires.
A dynamic electric circuit switched by a microbial metabolism process producing ethanol was created successfully. During the microbial metabolism process, the maximum production of ethanol was 10% in theory, and thus the change of the solvent polarity was 0.59 at most (Figure 5E). Therefore, our prepared hydrogel, with super-fast volume-responsiveness and high sensitivity upon the change of solvent polarity, was utilized to dynamically switch the electric circuit. Specifically, yeasts were first precultured in YPD for 2 h to enhance their metabolic activity, and then the hydrogel was immersed into the YPD culture medium. With the consumption of glucose, the yeasts produced ethanol, and the real-time current-time curve was measured. The current value was slightly unstable at the beginning, probably due to the perturbation of the fermentation process (Figure 5F). Along with the constant accumulation of ethanol, a significant reduction of the current value occurred, accompanied by separation between the hydrogel and the electrodes. Finally, the current value was relatively stable and low, which stemmed from the poor conductivity of the culture medium. We emphasize that this is the first time a dynamic electric circuit switched by a microbial metabolism process has been constructed without human intervention, showing great potential for the remote and programmable control of circuits.
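The reported mixture polarities can be rationalized with a simple volume-weighted average. The sketch below is our own illustration, not a calculation from the paper: the endpoint polarity of 10.2 for the aqueous medium is taken from the values quoted above, while the value of 4.3 for ethanol is an assumption chosen because it reproduces the reported numbers.

```python
# Back-of-the-envelope sketch: volume-weighted average polarity of a
# water(YPD)/ethanol mixture.  P_ETHANOL = 4.3 is an assumed endpoint.
P_WATER, P_ETHANOL = 10.2, 4.3

def mixture_polarity(ethanol_fraction: float) -> float:
    return (1 - ethanol_fraction) * P_WATER + ethanol_fraction * P_ETHANOL

print(round(mixture_polarity(0.10), 2))    # 9.61: the 10% fermentation cap
print(round(mixture_polarity(0.6667), 2))  # 6.27: reported after 40 min of injection
```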
Other applications, including regulating cell adhesion and controlled drug delivery, were demonstrated to illustrate the wide potential of our prepared hydrogel. The hydrogel at different concentrations (0.05, 0.1, 0.5, 1, and 5 mg mL⁻¹) showed high cell viability toward human cervical cancer cells (HeLa cells) and smooth muscle cells (SMCs), thus confirming the high biocompatibility of the hydrogel (Figures S13-S15, Supporting Information). When SMCs were cocultured with the hydrogel, the cells were able to adhere onto the surface of the nanofiber-assembled hydrogel (Figure S16A-C, Supporting Information).
With the prolongation of cell culture time, the number of SMCs adhered onto the surface of the hydrogel gradually increased, showing excellent cell affinity and adhesion (Figure S16D, Supporting Information). The hydrogel also served as a biomaterial for controlled drug release using DNase I as a trigger. With the increment of DNase I concentration, the rate of drug release was accelerated, indicating a controlled and enzyme-responsive drug release manner (Figure S17E, Supporting Information).

Figure 5. D) At the beginning, the current values were stable; after 2.88 min of ethanol injection, the volume of the hydrogel decreased significantly and separation between the hydrogel and the electrodes occurred, resulting in the remarkable decrease of the current value. Notably, when the electric circuit was cut off at 2.88 min, the ethanol content was 9.97% and the solvent polarity was only 9.46. E) Scheme for the process of yeast fermentation. During the process, the maximum production of ethanol was 10% in theory, and thus the minimum polarity of the system was only 9.61. F) Current-time curve of the electric circuit switched by a microbial metabolism process which produced ethanol. Along with the constant accumulation of ethanol, a significant reduction of the current value occurred, accompanied by separation between the hydrogel and the electrodes.
Conclusion
In summary, a super-soft and dynamic DNA/DEX-g-DOPA hydrogel with super-fast volume-responsiveness and high sensitivity upon solvent polarity was designed and prepared. Due to the super-soft mechanical strength of the hydrogel, the volume and shape of the hydrogel changed within a few seconds upon replacing solvents. The hydrogel responded sensitively to the change of solvent polarity: although the change of solvent polarity was only 0.4, the volume change of the hydrogel was visually distinguishable. This special property allowed the construction of a series of dynamic hydrogel-based electric circuits using solvents as switches. Remarkably, microorganisms were able to produce solvents through their metabolism, thus providing a dynamic trigger to drive the volume change of our prepared hydrogel. The electric circuit switched by the microbial metabolism process was achieved successfully. We envision that our hydrogel will be a potential candidate for the construction of smart dynamic systems in response to biological processes, and that the electric circuits will be highly promising in wide applications, such as smart medicine, soft robots, controlled electro-catalysis, and flexible electronic devices. [32-36]
Experimental Section

Synthesis of DEX-g-DOPA: DEX (0.2 g) was dissolved in 5 mL anhydrous DMSO first. CDI (0.2303 g) dissolved in 2 mL anhydrous DMSO was transferred into the DEX solution and stirred for 30 min at room temperature. DOPA (0.1797 g; mole ratio of glucose unit of DEX to DOPA was ≈1.3:1) dissolved in 1 mL anhydrous DMSO was then transferred into the CDI-activated DEX solution. The room-temperature reaction was carried out with magnetic stirring (450 rpm) under an N2 atmosphere for 12 h. After the reaction, the solution was transferred into a dialysis membrane (MWCO = 3500) and dialyzed for 24 h to remove unreacted reagents. The final products were freeze-dried and stored in a desiccator before use. 1H NMR analysis (400 MHz, D2O) was utilized to determine the structure of DEX-g-DOPA.

Preparation of DNA/DOPA Hydrogel: DNA was dissolved in deionized water with a concentration of 5 w/v% at higher temperature, and DEX-g-DOPA dissolved in deionized water with a concentration of 10 w/v% was added to the heated DNA solution. The volume ratio was 1:1. The hydrogel formed spontaneously in a couple of days.
Synthesis of Oxidized DEX: 3.3 g of sodium periodate was transferred into 50 mL of 10 w/v% DEX solution (Mw = 40 000 g mol⁻¹). The room-temperature reaction was stirred for 1.5 h under dark conditions, after which the solution was transferred into a dialysis membrane (MWCO = 12 000) and dialyzed for 3 d at 4 °C. The final products were freeze-dried and stored in a desiccator before use. 1H NMR analysis (400 MHz, D2O) was utilized to determine the structure of oxidized DEX.
Determination of Oxidation Degree of DEX: The oxidation degree of DEX was determined using a quantitative reaction with 2,4-dinitrophenylhydrazine (DNPH). A solution of 4 w/v% oxidized DEX (250 µL) was mixed with 0.05 m DNPH (500 µL; a 1:1 v/v mixed solution of acetonitrile and 2 m HCl was used to dissolve DNPH) and incubated for 20 min under stirring. Then, an orange precipitate formed, and the solution was extracted with 500 µL of ethyl acetate. The amount of unreacted DNPH could be calculated from the optical density at 350 nm using UV-vis spectra.
Fluorescence Microscopy Imaging: High resolution images of the hydrogel were obtained by fluorescence microscopy (Nikon). The hydrogel was stained with SYBR Green I or GelRed.
Rheology Test: The rheological properties were measured on an HR-2 rheometer (TA Instruments) equipped with a temperature controller. The test was performed in an 8 mm parallel-plate geometry using 100 µL of DNA/DOPA hydrogel. The temperature rheological test was carried out from 25 to 75 °C at a rate of 2 °C min⁻¹ at fixed frequency (1 Hz) and strain (1%). Time scanning was performed with a fixed strain (1%) and a fixed frequency (1 Hz) at 25 °C for 3 min. Strain scanning was conducted from 0.1% to 1000% with a fixed frequency (1 Hz) at 25 °C.

SEM: All hydrogels were placed onto the sample stage using conductive adhesive. Then, the sample stage was placed in liquid nitrogen to achieve quick freezing. Finally, the samples were freeze-dried and metal-coated with Au for SEM (Hitachi S-4800 FESEM).
XPS: XPS was performed on a VG ESCALAB 220i-XL spectrometer equipped with a hemispherical analyzer. Survey (wide) and high-resolution (narrow) scans were recorded. The base pressure in the analysis chamber was less than 8.0 × 10−9 mbar. All data were processed using Avantage software, and the energy calibration was referenced to the C 1s peak at 285.0 eV.
Volume Expansion Experiments: The hydrogel was immersed into solvents of different polarities. The volume of the hydrogel immersed in each solvent was measured and denoted V2, and the volume of the hydrogel in air was measured and denoted V1. The volume expansion ratio of the hydrogel in a given solvent was calculated using the following formula:
Volume expansion ratio = V2/V1 (1)
Electrical Property Measurement: To test the conductive properties of the hydrogel, a simple circuit was established, consisting of a bulb with unilateral conduction, the DNA/DOPA hydrogel, and a variable-voltage power supply. The voltage was swept from −5 to 5 V and then back from 5 to −5 V, and the current-voltage curves were measured using a Keithley 6430. To test the ability of the hydrogel to act as an electric circuit switch, the hydrogel was formed in a mold using petroleum ether as the solvent; in this contracted, smaller-volume state, the electric circuit was "OFF." When petroleum ether was fully replaced by water, the hydrogel expanded and the circuit switched "ON." The current-voltage curves were measured with the voltage swept from −1 to 1 V at a scan rate of 10 mV s−1. To demonstrate the robustness of the hydrogel-based electric circuit, the current-time curve was also measured.
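A minimal sketch of formula (1); the solvents and volumes below are illustrative placeholders, not measured values.

```python
# Volume expansion ratio = V2 / V1 (Eq. 1), for a few hypothetical solvents.
def expansion_ratio(v_in_solvent: float, v_in_air: float) -> float:
    return v_in_solvent / v_in_air

v1 = 1.0  # hydrogel volume in air (arbitrary units)
for solvent, v2 in {"water": 2.4, "ethanol": 1.1, "petroleum ether": 0.6}.items():
    print(f"{solvent:16s} V2/V1 = {expansion_ratio(v2, v1):.2f}")
```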
Hydrogel Circuits Switched by Yeast Fermentation: To mimic the process of ethanol production, a microinjection pump was used to inject ethanol into yeast extract peptone dextrose (YPD) medium at an injection rate of 5 µL min−1. The current-time curve was measured at 0.5 V to monitor the yeast fermentation process in real time. Biocircuits switched by yeast fermentation were then fabricated. Specifically, yeast cells were dispersed in YPD and cultured for 2 h, after which the hydrogel was immersed into the obtained culture medium for the construction of circuits, and the current-time curve was recorded.
In Vitro Cell Culture: SMCs were used as the target cells for in vitro cell culture on the DNA/DOPA hydrogel. Before cell seeding, the hydrogel was purified in PBS and then sterilized with 75% ethanol for 24 h. The obtained hydrogel was placed in a 96-well plate with DMEM and swelled to the equilibrium state for 2 days at 37 °C. Cells were seeded on the hydrogels and left undisturbed in an incubator for 3 h to allow cell attachment. After 1, 3, and 5 days of culture, the cells grown on the hydrogel were stained with calcein AM and observed by fluorescence microscopy. The viability of cells grown on the hydrogels was analyzed by the thiazolyl blue tetrazolium bromide (MTT) assay. After 1, 3, and 5 days of culture, MTT solution was added to each well, and the plate was incubated at 37 °C for 4 h. After removing the medium, the resulting purple formazan was dissolved in DMSO, and the absorbance at 490 nm was measured.
In Vitro Drug Release: The DNA/DOPA hydrogel was evaluated as a controlled drug-release system using DNase I as the trigger. A unique advantage is that both the building blocks (DNA) and the fibrous hydrogel itself (physical enclosure and the high specific surface area of the DNA nanofibers) serve as drug reservoirs. The hydrogel was loaded with doxorubicin, and DNase I at 50 and 500 U mL−1 was used.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 8,222.2 | 2020-05-25T00:00:00.000 | ["Engineering"] |
Improved prediction of solvation free energies by machine-learning polarizable continuum solvation model
Theoretical estimation of solvation free energy by continuum solvation models, a standard approach in computational chemistry, is applied across a broad range of scientific disciplines. Nevertheless, the currently accepted solvation models are either inaccurate in reproducing experimentally determined solvation free energies or require a number of macroscopic observables that are not always readily available. In the present study, we develop and introduce the Machine-Learning Polarizable Continuum solvation Model (ML-PCM) to substantially improve the predictability of solvation free energies. The performance and reliability of the developed models are validated through a rigorous and demanding validation procedure. The ML-PCM models developed in the present study improve the accuracy of widely accepted continuum solvation models by almost one order of magnitude at almost no additional computational cost. Freely available software is provided for a straightforward implementation of the new approach.
Free energy of solvation is one of the key thermophysical properties in studying thermochemistry in solution, where the majority of real-life chemistry happens. In theoretical studies of solution chemistry, estimation of free energies allows evaluation of reaction rates and equilibrium constants of physical or chemical reactions of interest. Nevertheless, direct evaluation of free energies in solution can be quite challenging, since it sometimes requires appropriate sampling of phase space [1][2][3] and appropriate treatment of the non-covalent interactions between solvent and solute, which can have a remarkable impact on the electronic structures of both and consequently on microscopic and macroscopic observables 4,5.
Theoretical approaches for evaluating the physical chemistry behind solvation free energy can be broadly divided into two main categories, namely explicit-solvent and implicit-solvent approaches. In explicit-solvent approaches, solvent molecules are treated explicitly, and the free energy is typically evaluated by analyzing the trajectory of the time evolution of phase space obtained via molecular dynamics or Monte Carlo simulations. To that end, a number of efficient free-energy estimators have been developed over the past decades, such as thermodynamic integration, free-energy perturbation, and histogram analysis methods 6.
Despite the obvious advantages of explicit-solvent methods, such as retaining the physically proper picture of discrete solvent molecules, they suffer from a number of limitations when applied to free-energy estimation. For example, when applying methods that evaluate the free energy through alchemical transformations (e.g., thermodynamic integration or free-energy perturbation), appropriately defining intermediate states and pathways between the endpoints can be quite tricky 7. Also, the necessity of employing appropriate force fields, which for many solute-solvent mixtures must first be developed or reparametrized, and of running the simulations and trajectory analyses makes these approaches laborious and time-consuming.
To overcome these limitations, the implicit-solvent approach has been developed and is widely applied as a standard method for studying solvent effects in computational chemistry. In implicit-solvent approaches, the solvent molecules are treated implicitly as a continuous medium and the solute is placed in a cavity of this implicitly defined solvent. The solute-solvent interactions are then evaluated by considering the solvent polarization due to the solute charge distribution and its resulting potential field acting on the solute, known as the reaction field 5. For a moderate level of theory and medium-sized molecules, implicit-solvent approaches can yield a reasonable estimate of the solvation free energy in a few seconds to a few minutes on a normal desktop PC, whereas explicit-solvent approaches might take hours to days.
The most widely applied implicit-solvent approaches are those based on the so-called polarizable continuum model (PCM) proposed by Tomasi and co-workers 8. In polarizable continuum models, the solvation free energy is constructed by summing the contributions of electrostatic interactions including electronic, nuclear, and polarization interactions (ΔG_ENP), changes in free energy due to solvent cavity formation, dispersion energy, and local solvent structure changes (G_CDS), and corrections for differences in molar densities in the two phases compared with the standard state (ΔG_cons). The electrostatic contributions are evaluated by iteratively solving

$$\Delta G_{\mathrm{ENP}} = \left\langle \Psi^{(1)} \middle| H + \tfrac{1}{2}V \middle| \Psi^{(1)} \right\rangle - \left\langle \Psi^{(0)} \middle| H \middle| \Psi^{(0)} \right\rangle \qquad (1)$$

which is known as the self-consistent reaction-field (SCRF) calculation 5. Here, superscripts (0) and (1) refer to the gas and solution phases, respectively, and V is the potential energy operator resulting from the reaction field. Various constructions of the potential energy operator as well as of G_CDS have resulted in different continuum solvation models. The parallel existence of several continuum solvation models is a good indicator that each has its own strengths and weaknesses, and choosing a single, optimal model is not trivial. A detailed overview is beyond the scope of this article; a 2005 review of implicit solvation models 9 covered 95 pages and cited 936 references. In the present study, we only consider the most widely used PCM-based models.
One of the simplest and yet most successful continuum solvation models is CPCM, which implements the conductor-like screening solvation boundary condition within the PCM framework. In CPCM, the polarization charge densities are corrected by the scaling factor 10

$$f(\varepsilon) = \frac{\varepsilon - 1}{\varepsilon + x} \qquad (2)$$

where ε is the solvent dielectric constant and x is the scaling parameter. One main advantage of CPCM is its much simpler boundary conditions. More importantly, unlike more advanced PCM-based models, which require the normal component of the solute electric field as input, CPCM only requires the solute electrostatic potential; for this reason it is much less affected by outlying charge errors (OCE) 11,12. A more versatile model exploiting the conductor-like screening solvation boundary condition is COSMO-RS, developed by Klamt and co-workers 13,14, which, although initially proposed in 1995, is still one of the most accurate available continuum solvation models. A more sophisticated treatment of the boundary condition is implemented in the integral equation formalism of PCM (IEF-PCM), taking into account apparent-surface-charge isotropic 15 or anisotropic 16 dielectric continuum solvation. Another extensively used family of continuum solvation models is the SMx family of methods, which specifically focuses on more accurate estimation of the solvation free energy 4,5.
We have already discussed the main advantages of continuum solvation models, such as their efficiency in terms of computational cost. Nevertheless, it should be noted that this efficiency comes at the cost of considerable assumptions and simplifications of the physics of the problem, such as overlooking the conformational entropy of solvent and solute, which can contribute significantly to the total free energy 17, neglecting site-specific solute-solvent interactions, and decoupling the polar and nonpolar components of the free energy and considering them independent, linear, and additive 18,19. The inaccuracies resulting from such simplifications are commonly compensated for by incorporating additional macroscopic observables as well as adjustable parameters into the solvation models. In the CPCM model, for example, this is achieved by an ad hoc modification of the atomic radii via a number of adjustable parameters and empirical descriptors, such as the number of bonded hydrogens and the number of bonded active atoms 10. In the COSMO-RS model, it is achieved by ad hoc modification of the interaction energies and effective contact area via adjustable parameters 14.
In the SMx family of methods, by contrast, a more accurate estimation of the solvation free energy is sought through an ad hoc modification of the G_CDS term in (1). To that end, additional macroscopic observables are employed in the model 4, including the refractive index, Abraham's hydrogen-bond acidity and basicity of the solute, the macroscopic surface tension of the solvent at the air/solvent interface at 298.15 K, the square of the fraction of solvent atoms that are aromatic carbon atoms, and the square of the fraction of solvent atoms that are F, Cl, or Br. Although these macroscopic observables indirectly introduce more physics into the model and hence offer the chance to make predictions of solvation free energies more universal, except for the last two they are not readily available for many new compounds, and their experimental or theoretical evaluation is not straightforward.
In a number of recent studies, Machine Learning (ML) has been exploited to map the highly complicated relationship between solvation free energy and potentially relevant macroscopic or microscopic observables.
Wang et al. employed a pool of 30 molecular representations, all of which are either per-atom reaction-field energies or partial charges, as the input of a learning-to-rank (LTR) machine learning algorithm, resulting in a root mean squared error (RMSE) of 1.05 kcal/mol 18. Another recent example is the kernel-based machine learning model of Rauer and Bereau, developed to predict the free energy of solvating small organic molecules containing C, H, O, and N atoms in pure water via implicit-solvent molecular dynamics simulations 22. For a 39-parameter model they reported a MUE of 1.06 kcal/mol.
The most recent example of employing machine learning for prediction of solvation free energy is the model developed by Vermeire and Green 23. Their model is based on transferring knowledge learned from one million QM-evaluated free energies and fine-tuning it to accurately reproduce experimentally determined solvation free energies. They reported a MUE of 0.21 kcal/mol, currently the most accurate result reported for the prediction of solvation free energy.
In the present study, we propose a machine-learning-based PCM model, which, similar to other conventional continuum solvation models, is based on considering the solvent as a continuous medium and calculating the solvation energy components of a solute placed in the cavity of this medium by the SCRF procedure. Nevertheless, unlike the conventional PCM models which propose simple and ad hoc expressions to integrate and modify those calculated energy components, we employ machine learning for this purpose and show its efficiency in substantial improvements of the predictability of solvation free energy.
Results and discussion
After setting up and training the neural networks and screening the appropriately trained models via the post-validation strategy discussed in the Methods section, the best results, with MUEs of 0.52526 and 0.40011 kcal/mol, were obtained for computations at the B3LYP/6-31G* and DSD-PBEP86-D3/def2-TZVP levels of theory, respectively. Both models employed SCRF energy components and solvation free energies computed via the CPCM_x=0.5 solvation model, with 100 and 130 hidden-layer neurons, respectively; they are hereafter denoted ML-PCM(B3LYP) and ML-PCM(DSD-PBEP86). Details of the selected input variables and implementation instructions for all selected models are provided in Supplementary Software 1. These results show a substantial improvement over the original CPCM_x=0.5 continuum solvation model, which for the same dataset yielded MUEs of 3.1611 and 2.9130 kcal/mol, respectively.
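For readers who want a concrete picture of the model class, the sketch below sets up a single-hidden-layer feed-forward regressor analogous to ML-PCM(B3LYP). It is only a stand-in under stated assumptions: the original models were trained in Matlab with different optimizers, and `X_train`/`y_train` are placeholders for the SCRF-component features and reference free energies.

```python
# Illustrative stand-in for the ML-PCM regressor (not the authors' exact setup).
from sklearn.neural_network import MLPRegressor

model = MLPRegressor(
    hidden_layer_sizes=(100,),  # cf. the 100-130 hidden-layer neurons above
    activation="tanh",          # tangent-sigmoid analogue
    solver="lbfgs",
    max_iter=5000,
    random_state=0,
)
# X_train: (n_samples, n_features) SCRF energy components + CPCM dG_solv + eps
# y_train: experimental solvation free energies (kcal/mol)
# model.fit(X_train, y_train); dG_pred = model.predict(X_test)
```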
In comparison to the SMD model, which for the same dataset and solvation free energy computations at the B3LYP/6-31G* and DSD-PBEP86-D3/def2-TZVP levels yields MUEs of 0.78623 and 0.85396 kcal/mol, respectively, the obtained results are still more accurate, without requiring the additional solvent parameters needed in the SMD approach. In comparison to the MUE of 0.4214 kcal/mol reported by Klamt and Diedenhofen 24 for one of the recent versions of the COSMO-RS model on the same dataset, ML-PCM(DSD-PBEP86) provides a slightly higher accuracy. Also, in terms of maximum unsigned error, the two ML-PCM models, which yield maximum unsigned errors of 6.2252 and 3.8799 kcal/mol, respectively, are more accurate than COSMO-RS, for which this value is 6.8701 kcal/mol. For the other continuum solvation models studied on the same dataset, the maximum unsigned errors of SMD, PCM, CPCM, and CPCM_x=0.5 were 11.311, 12.75, 12.2, and 12.6 kcal/mol at the B3LYP/6-31G* level and 11.311, 12.83, 12.31, and 12.68 kcal/mol at the DSD-PBEP86-D3/def2-TZVP level, all substantially higher than those achievable by the ML-based models.
The higher accuracy of the solvation free energies predicted by the COSMO-RS model compared to the other conventional solvation models also motivated us to study neural networks that take, in addition to the SCRF energy components computed via the PCM or CPCM models, the solvation free energies predicted via COSMO-RS as inputs. For these variants, the best results, with MUEs of 0.26057 and 0.24387 kcal/mol and maximum unsigned errors of 7.1349 and 2.9154 kcal/mol, were obtained for energy components calculated via the CPCM_x=0.5 and CPCM solvation models, with 130 and 120 hidden-layer neurons, and computations at the B3LYP/6-31G* and DSD-PBEP86-D3/def2-TZVP levels of theory, respectively. These two models, hereafter denoted ML-PCM/COSMO-RS(B3LYP) and ML-PCM/COSMO-RS(DSD-PBEP86), show a remarkable improvement in predicted solvation free energy compared to the original implementation of COSMO-RS reported by Klamt and Diedenhofen 24. This implies considerable flexibility of the proposed approach in improving the accuracy of various solvation models. Nevertheless, it should be noted that the COSMO-RS solvation free energies used as additional model inputs in the present study were evaluated using the 2015 version of that method. Using free energies evaluated by more recent versions of COSMO-RS, and also the energy terms computed with this method, will probably result in even more accurate predictions by the presented ML-PCM.
The number of hidden-layer neurons is the most important parameter in developing ANN models, so we studied its impact on the performance of the developed machine learning models. As can be seen in Fig. 1, increasing the number of hidden-layer neurons generally improves the predictability of the solvation free energy. This is due to the larger number of adjustable parameters of the resulting models and their consequently higher flexibility in mapping complicated functionalities. At the same time, however, this may reduce the extrapolation capability of the model, i.e., it may reduce performance when applied to samples remarkably different from those already examined in developing the models.
To investigate the impact of the number of hidden-layer neurons on the extrapolation performance of the models developed in the present study, we re-examined the trained models for out-of-sample predictions, following the approach proposed by Vermeire and Green 23. To that end, we compared models whose training dataset included a group of samples with either a specific element or a specific solvent against the same models trained on a dataset excluding that group of samples. We studied out-of-sample prediction performance for the 20 solvents and 6 solute elements most frequently encountered in our dataset. The results are reported in Tables 1 and 2. According to the results, the developed models show excellent extrapolation capability for out-of-sample predictions under solvent splits, while for element splits the extrapolation is slightly less accurate. Furthermore, except for the element Br, the out-of-sample predictions tested for ML-PCM/COSMO-RS(B3LYP) are within chemical accuracy.
A comparison of predicted and experimentally determined free energies is depicted in Fig. 2. As can be seen, the linear correlation between the predicted and reference data is more evident for the newly derived models, compared to the conventionally accepted ones.
The overall results obtained via the newly developed ML models are compared with various other models proposed in the literature in Table 3. Although a more informative comparison would be possible if different models were compared on the same dataset and, where applicable, at the same level of theory, the larger size of the benchmark dataset used in the present study compared to most other works supports the superior accuracy of the newly proposed method over the majority of widely accepted ones. In comparison to the model developed by Vermeire and Green 23, which yields a MUE of 0.21 kcal/mol, our results are slightly less accurate, but they are obtained with far fewer neurons and model parameters.
Furthermore, it should be noted that the inaccuracies inherent in the reference data of solvation free energies (aleatoric uncertainty) can also impact both the training efficiency and inferences about model performance, as pointed out by Vermeire and Green 23.
To summarize, we have demonstrated substantial improvements of continuum solvation models in evaluating solvation free energy with the help of machine learning. To that end, we proposed a versatile machine-learning-assisted integration of the continuum solvation energy components calculated in SCRF computations, which can be used to improve the solvation free energies predicted by various solvation models. This allowed us to achieve accurate predictions of solvation free energy with a MUE as low as 0.2439 kcal/mol for a large dataset of 2493 binary mixtures of 435 neutral solutes and 91 solvents from diverse chemical families.
Methods
Dataset. To benchmark our results, we used the solvation free energy data of 2493 binary mixtures of 435 neutral solutes and 91 solvents from diverse chemical families available in the Minnesota solvation database 4 . The full list of the studied samples can be found as Supplementary Data 1.
Computational details. The performance of the models is reported as the mean unsigned error (MUE) and the root mean squared error (RMSE), defined as

$$\mathrm{MUE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i^{\mathrm{exp}} - y_i^{\mathrm{pred}}\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i^{\mathrm{exp}} - y_i^{\mathrm{pred}}\right)^2},$$

where $y_i^{\mathrm{exp}}$ and $y_i^{\mathrm{pred}}$ are the experimentally determined and predicted solvation free energies, respectively.
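These two metrics transcribe directly into code; a minimal example with made-up values:

```python
import numpy as np

y_exp = np.array([-5.1, -3.2, -7.8])   # illustrative reference values, kcal/mol
y_pred = np.array([-4.9, -3.6, -7.5])  # illustrative model predictions

mue = float(np.mean(np.abs(y_exp - y_pred)))           # mean unsigned error
rmse = float(np.sqrt(np.mean((y_exp - y_pred) ** 2)))  # root mean squared error
print(f"MUE = {mue:.3f} kcal/mol, RMSE = {rmse:.3f} kcal/mol")
```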
Prior to the SCRF computations, all solute geometries were optimized in vacuo at the B3LYP/6-31G* level of theory. Using the optimized structures, the SCRF principal energy components listed in Table 4 were computed for each compound at the B3LYP/6-31G* and DSD-PBEP86-D3/def2-TZVP levels of theory. The latter, a double-hybrid method, has been shown to yield more precise charge distributions and energy estimates than lower-rung DFT or MP2 methods, at a cost comparable to that of an MP2 calculation 25.
The SCRF energy components listed in Table 4 were computed for two widely accepted polarizable continuum models, namely IEF-PCM and CPCM, as implemented in Gaussian 16 (ref. 26). For CPCM, the default value of zero is used for the scaling factor x in relationship (2). However, a value of 0.5 has been shown to be a more reasonable choice for this scaling factor 11,27. Therefore, in addition to the default implementation of CPCM in Gaussian 16, we also employed a CPCM model with a scaling factor of x = 0.5, denoted CPCM_x=0.5. For that, we replaced the original dielectric constant of the solvent with an effective dielectric constant

$$\tilde{\varepsilon}(\varepsilon, x) = \frac{\varepsilon + x}{1 + x},$$

as suggested by Klamt et al. 11; this choice makes the default (x = 0) scaling factor of relationship (2) evaluate to the desired x = 0.5 value. For comparison purposes, we also calculated the solvation free energies via the SMD approach. We employed feed-forward neural networks to map the relationship between the solvation free energy and the calculated SCRF energy components, which, together with the solvation free energy estimated by the applied continuum solvation model and the dielectric constant of the solvent, comprised our model inputs.
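The equivalence underlying this substitution can be checked numerically; in the sketch below the water dielectric constant (78.36) is an illustrative value.

```python
# Feeding the default CPCM (x = 0) the effective dielectric constant
# reproduces the x = 0.5 scaling factor of Eq. (2).
def f_cpcm(eps: float, x: float) -> float:
    """CPCM charge-scaling factor, Eq. (2)."""
    return (eps - 1.0) / (eps + x)

def eps_effective(eps: float, x: float) -> float:
    """Effective dielectric constant: f_cpcm(eps_eff, 0) == f_cpcm(eps, x)."""
    return (eps + x) / (1.0 + x)

eps, x = 78.36, 0.5  # water, illustrative
assert abs(f_cpcm(eps_effective(eps, x), 0.0) - f_cpcm(eps, x)) < 1e-10
```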
The obtained pool of model inputs was further screened using the Minimum Redundancy and Maximum Relevance (MRMR) algorithm 28, resulting in various 8-16-membered combinations of those variables. MRMR is a highly efficient algorithm for selecting the most effective sets of variables for developing robust machine-learning-based models 29. For each number of selected variables, 25 different settings of the MRMR algorithm were applied, distinguished by the employed quantization level, level of dependency, forward or backward variable selection, and whether pseudo-samples based on Bayesian statistics were considered 28. In many cases, this resulted in diversely selected sets of variables, even for the same level of theory and continuum solvation model.
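For illustration, a greedy mutual-information-based MRMR selector can be sketched as follows; this is a minimal MID-criterion variant under our own assumptions, not the exact implementation of ref. 28.

```python
# Minimal greedy mRMR sketch: maximize relevance I(x_j; y) minus mean
# redundancy with already-selected features (MID criterion).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X: np.ndarray, y: np.ndarray, k: int) -> list:
    relevance = mutual_info_regression(X, y)        # I(feature; target)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_regression(X[:, [s]], X[:, j])[0]
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```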
In the next step, various configurations of neural network models were set up and their reliability was examined with a demanding procedure based on the guidelines presented in a previous study 30. Accordingly, we assigned large parts of the dataset to testing (25%) and validation (15%), and only 60% of the dataset compounds were used for training the models.
To improve the transferability of the developed models for out-of-sample predictions, the validation and test sets were selected so as to include solvents or solute elements not present in the training set.
We employed Levenberg-Marquardt backpropagation and gradient-descent backpropagation training algorithms, and hidden-layer transfer functions of the log-sigmoid and tangent-sigmoid types 31. We employed only neural networks with a single hidden layer. For each neural network configuration, training was carried out for 60 randomly selected training, validation, and test sets, and for each one 40 different initializations of the weight and bias constants of the neural networks were made. Above all, to avoid misleading results caused by a favorable or unfavorable division of the dataset into training, validation, and test sets, the post-validation strategy proposed in a previous study 30 was carried out. Accordingly, during the initial training of the neural networks, the final optimized weights and bias constants were recorded for the models that yielded mean absolute percentage errors lower than 22%. These recorded constants were used as the initial guess to train, validate, and test the same neural network configurations under 100 different randomly selected training, validation, and test sets. Models whose test- and training-set errors had the same means and variances, as evaluated by the two-sample t-test at the 5% significance level, in at least 80 out of 100 iterations were considered reliably trained; for these, the average of the ANN-predicted results over all repeats was reported as the performance of that model. Setting up and running the neural network models was implemented in Matlab. A freely available C++ code for practical use of our proposed ML-PCM models, with detailed user instructions, is provided in Supplementary Software 1. All computations were carried out on the High Performance Computing clusters of Christian-Albrechts-University of Kiel.
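A minimal sketch of the acceptance rule just described, assuming the per-split training and test error arrays have already been collected (here only equality of means is tested, via the two-sample t-test):

```python
# A model is kept only if, in >= 80 of 100 random re-splits, its training
# and test errors are statistically indistinguishable at the 5% level.
from scipy.stats import ttest_ind

def passes_post_validation(train_errs, test_errs, alpha=0.05, required=80):
    ok = sum(ttest_ind(tr, te).pvalue > alpha
             for tr, te in zip(train_errs, test_errs))
    return ok >= required
```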
Data availability
All data produced in this study are available and can be provided by contacting the corresponding author.
Code availability
The source file of the C++ code developed for implementing the proposed method, with detailed usage instructions, is available in Supplementary Software 1 or can be provided by contacting the corresponding author.
Received: 11 August 2020; Accepted: 12 May 2021.
Table 4: The components of the continuum solvation model. | 5,149.2 | 2021-02-17T00:00:00.000 | ["Chemistry", "Computer Science"] |
Mitigating Financial Burden of Tuberculosis through Active Case Finding Targeting Household and Neighbourhood Contacts in Cambodia
Background Despite free TB services being available in public health facilities, TB patients often face a severe financial burden due to TB. WHO has set a new global target that no TB-affected families experience catastrophic costs due to TB. To monitor progress and strategize the optimal approach to achieving the target, there is a great need to assess baseline cost data, explore potential proxy indicators for catastrophic costs, and understand which interventions mitigate the financial burden. In Cambodia, nationwide active case finding (ACF) targeting household and neighbourhood contacts was implemented alongside routine passive case finding (PCF). We analyzed household cost data from ACF and PCF to determine the financial benefit of ACF, update the baseline cost data, and explore whether any dissaving patterns can serve as a proxy for catastrophic costs in Cambodia. Methods In this cross-sectional comparative study, structured interviews were carried out with 108 ACF patients and 100 PCF patients. Direct and indirect costs, costs before and during treatment, costs as a percentage of annual household income, and dissaving patterns were compared between the two groups. Results The median total costs were lower by 17% in ACF than in PCF ($240.7 [IQR 65.5-594.6] vs $290.5 [IQR 113.6-813.4], p = 0.104). The median costs before treatment were significantly lower in ACF than in PCF ($5.1 [IQR 1.5-25.8] vs $22.4 [IQR 4.4-70.8], p<0.001). Indirect costs constituted the largest portion of total costs (72.3% in ACF and 61.5% in PCF). Total costs were equivalent to 11.3% and 18.6% of annual household income in ACF and PCF, respectively. ACF patients were less likely to dissave to afford TB-related expenses. Costs as a percentage of annual household income were significantly associated with an occurrence of selling property (p = 0.02 for ACF, p = 0.005 for PCF). Conclusions TB-affected households face severe financial hardship in Cambodia. ACF has great potential to mitigate the costs incurred, particularly before treatment. Social protection schemes that can replace lost income are critically needed to compensate for the most devastating costs of TB. An occurrence of selling household property can be a useful proxy for catastrophic costs in Cambodia.
Introduction
Tuberculosis (TB) is predominantly a disease of the poor [1]. For the past decades, the global TB community has strived to address the needs of poor and marginalized populations by promoting pro-poor strategies in TB control programmes [1]. The international standard has been established that basic TB diagnostic and treatment services are provided free of charge [2,3]. Nevertheless, TB patients often face a severe financial burden by incurring a considerable amount of out-of-pocket (OOP) expenses before and during treatment [3,4]. They are often trapped in a vicious cycle of repeated visits at the same healthcare level [5] or complex care-seeking pathways at multiple healthcare providers, including private facilities and traditional healers unlinked to the national TB programme (NTP) [6], escalating their OOP expenditures. Free TB services help reduce direct medical costs borne by the patient; however, in reality there are other hidden costs, such as direct non-medical costs (i.e. costs for food, transportation, and accommodation) and indirect costs (i.e. lost income and reduced productivity) [7,8]. A recent systematic review that involved 49 studies from 32 low- and middle-income countries (mostly African and Asian countries with some Latin American countries) revealed that indirect costs accounted for 60% of the total cost faced by patients across the 25 surveys that provided disaggregated data, constituting the largest financial risk for patients [9]. The total direct and indirect cost can be significant, being equivalent to 39% of reported household income [9]. The financial barriers to accessing TB services, often coupled with geographical and health system barriers, contribute to delayed diagnosis, leading to more advanced disease and continued transmission, resulting in poor health outcomes and further aggravating poverty for the patient and affected household [7][8][9].
The WHO End TB Strategy highlighted the need for accelerated progress toward universal access and social protection [10]. The Strategy aims to achieve that no TB-affected families experience catastrophic costs due to TB by 2020 [11]. To monitor the progress toward this target, WHO has been exploring a definition of TB-specific "catastrophic costs" that takes into account the hidden costs [12]. This is in contrast to the indicator of "catastrophic health care expenditure," which WHO defined as direct health expenditures (not including indirect costs) of >40% of annual discretionary income [3,12]. The two options being considered are (1) the percentage of TB-affected households facing a total cost that is above a certain percentage of annual household income, and (2) the percentage of TB-affected households experiencing "dissaving" (such as taking a loan or selling assets) to cope with TB-related expenses, as a proxy for catastrophic costs [12]. Although several studies have documented direct and indirect patient costs as a percentage of household income [9,13], only three studies published recent data after 2010 [13][14][15]. A study conducted in Peru suggested a threshold of total expenses ≥20% of annual household income, as it was associated with poor clinical TB outcomes [14]. Since the effort to explore TB-specific "catastrophic costs" is relatively new, few studies have provided a comprehensive set of data that allows examining changes in the proportion of patients facing catastrophic costs at different thresholds using empirical data. For the second option to be chosen, the correlation between coping strategies and high total cost relative to income needs to be assessed [9]. So far, only one study has examined this association. A significant positive association was found between the occurrence of dissaving and total costs incurred in Tanzania and India [16]. In Bangladesh, an increase in dissaving of US$10 was significantly associated with an increase in total costs of US$7 among low-income patients [16]. More evidence needs to be accumulated from different countries and contexts.
To mitigate the financial hardship of TB patients and overcome access barriers, various interventions have been implemented in many parts of the world [3,4,17]. One approach is to provide direct or indirect economic support for patients or affected households through the provision of, for example, nutrition packages, food packages, transport allowances/vouchers/reimbursement, occupational training, and income-generating funds [4]. Active case finding (ACF) for TB, if implemented deliberately with a strategic selection of target population and diagnostic algorithms, has the potential to improve access and detect prevalent cases earlier [18][19][20]. Early case finding brought about by ACF may further help prevent unnecessary OOP expenditure and income losses, and thereby reduce associated costs for patients and affected households. However, the quantitative evidence demonstrating these benefits is limited, with no studies so far comparing patient costs between actively and passively detected patients.
Cambodia is a low-income country and one of the 22 countries with a high burden of TB [21]. The Cambodian National Centre for Tuberculosis and Leprosy Control (CENAT) has progressively intensified case-finding activities for TB to ensure equitable access to quality TB services [22]. For the last several years, CENAT has adopted many systematic case-finding approaches, including ACF, to bring TB services closer to hard-to-reach populations [23,24]. Since 2005, CENAT has conducted ACF targeting household and neighbourhood contacts in poor communities alongside routine passive case finding (PCF), a symptom-driven, facility-based case-finding approach. The results of the national TB prevalence surveys in 2002 and 2011 showed a slow decline in the smear-positive TB prevalence rate for asymptomatic cases and highlighted the limitation of the DOTS strategy, which has focused on passive detection by smear microscopy among symptomatic individuals [25]. After 2012, CENAT upgraded the ACF strategy by introducing Xpert MTB/RIF (Cepheid, Sunnyvale, CA, USA) to enable better diagnostic capacity, especially for asymptomatic and sputum smear-negative patients, with funding from TB REACH. A previous study showed that Cambodia's ACF among contacts tended to find more patients from older age groups and more patients who were smear-negative or had lower smear grades, as compared to PCF, an indication of early case finding [26]. Furthermore, the ACF increased case detection beyond what is reported in PCF [27], and was found to be highly cost-effective from a provider perspective [28].
To date, only one study has surveyed TB-affected household costs in Cambodia [29]. The survey took place in 2008/2009 and showed that the average total household cost was US$476.8 per TB episode, ranging from US$395 to US$1900 depending on the modality of care [29]. However, the study provided neither costs as a percentage of household income nor information about dissaving. Furthermore, these cost data need to be updated in light of the WHO End TB Strategy to provide a baseline indicator of progress towards eliminating financial hardship due to TB.
Against this background, we aim to examine whether, or to what extent, Cambodia's ACF among household and neighbourhood contacts reduces the financial burden on TB-affected households by comparing costs due to TB between actively and passively detected patients. The study further aims to examine the association of catastrophic costs with different dissaving patterns using different thresholds, as well as to provide a baseline indicator to monitor and evaluate progress towards eliminating financial hardship due to TB in the context of Cambodia.
Programmatic information
The intervention was conducted in socio-economically disadvantaged and underserved areas with a relatively high burden of TB. In the selected health centres, community volunteers and health workers conducted house-to-house visits to all smear-positive TB patients who had been registered for treatment during the preceding two years. All of their household contacts, regardless of TB symptoms, and symptomatic neighbourhood contacts with cough, fever, weight loss, and/or night sweats of more than two weeks were invited to a prescheduled ACF session on a specific date at their nearest health centre. Neighbourhood contacts were included in screening efforts as they are likely exposed to infectious index cases through the close community interaction typical of rural areas. Two weeks before the ACF session dates, the CENAT-ACF team visited intervention sites to train existing health facility staff and selected community volunteers on the project initiative, including screening, diagnosis, and care procedures. During the two-week preparation period, the trained staff conducted house-to-house visits for pre-screening and inviting eligible participants to ACF. On the day of the ACF session, all participants were re-screened for TB symptoms and a history of contact at the ACF sites by clinicians of the CENAT team to verify their eligibility. They then underwent chest X-ray (CXR) examinations. CXR films were developed immediately and evaluated by a radiologist of the team, who classified them as either normal or abnormal. Abnormal CXR findings were further classified as active TB, suspected TB, healed TB, or other abnormalities to facilitate clinical diagnosis. Individuals with abnormal CXR findings and/or TB symptoms were asked to provide a sputum specimen for Xpert MTB/RIF testing. Those with MTB-positive results were reported as bacteriologically confirmed TB. Diagnosis of bacteriologically negative TB and extra-pulmonary TB was made onsite by the clinicians of the CENAT-ACF team based on all available evidence, in principle on the same day as the ACF session. Treatment of the detected patients was managed by routine health services.
Study design and sampling
This is a cross-sectional comparative study involving a questionnaire survey that explores costs associated with TB diagnosis and treatment among actively and passively detected TB patients. The intervention group consisted of patients diagnosed through ACF sessions organized at health centres. The control group was drawn from patients who were diagnosed and registered in the same health centres within four months prior to the respective ACF sessions. All participants were new pulmonary TB patients with a treatment outcome of either "completed" or "cured." For simplicity of interpretation, patients with unfavourable treatment outcomes, retreatment patients, and extra-pulmonary patients were not included in the study.
We employed a combination of purposive sampling (for district and health centre selection) and systematic sampling (for patient selection). Of the 30 operational districts (ODs) with intervention in 2012 and 2013, four ODs (Oudong, Angkor Chey, Stoung and Sothnikum) were selected based on implementation timing (ODs with ACF patients who had most recently completed treatment at the time of data collection) and geographical distribution (ODs near and far from the capital, to allow geographical diversity). Within these ODs, we further targeted health centres with relatively high TB case notifications in both ACF and PCF to ensure an adequate number of eligible patients. We then systematically approached the eligible participants in order from the top of the eligible subject list in each health centre. The difference in sample size between ACF and PCF in each health centre was set at no more than five patients to avoid a biased representation of the two groups from each health centre. We used the prevalence of catastrophic costs as the outcome variable to guide sample size estimation with the formula described by Pocock [30]. Assuming that 20% in the control group and 5% in the intervention group had faced catastrophic costs due to TB, 194 patients (97 patients from each group) were required to have a 90% chance of detecting a difference between the two groups at the 5% significance level. As a result, we visited 25 health centres until we reached the sample size. Eligible participants were contacted by health centre staff and community volunteers and invited to the pre-scheduled interview. Data collection took place between October and December 2014. A total of 108 ACF patients and 100 PCF patients were recruited for the study.
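Assuming the standard two-proportion form of Pocock's formula, the quoted figure of 97 patients per group can be reproduced as follows:

```python
# Sample size per group for detecting p1 vs p2 (two-sided alpha, given power),
# using the two-proportion formula attributed to Pocock (assumed variant).
from scipy.stats import norm

def pocock_n(p1, p2, alpha=0.05, power=0.90):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (p1 * (1 - p1) + p2 * (1 - p2)) * (z_a + z_b) ** 2 / (p1 - p2) ** 2

print(pocock_n(0.20, 0.05))  # ~96.9 -> 97 per group, 194 in total
```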
The interviews were conducted in private by three local research assistants, either at the health centre or at the participant's house. The tool to estimate patients' costs [7], a structured questionnaire, was adapted to the local setting, translated into Khmer, pre-tested and adjusted as needed, and administered during each interview. If a patient was under 18 years of age, their guardian was asked to answer the questions with or without the participation of the patient. All participants or their guardians provided written consent before the interview commenced. Ethical clearance was obtained from the Cambodian National Ethics Committee for Health Research before the study commenced.
Quantitative Data and Statistical Analysis
Demographic and clinical information was sourced from the project database, TB registers, laboratory registers, and individual treatment cards. The questionnaire included various questions on health-seeking behaviour, costs associated with TB diagnosis and treatment, and socio-economic information including household income and spending. We collected and calculated a wide range of cost data, including direct medical costs, direct non-medical costs, indirect costs, reimbursement, and coping costs. Direct medical costs included OOP expenses for facility administration/consultation, laboratory tests, X-rays, drugs, and hospitalization. Direct non-medical costs included OOP expenses for food, transportation, guardian and caregiver (food, transportation, and accommodation for an escort), accommodation, supplemental foods given to patients during treatment, and interest on borrowed money. Indirect costs included the patient's lost income, the guardian/caregiver's lost income, and value lost due to sold property. We also collected insurance reimbursement data. Many of these costs were, where appropriate, collected separately for two different time periods: "before treatment initiation" and "during the 6-month treatment period."
To estimate direct costs before treatment, we obtained data on the actual OOP expenses for each health-seeking visit related to the single episode of TB before treatment. To estimate direct costs during treatment, we obtained data on the average costs per visit for the different items, which were multiplied by the number of visits to health facilities during treatment. Our questionnaire captured costs for three types of visits: daily DOT, picking up drugs, and follow-up examinations. A visit with multiple purposes was counted as one visit.
For all health-seeking visits of all participants before treatment, we obtained the total health-seeking time in minutes, including time spent on travel, waiting, consultation, and hospitalization. For patients who had any income before TB illness, the total health-seeking time per patient was then multiplied by their income per minute before TB illness to estimate lost income due to health-seeking, assuming that a patient worked 8 hours per day for 5 days per week. For ACF patients, health-seeking time during the ACF session was included in the total health-seeking time. For 69 participants (40 ACF and 29 PCF patients), only travel time was available, and the other components of their health-seeking time were extrapolated from the data of the remaining samples by type of health facility visited. For participants who reported sick leave before treatment, we estimated lost income due to sick leave using their income per day before TB illness multiplied by the duration of sick leave in days before treatment.
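A minimal sketch of this income-loss arithmetic, with illustrative inputs, assuming the 8 h/day, 5 day/week working pattern stated above:

```python
# Pre-treatment lost income = health-seeking time x income per minute
#                           + sick-leave days x income per working day.
WORK_MIN_PER_WEEK = 8 * 60 * 5  # 2400 working minutes per week (assumed pattern)

def lost_income_health_seeking(total_minutes, weekly_income):
    return total_minutes * weekly_income / WORK_MIN_PER_WEEK

def lost_income_sick_leave(days, weekly_income):
    return days * weekly_income / 5  # 5 working days per week

# e.g. 6 h of travel/waiting/consultation plus 10 days of sick leave on $25/week
total = lost_income_health_seeking(360, 25.0) + lost_income_sick_leave(10, 25.0)
print(f"estimated pre-treatment lost income: ${total:.2f}")  # $53.75
```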
Lost income during treatment was calculated for patients experiencing a change in average weekly household income due to TB, using the reduced weekly income multiplied by the patient's actual treatment period in weeks. If patients engaged in housework stopped their work due to TB and their labour was replaced by someone else, we estimated the cost of the patient's reduced household activity using the self-reported estimated value of the work per day multiplied by the duration of treatment.
If the patient was accompanied by a guardian on health facility visits during treatment, the guardian's lost income was estimated using the reported income per day multiplied by the number of accompanied visits. If caregivers of the patient quit their jobs specifically to take care of the patient, their lost income due to caregiving was estimated using the reported lost income per week multiplied by the duration of caregiving during treatment. Lost income of guardians and caregivers was combined and reported together. Value lost due to sold property was defined not as the actual selling price but as the difference between the actual selling price and the self-estimated market value of the sold property, which was also asked in the interview.
To examine changes in the proportion of catastrophic costs, costs as a percentage of reported annual household income were calculated. The proportions of patients whose total expenses were ≥10%, ≥20%, ≥30%, and ≥40% of annual household income were compared between ACF and PCF. The prevalence of dissaving was further explored by stratifying patients into four cost-income bands: <10%, 10-20%, 20-30%, and >30%.
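A minimal sketch of this thresholding, with made-up cost-income percentages:

```python
import numpy as np

def share_above(cost_income_pct, threshold):
    """Share of patients whose total cost is >= threshold % of annual income."""
    return float(np.mean(np.asarray(cost_income_pct) >= threshold))

cost_income_pct = [4, 8, 12, 19, 23, 31, 47, 9]  # illustrative values, %
for t in (10, 20, 30, 40):
    print(f">= {t}%: {100 * share_above(cost_income_pct, t):.0f}% of patients")
```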
For categorical data, distributions and frequencies are presented, and a Pearson's chi-square test was used to examine associations. For numerical data, we present the median, mean, interquartile range (IQR), standard deviation (SD), and range in tables. For most of the cost data, which were skewed, the median is mainly reported in the text, while the mean is also used when describing the proportion of sub-categorized costs among total costs, as well as when describing cost data with an extremely skewed distribution where the median and IQRs were all zero in both ACF and PCF. The Welch t-test was used for age comparison. A two-sample Wilcoxon rank-sum test was applied to examine differences in various costs and in costs as a percentage of annual household income between ACF and PCF. A Fisher's exact test was employed to assess the association between costs as a percentage of annual household income and the prevalence of different dissaving patterns. Statistical significance was defined as p < 0.05. All statistical analyses were performed using the statistical software package R 3.2.1 (CRAN: Comprehensive R Archive Network at https://cran.r-project.org/). All cost data were obtained in local currency (Cambodian riel) and converted into USD for analysis at the rate of 4000 riel per dollar.
Participant characteristics (Table 1)
A total of 108 ACF patients and 100 PCF patients were enrolled in the study. Compared to the PCF group, the ACF group had more females (51.9% vs 44.0%), more children aged ≤14 years (2.8% vs 0%), and more elderly aged ≥65 years (30.6% vs 23.0%), although these differences were not statistically significant. Median age was slightly higher in ACF than in PCF (55 [IQR 43.8-68] vs 52.5 [IQR 45-62.3], p = 0.556). The proportion of participants who were jobless or only doing housework was higher in ACF than in PCF (41.7% vs 28.0%). The distance from home to the nearest health centre was significantly longer in ACF than in PCF (4 km [IQR 3-7] vs 3 km [IQR 2-5], p = 0.014). The ACF group had more clinically diagnosed patients than the PCF group (47.2% vs 16.0%, p<0.001).
Costs before and during TB treatment (Tables 2 and 3)
Total cost. The total cost incurred during an episode of TB showed considerable variation in both groups, ranging from $6.5 to $1649 in ACF and from $10.8 to $4251 in PCF. The median total costs were lower by 17% in ACF than in PCF ($240.7 [IQR 65.5-594.6] vs $290.5 [IQR 113.6-813.4], p = 0.104).
Direct medical cost. Before treatment, drug costs accounted for around 80% of the direct medical costs in both groups; the median drug cost was significantly lower in ACF than in PCF ($0 [IQR 0-6.5] vs $6.2 [IQR 0-25.5], p<0.001). The medians of administrative, test, X-ray, and hospitalization costs were zero in both groups and could be considered minor expenses; yet the highest points of the range in administrative and hospitalization costs exceeded $100 in PCF. During treatment, hospitalization costs were incurred by only two patients in PCF and none in ACF, and no other direct medical costs were incurred in either group.
Insurance reimbursement. No insurance reimbursement was reported in the ACF group before or during treatment, while 6 PCF patients (6%) before treatment and one PCF patient (1%) during treatment received a reimbursement or subsidy for their OOP expenses (direct medical and/or non-medical costs), mainly through a donor-funded programme. Twelve ACF patients (11.1%) and 17 PCF patients (17.0%) were registered in the non-insurance Cambodian Health Equity Funds (HEFs) and Subsidy Scheme, which provides poor people with transportation and food costs associated with their health-seeking at public health facilities in addition to granting a user-fee waiver at government health facilities [31]. Of the 29 patients enrolled in the HEFs scheme, 16 (55%) benefited from the free service at government health centres; however, no patients in this study received a subsidy or reimbursement for their non-medical costs through this scheme.
Indirect costs. Indirect costs accounted for 72.3% of total costs in ACF and 61.5% in PCF. In both groups, indirect costs during treatment made up a larger proportion than indirect costs before treatment (ACF: 67.6% vs 4.7%; PCF: 50.5% vs 11.0%). Before treatment, the ACF group incurred, on average, lower income loss due to health-seeking ($1.6 vs $20.6) and lower income loss due to sick leave ($17 vs $38.5) compared to PCF. In particular, lost income due to health-seeking was not a main cost contributor in ACF, accounting for 5% of total costs before treatment, whereas it was 17.7% in PCF.
During treatment, lost income of patients ($132.5 for ACF, $193.6 for PCF) and reduced household activity of patients ($108.8 for ACF, $45.9 for PCF) represented substantial expenses, followed by lost income of guardians/caregivers ($28.6 for ACF, $27.5 for PCF). Only one PCF patient was affected by value lost due to sold property ($325 lost by selling livestock).
Overall observation. In both groups, the highest mean costs incurred before treatment were lost income due to sick leave and drug costs; these accounted for more than 60% of costs before treatment. Likewise, in both groups, the highest mean costs incurred during treatment were supplemental food costs, lost income of patients, and reduced household activity of patients; these represented around 80% of costs during treatment.
Most of the mean costs were found to be lower in ACF than in PCF; however, the cost of patients' reduced household activity in ACF was more than double that in PCF. Many of the median costs and IQRs were zero, showing that these mean costs were mainly driven by a minority of patients incurring extremely high costs.
The prevalence of dissaving to finance TB-related expenses was found to be lower in ACF than in PCF (Fig 4). The largest difference between ACF and PCF was observed for "Sale" (13.9% vs 21%, p = 0.255), followed by "Loan with interest or sale" (21.3% vs 28%, p = 0.336), "All dissaving" (46.3% vs 52%, p = 0.494), and "Any loan" (42.6% vs 46%, p = 0.732). However, none of these differences was statistically significant. No difference was found in "Loan with interest" (12% vs 12%, p = 1). Among the 36 patients who sold property, 80.6% of them (12 ACF and 17 PCF patients) sold livestock. The prevalence of dissaving was investigated by stratifying patients into four categories according to costs as a percentage of annual household income: <10%, 10-20%, 20-30%, and >30% (Table 5). Costs as a percentage of reported household income were significantly associated with "Sale" in both ACF and PCF, with the highest prevalence of "Sale" reported in the highest cost-income band of >30% (26.9% for ACF [p = 0.020], 41.2% for PCF [p = 0.005]). There were no clear demographic differences between the cost-income bands in either group. When ACF and PCF patients were combined, "Sale" (p<0.001), "Loan with interest or Sale" (p = 0.007), and "All dissaving" (p = 0.019) were significantly associated with cost-income levels.
Median costs as a percentage of annual household income were consistently higher in the groups with dissaving than in the groups without dissaving (Fig 5). They were also consistently lower in ACF than in PCF. Statistically significant differences were found between patients with and without "Sale" (p = 0.005 for ACF, p = 0.002 for PCF) and "All dissaving" (p = 0.03 for ACF, p = 0.047 for PCF).
Discussion
Our study results quantitatively demonstrated lower household costs incurred by ACF patients as compared to PCF patients. In particular, costs before treatment were significantly lower in ACF than in PCF. Similarly, costs as a percentage of household income were consistently lower in ACF, with significant differences found in direct costs and costs before treatment. Indirect costs constituted the largest portion of total costs. ACF patients were less likely to dissave to afford TB-related costs. Only a limited number of patients received insurance reimbursement.
Why did ACF patients incur lower costs? First, in Cambodia's ACF strategy, CENAT actively offered a one-stop TB screening service that did not require repeated visits by patients. Such a proactive and tailored approach might reduce diagnostic delays and avoid unnecessary OOP expenditures. For example, in Russia, actively detected patients had their first interaction with a medical health provider one week earlier than passively detected patients [32], showing the contribution of ACF to reduced delays. In the study by Pichenda et al, patients with <1 month of treatment delay incurred 8 times lower costs before diagnosis and 1.6 times lower total costs than patients with >3 months of treatment delay in Cambodia [29]. This explains the association between shorter delays and lower costs, particularly before diagnosis, and supports our argument.
Second, the introduction of a rapid, sensitive test in ACF, namely Xpert MTB/RIF, in combination with mobile chest X-ray and onsite clinical assessment, greatly increased the diagnostic capacity to detect patients at an early stage of the disease. These patients may typically be asymptomatic or have milder and more chronic symptoms. They therefore might incur lower direct costs due to less frequent health-seeking visits as well as lower indirect costs due to less sick leave. In this study, ACF patients were more likely to have bacteriologically-negative (clinically-diagnosed) TB, which could partly explain a less severe disease presentation in ACF patients. This is consistent with the results of our previous study showing that Cambodia's ACF detected more patients with lower smear grade and smear-negative TB [26]. Lower costs incurred for supplemental food and guardians/caregivers during treatment in ACF may be attributable to less severe disease in this group.
A concern may be raised about the high proportion of clinically-diagnosed patients in ACF. In Cambodia, the national TB prevalence surveys (conducted in 2002 and 2012) revealed that smear-negative and/or asymptomatic patients were significantly under-diagnosed in routine case finding [25]. This community-based ACF approach expanded greatly upon traditional approaches to contact investigation, mainly to increase case detection in general and partly to fill this diagnostic gap. As a result of the unique target selection, the project detected more child and elderly patients as compared to routine PCF. Such characteristics of ACF participants were likely to have affected the sensitivity of Xpert. A systematic review of the diagnostic accuracy of Xpert showed an overall pooled sensitivity of 88% (95% credible interval [CrI], 84-92%) when used as an initial diagnostic test for pulmonary TB in adults [33]. However, the sensitivity decreased to 68% (95% CrI, 61-74%) in people with smear-negative results, and to 66% (95% CrI, 52-77%) in children [33].
One important factor that lowers the sensitivity in children and smear-negative patients is the bacterial load in the specimen, which is generally lower in these populations [34,35]. Likewise, we assume the sensitivity of Xpert is lower in the elderly, given the low quality of their sputum specimens reported elsewhere [36][37][38]. In such cases, it may be reasonable to fill this diagnostic gap by facilitating clinical diagnosis based on CXR and clinical findings. Indeed, our data showed that nearly 70% of clinically-diagnosed patients in ACF were either children aged ≤14 or older people aged ≥55. Some might have been over-diagnosed, but we consider this an acceptable level given the epidemiological context. Third, demographic differences between ACF and PCF patients could have contributed to the cost difference. In this study, the ACF group had more females, more children and elderly, and more patients who were house-workers/jobless. Most of them have little or no regular income and more household work, as compared to an economically active population. This most likely caused lower income loss as well as higher costs of reduced household activity in ACF, which could consequently have lowered the overall indirect costs in ACF.
The above factors could collectively explain the lower costs incurred in ACF. It is often anticipated that ACF, by detecting patients earlier in time, reduces only costs before treatment. However, the lower disease severity in ACF may bring prolonged financial benefits for ACF patients over the course of treatment, and possibly after treatment completion. Such benefits were more evident in direct costs than in indirect costs, perhaps because of the difference in the nature of the two costs: OOP direct expenditure is more responsive to severe patient conditions, while indirect costs can be triggered even by mild conditions.
In this study, the proportion of patients facing total costs corresponding to >10% of annual household income was 63% in PCF. This was similar to percentages reported in other countries: 65% in Peru [14], 66-75% in studies from sub-Saharan Africa [39,40], and 67.7% in China (for patients who complied with treatment) [15]. This provides common ground that TB-related costs have a substantial impact on household budgets in low- and middle-income countries. Our results, showing consistently lower proportions in ACF, suggest that ACF has large potential to reduce the incidence of catastrophic expenditure.
Can dissaving be a proxy for catastrophic costs? Madan et al found a significant positive association between the occurrence of dissaving and total costs incurred, and highlighted the potential of using dissaving as a proxy for catastrophic costs [16]. In our study, we found a significant association between cost as a percentage of annual household income and the occurrence of "Sale" in both ACF and PCF, using two different analytical approaches. Although "All dissaving" also showed a significant association, the strength of that association appeared to be driven by the effect of "Sale", given the relatively dispersed distribution of patients with "Any loan" and "Loan with interest" across the percentage scale in Fig 5. In the context of Cambodia, therefore, the occurrence of a household selling property can be a more useful proxy indicator, and this information can be easily collected and monitored using existing standardized recording and reporting forms with slight modification. However, it is important to note that, in this study, more than one third (34.3%) of patients without "Sale" still experienced total costs above 20% of their household income, and in contrast, one in five (19.4%) patients with "Sale" incurred costs of less than 10% of their household income. This implies that the occurrence of "Sale" is not sensitive enough to be a close proxy or direct measure for catastrophic expenditure. Further exploration may be needed on how to make dissaving information more useful, for example by combining it with other patient factors, to better monitor the financial risk of TB-affected households.
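A rough numerical check of these two misclassification figures is sketched below. The cohort split is derived from the text (36 sellers; 17.3% of both groups combined implies roughly 208 patients, hence about 172 non-sellers) and should be treated as approximate:

```python
# Minimal sketch using the two misclassification rates quoted above:
# non-sellers who still exceeded 20% of household income, and sellers
# whose costs stayed under 10%. Group sizes are back-calculated from
# the text (36 sellers = 17.3% of ~208 patients), not exact study data.
n_sale, n_no_sale = 36, 172

missed = round(0.343 * n_no_sale)       # high-cost patients, no "Sale"
false_alarm = round(0.194 * n_sale)     # "Sale", but cost <10% of income

print(f"{missed}/{n_no_sale} non-sellers still exceeded 20% of income")
print(f"{false_alarm}/{n_sale} sellers incurred <10% of income")
# Both counts being non-trivial is why the paper concludes "Sale" is a
# useful but not sufficient proxy for catastrophic costs.
```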
In this study, the proportions of TB-affected households experiencing "Sale" were 13.9% in ACF, 21% in PCF, and 17.3% for both groups combined. Previous studies from other countries reported a wide range of proportions: 37% in Ghana [41], 19% in the Dominican Republic [41], 15.9% in Thailand (patients with income below the poverty line) [42], 5% in Vietnam [41], and 45% in China [43]. Although available data are limited, dissaving patterns and their implications seem to vary greatly across countries, and the association between dissaving and catastrophic costs may also be context- and culture-specific, even within a country. This needs to be carefully taken into account when using dissaving as a potential indicator.
This study has several limitations. First, patient recall bias may be present, as we selected all PCF patients who were registered before the ACF session date in each health centre. We tried to minimize this bias by limiting the sampling timeframe to within four months prior to ACF; however, PCF patients might have had more difficulty recalling the details of their expenses and care seeking as compared to ACF patients, which might have influenced the results. Second, we did not sample patients with unfavourable treatment outcomes, drug-resistant TB, and/or human immunodeficiency virus infection. Given that they are more likely to experience catastrophic costs due to TB [9,15], the results presented here might not capture the full picture of TB-related costs. Third, as is common in cost studies, some of the costs might be over- and/or underestimated. In particular, we carefully assessed indirect costs, including reduced household activity of patients and lost income of guardians/caregivers, which may not be included in other cost studies. This might have led to overestimation of indirect costs and/or made the cost data less comparable with other studies. Extrapolating part of the health-seeking time for some participants and the assumption of average working hours/days might have reduced the accuracy of calculating lost income before treatment. Fourth, our relatively small sample size was unable to detect statistical differences in some demographic variables between the two groups, weakening our arguments. It is, however, noteworthy that an analysis of the full project database, with screening data for more than 70,000 participants, has confirmed high participation of children, females, and the elderly.
Our study provides an important policy implication. In this study, only a small number of patients in PCF received reimbursement covering part of their direct costs. This clearly illustrates the current situation of health financing mechanisms in Cambodia, in which costs incurred due to TB are predominantly met by OOP spending by individual care seekers. In our study setting, the HEFs scheme has been playing a certain role in removing financial barriers at the point of care for the poor. However, this pro-poor financial protection mechanism has been under-utilized by many eligible patients who seek care in private facilities. Continued efforts are required to increase the quality and credibility of government health services to maximize their utilization, as well as to expand sustainable public-private mix approaches in TB control, which help avoid unnecessary OOP spending. In this study, the HEFs scheme was not used for reimbursement of non-medical costs. This could be attributable to differences in benefit packages or their criteria across different ODs, which needs verification in the field. Given that inadequacies of the HEFs scheme have also been pointed out in the context of non-communicable diseases [44,45], wider policy discussion is needed on how the scheme can be expanded to capture all of the core demands of the poor.
In this manner, strengthening traditional approaches to improve the diagnostic pathway and pursuing the best use of existing schemes are indispensable to minimizing OOP spending. However, as found in other countries [9], the largest financial burden due to TB in Cambodia comes from indirect costs (mainly lost income), which are not usually covered by the existing social health protection schemes for informal-sector workers (HEFs and Community-based Health Insurance schemes) [46]. Although Cambodia has three established national social security schemes with an income replacement benefit targeting formal-sector workers (civil servants, veterans, and employees in the private sector) [47], none of the study participants benefited from them for their TB illness. This could indicate a low coverage of these schemes in the study population, likely because many TB patients are informal-sector workers. Currently, the country is moving toward the establishment of the National Social Health Protection Fund targeting informal-sector workers by incorporating the HEFs and other demand-side financing schemes [46]. For this new scheme to meet the greatest needs of TB-affected households, ensuring an income replacement or similar benefit at the household level will be critical.
At the same time, programmatic efforts are also required to tailor and package pro-poor and proactive case finding intervention strategies in order to detect vulnerable patients as early as possible, before considerable costs occur. Cambodia's ACF strategy could be the first evidence-based ACF shown to help mitigate both direct and indirect costs.
Conclusions
Our study quantitatively demonstrated the financial hardship of TB-affected households in the routine PCF setting in Cambodia and highlighted the great potential of ACF in mitigating the costs incurred, particularly before treatment. Future health policy dialogue should consider how best to design and expand a social protection scheme that can replace lost income for TB-affected households. This would compensate for the most devastating costs of TB, help achieve better health outcomes, and eventually prevent further impoverishment in rural poor communities. Measuring households' financial risk using an appropriate and practical indicator is key for future planning and implementation of the WHO End TB Strategy. The occurrence of selling household property to finance TB-related expenses can be a useful proxy for catastrophic costs in Cambodia.
Fig 1. Comparison of cost distribution between ACF and PCF. Box-and-whisker plots indicate the median, 25th and 75th centiles, and the range of values. All values less than one were converted to one in order to use a log scale for the y-axis. *Significant difference (P<0.05). doi:10.1371/journal.pone.0162796.g001
Fig 2. Comparison of cost as percentage of reported annual household income between ACF and PCF. Box-and-whisker plots indicate the median, 25th and 75th centiles, and the range of values. All values less than 0.1 were converted to 0.1 in order to use a log scale for the y-axis. *Significant difference (P<0.05). doi:10.1371/journal.pone.0162796.g002
Fig 5. Cost as percentage of annual household income, by dissaving patterns and case finding approach. A short bar indicates an observed value of a household's cost-income level. A long bar indicates the median. Shaded areas show the density of the distribution. P-values were calculated using the Wilcoxon rank-sum test. A log scale was used for the y-axes. doi:10.1371/journal.pone.0162796.g005
Table 2. Direct and indirect costs before and during TB treatment by case finding approach (in USD).
* Significant difference (P<0.05). † Wilcoxon rank-sum test. IQR: interquartile range; SD: standard deviation. Negative values are due to insurance reimbursement.
Table 3. (Continued)
* Significant difference (P<0.05). † Wilcoxon rank-sum test. IQR: interquartile range; SD: standard deviation; Tx: treatment. Negative values are due to insurance reimbursement.
| 8,955.8 | 2016-09-09T00:00:00.000 | [ "Economics", "Medicine" ] |
Diabetes Self-Management Education Programs in Nonmetropolitan Counties — United States, 2016
Problem/Condition Diabetes self-management education (DSME) is a clinical practice intended to improve preventive practices and behaviors with a focus on decision-making, problem-solving, and self-care. The distribution and correlates of established DSME programs in nonmetropolitan counties across the United States have not been previously described, nor have the characteristics of the nonmetropolitan counties with DSME programs. Reporting Period July 2016. Description of Systems DSME programs recognized by the American Diabetes Association or accredited by the American Association of Diabetes Educators (i.e., active programs) as of July 2016 were shared with CDC by both organizations. The U.S. Census Bureau’s census geocoder was used to identify the county of each DSME program site using documented addresses. County characteristic data originated from the U.S. Census Bureau, compiled by the U.S. Department of Agriculture’s Economic Research Service into the 2013 Atlas of Rural and Small-Town America data set. County levels of diagnosed diabetes prevalence and incidence, as well as the number of persons with diagnosed diabetes, were previously estimated by CDC. This report defined nonmetropolitan counties using the rural-urban continuum code from the 2013 Atlas of Rural and Small-Town America data set. This code included six nonmetropolitan categories of 1,976 urban and rural counties (62% of counties) adjacent to and nonadjacent to metropolitan counties. Results In 2016, a total of 1,065 DSME programs were located in 38% of the 1,976 nonmetropolitan counties; 62% of nonmetropolitan counties did not have a DSME program. The total number of DSME programs for nonmetropolitan counties with at least one DSME program ranged from 1 to 8, with an average of 1.4 programs. After adjusting for county-level characteristics, the odds of a nonmetropolitan county having at least one DSME program increased as the percentage insured increased (adjusted odds ratio [AOR] = 1.10, 95% confidence interval [CI] = 1.08–1.13), the percentage with a high school education or less decreased (AOR = 1.06, 95% CI = 1.04–1.07), the unemployment rate decreased (AOR = 1.19, 95% CI = 1.11–1.23), and the natural logarithm of the number of persons with diabetes increased (AOR = 3.63, 95% CI = 3.15–4.19). Interpretation In 2016, there were few DSME programs in nonmetropolitan, socially disadvantaged counties in the United States. The number of persons with diabetes, percentage insured, percentage with a high school education or less, and the percentage unemployed were significantly associated with whether a DSME program was located in a nonmetropolitan county. Public Health Action Monitoring the distribution of DSME programs at the county level provides insight needed to strategically address rural disparities in diabetes care and outcomes. These findings provide information needed to assess lack of availability of DSME programs and to explore evidence-based strategies and innovative technologies to deliver DSME programs in underserved rural communities.
Introduction
An estimated 29.1 million persons in the United States had diabetes in 2012, and this number is projected to reach 64 million by 2050 (1,2). Persons with diabetes have an increased risk for microvascular and macrovascular complications (e.g., heart disease, stroke, kidney disease, and retinopathy) that lead to a decrease in quality of life (1). The appropriate use of diabetes preventive care practices and adherence to self-management behaviors, such as routine medical visits, blood glucose and lipid tests, glucose self-monitoring, foot and eye examinations, and a healthy diet and physical activity, can prevent or delay costly complications (3).
Diabetes self-management education (DSME) is a clinical practice intended to improve preventive practices and behaviors with a focus on decision-making, problem-solving, and self-care (4). DSME increases the use of preventive care services and reduces glucose levels associated with diabetes complications in persons with diabetes (4). The National Standards for Diabetes Self-Management Education and Support define quality standards for DSME to support evidence-based care by diabetes educators (5). Ideally, DSME interventions should occur at four critical points (i.e., diagnosis, annual examinations, when complications arise, and with a change in care) and consider age, culture, and other factors (4).
Persons with a diagnosis of diabetes who live in rural communities face barriers and challenges to accessing diabetes care (6). Rural populations have higher prevalence of diabetes and lower rates of participation in preventive care practices (7,8). A complex array of individual, provider, and environmental factors influences access to and use of DSME by persons with diabetes who live in rural communities, including insurance, education and income, literacy, transportation, poverty, and race/ethnicity. Andersen's Behavioral Model for Health Services Use describes these as predisposing and enabling factors (8,9). Challenges also include establishing and sustaining DSME programs in rural communities (6).
Although previous studies have described DSME use by persons with diabetes at the national level, the distribution of established DSME programs in rural communities across the United States, an enabling factor in DSME use, has not been previously described (8). This report analyzes DSME program data from 2016 and data from the Atlas of Rural and Small-Town America to describe the distribution of established DSME programs in rural counties in the United States and differences in county-level characteristics of those counties with and without a DSME program.
Data Sources
Data analyzed for this report originated from several data sources. This report includes DSME programs recognized by the American Diabetes Association (ADA) or accredited by the American Association of Diabetes Educators (AADE) as of July 2016 (i.e., active programs). ADA and AADE identified the addresses (i.e., physical locations) of these active programs and shared them with CDC. ADA and AADE accredit or recognize DSME programs that meet the National Standards for Diabetes Self-Management Education and Support and serve as certifying organizations for the Centers for Medicare and Medicaid Services and other third-party insurers for reimbursement. The U.S. Census Bureau's census geocoder was used to identify published addresses and the county of each DSME program site. An internet search was conducted to identify all counties of addresses not identified by geocoding. County characteristic data originated from the U.S. Census Bureau, including the 2010 U.S. Census of Population, the American Community Survey (ACS) (2008-2012), the Small Area Income and Poverty Estimates (SAIPE), and the Small Area Health Insurance Estimates (SAHIE). The U.S. Department of Agriculture's Economic Research Service previously compiled these county data, excluding insurance estimates, for the 2013 Atlas of Rural and Small-Town America data set. The SAIPE and SAHIE U.S. census programs use the decennial census, ACS, and numerous other administrative data sources to estimate income-related and health insurance indicators. County levels of diagnosed diabetes prevalence and incidence and the number of persons with diagnosed diabetes for 2013 were previously estimated by CDC (10).
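A lookup of the kind described here can be scripted against the Census Bureau's public geocoding API. The sketch below is a hedged example: the endpoint, parameters, and response fields follow the publicly documented interface but should be verified against current Census documentation before use:

```python
# Rough sketch of resolving a program address to its county with the
# U.S. Census Bureau's geocoder (the tool the report describes).
import requests

def county_for_address(address: str) -> str | None:
    url = ("https://geocoding.geo.census.gov/geocoder/"
           "geographies/onelineaddress")
    params = {
        "address": address,
        "benchmark": "Public_AR_Current",
        "vintage": "Current_Current",
        "format": "json",
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    matches = resp.json()["result"]["addressMatches"]
    if not matches:
        return None  # unmatched addresses fell back to manual search
    county = matches[0]["geographies"]["Counties"][0]
    return f'{county["NAME"]} ({county["GEOID"]})'

print(county_for_address("1600 Pennsylvania Ave NW, Washington, DC"))
```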
Variables
This study defined nonmetropolitan counties using the rural-urban continuum code from the Economic Research Service available in the 2013 Atlas of Rural and Small-Town America data set. This code included six nonmetropolitan categories of 1,976 urban and rural counties (62% of counties) outside of metropolitan boundaries and having no cities with ≥50,000 residents. Characteristics of these nonmetropolitan counties and populations were described and compared using several variables. The variables included number of DSME programs, DSME program density (number of DSME programs per 1,000 persons with diagnosed diabetes), total population, population density (number of persons per square mile), net migration rate, percentage foreign born, percentage of non-English-speaking households, percentage of persons aged ≥65 years, race/ethnicity percentages, poverty and unemployment rates, per capita income, percentage insured, and unadjusted and age-adjusted diabetes prevalence and incidence.
Data Analysis
The distribution of DSME programs in nonmetropolitan counties and differences in county characteristics between counties with and without a DSME program are described. In descriptive analyses, counties with and without a DSME program were compared on each of these county characteristics. For comparison of continuous variables, the t-test was used for normally distributed variables, and the Wilcoxon rank-sum test was used for skewed variables. Pearson's chi-square test for proportions was used to test associations between categorical variables.
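In Python terms, the comparisons described above map onto standard SciPy routines; the arrays below are placeholder values standing in for county-level measures split by DSME program status:

```python
# Sketch of the descriptive comparisons: t-test for roughly normal
# variables, Wilcoxon rank-sum for skewed ones, chi-square for counts.
import numpy as np
from scipy import stats

# Placeholder values, e.g., percentage insured by DSME program status
with_prog = np.array([84.1, 79.3, 88.0, 81.5, 86.2])
without_prog = np.array([76.2, 80.4, 74.9, 78.8, 75.1])

# Normally distributed variable: Welch's t-test
t, p_t = stats.ttest_ind(with_prog, without_prog, equal_var=False)

# Skewed variable (e.g., total population): rank-based test
z, p_w = stats.ranksums(with_prog, without_prog)

# Categorical variable: chi-square on a contingency table of counts
table = np.array([[520, 223], [880, 353]])
chi2, p_c, dof, _ = stats.chi2_contingency(table)

print(f"t-test p = {p_t:.3f}, rank-sum p = {p_w:.3f}, chi2 p = {p_c:.3f}")
```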
The probability of having at least one DSME program versus no DSME program was modeled using logistic regression. The model was designed to assess the degree to which the differences between DSME program statuses can be accounted for by differences in the distribution of county characteristics. The dependency between the composite variables (i.e., variables with subgroups measured in percentages that sum to 100%) was addressed by grouping the subgroups or by using only the largest subgroups of the variable. Specifically, for the variable measuring education, the percentage of the population with less than a high school education and the percentage of the population with a high school education were combined. For the race/ethnicity variable, the smallest subgroups, Asians and Native Americans, were excluded from the multivariable analyses.
After screening all variables, variable selection was performed by fitting the model with all potential predictors and interaction terms. Variables in the model were assessed for improving the model fit using the likelihood ratio test via a manual stepwise selection procedure. All variables were tested for confounding effects using the change-in-estimate approach with a 10% cut-off value. The variables in the final model were screened for multicollinearity. Loess, generalized additive model (GAM), and other graphical assessment methods were used to check for violation of linearity in the logit for the continuous variables. Model fit was assessed by Pearson's and deviance chi-square tests, Hosmer-Lemeshow and Stukel tests, and Tjur's statistic. Results were considered statistically significant at p<0.05. Multivariable analyses were performed in SAS 9.3 (SAS Institute, Cary, North Carolina) using proc logistic for model fitting and proc genmod for model assessment. SAS/GRAPH 9.3 (SAS Institute, Cary, North Carolina) was used to map DSME programs by county.
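The final model's structure can be reproduced outside SAS. The sketch below is a hedged Python re-creation using statsmodels with synthetic data (column names and values are assumptions, not the study data); exponentiating the coefficients gives AORs and Wald CIs on the odds-ratio scale, as in Table 2:

```python
# Hedged re-creation of the final model's structure (the report used
# SAS proc logistic). Data here are synthetic so the sketch runs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pct_insured": rng.normal(80, 5, n),
    "pct_hs_or_less": rng.normal(55, 8, n),
    "unemployment_rate": rng.normal(7, 2, n),
    "n_diabetes": rng.integers(200, 20_000, n),
})
df["has_dsme"] = (rng.random(n) < 0.38).astype(int)  # synthetic outcome

df["log_n_diabetes"] = np.log(df["n_diabetes"])
X = sm.add_constant(df[["pct_insured", "pct_hs_or_less",
                        "unemployment_rate", "log_n_diabetes"]])
model = sm.Logit(df["has_dsme"], X).fit()

aor = np.exp(model.params)       # adjusted odds ratios
ci = np.exp(model.conf_int())    # Wald 95% CIs on the OR scale
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```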
Results
In 2016, a total of 1,065 DSME programs were located in 743 of the 1,976 (38%) nonmetropolitan counties or county equivalents; 62% of nonmetropolitan counties did not have a DSME program (Figure). The total number of DSME programs for nonmetropolitan counties with at least one DSME program ranged from 1 to 8, with an average of 1.4 DSME programs. The number of DSME programs in the nonmetropolitan counties of a state ranged from one in Massachusetts (with three nonmetropolitan counties) to 79 in Minnesota (with 60 nonmetropolitan counties), with an average of 22.7 DSME programs per state across nonmetropolitan counties. The proportion of nonmetropolitan counties within a state with a DSME program ranged from a low in Nevada, where only one of 13 nonmetropolitan counties had a program, to a high in Hawaii and Connecticut, where all nonmetropolitan counties (two and one, respectively) had a program. An average of 44.2% of nonmetropolitan counties in a state had a DSME program.
A bivariate analysis comparing county-level characteristics between nonmetropolitan counties without a DSME program and nonmetropolitan counties with at least one DSME program was conducted (Table 1). Nonmetropolitan counties with at least one DSME program had, on average, larger populations with higher population densities and net migration rates than nonmetropolitan counties without a DSME program. Nonmetropolitan counties with at least one DSME program also had a lower percentage of blacks and Hispanics, a lower percentage of non-English-speaking households, and a lower percentage of persons aged ≥65 years. Counties with at least one DSME program also were, on average, more affluent, with a higher average median household income, a lower poverty rate, a lower unemployment rate, and a lower percentage of the population with less than a high school education. In addition, a higher percentage of persons in these nonmetropolitan counties were insured. Although more persons with diagnosed diabetes lived in nonmetropolitan counties with at least one DSME program, crude and age-adjusted diabetes prevalence and incidence were lower in counties with at least one DSME program. Estimates of undiagnosed diabetes were not included in this analysis.
A multivariate logistic regression analysis of the association between county characteristics and the existence of at least one DSME program in a nonmetropolitan county was conducted (Table 2). After adjusting for the other factors, the odds of a nonmetropolitan county having at least one DSME program increased as the percentage insured increased (adjusted odds ratio [AOR] = 1.10, 95% confidence interval [CI] = 1.08-1.13), the percentage with a high school education or less decreased (AOR = 0.94, 95% CI = 0.93-0.96), the unemployment rate decreased (AOR = 0.84, 95% CI = 0.78-0.90), and the natural logarithm of the number of persons with diabetes increased (AOR = 3.63, 95% CI = 3.15-4.19).
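Note that the abstract frames two of these effects per one-point decrease (AORs of 1.06 and 1.19), while Table 2 frames them per one-point increase (0.94 and 0.84); the figures are reciprocals. A quick arithmetic check, as a minimal sketch:

```python
# An AOR per one-point *increase* below 1 is equivalent to an AOR per
# one-point *decrease* above 1; the reciprocal recovers the abstract's
# per-decrease framing from the Table 2 values.
for label, aor_increase in [("high school or less", 0.94),
                            ("unemployment rate", 0.84)]:
    print(f"{label}: 1/{aor_increase} = {1 / aor_increase:.2f}")
# -> ~1.06 and ~1.19
```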
Discussion
In 2016, 62% of nonmetropolitan counties did not have a DSME program. These counties were less affluent, had more black and Hispanic persons, and had higher prevalence and incidence rates of diabetes compared with nonmetropolitan counties with at least one DSME program. The number of persons with diabetes, percentage insured, percentage with a high school education or less, and percentage unemployed were significantly associated with whether a DSME program was located in a nonmetropolitan county. These findings underscore the need to further examine individual and community-level barriers to accessing quality DSME services in nonmetropolitan, socially disadvantaged communities.
The absence of DSME programs in approximately two thirds of nonmetropolitan counties aligns with reports of challenges sustaining health care services and health professionals in rural communities (11). Recent national assessments of the rural workforce identified shortages of health professionals across the United States, including registered nurses, dieticians, and health educators who provide services to DSME programs (11). Previous attempts to expand DSME programs in clinics in rural communities have encountered challenges with recruiting health professionals required to meet the standards of DSME program recognition (6).
The lower percentage of the county population that is insured and employed in counties without a DSME program (compared with those with a DSME program) is also consistent with previous findings of lower health insurance coverage rates in rural communities, particularly remote rural communities (12). Health insurance coverage is considered a substantial expense for many persons in rural areas because the costs exceed 10% of after-tax household income (13). Previous studies suggest that lower insurance coverage rates at the county level in the United States contribute to underuse of diabetes preventive services by persons with a diagnosis of diabetes (9).
The finding of lower rates of college education in nonmetropolitan counties without a DSME program (compared with those with a DSME program) also aligns with the lower socioeconomic status of rural communities. In a 2010 report, the Council of Economic Advisors highlighted the persistent educational gap between urban and rural communities, with a 10%-15% difference in the likelihood of adults in rural populations attending college compared with adults in urban populations (13). Although the study in this report included only nonmetropolitan counties, persons in nonmetropolitan counties without a DSME program were also less educated than those in nonmetropolitan counties with at least one DSME program. Previous national studies suggest an association between lower education levels and lower use of preventive care practices (9).
Limitations
The findings in this report are subject to at least three limitations. First, DSME programs recognized by ADA or accredited by AADE in this report include only primary sites and semi-independent sites. Some sites provide DSME program services at other off-site locations not included in this study (e.g., nursing facilities, work sites, and other community settings), with a small percentage of DSME programs using telemedicine services. In addition, the wide variation in the geographic size, contiguity, or proximity of counties could influence geographic accessibility for persons with diabetes who live in counties with and without a DSME program. As a result, analysis at the county level might underestimate or overestimate geographic accessibility to DSME programs. Second, by only including ADA-recognized and AADE-accredited programs, the gaps in DSME services might be overemphasized. For example, 44 states and the District of Columbia offer licensed Stanford Diabetes Self-Management Programs. However, Medicare and many private insurance plans require that programs be recognized by ADA or accredited by AADE for reimbursement; few Stanford Diabetes Self-Management Programs have achieved recognition or accreditation (L. Kolb, AADE, personal communication, 2017). Finally, this report analyzed county-level characteristics of the general county population. The characteristics of persons with diagnosed diabetes in those counties could differ from the overall county population. Although previous studies of diabetes care services found an association between persons living in socially disadvantaged counties and lower use of diabetes education among persons with diagnosed diabetes, this report limited interpretations of findings to the county level (9).
FIGURE. Diabetes self-management education programs in nonmetropolitan counties - United States, 2016. Abbreviation: DSME = diabetes self-management education. Sources: Addresses of active programs were obtained from the American Diabetes Association and American Association of Diabetes Educators, July 2016. Map legend: nonmetropolitan county, no DSME programs; nonmetropolitan county, DSME programs; metropolitan county.
Conclusion
Monitoring the distribution of reimbursable DSME programs at the county level provides valuable insight needed to strategically address rural and other disparities in diabetes care and outcomes (14). These contextual-level findings of DSME program distribution, in conjunction with county-level estimates of diabetes prevalence, incidence, and other relevant data, provide the information needed to assess and address the significant gaps in the availability of DSME services. County-level data also can be used to identify opportunities to explore innovative technology such as telemedicine to deliver DSME and diabetes care in underserved rural communities with small populations and limited resources (15). Additional research is needed to further understand the factors influencing the geographic distribution of DSME programs in rural communities, including successful models for establishing and sustaining DSME programs in these communities. This report found that, in 2016, few DSME programs existed in nonmetropolitan, socially disadvantaged counties in the United States. The number of persons with diabetes, percentage insured, percentage with a high school education or less, and percentage unemployed were significantly associated with whether a DSME program was located in a nonmetropolitan county. These findings highlight the need to examine more comprehensively the barriers and challenges to accessing quality DSME programs in these communities.
Table 2 notes. Abbreviations: AOR = adjusted odds ratio; CI = confidence interval (Wald); DSME = diabetes self-management education. *DSME program status: no DSME program or at least one DSME program. †All p values are <0.001. The odds ratios correspond to a one percentage point change in the variables measured in percentages and a one unit change in the natural logarithm of the number of persons with diabetes.
| 4,197.2 | 2017-04-28T00:00:00.000 | [ "Economics" ] |
AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors
Developers and vendors of large language models ("LLMs") — with ChatGPT, Google Bard, and Microsoft's Bing at the forefront — can be subject to the Health Insurance Portability and Accountability Act of 1996 ("HIPAA") when they process protected health information ("PHI") on behalf of HIPAA covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.
Introduction
There are various types of generative AI models, including LLMs. With LLMs being rapidly integrated into the healthcare industry, an increasing number of hospitals, healthcare professionals, and even patients are relying on AI chatbots for various purposes, including workflow optimization.1 When an AI chatbot interacts with a user, it initially collects data, which is then processed and transformed into a mathematical representation. Subsequently, the chatbot leverages its training data to identify patterns and make predictions regarding the most likely next response of the user or sequence of responses.2 The deployment of AI chatbots in the healthcare industry can be accompanied by certain privacy risks, both for data subjects and for the developers and vendors of these AI-driven tools.3

LLMs in the healthcare industry can take different forms. One example is when a HIPAA covered entity - i.e., "a health plan," "a health care clearinghouse," or "a health care provider who transmits any health information in electronic form in connection with a transaction covered by" HIPAA4 - enters into a business associate agreement with an AI developer or vendor to disclose patients' electronic medical records.5 The AI developer/vendor will be a business associate of the covered entity under HIPAA, and it must comply with HIPAA if it engages in certain activities regarding the PHI on behalf of the covered entity.6 Another example is when a hospital or a physician adds input - including patients' health data - into an AI chat tool to respond to patients' routine medical questions, support medical documentation, generate patient letters and medical summaries, compose emails, improve patients' understanding of procedures and side effects, and generate clinical and discharge notes, among other uses.7 Furthermore, there can be instances when patients engage in a customized conversation and share their own PHI with an AI chat tool for potential medical questions and recommendations.8

Notwithstanding the widespread use and many other potential benefits of generative AI in the healthcare industry, certain legal challenges have emerged for AI developers and vendors that expose them to the risk of violating patients' privacy. This article aims to highlight some of the key measures that AI developers and vendors should implement to effectively manage these privacy risks. In other words, the purpose of this article is to recommend some key strategies to strike a harmonious balance between leveraging the benefits of AI and mitigating its substantial risks. The intended primary audience for this article is AI developers and vendors. This article also aims to serve as a valuable resource for policymakers and risk managers, as it provides them with relevant information and practical recommendations to effectively manage some of the legal risks associated with AI in the healthcare context. This article proceeds in five Parts.

Keywords: AI Chatbot, Artificial Intelligence, Law and Medicine, Privacy, Health Policy
About This Column
The articles featured in this space are written by ASLME's Expanding Perspectives Fellows. As the issues at the heart of law, medicine and ethics touch a wide variety of disciplines, cultures, countries, and experiences, ASLME launched its Expanding Perspectives program in 2022 to support new voices and ideas within our member community.

The term covered entity notably includes "a health care provider who transmits any health information in electronic form in connection with a transaction covered by" HIPAA.12 The term business associate refers to a person or organization that conducts certain activities involving PHI on behalf of, or provides services (such as financial, administrative, management, legal, and data aggregation services) to, a covered entity.13 It is noteworthy that de-identified health information, which can no longer be used to identify the data subject, falls outside the definition of PHI, and there is no restriction on the use or disclosure of de-identified data under HIPAA.14 In other words, as provided by the Department of Health and Human Services ("HHS"), "[h]ealth information that does not identify an individual and with respect to which there is no reasonable basis to believe that the information can be used to identify an individual is not individually identifiable health information."15 De-identification under HIPAA can be achieved through either Expert Determination16 (i.e., certification of de-identification by an outside expert) or the Safe Harbor method17 (i.e., removal of 18 identifiers, including name, dates, city, state, zip code, and age).
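To make the Safe Harbor idea concrete, the toy sketch below drops identifier fields from a structured record. The field names are invented, and real Safe Harbor de-identification covers 18 identifier classes (including free text and dates) with formal requirements well beyond key deletion:

```python
# Toy illustration only: deleting listed identifier keys from a record.
# Not a compliance tool; actual Safe Harbor work also requires scrubbing
# free text, handling dates/ages, and covering all 18 identifier classes.
SAFE_HARBOR_FIELDS = {
    "name", "address", "city", "zip_code", "birth_date",
    "admission_date", "phone", "email", "ssn", "mrn",
}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of `record` with listed identifier keys removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

record = {"name": "J. Doe", "zip_code": "02139",
          "diagnosis": "type 2 diabetes", "hba1c": 8.1}
print(strip_identifiers(record))   # {'diagnosis': ..., 'hba1c': 8.1}
```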
Permitted and Prohibited Instances of Data Sharing
Developers and vendors of AI/ML-driven health products require a substantial volume, velocity, variety, and veracity of health information to be able to draw certain patterns in big data.18 Protection of PHI under HIPAA from use or disclosure ranges along a spectrum. The highest level of protection is when HIPAA requires the covered entity or business associate to obtain the patient's written authorization to use or disclose a recording19 and limits the use or disclosure to the extent "minimum necessary."20 The lowest level of protection is use or disclosure of the PHI without any restrictions.21 Furthermore, there are certain situations where disclosure of PHI is mandatory.22 The top chart in Figure 1 demonstrates four categories of use and disclosure of PHI under HIPAA. Based on the purposes of use or disclosure, the situations in which use, disclosure, and sharing of PHI occur, and the type of data recipients, there are four categories, demonstrated by the colors red, orange, yellow, and green in both of the charts in Figure 1. Protection of PHI under HIPAA ranges from the lowest level of protection, which covers situations when a covered entity is obligated to disclose PHI (color red); to when a covered entity may, but is not required to, obtain a data subject's authorization prior to use or disclosure of PHI (colors orange and yellow); to the highest level of protection, which is when a covered entity is required to obtain a data subject's written authorization prior to use and disclosure of their PHI (color green).
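As a loose illustration only (the labels paraphrase the text and are not regulatory language), the four-color spectrum could be encoded as data, for instance to drive checks in an internal compliance tool:

```python
# Hypothetical encoding of Figure 1's four-color spectrum as data.
PHI_CATEGORIES = {
    "red": "disclosure mandatory (covered entity must disclose)",
    "orange": "use/disclosure permitted; authorization optional",
    "yellow": "use/disclosure permitted; authorization optional",
    "green": "written authorization required before use/disclosure",
}

def requires_authorization(category: str) -> bool:
    """True only for the most protected (green) category."""
    return category == "green"

print(requires_authorization("green"), requires_authorization("red"))
```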
Limitations
There are certain scenarios of AI/ML use in the healthcare industry in which HIPAA lacks sufficient protection for patients and clarity regarding the responsibilities of AI developers and vendors.23 Also, the Food and Drug Administration ("FDA") has not provided any guidelines or regulations on LLMs, including ChatGPT and Bard.24 The operations of AI developers and vendors on PHI may be left unregulated simply because they do not engage in activities that render them a business associate under HIPAA. When a patient discloses PHI to an AI chatbot for medical advice, the AI developer or vendor is neither a covered entity nor a business associate. Similarly, when a hospital or physician discloses patients' PHI to AI chatbots for various purposes, including workflow optimization, that PHI is no longer regulated under HIPAA if the AI developer/vendor is neither a HIPAA covered entity, a business associate, nor a subcontractor of a business associate.25 This is an important deficiency, because a considerable number of AI developers and vendors are technology companies that operate outside the traditional scope of HIPAA's covered-entity and business-associate framework, and thus patients' PHI is no longer regulated when processed by these companies.26 Furthermore, even if the platform at issue was developed by a covered entity or business associate, the limitation of HIPAA's scope of regulation implies that if the data subject decided to transfer the PHI to any other space, such as a personal health device, that data is no longer protected under HIPAA. The availability of an opt-out option for data subjects using an AI chatbot remains uncertain, as it is not clear whether AI chatbot users have the same ability to opt out of future data uses as OpenAI users do.27 In the given example, the individual is entrusting an advanced platform with their sensitive health information. This platform potentially has the capability to gather a large amount of the user's personal information from a multitude of available online sources, in most cases without the knowledge or consent of the data subject.28 In this case, personal information that is not PHI but can be used to draw inferences about the data subject's health falls outside HIPAA's purview.29 Also, user-generated health information, such as health information posted on social media, despite its sensitivity, falls outside the scope of HIPAA.30 Last but not least, with the massive access of dominant tech companies (such as Meta, Google, and Microsoft) to patients' personal information, there is a significant risk of privacy violation through re-identification of health datasets that were de-identified through the Safe Harbor mechanism (also known as "data triangulation").31 This concern about re-identification is more pronounced when these dominant tech actors integrate generative AI into their own services (for instance, Google integrating the chatbot Bard into its search engine, or Microsoft integrating ChatGPT-based models into Office) or when they require users to rely on their services to benefit from the generative AI model (for instance, having to use Microsoft's Edge browser to use Microsoft's Bing chatbot).32 This issue of data triangulation featured in Dinerstein v. Google.33 The plaintiff in that case, Matt Dinerstein, sued the defendants (the University of Chicago Medical Center, the University of Chicago, and Google) for the invasion of his privacy rights.34
Dinerstein stated that sharing his de-identified electronic health records with Google created a significant risk of re-identification, given Google's access to the massive personal information belonging to each of its users.35
FTC Act and Health Breach Notification Rule
The FTC has taken a proactive stance in protecting health data, thereby intensifying the importance of HIPAA compliance for AI developers and vendors. To protect consumers, the FTC relies heavily on Section 5(a) of the FTC Act and the FTC's Health Breach Notification Rule ("HBNR").36
In January 2021, the FTC entered into a settlement with Flo Health Inc. ("Flo Health").37 Flo Health developed the Flo Period & Ovulation Tracker, a direct-to-consumer ("DTC") AI-driven health app that allegedly collected detailed information about the menstruation and gynecological health of more than 100 million users since 2016.38 According to the FTC's allegations, contrary to its privacy promises, the company shared consumers' personal health information with third parties such as Google, Facebook, Flurry, and AppsFlyer.39 Based on the facts of the complaint, the FTC asserted seven counts against Flo Health: (i) "Privacy Misrepresentation - Disclosures of Health Information"; (ii) "Privacy Misrepresentation - Disclosures Beyond Identifiers;" (iii) "Privacy Misrepresentation - Failure to Limit Third-Party Use;" (iv) "Misrepresentation Regarding Notice;" (v) "Misrepresentation Regarding Choice;" (vi) "Misrepresentation Regarding Accountability for Onward Transfers;" (vii) "Misrepresentation Regarding Data Integrity and Purpose Limitation."40

Similarly, in May 2023, the FTC filed a complaint against Easy Healthcare Corp., the developer of the fertility app Premom, for consumer deception, unauthorized data sharing, and failure to notify its users about disclosing their menstrual cycles, reproductive health conditions, and other fertility-related data to third parties (including Google, AppsFlyer Inc., and two China-based firms) for various purposes such as advertising.41 Based on the facts of the complaint, the FTC asserted eight counts: (i) "Privacy Misrepresentation - Disclosures of Health Information;" (ii) "Privacy Misrepresentation - Sharing Data with Third Parties;" (iii) "Deceptive Failure to Disclose - Sharing Geolocation Information with Third Parties;" (iv) "Privacy Misrepresentation - Third Parties' Use of Shared Data;" (v) "Deceptive Failure to Disclose - Third Parties' Use of Shared Data;" (vi) "Unfair Privacy and Data Security Practices;" (vii) "Unfair Sharing of Health Information for Advertising Purposes Without Affirmative [Consent];" (viii) "Violation of the [HBNR]."42

This suit against Easy Healthcare Corp. was the FTC's second attempt to hold a company accountable for an alleged violation of the HBNR. Only a few months before that, in January 2023, the FTC filed a complaint against GoodRx Holdings Inc. ("GoodRx"),43 a "consumer-focused digital healthcare platform" that "advertises, distributes, and sells health-related products and services directly to consumers, including purported prescription medication discount products."44 Allegedly, the company failed "to notify [more than 55 million] consumers and others of its unauthorized disclosures of consumers' personal health information to Facebook, Google, and other companies [since 2017]."45
Based on the facts of the complaint, the FTC asserted eight counts: (i) "Privacy Misrepresentation: Disclosure of Health Information to Third Parties;" (ii) "Privacy Misrepresentation: Disclosure of Personal Information to Third Parties;" (iii) "Privacy Misrepresentation: Failure to Limit Third-Party Use of Health Information;" (iv) "Privacy Misrepresentation: Misrepresenting Compliance with the Digital Advertising Alliance Principles;" (v) "Privacy Misrepresentation: HIPAA Compliance;" (vi) "Unfairness: Failure to Implement Measures to Prevent the Unauthorized Disclosure of Health Information;" (vii) "Unfairness: Failure to Provide Notice and Obtain Consent Before Use and Disclosure of Health Information for Advertising;" (viii) "Violation of the Health Breach Notification Rule 16 C.F.R. § 318."46

Following its settlement with GoodRx in February 2023, two other companies came onto the FTC's radar. First, in March 2023, the FTC filed a complaint against BetterHelp Inc. ("BetterHelp").47 The company has offered counseling services through its primary website and app, called "BetterHelp," since 2013.48 The FTC alleged that the respondent was liable for disclosing its consumers' health information for advertising purposes to third parties, including Facebook, Snapchat, Pinterest, and Criteo; for deceptive privacy misrepresentations; and for failure to take reasonable measures to safeguard the collected health information.49 Based on the facts of the complaint, the FTC asserted eight counts: (i) "Unfairness - Unfair Privacy Practices;" (ii) "Unfairness - Failure to Obtain Affirmative Express Consent Before Collecting, Using, and Disclosing Consumers' Health Information;" (iii) "Failure to Disclose - Disclosure of Health Information for Advertising and Third Parties' Own Uses;" (iv) "Failure to Disclose - Use of Health Information for Advertising;" (v) "Privacy Misrepresentation - Disclosure of Health Information for Advertising and Third Parties' Own Uses;" (vi) "Privacy Misrepresentation - Use of Health Information for Advertising;" (vii) "Privacy Misrepresentation - Disclosure of Health Information;" (viii) "Privacy Misrepresentation - HIPAA Certification."50

Then, in June 2023, the FTC announced a proposed settlement agreement with 1Health.io Inc. ("1Health"), a provider of DNA health test kits and health, wellness, and ancestry reports.51 The FTC argued on several bases that 1Health made misrepresentations about its data privacy practices, including its lack of data deletion processes and a retroactive policy change that enabled genetic data sharing with third parties.52 Based on the facts of the complaint, the FTC asserted five counts: (i) "Security Misrepresentation - Exceeding Industry Standards;" (ii) "Security Misrepresentation - Storing DNA Results without Identifying Information;" (iii) "Privacy Misrepresentation - Data Deletion;" (iv) "Privacy Misrepresentation - Saliva Sample Destruction;" (v) "Unfair Adoption of Material Retroactive Privacy Policy Changes Regarding Sharing of Consumers' Sensitive Personal Information with Third Parties."53

Figure 2 provides an overview of the FTC's recent consumer health data and privacy cases against the companies mentioned in this section. The figure pinpoints the similarities between these complaints to emphasize the grounds that AI developers and vendors need to be mindful of.
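The complaint lists above lend themselves to a compact tabulation in the spirit of Figure 2. The sketch below hard-codes the dates and count totals quoted in the text; the "shared grounds" column is a paraphrase, not the FTC's wording:

```python
# Summary of the enforcement actions walked through above; dates and
# count totals come from the text, the grounds column is a paraphrase.
ftc_actions = [
    ("Flo Health", "Jan 2021", 7, "privacy misrepresentation"),
    ("GoodRx", "Jan 2023", 8, "misrepresentation, unfairness, HBNR"),
    ("BetterHelp", "Mar 2023", 8, "unfairness, failure to disclose"),
    ("Easy Healthcare (Premom)", "May 2023", 8, "misrepresentation, HBNR"),
    ("1Health.io", "Jun 2023", 5, "security/privacy misrepresentation"),
]
for company, date, counts, grounds in ftc_actions:
    print(f"{company:<26} {date:<9} {counts} counts  ({grounds})")
```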
Guidelines and Enforcement Actions of the FTC
It is true that HIPAA does not provide clear guidelines for compliance. However, AI developers and vendors should treat health data in a way that complies not just with the letter of HIPAA but with its spirit and purpose. In doing so, they need to take into serious consideration the guidelines and enforcement actions of the FTC, which seeks to protect consumers from deceptive or unfair practices or acts in or affecting commerce.54 With the FTC's increased focus on health data privacy, the collection, use, and disclosure of sensitive health data is very risky, particularly in cases of data sharing with third parties for advertising purposes. To mitigate the potential risks, AI developers and vendors need to exercise caution, minimize their data collection to what is strictly necessary, and actively monitor the tracking technologies on their websites and apps to prevent any unintended and unlawful collection or sharing of their consumers' health information. These companies are advised to act with due diligence to notify consumers and obtain their affirmative consent prior to any material changes to their privacy policies, such as data sharing for advertising purposes. They should also refrain from any misrepresentation of their privacy compliance or deliberate marketing that causes misunderstanding about the capacities of the offered tool. The goal of this framework is "to offer a resource to organizations designing, developing, deploying, or using AI systems [as well as] to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems."57 AI governance goes hand in hand with data governance. AI developers and vendors are advised to place a primary focus on managing the risks of privacy violations and to be diligent in adapting their standards in compliance with new regulations. In addition, to foster trust in AI and reinforce the company's commitment to safeguarding consumer privacy in AI applications, AI developers and vendors need to adopt a proactive approach to AI audits and periodically communicate with data subjects about how their data is being handled.
Conclusion
It is crucial for developers of AI/ML-driven tools to recognize the shortcomings of HIPAA in order to better understand the compliance challenges and to be mindful about developing appropriate solutions. To achieve this, AI developers and vendors should be familiar with the common scenarios in which HIPAA does not extend its coverage to the sensitive health data of patients or consumers. This understanding plays a critical role in paving the way for addressing these scenarios in a manner that aligns with the policy objectives and the spirit of HIPAA.
AI governance goes hand in hand with data governance, and, when combined, they allow AI developers and vendors to clearly identify where failures happen within their systems and to best protect themselves from the kinds of potential legal actions outlined above. In managing compliance risks associated with the collection, use, and disclosure of health data, as well as in building trust and credibility with users, AI developers and vendors should avoid any false representation of their privacy policies in any consumer-facing platform, such as their in-app privacy policy or the privacy terms on their website. Furthermore, by diligently assessing an AI system's compliance with legal requirements and keeping users informed about how their data is being handled, AI developers and vendors can foster a privacy-conscious environment.
Figure 1. Level of Protection of PHI under HIPAA based on the Category of Data Recipient.
The Journal of Law, Medicine & Ethics, 51 (2023): 988-995. © 2024 The Author(s)
| 4,530.4 | 2023-01-01T00:00:00.000 | [ "Law", "Computer Science" ] |
Origin and Characteristics of Internal Genes Affect Infectivity of the Novel Avian-Origin Influenza A (H7N9) Virus
Background Human infection with a novel avian-origin influenza A (H7N9) virus occurred continuously in China during the first half of 2013, with high infectivity and pathogenicity to humans. In this study, we investigated the origin of the internal genes of the novel H7N9 virus and analyzed the relationship between the internal genes and the infectivity of the virus. Methodology and Principal Findings We tested environmental specimens using real-time RT-PCR assays and isolated five H9N2 viruses from specimens that were positive for both H7 and H9. Recombination and phylogenetic analyses, performed on the entire sequences of 221 influenza viruses, showed that one of the Zhejiang avian H9N2 isolates, A/environment/Zhejiang/16/2013, shared the highest identities on the internal genes with the novel H7N9 virus A/Anhui/1/2013, ranging from 98.98% to 100%. The Zhejiang avian H9N2 isolates were all reassortant viruses, having acquired the NS gene from A/chicken/Dawang/1/2011-like viruses and the other five internal genes from A/brambling/Beijing/16/2012-like viruses. Compared to A/Anhui/1/2013 (H7N9), the homology on the NS gene was 99.16% with A/chicken/Dawang/1/2011, whereas it was only 94.27-97.61% with A/brambling/Beijing/16/2012-like viruses. The relationship between the internal genes and the infectivity of the novel H7N9 viruses was analyzed by comparing amino acid sequences with the HPAI H5N1 viruses, the H9N2 and the earlier H7N9 avian influenza viruses. Nine amino acids on the internal genes were found to be possibly associated with the infectivity of the novel H7N9 viruses. Conclusions These findings indicate that the internal genes, sharing the highest similarities with A/environment/Zhejiang/16/2013-like (H9N2) viruses, may affect the infectivity of the novel H7N9 viruses.
Introduction
Human infection with a novel avian-origin influenza A (H7N9) virus, which is associated with severe respiratory symptoms and even death, was first reported in eastern China in April 2013 [1,2]. There had been 135 diagnosed cases, including 44 deaths, as of August 14th, attracting great attention worldwide [3].
The novel H7N9 virus is a triple reassortant virus in which the HA and NA genes originated from A/duck/Zhejiang/12/2011 (H7N3) and A/wild bird/Korea/A14/2011 (H7N9), respectively, whereas the internal genes are closely related to A/brambling/Beijing/16/2012-like viruses (H9N2), as previously described [1]. Most current research has focused on the HA and NA genes, since the Q226L mutation in the HA protein has been considered to change the binding preference from avian to human receptors and thus might increase airborne transmissibility [4][5][6]. However, various studies have also shown that continuous reassortment of the internal genes of avian influenza A viruses played a key role in direct interspecies transmission and in triggering human infection [7,8]. This study therefore paid close attention to the origin and characteristics of the internal genes of the novel H7N9 virus. Different subtypes of avian influenza viruses (AIVs) possess different virulence and infectivity. Besides domestic poultry and wild birds, some AIVs, including subtypes H5, H7 and H9, have been detected in humans [9]. The highly pathogenic avian influenza (HPAI) H5N1 viruses can spread rapidly in and between poultry flocks, resulting in hundreds of millions of domestic birds affected and killed [10]. In China alone, approximately 100,000 domestic birds are infected with H5N1 AIVs every year, causing huge economic losses [11]. The HPAI H5N1 viruses can also be widely transmitted by poultry products, poultry movements and the migration of wild birds. In total, 63 countries have reported detecting HPAI H5N1 viruses in poultry or wild birds [12]. In addition, the HPAI H5N1 virus is a great threat to humans, as 637 people had been infected since 2003 [13].
In contrast to H5N1, subtype H9 AIVs were generally considered to be low-pathogenicity viruses causing mild disease among domestic poultry and wild birds [14][15][16]. Human infections with H9N2 AIVs have occasionally been reported in southern China and Hong Kong, but the clinical symptoms of the patients were mild to moderate and no deaths occurred [17][18][19]. Although of low pathogenicity, H9N2 AIVs nevertheless possess high infectivity in both poultry and humans. Since the first subtype H9N2 AIV was isolated in 1966 [20], H9N2 AIVs have been detected in multiple avian species in various regions [21][22][23][24][25][26]. Previous serological surveys have also pointed out that the positive rates for anti-H9N2 antibody were high in both poultry and humans. In Iran, 23% to 87% of poultry-related workers possessed antibody against H9 [9]. Chinese studies also reported that 12.8% of chickens and 5.1% of poultry-related workers in the Guangzhou area were seropositive for H9N2 [27]. Even in healthy individuals, the prevalence of anti-H9N2 antibodies was reported to be around 2% [19,28].
Infections with earlier H7N9 AIVs were rarely reported in China and occurred only sporadically in a few countries [29,30]. No H7N9 virus had been detected or isolated from poultry in Zhejiang province before 2013, which indicates the low transmissibility of the earlier H7N9 AIVs. However, the novel H7N9 viruses exhibited transmissibility as high as that of the HPAI H5N1 and H9N2 AIVs. Our concern was whether we could find any common features, on the basis of genome sequences, among the HPAI H5N1 viruses, the H9N2 and earlier H7N9 AIVs, and the novel H7N9 viruses that could be a possible reason for the high infectivity of the novel H7N9 virus.
In this study, we monitored environmental specimens collected from live poultry markets and isolated five avian H9N2 viruses from specimens positive for both H7 and H9. Meanwhile, the origin of the internal genes of the novel H7N9 virus was systematically elucidated by analyzing the entire sequences of 221 influenza viruses using the RDP3 software. Phylogenetic analysis and homology comparison were performed to determine the reassortment events. In addition, the specific amino acids possibly related to the high infectivity of the novel H7N9 viruses were identified by comparing the genome sequences with the HPAI H5N1 viruses, the H9N2 and the earlier H7N9 AIVs.
Surveillance of AIVs from the environmental specimens
After the outbreaks of human infection with the novel H7N9 virus in Zhejiang province, China in 2013, we tested environmental specimens collected from live poultry markets using real-time RT-PCR. Of the 82 environmental specimens collected from six cities, 33 (40.24%) were positive for H7, 15 (18.29%) for H5 and 39 (47.56%) for H9. Twenty-three (28.05%) were positive for both H7 and H9. From the specimens positive for both H7 and H9, five AIVs were isolated: A/environment/Zhejiang/09/2013 (ZJ09), A/environment/Zhejiang/13/2013 (ZJ13), A/environment/Zhejiang/14/2013 (ZJ14), A/environment/Zhejiang/15/2013 (ZJ15) and A/environment/Zhejiang/16/2013 (ZJ16); two of these specimens were from Hangzhou city and three from Huzhou city. Ct values of the clinical specimens of the five isolates ranged from 10 to 14 for H9 and from 17 to 35 for H7. After the virus isolates were obtained from allantoic fluids, the whole genomes of the isolates were sequenced and aligned with BLAST at NCBI. The results showed that the sequences of each gene shared the highest similarities with H9N2 AIV strains, and the five Zhejiang isolates were thus identified as avian influenza A (H9N2) viruses. The details of the five Zhejiang H9N2 AIVs are shown in Table 1.
A retrospective analysis of the surveillance data for AIVs in Zhejiang province between 2011 and 2012 was performed as well. In total, 784 environmental specimens were tested by real-time RT-PCR assay to detect H5, H7 and H9 AIVs in 2011, in
Origin of the internal genes of the novel H7N9 virus
The evolutionary process of the novel H7N9 virus, analyzed with the RDP3 software, showed that the six internal genes of the novel H7N9 viruses shared the highest similarities with ZJ16 (H9N2)-like viruses (Figure 1). Significant reassortment signals were found in the internal genes of ZJ16 (H9N2)-like viruses, which acquired the NS gene from an H9N2 virus isolated from chicken in Jiangsu province in 2011, named A/chicken/Dawang/1/2011 (H9N2), and the other five genes from A/brambling/Beijing/16/2012 (BJ16) (H9N2)-like strains.
Phylogenetic trees drawn on the basis of the amino acid sequences were used to confirm the reassortment events. The PB2 gene phylogenetic tree (Figure 2A) showed that the five novel H7N9 strains were closest to the Zhejiang H9N2 strains ZJ09,
Amino acids associated with the high infectivity of the novel H7N9 virus
In this study, the specific amino acids possibly related to the high infectivity of the novel H7N9 viruses were analyzed based on the criteria illustrated in Figure 3. As the novel H7N9 viruses, the HPAI H5N1 and the H9N2 AIVs were all defined as highly transmissible viruses, the amino acids that were identical amongst these three groups were selected first. Since the earlier H7N9 AIVs were considered to be of low infectivity, those of the selected amino acid sites at which the residue was identical to the earlier H7N9 viruses were then excluded. The results are shown in Table 3.
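A minimal sketch of this selection rule is given below; the group labels and toy sequences are hypothetical, and real use would start from the aligned protein sequences listed in Table S1.

```python
from typing import Dict, List

def infectivity_sites(groups: Dict[str, List[str]]) -> List[int]:
    """0-based alignment positions where the three high-infectivity groups
    (novel H7N9, HPAI H5N1, H9N2) share a single identical residue that
    never occurs in the low-infectivity earlier H7N9 group."""
    high = ("novel_H7N9", "HPAI_H5N1", "H9N2")
    low = "earlier_H7N9"
    length = len(groups[high[0]][0])       # all sequences assumed aligned
    sites = []
    for pos in range(length):
        residues = {seq[pos] for g in high for seq in groups[g]}
        if len(residues) == 1:             # identical amongst the three groups
            shared = residues.pop()
            if all(seq[pos] != shared for seq in groups[low]):
                sites.append(pos)          # differs from every earlier H7N9
    return sites

# Toy usage with 4-residue "alignments" (hypothetical data):
demo = {
    "novel_H7N9":   ["MKAI", "MKAI"],
    "HPAI_H5N1":    ["MKAV", "MKAI"],
    "H9N2":         ["MKAI"],
    "earlier_H7N9": ["MRTI", "MRSI"],
}
print(infectivity_sites(demo))             # -> [1, 2]
```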
In accordance with Figure 3 and the results in Table 3, a total of nine amino acids within the internal genes, including one in PB1 (aa113), three in NP (aa77, aa105 and aa377), three in M1 (aa15, aa101 and aa166), one in M2 (aa28), and one in NS1 (aa207), were found to be identical between the novel H7N9, the HPAI H5N1 and the H9N2 AIV groups, but completely different from the earlier avian H7N9 group. Importantly, no amino acid in the HA and NA genes satisfied this requirement; therefore, only these nine specific amino acids showed a relationship with the high infectivity of the novel H7N9 virus.
Discussion
Since March 2013, a severe and fatal respiratory disease, characterized by high fever and severe lower respiratory symptoms, has occurred continuously in China, mainly in Zhejiang and Jiangsu provinces, as well as Shanghai city [1,2,6]. The Chinese National Influenza Centre (CNIC) identified the pathogen as a triple reassortant avian-origin influenza A (H7N9) virus by testing the clinical specimens of the first three patients; meanwhile, the genetic origins of the viruses were also analyzed [1].
The novel H7N9 virus infected 46 individuals, resulting in eight deaths, within one month in Zhejiang province, China. In addition to the laboratory diagnosis of the cases, environmental specimens collected from live poultry markets were monitored for H5, H7 and H9 AIVs in order to assess whether the novel H7N9 virus had infected domestic poultry and was circulating in the external environment. Interestingly, 23 specimens were positive for both H7 and H9 during this surveillance; however, the Ct values for H9 (10-14) obtained from the real-time RT-PCR assay were significantly lower than those for H7 (17-35), and all the isolates obtained from the double-positive specimens were identified as H9N2 AIVs. This indicates that co-infection with H7 and H9 existed in Zhejiang province and that domestic poultry, especially chicken, has a close relationship with the reassortment between the novel H7N9 and avian H9N2 viruses. Based on a previous study, A/brambling/Beijing/16/2012 (BJ16), an H9N2 strain isolated from a brambling in Beijing in 2012, was identified as the donor of the internal genes of the novel H7N9 virus [1]. However, A/environment/Zhejiang/16/2013 (H9N2)-like strains showed higher similarities with the novel H7N9 virus than BJ16. The surveillance of environmental specimens revealed that A/environment/Zhejiang/16/2013 (H9N2)-like strains existed and circulated in the domestic poultry of Zhejiang province before the outbreak caused by the novel H7N9 virus. Since most of the human infections with the novel H7N9 virus occurred in the Yangtze River Delta, from a geographical point of view it seems more reasonable that the six internal genes of the novel H7N9 virus originated from viruses that reassorted and circulated in the Yangtze River Delta. Unfortunately, there is no information about the avian H9N2 viruses isolated in Jiangsu, Zhejiang province and Shanghai city in 2012; it is therefore difficult to know when the ZJ16 (H9N2)-like viruses emerged, how long they have circulated and what their original hosts were.
Human infection with the HPAI H5N1 virus was first reported in Hong Kong in 1997, resulting in 18 cases with six deaths from May 1997 to February 1998 [31]. The pathogen that caused this outbreak was identified as the A/Hong Kong/156/97 (H5N1) strain. Genetic analysis of the whole genome sequence showed that the internal genes of A/Hong Kong/156/97 (H5N1) shared the highest homologies with H9N2 AIVs [31,32]. As with the A/Hong Kong/156/97 (H5N1) strain, the six internal genes of the novel H7N9 virus also originated from H9N2 AIVs. A notable feature of both A/Hong Kong/156/97 (H5N1) and the novel H7N9 viruses is the increase in infectivity after genetic reassortment, with large numbers of poultry and even humans infected. Thus, the internal genes reassorted from H9N2 AIVs may affect the infectivity of the novel H7N9 viruses.
In this study, several subtypes of AIVs, including HPAI H5N1, avian H9N2 and earlier H7N9 viruses, were included to analyze the possible reason for the high infectivity of the novel H7N9 virus. These three subtypes of AIVs are all of avian origin and have been identified as causes of human infection. The differences and similarities in amino acid sequences among the three subtypes of AIVs may result in different infectivity to humans. In this study, the HPAI H5N1, the H9N2 AIVs and the novel H7N9 viruses were characterized as highly infective; therefore, the identical amino acids shared amongst these three groups were selected as the sites related to high infectivity, as shown in Figure 3. Since the earlier H7N9 AIVs are acknowledged to have low transmissibility [29,30], we removed from the selected sites the amino acids that the earlier H7N9 viruses possessed. Interestingly, all nine amino acids obtained, possibly associated with the high infectivity of the novel H7N9 virus, were located on the internal proteins, and none were found on the surface proteins. It can be inferred that the internal genes, reassorted from avian H9N2 viruses, may increase the infectious ability of the novel H7N9 virus.
Even though the novel H7N9 virus is highly infective and pathogenic to humans, it is generally of low pathogenicity to poultry [1,29]. It is therefore difficult to eliminate the novel H7N9 virus from the environment, from domestic poultry and from wild birds. Although the infection can be controlled in the same manner as the HPAI H5N1 viruses, through slaughter of the poultry [31,33,34], the widely circulating avian H9N2 viruses will continue to provide a rich source of internal genes to H5, H7 and other subtype viruses [14,19,31]. Once reassortment happens, the newly reassorted avian influenza viruses have the potential to cross the species barrier and infect humans. Therefore, the surveillance of mutations, evolution and reassortment of the H9N2 AIVs should be strengthened in order to control the novel H7N9 viruses.
Ethics Statement
The ethics committee of the Zhejiang Provincial Center for Disease Control and Prevention (CDC) approved this study. In order to control the outbreak of infection with the novel H7N9 virus, staff in municipal CDCs took responsibility for the disinfection of live poultry markets. To assess the disinfection effect, environmental specimens were collected before and after the disinfection. All the specimens used in this study were collected from the environment before disinfection.
Specimens and virus isolation
In total, 82 environmental specimens were screened for subtype H5, H7 and H9 AIVs using one-step real-time reverse-transcriptase polymerase chain reaction (rRT-PCR) assays. The primer pairs and probes of the real-time RT-PCR assays were designed and provided by the Chinese National Influenza Centre (CNIC). The environmental specimens, including swabs of chicken cages, chicken manure and sewage from slaughtered poultry, were collected from live poultry markets in Zhejiang province, China from 1 April to 2 May 2013. Those specimens positive for both H7 and H9 AIVs were inoculated into the allantoic sacs of 9-to-11-day-old specific-pathogen-free (SPF) embryonated chicken eggs for 48 hours at 35°C to propagate the virus.
RNA extraction and sequencing
RNA was extracted from the harvested allantoic fluids using the QIAamp Viral RNA Mini Kit (Qiagen). The whole genomes of the five avian H9N2 viruses were determined by second-generation sequencing on an Applied Biosystems (ABI) platform. The obtained sequences of each gene were aligned with BLAST at NCBI. The complete sequences of the five Zhejiang isolates were submitted to GenBank by the Zhejiang Provincial Center for Disease Control and Prevention (CDC).
Multiple sequence alignment was performed with the ClustalW program, and amino acid mutations were analyzed with the Highlight mode of the MEGA software (version 5.0). Phylogenetic trees were built with the neighbor-joining method of the MEGA software (version 5.0) with bootstrap analysis of 1000 replications. Reassortment events in the six internal genes of the novel H7N9 viruses were searched for with the RDP, GENECONV and MaxChi methods within the Recombination Detection Program (RDP3) software, according to the instructions and previous studies [36]. The highest P value was set to 0.01 in order to enhance the credibility of the results. Other parameters were used with default settings.
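As a rough scripted analogue of the MEGA workflow, the sketch below builds a neighbor-joining tree with bootstrap support using Biopython; the input file name is hypothetical, and the BLOSUM62 distance model is one plausible choice rather than the exact MEGA settings.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

aln = AlignIO.read("pb2_aa.fasta", "fasta")          # aligned amino acid sequences
calculator = DistanceCalculator("blosum62")           # protein distance model
constructor = DistanceTreeConstructor(calculator, method="nj")  # neighbor-joining

tree = constructor.build_tree(aln)                    # point-estimate NJ tree
# Majority-rule consensus over 1000 bootstrap replicates, as in the text
consensus = bootstrap_consensus(aln, 1000, constructor, majority_consensus)
Phylo.draw_ascii(consensus)
```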
Phylogeny analysis
Phylogenetic trees were generated based on the amino acid sequences of 39 influenza viruses by means of the neighbor-joining (NJ) method of the MEGA software (version 5.0), with bootstrap analysis of 1000 replications.
Comparison of the amino acid sequences of three subtypes of AIVs
The multiple sequence alignments based on the amino acid sequences of the HPAI H5N1 viruses, the H9N2 and the earlier H7N9 AIVs, as well as the novel H7N9 viruses, were performed using the BioEdit software. Except for the novel H7N9 viruses, all the amino acid sequences were downloaded from GenBank. The GenBank accession numbers of the AIVs used for the comparison are shown in Table S1.
The comparative method and selection principle for the specific amino acids associated with the high infectivity of the novel H7N9 virus are illustrated in Figure 3. The amino acid sequences of the whole genomes of the four groups of AIVs were compared. The yellow region, representing high infectivity, was obtained by selecting the amino acids that were identical amongst the novel H7N9, the HPAI H5N1 and the H9N2 AIVs, but different from the earlier H7N9 AIVs. Table S1. GenBank accession numbers of amino acid sequences for the AIVs used in this study. (DOC)
"Biology",
"Medicine"
] |
The Connection Between Social Inequality And Intergenerational Transfers Between Three Generations In Europe
Abstract Family members support each other across the entire family cycle. Parents help their adult children with financial transfers, hands-on support and childcare, while children in mid-life often support their older parents with help and care. However, there are profound social inequalities linked to intergenerational transfers. While there is some research on inequality for some types of intergenerational transfers and some transfer directions, there is still no conclusive study bringing together all the different support types between multiple generations from different social backgrounds over time. In our view, taking a longitudinal multi-generational perspective is essential to capture dependencies and negotiations within families from different socio-economic backgrounds within different regional contexts. If middle-aged parents have to take care of their own older parents, they have fewer resources for their (grand-)children, who might then receive less attention and support from them. This may differ according to access to support from public or private institutions. Here, country and regional specifics have a huge impact on support patterns within the family, which can only be captured when looking into developments and change. Using six waves of the Survey of Health, Ageing and Retirement in Europe (SHARE), we look at intergenerational transfers between multiple generations over time across European regions, considering mid-aged Europeans in the "sandwich" position between older parents and children, and include multiple transfer directions and types over time to assess the links between social inequality and intergenerational solidarity in Europe's ageing societies. The impact of COVID-19 on this issue will also be considered.
effects on participant interaction during intergenerational programming. To address this knowledge gap, activity leaders at five sites serving older adults and/or preschoolers received training to implement 14 evidence-based practices during intergenerational activities involving 109 older adult and 105 preschool participants over four years. We utilized multi-level modeling to test whether variations in the implementation of practices were associated with variations in participants' responses to programming on a session-by-session basis. For both preschool and older adult participants, analyses revealed that the implementation of certain practices was associated with significantly more intergenerational interaction. Specifically, when person-centered best practices (e.g., leading activities that are age- and role-appropriate for older adults) were implemented, preschoolers (estimate = 5.83, SD = 2.11, p = 0.01) and older adults (estimate = 5.11, SD = 0.10, p = 0.02) had more intergenerational interaction. Likewise, when environmental-centered best practices were implemented, such as pairing materials between intergenerational partners, preschoolers (estimate = 6.05, SD = 1.57, p = 0.002) and older adults (estimate = 6.50, SD = 1.85, p = 0.001) had more intergenerational interaction. Our findings reveal session-by-session variation in intergenerational interaction that can be affected by implementation practices, which highlights the importance of training activity leaders to implement evidence-based practices. Researchers and practitioners should consider how session-by-session variation in program implementation affects participant response.
SUBJECTIVE WELL-BEING, STATUS IDENTITY, AND INTERGENERATIONAL RELATIONS AMONG THE ELDERLY
Jieming Chen, Texas A&M University -Kingsville, KIngsville, Texas, United States This study investigates the influences of intergenerational relations on the subjective wellbeing and status identity of the elderly population in China.The project draws insights from the studies of social mobility and stratification, and that of family relations and old age support.Because of widespread exchange of economic resources across generations and strong sense of connectedness among parent and adult children families that continue to exist in Chinese society today, we hypothesize that older parents' subjective sense of well-being and evaluation of their socioeconomic statuses are positively related with the socioeconomic conditions of their grown children, and the strength of the such relations with them.The study used the data from the 2013 China General Social Survey (CGSS), and the results provide fairly strong support to the hypotheses.The implications of the results on age-based stratification are discussed.
SUPPORT EXCHANGES AMONG VERY OLD PARENTS AND THEIR CHILDREN: FINDINGS FROM THE BOSTON AGING TOGETHER STUDY
Kyungmin Kim,1 Kathrin Boerner,2 Yijung Kim,3 and Daniela Jopp,4 1. Seoul National University, Seoul, Republic of Korea; 2. University of Massachusetts Boston, Boston, Massachusetts, United States; 3. The University of Texas at Austin, Austin, Texas, United States; 4. University of Lausanne, Lausanne, Vaud, Switzerland. Very old parents and their "old" children are a growing group in industrialized countries worldwide. Care needs of very old parents can be substantial, while children may also face their own age-related issues. However, little is known about support exchanges within very-old parent-child dyads. This study aimed to identify patterns of support exchanges occurring in these dyads, as well as to ascertain individual and relationship factors associated with these patterns. Participants were 114 very old parents (age ≥ 90) and their children (age ≥ 65) from the Boston Aging Together Study. Data were collected using comprehensive, semi-structured in-person interviews with both dyad members, including standardized assessments of support exchanges, relationship quality, health, and perceptions of family norms. Actor-Partner Interdependence Models (APIM) were used to predict upward and downward support reported by children and parents. Both dyad members reported not only substantial upward support (given to parents by children) in all domains but also notable amounts of downward support (given to children by parents) in the domains of emotional support, listening, and socializing. Findings showed significant associations of parent functional impairment, parent and child relationship quality, and child perceptions of family obligation with upward support, and of relationship quality with downward support. Continued support exchanges among very old parents and their children indicate that intergenerational theories still hold up in very-late-life relationships. Healthcare professionals should be aware that attention to relationship quality and family norms might be vital to ensure that support needs are met.
THE CONNECTION BETWEEN SOCIAL INEQUALITY AND INTERGENERATIONAL TRANSFERS BETWEEN THREE GENERATIONS IN EUROPE Christian Deindl, TU Dortmund University, Dortmund, Nordrhein-Westfalen, Germany
THE COSTS OF CONCERN: HEALTH IMPLICATIONS OF WORRIES ABOUT AGING PARENTS AND ADULT CHILDREN
Kelly Cichy,1 and Athena Koumoutzis,2 1. Kent State University, Kent, Ohio, United States; 2. Miami University, Oxford, Ohio, United States. As their parents age and their children enter adulthood, midlife adults need to manage their worries and concerns about both generations. In midlife, worries about aging parents' health and emerging needs for support co-occur alongside worries about adult children's relationships and prolonged need for support. Research reveals links between midlife adults' worry and sleep quality, underscoring how worries compromise health and well-being. In addition to compromising sleep, worries may also contribute to poor health behaviors, such as emotional eating. Emotional eating, where individuals eat in response to stressors and negative emotions, is a significant risk factor for overeating and obesity. Less is known, however, about how midlife adults' worries contribute to poor health behaviors. To address this gap, the current study considers how midlife adults' concurrent and previous-day daily worries about aging parents and adult children are associated with daily well-being and health behaviors. Respondents were midlife adults (40-60 years) from Wave II of the Family Exchanges Study (Fingerman et al., 2009). During 7 days of daily telephone interviews, respondents indicated whether they worried about their adult children and their aging parent(s), whether they ate food for comfort, and their daily negative mood. Controlling for demographics, on days when midlife adults worried about their adult child(ren), they reported more negative emotions than on days without these worries (p < .05). Respondents engaged in more eating for comfort the day after they reported worrying about their mother (p < .05). Implications for aging families will be discussed.
THE EFFECTS OF SOCIAL SUPPORT ON THE PSYCHOLOGICAL WELL-BEING OF OLDER PARENTS: A LONGITUDINAL STUDY Erik Blanco, University of Southern California, Leonard Davis School of Gerontology, Los Angeles, California, United States
This study examines whether parental support (the provision of social support by older parents to adult children) and filial support (older parents' receipt of social support from adult children) influence two orthogonal dimensions of older adults' psychological well-being: positive feelings and negative feelings. This study also highlights the importance of accounting for parental need as a mediator of social support. A longitudinal design is used to examine the effects of social support on the psychological well-being of older adults at Wave 6 (1998) and Wave 8 (2004) of the Longitudinal Study of Generations. Parental support significantly increases parents' positive feelings, which suggests that, when it comes to positive feelings, it is better to give support than to receive it. The filial support findings indicate that older adults with greater levels of disability demonstrate a decrease in negative feelings when they receive filial support. However, this effect does not hold for older adults with lesser levels of disability, suggesting that, when it comes to older adults' negative feelings, it is better to receive support (rather than to give it) when parents are in need. Although parental and filial support have the potential to buffer stressful life transitions in old age, most parents wish to remain independent, even in later life, making them reluctant to accept filial support. The parent-adult child relationship is crucial for psychological well-being, especially because of increased life expectancy.
THE IMPACT OF LIVING ARRANGEMENTS AND INTERGENERATIONAL SUPPORT ON THE HEALTH STATUS OF OLDER PEOPLE IN CHINA
Yazhen Yang, Maria Evandrou, and Athina Vlachantoni, University of Southampton, Southampton, England, United Kingdom. Research to date has examined the impact of intergenerational support in terms of isolated types of support, or at one point in time, failing to provide strong evidence of the complex effect of support on older persons' wellbeing. Using the Harmonised China Health and Retirement Longitudinal Study (2011, 2013 and 2015), this paper investigates the impact of older people's living arrangements and intergenerational support provision/receipt on their physical and psychological wellbeing, focusing on rural/urban differences. The results show that receiving economic support from one's adult children was a stronger predictor of higher life satisfaction among older rural residents compared to those in urban areas, while grandchild care provision was an important determinant of poor life satisfaction only for older urban residents. Receiving informal care from one's adult children was associated with poor (I)ADL functional status and with depressive symptoms among older rural people. Meanwhile, having weekly in-person and distant contact reduced the risk of depression among older people in both rural and urban areas. The paper shows that it is important to improve the level of public economic transfers and public social care towards vulnerable older people in rural areas, and that more emphasis should be placed on improving the psychological well-being of urban older residents, for instance through the early diagnosis of depression.
Family Caregivers' Perceptions and Experiences
"Sociology",
"Economics"
] |
Probabilistic Inference with Interval Probabilities
Probabilistic inference problems have very broad practical applications. To solve this kind of problem under conditions of certainty, an effective mathematical apparatus has been developed. In real situations, obtaining deterministic estimates of the relevant probabilities is often difficult; therefore, problems involving uncertain estimates of probabilities arise. This paper examines the problem of probabilistic inference with probability trees provided that the initial probabilities are given in the form of intervals of their possible values.
Introduction
In many practical problems, it is often necessary to determine the probabilities of the events under consideration based on the probabilities of other events. Such problems are called probabilistic inference problems. Elementary tasks of probabilistic inference are described in [1,2]. More complicated are problems of probabilistic inference for probability trees and belief networks. More details about methods of solving these kinds of tasks can be found in [1,2].
Effective techniques have been developed for solving probabilistic inference problems when the initial probability estimates are uncertain, namely when the estimates are given in interval or fuzzy form. This paper examines the problem of probabilistic inference under the condition that the initial values of the relevant probabilities are set as intervals of their possible values.
Basic Concepts and Definitions of Interval Probabilities
In [3], the set of all probability distributions compatible with such interval estimates is formally defined as

$$\mathcal{P}_A = \{\, p \mid l_i \le p(A_i) \le u_i,\ i = 1, \dots, n \,\}, \qquad (1)$$

where $\mathcal{P}_A$ denotes the set of all possible probability estimates defined on the set of random events $A = \{A_1, \dots, A_n\}$. To avoid the situation when $\mathcal{P}_A = \emptyset$, the boundary values of the probability intervals have to satisfy the limiting conditions

$$\sum_{i=1}^{n} l_i \le 1 \le \sum_{i=1}^{n} u_i. \qquad (2)$$

Probability intervals satisfying conditions (2) are called proper intervals in [3]. It is evident that in tasks of interval probabilistic inference one should always operate with proper intervals only.
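As a minimal illustration, the two feasibility checks can be coded directly; this sketch assumes the standard formulations of conditions (2) and (3) from the interval-probability literature cited as [3] (condition (3) is stated in the paragraph that follows), and the example intervals are hypothetical.

```python
def is_proper(intervals):
    """Condition (2): sum(l_i) <= 1 <= sum(u_i), so the credal set is non-empty."""
    return sum(l for l, _ in intervals) <= 1.0 <= sum(u for _, u in intervals)

def is_reachable(intervals):
    """Condition (3): every boundary value is attained by some distribution."""
    lo = sum(l for l, _ in intervals)
    hi = sum(u for _, u in intervals)
    return all(u + (lo - l) <= 1.0 and l + (hi - u) >= 1.0 for l, u in intervals)

probs = [(0.2, 0.4), (0.3, 0.5), (0.2, 0.3)]   # hypothetical interval estimates
print(is_proper(probs), is_reachable(probs))   # -> True True
```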
Probability intervals meeting conditions (3) are called reachable intervals in [3]:

$$u_i + \sum_{j \ne i} l_j \le 1, \qquad l_i + \sum_{j \ne i} u_j \ge 1, \qquad i = 1, \dots, n. \qquad (3)$$

This means that deterministic probability values can be selected over the entire interval $[l_i, u_i]$, including its boundaries. In [3] it is also proven that further inequalities are valid for reachable probability intervals. Calculations of the relevant interval probabilities are made according to the rules of interval arithmetic, as well as by some special expressions. The use of such special expressions is stipulated by the need to ensure reachable intervals for the resulting probabilities. As an illustration, consider the calculation of the posterior probabilities by Bayes' formula given that the initial probabilities are set in interval form. Several methods for extending the classical Bayes' formula to interval probabilities are known; one method was proposed in [4-7]. The essence of this method is as follows: the following information enables reconstruction of the initial field $F$:
1. the marginal $F$-field with respect to the division $C$ ("prior probabilities");
2. a set of fields of conditional $F$-probability according to the canonical concept.
An $F$-field of complete probabilities can then be calculated based on the assigned conditional $F$-fields. For each $B \subseteq A$, the intuitive concept of conditional probability creates an $F$-field, and the conditional probabilities $p(A_i \mid B)$ are the desired posterior probabilities.
Although the application of the proposed method produces correct results, it will not be used in this paper due to the complexity and difficulties in interpretation of the results obtained. Instead, a method proposed in [8,9] will be used.
The method under consideration is based on the concept of generalized intervals. A classical interval is identified with a set of real numbers, whereas a generalized interval is identified with the help of predicates that are filled with real numbers; its boundaries are not ordered in the usual sense. Operations on generalized intervals are defined based on Kaucher arithmetic [10]. In the set of generalized intervals, the following specific mathematical operations are defined.
Operation (7) results in a proper generalized interval.
The result of operation (8) is an improper generalized interval.
The operation that follows transforms a proper generalized interval into an improper one. In [8], the author proposes an interval version of Bayes' formula for events $E_i$, $i = 1, \dots, n$ (expressions (10) and (11)). It is necessary to understand that, according to the common expression (10), the interval values of the probabilities in both denominators of expression (11) are the inverted (dual) values of the initial interval values. When there are only two relevant events, $E$ and $E^c$, the boundary values of the posterior conditional probabilities can be calculated by expressions (12a, b) of [8]. It should be taken into consideration that the denominators of expressions (12a, b) use not the initial boundary values of the probabilities but their duals.
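Because expressions (10)-(12) did not survive extraction, the sketch below implements the widely used tight two-event posterior bounds instead; it matches the qualitative description above (each denominator combines the bounds for $E$ and $E^c$ in opposite directions) but is an assumption, not a verbatim restatement of (12a, b) from [8].

```python
def posterior_interval(prior_E, lik_B_given_E, lik_B_given_Ec):
    """Bounds on P(E | B) for two events E and E^c with interval-valued inputs.

    Each argument is a (lower, upper) pair; the prior of E^c is taken as the
    complement (1 - upper, 1 - lower) of the prior of E. These are the standard
    monotone bounds, assumed here rather than copied from expressions (12a, b).
    """
    lE, uE = prior_E
    lBE, uBE = lik_B_given_E
    lBEc, uBEc = lik_B_given_Ec
    lEc, uEc = 1.0 - uE, 1.0 - lE                 # complement interval of the prior
    lower = lE * lBE / (lE * lBE + uEc * uBEc)    # E made as unlikely as possible
    upper = uE * uBE / (uE * uBE + lEc * lBEc)    # E made as likely as possible
    return lower, upper
```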
Case Study
Let us consider the "classical" task of assessing the chances that oil is present at a site, given prior evaluations of these chances and evaluations of the conditional probabilities of the results of a seismic exploration of the site. We have the following initial data.
Consider a set of random events ("states of nature") $\{a_1, a_2\}$, where event $a_1$ corresponds to the actual presence of oil at the site and event $a_2$ corresponds to the actual absence of oil at the site. Let us call events $a_1$ and $a_2$ "geological events". Assume that, based on expert evaluation, interval values of the probabilities of occurrence of these events are assigned. Assume further that a manager of an oil mining company has decided to undertake a seismic exploration of the site to re-evaluate the prior probability values. The specifics of a seismic exploration are that it can both correctly confirm the real presence or absence of oil at a site and produce erroneous results, i.e., show the presence of oil when it is absent or show the absence of oil when it is really present. Let us introduce the following system of notation: $b_1/a_1$: the seismic exploration has confirmed the real presence of oil at the site; $b_2/a_1$: the seismic exploration has erroneously indicated the lack of oil at the site, though in reality oil is present. To calculate the required posterior interval probabilities, it is necessary to divide the outcome probability values appropriately. It is easy to verify that the resulting intervals are valid probability intervals. The target state of information in the form of a decision tree is presented in Figure 2. Note that in this probability tree the numbering of outcomes corresponds to the numbering of outcomes in the probability tree of Figure 1; the probabilities of the respective outcomes in both figures are the same.
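For illustration only, the posterior_interval sketch above can be applied to this example; the interval values below are hypothetical stand-ins, since the numerical data of the original case study were lost in extraction.

```python
# Hypothetical interval data for the oil example -- not the paper's values.
prior_oil = (0.25, 0.40)        # P(a1): oil actually present at the site
pos_given_oil = (0.75, 0.90)    # P(b1 | a1): survey confirms oil when present
pos_given_dry = (0.10, 0.25)    # P(b1 | a2): false positive when no oil

low, high = posterior_interval(prior_oil, pos_given_oil, pos_given_dry)
print(f"P(oil | positive survey) lies in [{low:.3f}, {high:.3f}]")
# -> P(oil | positive survey) lies in [0.500, 0.857]
```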
Algorithms for Finding Permissible Values of Probabilities on Sets of Their Interval Values
Let us introduce the concept of consistent probability intervals. Assume that a set of conditional probability intervals with point values $c_1$, $c_2$, $c_3$ is given; for clarity, three such conditional intervals are graphically presented in Figure 4. For these intervals to be consistent, they must satisfy the following requirements: (1) $c_1 + c_2 + c_3 = 1$; (2) each value $c_i$ must belong to its interval $[l_i, u_i]$.

Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest.
"Mathematics",
"Computer Science"
] |
The Problems and Countermeasures of Publicity Translation of Shu Brocade
As one of the four famous brocades in China, Shu Brocade has a history of more than 2,000 years; it is a treasure of silk culture in China and even in the world, with very high historical and cultural value. Under the background of globalization, culture is increasingly an important part of a country's influence, so publicity translation has become the most important work of spreading culture. However, there are still many problems in China due to the late start of publicity translation. This study identifies problems in the publicity translation of Shu Brocade and proposes corresponding countermeasures to improve its quality, so as to enhance the influence of Chinese culture in the world.
Introduction
In today's game of great powers, cultural soft power is becoming an important component, and enhancing cultural "soft power" has become an important task for a country seeking to enhance its international competitiveness. Doing a good job of spreading Chinese culture abroad has become an important way to show cultural confidence and improve the international influence of Chinese culture. Due to differences in language and culture, coupled with differences in the quality of individual translators, the various translated versions contain many discrepancies and irregularities, which inevitably have a negative impact on foreign communication and thus on the effect of cultural dissemination. Shu Brocade has a long history of more than 2,000 years, starting from the Qin and Han Dynasties and booming during the Tang and Song Dynasties [1]. It is one of the four famous brocades in China. It is also the predecessor of Kyoto Nishijin weaving, a national treasure of Japan. Shu Brocade is a treasure of Chinese and even the world's silk culture, with high historical and cultural value. In 2006, the Shu Brocade weaving technique was included in the first batch of the national intangible cultural heritage list, and in 2010, the General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China approved the protection of "brocade of Shu" as a geographical indication product. Under these circumstances, publicity translation plays an important role in promoting Shu Brocade to the world.
Problems with Shu Brocade's Publicity Translation
As one of the most important elements of foreign communication, publicity translation is not a simple conversion between two languages but is grounded in both the local culture and other cultures so as to convey content well. Thanks to the implementation of the strategies of "cultural power" and "culture going out", how to disseminate Chinese culture and enhance its international influence has become a focus of academic attention [2]. As a world-class intangible cultural heritage, Shu Brocade has been revitalized and reintroduced to society through the efforts of many parties. At the opening ceremony of the 31st FISU Summer World University Games, athletes walked on the "Embroidered Road" made of Shu Brocade, heading for their "Embroidered Future". Shu Brocade, as in its flow along the Silk Road a thousand years ago, once again realizes a beautiful dialogue between Chengdu and the world and lets the world see the charm of traditional Chinese intangible cultural heritage. Although the publicity of Shu Brocade has made some progress through these efforts, there are still many problems in its publicity translation.
Insufficient attention
Although publicity translation is one of the most important means of spreading excellent Chinese culture, for various reasons it still suffers from many problems in China [3]. Governments at all levels, in the process of foreign exchanges, focus on economic benefits and pay less attention to social benefits; as a result, the government does not attach enough importance to the publicity of excellent Chinese culture, and its investment in this area is gradually shrinking. This has led to insufficient attention, and even a perfunctory attitude, among practitioners related to Shu Brocade, who simply do not spend much effort on serious research and reflection. Some translators of Shu Brocade-related material even outsource the publicity translation work to third parties, and the quality of the resulting translations is, of course, quite low. In addition, governments at all levels have seldom issued standards and management requirements for the publicity translation of Shu Brocade and have not set up specialized management departments or systems to carry out strict management. This top-down lack of awareness conveys to lower-level units and individuals that little attention need be paid to the quality of Shu Brocade's publicity translation.
Lack of specialized personnel
The translation of excellent Chinese culture requires not only good foreign language proficiency but also a deep understanding of the political, economic, cultural, and historical backgrounds of China and foreign countries. It is also necessary to be good at translating and interpreting from the other party's perspective, using authentic foreign language, to avoid differences in understanding and conflicts caused by cultural differences. When choosing a job, excellent translators often consider factors such as salary and career development; in practice, jobs related to intangible cultural heritage are tedious, with poor development prospects and low salaries, which makes many excellent translators reluctant to take them. Limited by their own abilities, today's intangible cultural heritage translation practitioners are greatly constrained in their cross-cultural translation thinking, and most translations are literal. Although the original Chinese texts have strong logic, the corresponding translation quality is relatively low, and it is difficult for foreign audiences to really understand the meaning. In addition, China has not yet formed a training model for talent in the publicity translation of excellent Chinese culture: translation majors in China's colleges and universities receive comprehensive training, and the degree of professional specialization is insufficient [4].
Ineffective regulators
With the expansion of university enrollment, translation programs at various universities have grown significantly. The entire translation industry at that time experienced a lively scene of full expansion, but the results were a mixed bag. Laymen think that publicity translation is a very simple matter requiring only knowledge of a foreign language. In fact, publicity translation is a highly professional and specialized field: without solid language skills, strong cross-cultural communication skills and rich translation experience, it is difficult to produce high-quality translations. Since it is a professional field, it needs to be managed by a specialized department. The threshold for entering publicity translation in China is too low, and there is a lack of specialized departmental management. At the same time, due to the lack of supervision, some translation practitioners pursue speed and sacrifice translation quality just to obtain monetary returns. This not only affects the effect of cultural publicity but also harms the image of the whole industry, so the relevant departments should strengthen management and improve the quality and level of practitioners.
Lack of high-quality English version publicity materials
At present, there is no official English translation of the name Shu Brocade. Several translation forms are available online, such as "Shu brocade", "Sichuan figured satin", "Tapestry from Sichuan" and "Sichuan brocade". Shu Brocade also has no official English-language publicity materials: no official English introduction to Shu Brocade can be found online, whether about its development history or its unique features. Online, only a small number of English introductions to Shu Brocade written by freelance writers can be found. Regarding the history of Shu Brocade, there is only one English introduction, which contains a few grammar errors and some terminology that is not very authentic. Moreover, no stand-alone introduction to Shu Brocade can be found online; it is introduced only in articles covering the four famous brocades, and even there the characteristics of Shu Brocade are not well presented in English. In short, Shu Brocade lacks high-quality English publicity materials. Only through detailed descriptions combining pictures and high-quality English introductions can Shu Brocade culture be better disseminated.
Strengthening publicity efforts
Shu Brocade is part of excellent Chinese culture and a world-class intangible cultural heritage. Departments at all levels should raise their awareness of the publicity of intangible cultural heritage and recognize its important value and significance in improving the international influence of Chinese culture and enhancing the people's cultural confidence. Nowadays, with the rapid development of Internet technology, new media platforms have become indispensable in people's lives, which provides an important reference for the publicity of Shu Brocade. We need to cultivate a group of publicity talents who understand both the characteristics of Shu Brocade and new media, and actively promote Shu Brocade on domestic and foreign new media platforms.
Strengthening supervision and management
Implementing the strategies of "telling China's story" and "culture going out" requires the support of governments at all levels, and the publicity and development of intangible cultural heritage need to give full play to the role of publicity translators. Publicity translation is important for the inheritance and promotion of Chinese culture and for enhancing its international influence. However, at present there are many problems in the publicity translation of Shu Brocade, so the government and relevant departments need to pay more attention and establish a dedicated publicity and translation department for Shu Brocade and intangible cultural heritage, specifically responsible for improving and regulating the quality of Shu Brocade translations and promoting the standardized international promotion of Shu Brocade and other intangible cultural heritage.
Training of translators
The translator is the most important link in the process of publicity translation, and the translator's professional level and working attitude play a decisive role in its quality. Shu Brocade has a history of more than 2,000 years and profound historical and cultural value. In the publicity translation of Shu Brocade, the translator is required to have a broad reserve of knowledge of Chinese and foreign cultures, a solid theoretical background in translation and a conscientious, responsible working attitude, and to translate professionally using scientifically sound translation theories and cultural background knowledge. At the same time, translators should actively consult the publicity translations of intangible cultural heritage in other regions in order to standardize the publicity translation of Shu Brocade, correct problems in the translation process, and become disseminators who spread excellent Chinese culture and tell Chinese stories well.
Utilizing new media platforms
New media has become an indispensable part of most people's lives, which provides new ideas for the inheritance and development of intangible cultural heritage [5]. Many inheritors of intangible cultural heritage have started using short videos and online social platforms to showcase their skills and share relevant knowledge and experience. These "amateur intangible cultural heritage craftsmen" not only rely on their passion and learn and communicate independently on the Internet, but are also committed to exploring new and creative intangible cultural heritage handicrafts, injecting new meaning and vitality into traditional intangible cultural heritage and becoming a new force for its inheritance. All relevant departments should fully recognize the role of new media platforms and organize a group of people who understand Shu Brocade, translation and publicity to promote and translate Sichuan intangible cultural heritage such as Shu Brocade through new media platforms.
Conclusion
The quality of the publicity translation of intangible cultural heritage determines the dissemination and influence of intangible cultural heritage such as Shu Brocade in the international arena and is the focus of all publicity translation work. With the spread of globalization and the increase of China's influence in the international arena, publicity translation is becoming more and more important, as it bears the important task of helping international friends understand China. Therefore, the existing problems of publicity translation in China should be taken seriously by all parties. Based on the current problems, we should tailor measures to each case, strengthen the construction of a standardized system, improve the professional literacy of practitioners, use appropriate translation strategies, express the translated content objectively and clearly, effectively improve the quality of intangible cultural heritage publicity translation, and present excellent Chinese culture internationally.
"Linguistics"
] |
Measles Vaccination and Outbreaks in Croatia from 2001 to 2019; A Comparative Study to Other European Countries
Due to the current burden of COVID-19 on public health institutions, increased migration and seasonal touristic traveling, there is an increased risk of epidemic outbreaks of measles, mumps and rubella (MMR). The aim of the present study was to analyze the epidemiological data on MMR immunization coverage and the number of measles cases in 2001–2019 in Croatia and a number of European countries. Results revealed a decreasing trend in vaccination in 2001–2019 throughout Europe. However, Croatia and Hungary still have the highest primary and revaccination coverage, compared to other analyzed countries. The highest number of measles cases was in 2017 in Romania. There was no significant correlation between the percentage of primary vaccination and the number of measles cases (r = −0.0528, p = 0.672), but there was a significant negative correlation between the percentage of revaccination and the number of measles cases (r = −0.445, p < 0.0001). In conclusion, the results of the present study emphasize the necessity to perform a full protocol of vaccination to reach appropriate protection from potential epidemic outbreaks. Furthermore, in the light of present migrations, documenting the migrants’ flow and facilitating vaccination as needed is of utmost importance to prevent future epidemics.
Introduction
Throughout the world, vaccination with safe, effective and affordable vaccines for measles, mumps and rubella (MMR) is freely available. Measles vaccination was introduced into the compulsory vaccination program in the Republic of Croatia in 1969; the first vaccination effort covered all children one to six years old. Vaccination against rubella (1975) and mumps (1976) soon followed [1]. MMR vaccination began in 1976. Since the beginning of vaccination against MMR, a vaccine of domestic production (Immunology Institute) has been used. The measles vaccine strain, Edmonston-Zagreb, as well as the rubella vaccine strain, RA 27/3, were produced on human diploid cell culture, while the mumps virus vaccine strain was produced on chicken fibroblast cell culture [2,3]. The use of vaccines against these diseases has led to their almost complete eradication in Croatia, with only sporadic cases. Croatian law prescribes a required minimum measles vaccination coverage of 95%. In Croatia, outbreaks of measles occurred in 2015 and 2018. In 2018, an outbreak in the southern-Adriatic part of the country was a consequence of the infection of an adult returning from Kosovo, with 15 epidemiologically linked cases [4]. The median age of infected persons was 33 years, while one case was an 8-month-old infant. Two of these cases had received two doses of a measles-containing vaccine, one person had received one dose and three were unvaccinated, while for nine cases vaccination status was unknown [4]. As regards neighboring countries, in 2017 there was a small outbreak of measles in Hungary, in close proximity to Osijek-Baranja County, which did not spread across the state border thanks to good epidemiological measures [5]. Measles outbreaks in 2018-2019 in the Croatian cities of Zagreb, Slavonski Brod, Split and Dubrovnik demonstrated possibly suboptimal vaccination coverage in certain clusters of the population [5]. Despite the proximity of Slavonski Brod (tens of kilometers) and frequent commuting between counties, Osijek-Baranja County was not affected.
In the light of the successive waves of the COVID-19 pandemic, other contagious diseases were somewhat put aside, partly due to successful vaccination programs, particularly in European countries. In the period between the end of 2019 and the spring of 2022, the COVID-19 pandemic significantly influenced interpersonal contacts, which also had different impacts on measles, mumps and rubella vaccination efforts and disease burden across the world. For example, Brazil reported a reduction in the number of MMR vaccine doses [6], while interestingly, Japan reported decreased estimated annual burdens in 2020 for measles (98%), mumps (47%) and rubella (94%) compared with those in 2019, owing to social distancing during the COVID-19 pandemic [7]. However, due to the current burden of COVID-19 on public health institutions, one may expect a decrease in vaccination coverage and potential new outbreaks in the near future. We hypothesized that there is a relationship between vaccination coverage and the number of measles cases. The present study aimed to analyze the epidemiological data on population immunization and the number of measles cases in the 2001-2019 period for Croatia and countries of the European region.
Statistical analysis: Differences in percentage of vaccination (% vaccination) in observed period among regions/countries were analyzed using Two-way ANOVA, with appropriate post hoc test for multiple comparisons (Sidak's or Tukey's multiple comparisons test). Correlation between number of cases per year and vaccination coverage was assessed using Spearman's correlation. p < 0.05 was considered statistically significant. GraphPad v6.0 (GraphPad Software, San Diego, CA, USA) and SigmaPlot, version 11.2 (Systat Software, Inc., Chicago, IL, USA) were used for statistical analysis.
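For readers who want to reproduce the correlation step, the following minimal Python sketch mirrors the Spearman analysis described above; it is not the authors' script, and the coverage and case figures in it are invented placeholders.

```python
# Minimal sketch of the correlation step described above, not the authors' script.
# Assumes SciPy is installed; the coverage/case numbers are invented placeholders.
from scipy.stats import spearmanr

# Hypothetical per-country-year pairs: revaccination coverage (%) and measles cases.
revaccination = [98, 97, 95, 93, 90, 85, 75, 60, 47]
measles_cases = [0, 1, 2, 5, 12, 40, 310, 1500, 4800]

r, p = spearmanr(revaccination, measles_cases)
print(f"Spearman r = {r:.3f}, p = {p:.4g}")  # reported as significant if p < 0.05
```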
MMR Primary Vaccination and Revaccination in Croatia and Osijek-Baranja County 2001-2019
Table 1 presents data on the primary MMR vaccination in Croatia and, in particular, Osijek-Baranja County (OBC). There was no significant difference in the percentage of primary vaccination between Croatia and OBC in any year from 2001 to 2018. However, there was a significantly higher percentage of primary vaccination in Croatia compared to OBC only in 2019 (p < 0.05). Table 3 presents differences in MMR primary vaccination among Croatia and neighboring countries. There was no significant difference in the percentage of primary vaccination between Croatia and neighboring countries from 2001 until 2015. Afterward, Croatia, Slovenia and Serbia generally had the best primary vaccination success, while Bosnia and Herzegovina (BiH) and North Macedonia had the lowest; e.g., in 2016, the percentage of primary vaccination in BiH was significantly lower compared to Croatia (2016, p < 0.05) and Slovenia (2016, p < 0.05), while in 2017 and 2018, the percentage of primary vaccination in BiH was significantly lower compared to Croatia (p < 0.05), Slovenia (p < 0.05), and Serbia (p < 0.05). Additionally, in 2018, North Macedonia had a significantly lower percentage (75%) of primary vaccination compared to Croatia (2018, p < 0.05), Serbia (2018, p < 0.05), and Slovenia (2018, p < 0.05). Montenegro and the years 2013 and 2019 had to be excluded from the analysis due to missing data.
MMR Primary Vaccination and Revaccination in European Countries 2001-2019
Data for Tables 5 and 6 cover Austria, Hungary, Croatia, Czech Republic, Denmark, Germany, Italy, Poland, France, Belgium and Ukraine. Table 5 presents data on MMR primary vaccination in European countries. Austria and the Czech Republic, as well as the years 2008, 2013, 2018 and 2019, had to be excluded from the analysis because of partly missing data. In 2001, the percentage of primary vaccination in Italy was significantly lower compared to Hungary (p < 0.05), Croatia (p < 0.05), Denmark (p < 0.05) and Poland (p < 0.05). Furthermore, in 2001, Belgium had a significantly lower percentage of primary vaccination compared to Hungary (p < 0.05). In 2002, the percentage of primary vaccination in Italy was significantly lower compared to Hungary (p < 0.05), Denmark (p < 0.05) and Poland (p < 0.05). Moreover, in 2002, Belgium had a significantly lower percentage of primary vaccination compared to Hungary (p < 0.05), Denmark (p < 0.05) and Poland (p < 0.05). In 2003 and 2004, Belgium had a significantly lower percentage of primary vaccination compared to Hungary (p < 0.05). In 2009, the percentage of primary vaccination in France was significantly lower compared to Hungary (p < 0.05), Croatia (p < 0.05), Germany (p < 0.05), Italy (p < 0.05), Poland (p < 0.05) and Belgium (p < 0.05). There was no significant difference in the percentage of primary vaccination between EU countries from 2005 until 2007 and from 2010 until 2017. According to available data, differences in the percentage of primary vaccination between Ukraine and other European countries were analyzed for the period between 2013 and 2017. In 2014, 2015 and 2016, Ukraine had a significantly lower percentage of primary vaccination compared to Hungary (p < 0.05), Croatia (p < 0.05), Denmark (p < 0.05), Germany (p < 0.05), Italy (p < 0.05), Poland (p < 0.05), France (p < 0.05) and Belgium (p < 0.05). There was no significant difference in the percentage of primary vaccination between Ukraine and Italy in 2013 and 2017. Austria and the Czech Republic were excluded from this analysis because of partly missing data. The analysis of the association between the percentage of primary vaccination and the number of measles cases, as well as between the percentage of revaccination and the number of measles cases, between 2001 and 2019, included available data from the following countries: Austria, Hungary, Croatia, Czech Republic, Denmark, Germany, Italy, Poland, France, Belgium, Ukraine, Bosnia and Herzegovina, North Macedonia, Montenegro and Serbia. There was no significant correlation between the percentage of primary vaccination and the number of measles cases (r = −0.0671, p = 0.298), but there was a significant moderate negative correlation between the percentage of revaccination and the number of measles cases (r = −0.357, p < 0.0001).
Discussion
After the introduction of the measles vaccination, the number of affected patients decreased significantly. Before 1968 (when compulsory vaccination against measles was introduced in Croatia), the average annual number of patients in Croatia was around 15,000, while in the last ten years this number has stayed below 20, with the exception of 2015, when there was an epidemic with 206 patients, and 2018, with the measles epidemic in Dubrovnik-Neretva County [4]. Interestingly, our results show that in Croatia the percentage of revaccination compared to primary vaccination significantly increased in 2016 and 2017. Analysis of the neighboring countries of Croatia revealed that Croatia, Slovenia, and Serbia generally had the best primary vaccination success, while Bosnia and Herzegovina (BiH) and North Macedonia had the lowest primary vaccination percentage; BiH also had the lowest percentage of revaccination compared to neighboring countries (Tables 3 and 4). Interestingly, in the first decade of the 21st century, Italy, Belgium and France had the lowest MMR primo-vaccination coverage of the analyzed European countries. There was no significant difference in the percentage of primary vaccination between EU countries from 2005 until 2007 and from 2010 until 2017. However, in the period 2013-2017, Ukraine had the lowest primary vaccination and revaccination coverage compared to other European countries (e.g., in 2016 Ukraine had 47% primary vaccination and 31% revaccination coverage). The highest percentages of coverage are seen in Hungary and Croatia (Tables 5 and 6). This is in agreement with the measles notification rate per million population: in Croatia, from November 2020-October 2021 [8] and February 2021-January 2022, there were zero cases, and Slovenia, Hungary, Slovakia, the Czech Republic, Bulgaria, Greece, and Portugal likewise reported no cases of measles. Other EU countries reported 0.001-0.099 cases per million [9]. In contrast, in the period February 2020-January 2021, only Croatia, Hungary and Slovakia reported a notification rate per million of zero, while the majority of other EU countries had rates from 0.001-0.999 [10]. This could be attributed to suboptimal vaccine coverage in Europe, which has led to a major resurgence of measles in recent years [32]. Several reasons may underlie that situation, including increasing trends of vaccine hesitancy or refusal due to the perception of measles risk and burden, mistrust in experts, and concerns about vaccine safety, effectiveness, and accessibility [32]. Furthermore, migrations and the consequences of wars, as well as economic migration from countries with disrupted health care systems, also influence the vaccination coverage of the population. Importantly, one may hypothesize that the decrease in vaccination in the EU and neighboring countries increases the risk of an epidemic surge in the near future.
The biggest problem is the continuous decline in vaccination coverage of preschool children, which is below the minimum of 95% and can lead to an epidemic [33]. Recently, in a study conducted within the framework of the CABCOS3 project, it was reported that the Hungarian serum samples and Croatian serum samples largely overlapped in seropositivity ratios, which might be attributed to the intrinsic biological dynamics of vaccination-based humoral immunity to measles. Individuals 34-43 years old had the lowest seropositivity ratios (78%) [34]. A prospective study conducted in Prague, Czechia, on a total of 2782 participants aged 19-89 years, analyzed the level of measles-specific antibodies in serum samples and showed that the seropositivity rate in naturally immunized participants (before 54 years) was significantly higher than in fully vaccinated persons aged 19-48 (98.0% (95% CI: 96.5-99.0%) vs. 93.7% (95% CI: 92.4-94.9%)). Lower seropositivity persistence (86.6%) was found in a cohort of those born in 1971-1975, vaccinated mostly with one dose, compared to naturally immunized persons or to participants fully vaccinated with two doses [35]. Furthermore, in 2019, 59 measles cases were reported between 1 January and 11 March in Austria; 47 of them fulfilled the cluster case definition. Forty out of 47 patients (85.1%) were unvaccinated, while the age distribution of cases suggested measles immunity gaps in adults [36]. In Zagreb, Croatia, in the period from December 2014 to April 2015, 122 measles cases were notified, 93% of them in unvaccinated persons aged four years or younger or older than 20 [37]. The outbreak was successfully resolved, and Croatia has an excellent measles elimination profile [38]. Interestingly, in Korea in 2019, there were 26 measles case-patients, aged 18-28 years. Twenty-five of them had previously received the MMR vaccine (12/26, 46% (two doses); 13/26, 50% (one dose)), and 16 (62%) had positive results of measles IgG prior to measles diagnosis [39]. Altogether, this is important information in the light of the previously mentioned outbreak among adults in Dubrovnik-Neretva County [4], suggesting that the lack of previous immunization, together with a decrease in seropositivity, presents a risk for future epidemic outbreaks.
It has been shown that several factors may influence the parental decision to choose MMR vaccination, such as confidence in experts and in the vaccine, measles severity, responsibility toward child and community health, and peer judgment [32]. Through educational activities foreseen within CABCOS, our goal is to increase public awareness of the importance of vaccination and to increase the share of vaccinated children. Similar downward trends in immunization coverage are seen in other countries in the region. For example, in the years 2018 to 2020, in Kosovo, >90% (N = 430) of children 12-24 months old had fully completed their personal immunization plans. There were delays in immunizations, from 1 to 3 months, mainly due to the COVID-19 pandemic, a lack of time for parents to take the child for vaccination, or the child being sick at the scheduled time of vaccination. The difference between non-vaccination and full vaccination was only related to the age of the children (p < 0.001) [40].
In contrast to the situation in Croatia (Tables 3 and 4), in Serbia, over the period 2000-2017, there was a significant decline in coverage of primary vaccination against measles, mumps and rubella (MMR) (p ≤ 0.01). In the same period, coverage of all subsequent revaccinations significantly decreased, e.g., of the second dose against MMR before enrolment in elementary school (p < 0.05) [41]. In Western Europe, the situation with vaccination coverage varies. Data for 2018-2019 from the London area of the UK showed that the coverage of children with dose two of the MMR vaccine at their fifth birthday has been consistently low (76.3%) [42]. Results of the present study for Germany are shown in Tables 5 and 6. A recent systematic review (PROSPERO CRD42019157473; 1 January 2000 to 22 May 2020) identified studies on vaccine-preventable disease outbreaks involving migrants residing in the EU/EEA and Switzerland (including measles, mumps and rubella). Forty-seven different vaccine-preventable disease outbreaks in 13 countries were reported in 45 studies; 40% of the outbreaks (mostly varicella and measles) occurred in shelters or temporary refugee camps. Measles was the most frequently reported outbreak type involving migrants (n = 24; 6496 cases), and 11 of these outbreaks were associated with migrants from eastern European countries. There were only three reported rubella outbreaks (487 cases) and two reported mumps outbreaks (293 cases) [43]. As a study of the 2017 situation demonstrated, the most important factor that prevented the resurgence of measles was the vaccine coverage rate, regardless of the economic status of the country or the number of incoming travelers or migrants. In 2017, the incidence of measles was the highest in Romania (46.1/100,000), which had the lowest coverage rate (75%), followed by Ukraine (10.8/100,000) and Greece (8.7/100,000). Overall vaccination coverage with two doses in these countries was less than 84% [44]. Data from a 2017 survey on national immunization strategies to provide vaccinations for migrants show that Portugal, Italy, Croatia and Slovenia offer migrant children and adolescents all vaccinations included in the National Immunization Plan, while Greece and Malta provide only certain vaccinations, including those against measles-mumps-rubella, diphtheria-tetanus-pertussis and poliomyelitis. Portugal, Malta, Italy and Croatia also offer vaccination to adults. Vaccinations are delivered in holding centers and/or community health services in all countries. No country delivers vaccinations at the entry site to the country [45]. Thus, the finding of the present study that there is a significant moderate negative correlation between the percentage of revaccination and the number of measles cases provides additional support for the importance of completing vaccination protocols, since this correlation was not found for primo-vaccination.
Conclusions
In conclusion, the present study demonstrates that there is a negative correlation between the second vaccination (revaccination) and the number of measles cases, which emphasizes the necessity to perform a full protocol of vaccination to reach appropriate protection from potential epidemic outbreaks. Thus, it is important to have a strategy to document migrants' flow and facilitate vaccination as needed; this is of utmost importance to prevent future epidemics. Additionally, seropositivity after vaccination in the adult population should be monitored in follow-up studies to highlight potential regions or subpopulations at greater risk of becoming foci of epidemic outbreaks. | 4,107.6 | 2022-03-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
Analysis of Factors Influencing the Application of Accounting Information Systems in UMKM Dimsum Seceng Pajaresuk, Pringsewu
This study aims to determine the factors that influence the application of accounting information systems in UMKM Dimsum Seceng Pajaresuk, Pringsewu. The research was conducted using qualitative methods; the researchers analyzed the data and information obtained from interviews and observations. The subject of this research is UMKM Dimsum Seceng Pajaresuk, Pringsewu. The results showed that UMKM Dimsum Seceng Pajaresuk, Pringsewu had not implemented an accounting information system due to several factors, namely limited capital, limited human resources, weak business networks and capabilities, and limited business facilities and infrastructure. These factors have prevented UMKM Dimsum Seceng Pajaresuk from being able to use an accounting information system.
INTRODUCTION
UMKM (micro, small and medium enterprises) are one of the pillars of the Indonesian economy. This is supported by various facts and data which state that the Indonesian economy is still dominated by sectors with low productivity, such as agriculture, trade, and home industry. Such sectors are usually referred to as UMKM. An obstacle that is often faced in the development of UMKM is the limited supporting facilities and infrastructure, especially technology for accounting and financial reporting; as a result, UMKM are not able to provide financial reports that are timely, accurate, and in accordance with the guidelines set by the government. An Accounting Information System is an organization of forms, records, and reports that are coordinated in such a way as to provide the financial information needed by management to facilitate company management (Mulyadi, 2016).
The application of accounting in a business is still poorly understood by UMKM actors. Business actors usually only do bookkeeping, limited to recording income and expenses. This makes the company's net profit difficult to determine, so applying for credit for venture capital is difficult to obtain: most UMKM actors have limitations in producing quality financial reports, and many entrepreneurs have gone out of business due to a lack of the accounting records needed to obtain funds from banks. Financial management is a problem that is often neglected by UMKM actors, which then has an impact on accounting records. The importance of preparing financial reports for UMKM lies not only in the ease of obtaining credit, but also in controlling assets, capital and liabilities, income planning, cost efficiency, and making business decisions.
UMKM Dimsum Seceng Pajaresuk is a business engaged in the food (culinary) sector. The establishment of UMKM Dimsum Seceng started with an owner who enjoyed culinary delights and was therefore interested in opening the business, since dim sum is both easy to process and practical to sell. UMKM Dimsum Seceng was founded in August 2021 by its owner, Etikasari, and is located at Lintas Barat Pajaresuk, Jl. Imam Bonjol. There are three branches, namely in Sukoharjo, Pajaresuk, and Pringsewu (Jalan Kh. Gholib).
The existence of electronic-based Accounting Information Systems shows that a change has been brought about by increasingly sophisticated technological developments. Recording of the accounting cycle, which in earlier times used a manual system, has shifted to using a computer (electronic) system (Abral, 2016). Based on the background above, the problem addressed in this study is how accounting information systems are applied: what factors have caused UMKM Dimsum Seceng Pajaresuk not to use an accounting information system?
LITERATURE REVIEW
The Accounting Information System (AIS) is an integration of various transaction processing systems or sub-AIS (Sugiyono, 2017). Because each transaction processing system has its own transaction processing cycle, the Accounting Information System can also be said to be an integration of various transaction processing cycles (Susanto, 2017). Accounting Information Systems can be manual pencil-and-paper systems, complex systems that use modern information technology, or something in between. However, the process is the same, namely collecting, processing, storing, and reporting data and information (Romney, 2014).
The development of Accounting Information Systems can add value to a company. The existence of an Accounting Information System affects performance (Kwarteng, 2018). Moreover, in an era of increasingly sophisticated technology, an Accounting Information System can facilitate the activities of a company. An Accounting Information System comprises several components, namely system users (human resources), procedures and instructions, data about the organization and its activities, data processing software, information technology infrastructure, and the internal controls and security measures that safeguard AIS data (Romney, 2014).
A flowchart is a pictorial analytical technique used to explain an aspect of an information system in a clear, concise, and logical manner. Flowcharts record the way business processes are carried out and the way documents flow through the organization (Romney, 2014). According to Law No. 20 of 2008 concerning UMKM (DPR, 2008), businesses classified as UMKM are as follows (an illustrative sketch applying these thresholds appears after the list):
1. Micro Business:
a. Has a net worth of at most IDR 50,000,000 (fifty million rupiahs), excluding land and business buildings; or
b. Has annual sales revenue (turnover) of at most IDR 300,000,000 (three hundred million rupiahs).
2. Small Business:
a. Has a net worth of more than Rp. 50,000,000 (fifty million rupiahs) up to Rp. 500,000,000 (five hundred million rupiahs), excluding land and buildings for business premises; or
b. Has annual sales (turnover) of more than Rp. 300,000,000 (three hundred million rupiahs) up to Rp. 2,500,000,000 (two billion five hundred million rupiahs).
3. Medium Enterprises:
a. Has a net worth of more than Rp. 500,000,000 (five hundred million rupiahs) up to Rp. 10,000,000,000 (ten billion rupiahs), excluding land and buildings for business premises; or
b. Has annual sales (turnover) of more than Rp. 2,500,000,000 (two billion five hundred million rupiahs) up to Rp. 50,000,000,000 (fifty billion rupiahs).
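As referenced above, the following minimal Python sketch applies the Law No. 20/2008 thresholds to classify a business from its net worth and annual turnover; the function name and the example figures are illustrative assumptions, not part of the law or of this study.

```python
# Illustrative application of the Law No. 20/2008 thresholds listed above.
# Net worth excludes land and business buildings; all amounts in rupiahs (IDR).
def classify_umkm(net_worth: int, annual_turnover: int) -> str:
    if net_worth <= 50_000_000 or annual_turnover <= 300_000_000:
        return "Micro Business"
    if net_worth <= 500_000_000 or annual_turnover <= 2_500_000_000:
        return "Small Business"
    if net_worth <= 10_000_000_000 or annual_turnover <= 50_000_000_000:
        return "Medium Enterprise"
    return "Large Enterprise (outside UMKM)"

# Example: a culinary stall with IDR 30 million net worth and IDR 150 million turnover.
print(classify_umkm(30_000_000, 150_000_000))  # -> Micro Business
```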
The factors that influence the application of accounting information systems include the following:
1) Level of education
The ability of business owners greatly influences the preparation and use of accounting information. The ability of the owner of a small or medium company can be gauged from the owner's formal education. The level of formal education of owners of small and medium enterprises greatly influences their preparation and use of financial and management accounting information. Owners with a low level of formal education (elementary to high school) will show lower preparation and use of accounting information than owners with a high level of formal education (university). This is because more advanced accounting teaching materials are given in higher education than in lower education (Astuti, 2014).
2) Length of business
The length of business in this case is the length of time the UMKM has been established, i.e., the age of the UMKM from its founding up to its current business activities (Arizal, 2013). The longer a business runs, the more significantly it develops, in either a positive or a negative direction. This development takes place within the trading climate and the competition occurring in the business world or market; usually, businesses that have existed longer tend to be more developed because they have more experience in running their business, and are thus better able to compete with other businesses or UMKM actors.
3) Accounting training
Accounting-related training determines how well a manager or UMKM can master accounting techniques.The more often a manager attends accounting training, the better the manager's ability to use accounting information (Muhammad, 2014).
METHODS
This study uses qualitative methods; the researchers describe the actual situation in the field. Qualitative methods are used to obtain in-depth data and information, and the data obtained carry true meaning (Sugiyono, 2017). With qualitative research, researchers can get to know the object of research and can experience what is happening in the field. In this study, the data and information obtained could take the form of interview transcripts, field notes, documentation, and visual materials, which were analyzed qualitatively (Sugiyono, 2017). Thus, in this study, the researcher was involved in the situation and setting of the phenomenon being studied. The researchers conducted interviews and observations to obtain data and information regarding the Accounting Information System at UMKM Dimsum Seceng Pajaresuk, Pringsewu.
RESULTS AND DISCUSSIONS
UMKM Dimsum Seceng Pajaresuk has not used an accounting information system. This is consistent with subject 1's statement that, so far, the business has not implemented an accounting information system. The factors that have caused UMKM Dimsum Seceng Pajaresuk not to use an accounting information system are the following:
1. Limited amount of capital
UMKM Dimsum Seceng Pajaresuk, Pringsewu uses the owner's own capital to run the business. This is in accordance with the answer from subject 1, who said that the "source of initial capital at that time was to use their own capital".
2. Limited Human Resources
Human resources at UMKM Dimsum Seceng Pajaresuk, Pringsewu are still limited. This is in accordance with the answer from subject 1, who said that "There are only 2 employees working here. With a high school education level".
3. Weak business networks and capabilities
Small businesses, which are generally family business units, have a very limited business network and a low market-penetration ability because the products produced are very limited in number and of less competitive quality. In contrast, large businesses, which already have a solid network and are supported by technology, can reach international markets with good promotion. This is evident in the interview results, namely: "This business is still located in areas around Pringsewu; it has not penetrated nationally or internationally."
4. Limited business facilities and infrastructure
UMKM Dimsum Seceng Pajaresuk, Pringsewu is still limited in terms of facilities and infrastructure. This is consistent with subject 1's statement that "recording and bookkeeping still use manual methods, not yet computerized. The accounting records and bookkeeping that are carried out are only the recording of sales and purchases of raw materials; there are no records for bookkeeping of the operating cycle of UMKM Dimsum Seceng Pajaresuk, Pringsewu".
CONCLUSION AND RECOMMENDATION
UMKM Dimsum Seceng Pajaresuk, Pringsewu has not used an accounting information system in its business. Several factors affect the business of UMKM Dimsum Seceng Pajaresuk, Pringsewu, such as limited human resources, in terms of both education level and the limited number of employees, and limited capital, since the business relies on the private owner's own capital.
Based on this research, the researchers advise UMKM Dimsum Seceng Pajaresuk to increase its knowledge by raising the education level of both the owner and the human resources who work in the business. In addition, it is expected that in the future UMKM Dimsum Pajaresuk Pringsewu will carry out overall recording and bookkeeping of the operating cycle, supported by the use of computerization. Future researchers are advised to examine whether there are factors other than those described above that prevent accounting information systems from being used, as additional references for future knowledge.
Flowcharts have different symbols in terms of form and use. Flowcharts are of three types: document flowcharts, system flowcharts, and program flowcharts. A document flowchart is used to trace documents from their origin, through their distribution, to their destination, until the document is no longer used. A system flowchart describes the relationship between the input, processing, and output of an accounting information system. A program flowchart explains the logical sequence of data processing carried out by the computer in running the program (Krismiaji, 2015). The symbols used in drawing flowcharts include input/output symbols, processing symbols, flow symbols, and other symbols. | 2,569 | 2022-11-30T00:00:00.000 | [
"Business",
"Computer Science"
] |
Quantum Kinetic Transport under High Electric Fields
Quantum kinetic transport under high electric fields is investigated with emphasis on the intracollisional field effect (ICFE) in low-dimensional structures. It is shown that the ICFE in GaAs one-dimensional quantum wires is already significant under moderate electric field strengths (> a few hundred V/cm). This is in marked contrast to the bulk case, where the ICFE is expected to become significant only under extremely strong electric fields (> MV/cm). Employing the Monte Carlo method including the ICFE, the electron drift velocity in quantum wires is shown to be much smaller than that expected from earlier investigations.
INTRODUCTION
The fast development of semiconductor devices has brought about renewed interest in high-field carrier transport in semiconductors. Among others, the breakdown of the semiclassical Boltzmann transport equation under high electric fields, on which most conventional analyses of carrier transport in semiconductor devices are based, has been widely predicted [1]. However, the predicted electric field strengths under which the semiclassical treatment of carrier transport is supposed to break down vary from several tens of kV/cm to MV/cm. Therefore, our understanding of quantum carrier transport under high electric fields is still immature. In the present paper, we investigate the quantum effects in electron transport from a rather different point of view; we apply the quantum kinetic transport equation (the Barker-Ferry equation [2]) to low-dimensional structures, so that the quantum effects are, in some cases, more pronounced than in the three-dimensional (3-D) bulk case and their physical meanings become more transparent.
We consider ideal 1-D GaAs quantum wires and investigate the intracollisional field effect (ICFE) on electron transport via the Monte Carlo method [3]. To the best of our knowledge, this is the first time the ICFE on high-temperature electron transport in low-dimensional structures has been taken into account.
Since the ICFE is mainly ascribed to the energy change of an electron by the electric field during the collision duration with phonons [2], it is expected that the ICFE could be most significant when the electron motion is restricted to the dimension parallel or antiparallel to the electric field. As we shall show, this is indeed the case in 1-D quantum wire structures; the ICFE becomes significant even under moderate electric field strengths and greatly affects transport characteristics such as the drift velocity.
QUANTUM KINETIC TRANSPORT EQUATION
Our starting point is the quantum kinetic transport equation (the Barker-Ferry equation) for the one-particle distribution function of electrons $f_k$, which is derived from the quantum Liouville equation for the reduced electron density matrix [4]. Under a constant electric field $F$ with the non-degenerate condition, the Barker-Ferry equation for electrons is given by

$$\left(\frac{\partial}{\partial t} + e\mathbf{F}\cdot\nabla_{\mathbf{k}}\right) f_k(t) = 2\,\mathrm{Re}\int_0^t d\tau \sum_{q,\eta} |M_q|^2 \left(N_q + \frac{1}{2} + \frac{\eta}{2}\right) \left\{ f_{k_F-q}(t-\tau) - f_{k_F}(t-\tau) \right\} e^{-i\left(\varepsilon_{k_F} - \varepsilon_{k_F-q} + \eta\,\omega_q\right)\tau}, \qquad (1)$$

with $k_F = k - eF\tau$. Here, $\varepsilon_k$ is the electron energy with wave-vector $k$, $q$ the phonon wave-vector, $M_q$ the matrix element for the electron-phonon interaction, $\omega_q$ the longitudinal optical (LO) phonon energy, and $N_q$ the phonon occupation number given by the Bose-Einstein statistics; $\eta = +1$ ($-1$) for phonon emission (absorption). Note that we use units in which $\hbar = 1$ throughout the paper.
Notice that Eq. (1) is very similar in form to the semiclassical Boltzmann transport equation except for the collision integral [the right-hand side of Eq. (1)]; i.e., the time retardation of the collision dynamics with phonons is taken into account. Because of this time retardation, a numerical evaluation of Eq. (1) is still a formidable task. Since the electron energy distribution usually spreads over wide energy ranges under high electric fields, the time retardation of the electron distribution function in the collision integral may be safely ignored. As a result, the collision integral $I_c$ in Eq. (1) is rewritten as

$$I_c = 2\sum_{q,\eta} |M_q|^2 \left(N_q + \frac{1}{2} + \frac{\eta}{2}\right) \left\{ S(\eta, k_F)\, f_{k-q}(t) - S(-\eta, k_F)\, f_k(t) \right\}, \qquad (2)$$

where the spectral density $S(\eta, k_F)$ is given by

$$S(\eta, k_F) = \int_0^\infty d\tau\, e^{-\Gamma_k \tau} \cos\!\left[\left(\varepsilon_{k_F} - \varepsilon_{k_F-q} + \eta\,\omega_q\right)\tau\right]. \qquad (3)$$

Here, $\Gamma_k$ is the total phonon scattering rate and should be determined self-consistently with Eq. (1). Recall that Eq. (1) is derived under the Born approximation [4], in which only the lowest-order correction for the electron-phonon interaction is included. Therefore, we put the damping factor $\Gamma_k$ in the collision integral and extend the upper limit of the time integral to infinity, because the time correlation would be destroyed by a consecutive collision with phonons.
When the electron lifetime is infinite ($\Gamma_k \to 0$) and the electric field is suppressed ($F \to 0$), Eq. (3) correctly reduces to the energy-conserving delta function, and Eq. (1) with Eq. (2) becomes identical to the Boltzmann transport equation. Fortunately, Eq. (3) can be evaluated analytically for a fixed $\Gamma_k$; the result is plotted with solid lines for $F$ = 500 V/cm in Fig. 1. The electric field is assumed to be directed in the negative direction. Notice that the direction in which the spectral density is skewed depends on the direction of the electron motion; when the electron propagates against the electric field [Fig. 1(a)], the spectral density is skewed to the positive direction, and vice versa. This will be explained in the next section along with the phonon scattering rates for quantum wires.
It is clear from Fig. 1 that the ICFE essentially consists of two different effects: (1) the energy-conserving delta function is broadened and (2) the energy detuning (zero point) is shifted. Both effects are directly related to the strength and the direction of the electric field, and the magnitude of both the broadening and the shift, $\Delta_F$, is approximately given by

$$\Delta_F \approx \sqrt{\frac{eFq}{2m^*}}, \qquad (4)$$

where $m^*$ is the electron effective mass. We would like to stress that $\Delta_F$ depends on the direction of the phonon wave-vector or, equivalently, of the electron motion. Since the electron motion is strictly confined along the electric field in 1-D quantum wires, $\Delta_F$ is always finite and, thus, the ICFE is always effective. On the other hand, the ICFE is not always significant in 3-D bulk because electrons can move in any direction.
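To illustrate the two effects numerically, the sketch below evaluates a spectral density of the form of the reconstructed Eq. (3) by direct quadrature, with the detuning drifting linearly in τ through k_F = k − eFτ; the parameter values and the exact form of the accumulated phase are assumptions, not the authors' inputs.

```python
import numpy as np

# Numerical sketch of the spectral density in the reconstructed Eq. (3): the
# detuning drifts during the collision because k_F = k - eF*tau, which both
# broadens and shifts the line (cf. Fig. 1). Units: hbar = 1, energies in meV;
# Gamma and the drift rate eFq/m* are illustrative placeholders.
GAMMA = 2.5    # total scattering rate Gamma_k (meV)
DRIFT = 25.0   # eFq/m* (meV^2); its sign tracks the field/motion direction

def spectral_density(delta0: float, drift: float = DRIFT, gamma: float = GAMMA) -> float:
    tau, dtau = np.linspace(0.0, 12.0 / gamma, 40001, retstep=True)
    # Accumulated phase for a detuning that drifts linearly in tau (assumption).
    phase = delta0 * tau - 0.5 * drift * tau**2
    return float(np.sum(np.exp(-gamma * tau) * np.cos(phase)) * dtau)

for d in (-10.0, -5.0, 0.0, 5.0, 10.0):
    # With drift = 0 this reduces to the Lorentzian gamma/(gamma^2 + delta0^2);
    # a finite drift skews the line shape toward one side of zero detuning.
    print(f"delta0 = {d:6.1f} meV   S = {spectral_density(d):8.4f}")
```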
QUANTUM TRANSPORT IN 1-D QUANTUM WIRES
The quantum kinetic equation described in the previous section can be easily applied to ideal 1-D quantum wires in GaAs. For simplicity, we assume that a rectangular quantum wire of 30 nm size is formed by an infinitely deep potential well and that only the lowest subband is occupied. In addition, only the Fröhlich optical phonon scattering in bulk mode is considered. Since the emphasis of the present analysis is on how the dynamical quantum effects affect electron transport in 1-D quantum wires, the details of the scattering modes related to the low-dimensional structure itself do not essentially change the present results.
Figure 2 shows the Fröhlich scattering rates (solid and dashed lines) obtained from the spectral density of Eq. (3) under two different field strengths, F = 100 and 500 V/cm. As is well known from earlier studies on the collisional broadening (CB) [6], the divergence of the scattering rate, indicated by dotted lines in Fig. 2, is associated with the 1-D density of states and is removed by the energy broadening in the spectral density. Notice that the divergence is mainly due to the phonon emission processes, so that it occurs at the LO phonon energy (~36 meV). The major feature associated with the ICFE is, however, that the scattering rates become asymmetric with respect to the direction of the electron motion. This is closely related to the shift of the energy detuning in the spectral density, as already seen in Fig. 1, and can be explained as follows. When an electron moves against the electric field, it gains energy from the electric field during the collision duration with phonons. Therefore, it can emit an LO phonon even if the electron energy is smaller than the threshold energy for phonon emission. In other words, the phonon energy is effectively reduced when the electron moves against the electric field, and the scattering rate shifts to the lower-energy side. On the other hand, an electron loses energy when it moves along the electric field and, thus, the phonon energy is effectively increased. As a result, the scattering rate shifts to the higher-energy side.
The transport characteristics can be determined from the quantum kinetic equation described in the previous section. Since the quantum kinetic equation is almost identical to the Boltzmann transport equation, conventional Monte Carlo methods can be applied to solve it numerically by substituting a proper spectral density in the collision integral instead of the energy-conserving delta function [5]. Unfortunately, the spectral density shown in Fig. 1 is not positive definite. Therefore, we approximate it by a Gaussian form with the width and shift given by Eq. (4). The approximated spectral densities employed in the Monte Carlo simulations are plotted by dashed lines in Fig. 1.
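A sketch of how such a Gaussian stand-in could enter a Monte Carlo scattering step is given below; it is one reading of the procedure described above, not the authors' code, and the shift-direction bookkeeping and all numbers are illustrative.

```python
import numpy as np

# Gaussian stand-in for the spectral density used in the Monte Carlo runs,
# as described above: width and shift are both Delta_F of Eq. (4).
# Names and values are illustrative assumptions, not the authors' implementation.
def gaussian_spectral_density(detuning: float, delta_f: float,
                              against_field: bool) -> float:
    # Per Fig. 1, the line is skewed to positive detuning for electrons moving
    # against the field, and to negative detuning otherwise (assumption).
    shift = delta_f if against_field else -delta_f
    x = detuning - shift
    return float(np.exp(-0.5 * (x / delta_f) ** 2) / (delta_f * np.sqrt(2.0 * np.pi)))

for d in (-5.0, 0.0, 5.0):
    print(d,
          gaussian_spectral_density(d, delta_f=3.0, against_field=True),
          gaussian_spectral_density(d, delta_f=3.0, against_field=False))
# The two columns are mirror images: the line shifts to opposite sides of zero
# detuning depending on whether the electron moves against or along the field.
```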
Figure 3 shows the electric field dependence of the electron drift velocity at room temperature obtained from the Monte Carlo simulations (dashed line). For comparison, the drift velocity evaluated by the conventional Boltzmann transport equation is also shown (solid line). The dotted line represents the result when only the CB is included. For the CB, we assume a constant energy broadening ($\Gamma_k$ = 2.5 meV), as done by Leburton and his group [6]. We would like to stress that the drift velocity is greatly reduced when the ICFE is included and that the reduction becomes greater as the electric field increases. Notice that the CB also reduces the drift velocity, but to a much smaller extent. This implies that the reduction of the drift velocity due to the ICFE is mainly ascribed to the asymmetric phonon scattering rates. Namely, more electrons occupy the states with positive wave-vector because the electric field is directed in the negative direction and, hence, the scattering rate shifted to the lower-energy side (represented by solid lines in Fig. 2) is, as a whole, more conducive to electron transport than that shifted to the higher-energy side (represented by dashed lines in Fig. 2). Furthermore, the large peak in the phonon scattering rate indicates that the electron transport characteristics are dominantly controlled by electrons with energy below the phonon-emission threshold energy unless the electric field is extremely strong. Therefore, a slight change in the phonon scattering rates greatly affects the drift velocity and, thus, the ICFE is already significant under moderate electric fields, in marked contrast to bulk cases.
CONCLUSIONS
High-temperature quantum kinetic transport under high electric fields has been investigated with emphasis on the intracollisional field effect (ICFE) in low-dimensional structures. We have shown that the ICFE in GaAs one-dimensional quantum wires is already significant under moderate electric field strengths (> a few hundred V/cm) because of the strong restriction on the electron motion imposed by the low-dimensional structures. Employing the Monte Carlo method including the ICFE, it has been shown that the electron drift velocity in quantum wires can be much smaller than that expected from earlier investigations.
FIGURE 1 Electron spectral density as a function of energy detuning for the electrons propagating to the (a) positive and (b) negative directions under F = 500 V/cm (solid lines). The dashed lines represent the approximated spectral densities employed in the present Monte Carlo simulations.
FIGURE 2 Energy dependence of the Fröhlich optical phonon scattering rates at room temperature under F = (a) 100 V/cm and (b) 500 V/cm. The solid and dashed lines represent the scattering rates for the electrons propagating to the positive and negative directions, respectively. The dotted lines show the scattering rates evaluated by the Fermi golden rule.
FIGURE 3 Electric field dependence of the drift velocity in the ideal 1-D quantum wire. The solid line represents the drift velocity with no quantum correction, whereas the dotted and dashed lines show the respective drift velocities when CB and ICFE are taken into account. | 2,707.6 | 1998-01-01T00:00:00.000 | [
"Physics"
] |
Understanding the High Creep Resistance of MRI 230D Magnesium Alloy through Nanoindentation and Atom Probe Tomography
Due to their low density, magnesium alloys are very appealing for lightweight constructions. However, the use of the most common magnesium alloy, AZ91 (Mg 9 wt.% Al, 1 wt.% Zn), is limited to temperatures below 150 °C due to creep failure. Several alloys with an improved creep resistance have been developed in the past, for example the alloy MRI 230D or Ca-alloyed AZ91 variants. However, there is an ongoing discussion in the literature regarding the mechanisms of the improved creep resistance. One factor claimed to be responsible for the improved creep resistance is the intermetallic phases which form during casting. Another possible explanation is an increased creep resistance due to the formation of precipitates. To gain more insight into the improved creep resistance of MRI 230D, nanoindentation measurements have been performed on the different phases of as-cast, creep-deformed and heat-treated samples of MRI 230D and Ca-alloyed AZ91 variants. These nanoindentation measurements clearly show that the intermetallic phase (IP) of the alloy MRI 230D does not lose strength during creep deformation, in contrast to the Ca-alloyed AZ91 variants. High-temperature nanoindentation measurements performed at 200 °C clearly show that the intermetallic phases of the MRI 230D alloy maintain their strength. This is in clear contrast to the Ca-alloyed AZ91 variants, where the IP is significantly softer at 200 °C than at room temperature. Atom probe measurements have been used to gain insight into the differences in chemical composition between the IPs of MRI 230D and the Ca-alloyed AZ91 variants in order to understand their dissimilar behaviour in terms of strength loss with increasing temperature. Author Contributions: Conceptualization, S.L., H.W.H. and P.F.; methodology, S.L., H.W.H. and P.F.; validation, D.M.-A., H.W.H., M.G. and P.F.; investigation, P.T. and S.L.; resources, M.G. and P.F.; data curation, D.M.-A., S.L. and P.T.; writing—original draft preparation, D.M.-A.; writing—review and editing, H.W.H., P.F. and M.G.; visualization, D.M.-A., S.L. and P.T.; supervision, H.W.H., P.F. and M.G.; project administration, H.W.H. and P.F.; funding acquisition, M.G. and P.F. All authors have read and agreed to the published version of the manuscript.
Introduction
Since vehicle mass is a key contributor to fuel efficiency, the use of lightweight materials can have a significant impact on CO2 emissions. The increased use of magnesium alloys is therefore very appealing, as magnesium is the least dense metal regularly used in structural applications. However, the most widely used magnesium alloys are based on the Mg-Al system and their application is limited to temperatures below approximately 150 °C as a result of the low creep strength of these alloys [1][2][3]. Conversely, e.g., for automotive powertrain applications, alloys with an increased creep strength for use up to 200 °C under long-term loading conditions are needed. Several successful attempts have been made to increase the creep strength of Mg-Al alloys by adding rare earth elements or alkaline earth elements; see for example [4][5][6][7][8]. However, there is a lack of understanding and an ongoing debate in the literature on the mechanisms responsible for the increased creep strength.
The microstructure of all derivatives of the above-mentioned Mg-Al alloy family shows a skeleton-like structure of hard intermetallic phases (IP) in which the Mg grains are embedded. In several publications, it is proposed that the skeleton of hard IPs is responsible for the improved creep strength of these alloys; see [9][10][11][12][13][14][15][16]. For Ca-alloyed AZ91 variants it could be shown that an increased interconnectivity of the IP network results in an increased creep strength [17]. This is supported by the works of Zubair et al. [18,19], where the skeleton of hard IPs was claimed to be responsible for the improved creep properties of the investigated Mg-Al-Ca alloys. Their microstructural investigations of creep-deformed specimens clearly showed strain localizations at the interfaces of α-Mg and IPs and fracture of the IP in areas of high strain localization [18,19].
However, a comparison of the Ca-alloyed AZ91 variants [17,20] with the commercial alloy MRI 230D shows that the alloy MRI 230D has an improved creep strength over similar Ca-alloyed AZ91 variants. Even an AZ91 alloy with 5 wt.% Ca addition (AXZ951) has a lower creep strength compared to MRI 230D, especially in a regime of low creep rates [17]. This is quite surprising, considering MRI 230D has a lower content of Ca, only about 2 wt.% compared to approximately 5 wt.% in the case of the alloy AXZ951, i.e., a smaller amount of IP. The quantification of the interconnectivity of the IP skeleton additionally revealed that the interconnectivity is less pronounced than in the case of the alloy AXZ951; the interconnectivity of the IP of the alloy MRI 230D is closer to that of the alloy AXZ931 with 3 wt.% of Ca; see [17]. Since the creep rate of the alloy AXZ931 is more than one order of magnitude higher than the creep rate of the alloy MRI 230D under identical conditions [17], the question about the origin of the improved creep strength of MRI 230D arises. Recently, the increased creep strength of the Mg grains due to precipitation was investigated by Lamm et al. [21]. They showed that MRI 230D has much more thermally stable nano-sized precipitates in the Mg grains, at least partially explaining the increased creep resistance. What is still unclear, however, is whether the IP network also behaves differently in MRI 230D compared with the AZ91 variants.
Light microscopy images of the microstructure of the investigated alloys are summarized in Figure 1. The microstructure of all alloys consists of α-Mg grains (light grey) surrounded by intermetallic phases (dark grey). The IP shows an increasing interconnectivity with increasing Ca content. Due to the lower thixomolding temperature of the alloy MRI 230D, primary solid α-Mg can be found in its microstructure. A detailed analysis of the microstructure of the investigated alloys, including the reproducibility of the microstructural observations as well as the quantification of the interconnectivity, can be found in [17].
As known from previous work, the creep resistance of AZ91 alloy variants increases with increasing Ca content; see for example [1,17,20,22-25]. The alloys tested in this work show the same behaviour, as depicted in Figure 2, where their respective creep curves are shown. The observed strain rate decreases significantly with increasing Ca content. An increase in the Ca content from 1 wt.% to 3 wt.% decreases the measured minimum creep rate by about one order of magnitude. A further increase in the Ca content to approximately 5 wt.% decreases the minimum creep rate by more than one additional order of magnitude. The creep rate of MRI 230D is comparable to that of AZ91 with 5 wt.% Ca addition, although the Ca content of MRI 230D is much lower, at only about 2 wt.%; see also [17]. As the ductility of AZ91 decreases with increasing Ca content (see [22]), the tensile elongation of the alloy with 5 wt.% of Ca is below 1%. A lower Ca content of the alloys is therefore beneficial for structural applications.
Other works show that a combined addition of Ca and Sr to Mg-Al alloys leads to improved creep properties when compared with the addition of only Sr or Ca [26,27], but no reasons for these improved properties due to the combined addition of Sr and Ca are given. To gain more insight into the mechanisms responsible for the enhanced creep resistance, in this work the mechanical properties of the individual phases have been determined via nanoindentation. To this end, a commercially sourced sample of MRI 230D has been investigated in different states of creep deformation, namely as-cast, at minimum creep rate, and after long-term creep loading at a typical service temperature of 200 °C (approximately 250 h). These results are compared with a series of experimental AZ91 alloy variants with Ca additions varying from 1-5 wt.%. The nanoindentation measurements have been performed at room temperature and at 200 °C. Measuring the local mechanical properties of the individual phases is a new approach to gain more insight into the origins of the creep resistance of magnesium alloys, especially in order to understand the high creep strength of the alloy MRI 230D.
In the case of the Ca-alloyed AZ91 variants, the IP mainly consists of Mg17Al12, Al2Ca and (Mg,Al)2Ca; see for example [28][29][30]. X-ray diffraction measurements from Aghion et al. [31] revealed only the presence of α-Mg and Al2Ca in the case of the alloy MRI 230D. Other groups report ternary MgAlSrCa phase compositions in Mg-Al-Ca-Sr alloys; see for example [32][33][34][35].
Experimental Section
A commercially available alloy, MRI 230D, was investigated and compared with Ca-alloyed AZ91 alloys with nominally 1, 3, and 5 wt.% of added Ca, named AXZ911, AXZ931 and AXZ951, respectively, according to the ASTM B275 standard. All alloys were thixomolded by Neue Materialien Fürth GmbH on a Japan Steel Works machine with a closing force of 220 t. The cooling rate in thixomolding is rather high, comparable with the cooling rate of high-pressure die casting, and should be comparable for all alloys investigated in this work. In the case of the Ca-alloyed AZ91 variants, a thixomolding temperature of 605 °C was used, so these alloys were cast in a fully liquid state. For the alloy MRI 230D the thixomolding temperature was 590 °C, so the alloy was cast in a semi-solid state. Due to this, primary α-Mg can be found in the microstructure; see also Figure 1. Comparative measurements showed no significant influence of the thixomolding temperature on the hardness of the individual phases. The chemical composition of all investigated alloys as measured by GDOES (Glow Discharge Optical Emission Spectroscopy) is summarized in Table 1. Throughout this work, great care has been taken to take the samples out of the centre of the casting plates and to avoid taking samples from the regions of the casting skin.
Creep tests were performed in compression mode at a temperature of 200 °C under a constant true stress of 100 MPa on samples of approximately 5.5 mm height and about 20 mm² in cross-section. The cylindrical samples were ground to a parallelism of better than 10 µm. The solid cross-section, where casting porosity has been taken into account, is calculated from the sample mass m and the sample height h0,RT (averaged over five measurements) as A0,RT = m/(ρ·h0,RT). A constant solid density ρ = 1.811 g/cm³ was assumed for all alloys. For the calculation of the sample cross-section at the elevated test temperature of 200 °C, the thermal expansion was accounted for with a thermal expansion coefficient of 2.6 × 10⁻⁵ K⁻¹.
The compressive constant true stress experiments were carried out using a lever-arm creep-testing machine, with the height change Δh measured by a tube-rod extensometer system based on a linear variable differential transformer. The compressive strain is determined as ε = −ln(1 + Δh/h0). The compressive stress is calculated as σ = −(F/A0)·exp(−ε) from the force F, measured by a load cell with a maximum capacity of 20 kN. Please note that, for reasons of clarity, absolute values are used for compressive strains and stresses in this work. For all creep tests in this work, a temperature of 200 °C and a stress of 100 MPa have been used.
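The data reduction described above can be written compactly; the following sketch applies the stated formulas to hypothetical raw readings (the variable names, the assumed in-plane expansion correction, and the example values are ours, not the authors').

```python
import numpy as np

# Compact sketch of the raw-data reduction described above. Absolute values are
# used for compressive strain and stress, as in the paper; all inputs are invented.
RHO = 1.811e-3   # solid density in g/mm^3 (= 1.811 g/cm^3)
ALPHA = 2.6e-5   # thermal expansion coefficient in 1/K

def true_strain_stress(mass_g: float, h0_mm: float, dh_mm: float,
                       force_n: float, d_temp_k: float):
    a0_rt = mass_g / (RHO * h0_mm)               # solid cross-section at RT (mm^2)
    a0 = a0_rt * (1.0 + ALPHA * d_temp_k) ** 2   # in-plane thermal expansion (assumed form)
    eps = -np.log(1.0 + dh_mm / h0_mm)           # true strain, positive in compression
    sigma = (force_n / a0) * np.exp(-eps)        # true stress in MPa (N/mm^2)
    return eps, sigma

# Hypothetical reading: 0.55 g sample, 5.5 mm tall, compressed by 0.10 mm,
# loaded with 2 kN at a test temperature 180 K above room temperature.
print(true_strain_stress(0.55, 5.5, -0.10, 2000.0, 180.0))
```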
For microstructural investigations and nanoindentation measurements, samples were cut out of the castings or the creep-deformed specimens and were ground and polished with diamond suspension. The last preparation step was polishing with colloidal SiO2 suspension (OPS, Struers, Denmark) to ensure a high surface quality with low residual deformation. To prevent corrosion, water-free isopropyl alcohol was used as a lubricant for all polishing steps. In the case of the creep-deformed samples, a surface parallel to the deformation direction was investigated.
The nanoindentation measurements at room temperature (meaning a temperature range of 18 °C-25 °C) were performed on an Agilent Nanoindenter XP. For the measurements at 200 °C, a G200 from Surface equipped with a heating stage was used. To prevent oxidation of the samples, measurements at 200 °C were performed under a protective argon atmosphere. All nanoindentation experiments used a Berkovich indenter tip and continuous stiffness mode (CSM). Due to the limited size of the intermetallic phase, the maximum indentation depth was 250 nm, and the hardness for each measurement was averaged over a depth range between 200 and 250 nm. Figure 2 shows, as an example, indents in a sample of the alloy AXZ911 which had crept to the minimum creep rate.
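The depth-window averaging stated above amounts to a masked mean over the depth-resolved CSM signal; the sketch below is an illustrative post-processing step with an invented hardness-depth trace, not vendor-software output.

```python
import numpy as np

# Post-processing sketch of the 200-250 nm depth-window averaging stated above.
def window_hardness(depth_nm: np.ndarray, hardness_gpa: np.ndarray,
                    lo: float = 200.0, hi: float = 250.0) -> float:
    mask = (depth_nm >= lo) & (depth_nm <= hi)
    return float(hardness_gpa[mask].mean())

depth = np.linspace(0.0, 250.0, 126)             # one indent, 2 nm depth steps
hardness = 4.0 + 0.8 * np.exp(-depth / 60.0)     # made-up indentation-size effect
print(f"H(200-250 nm) = {window_hardness(depth, hardness):.2f} GPa")
```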
Atom probe tomography (APT) experiments were carried out in a Cameca LEAP 4000X HR (Cameca Instruments Inc., Madison, WI, USA) with a reflectron detector system offering 37% detection efficiency. The samples were prepared via FIB lift-out in a FEI Helios 660i (FEI, Hillsboro, OR, USA) and a Zeiss Crossbeam 1540 EsB (Carl Zeiss AG, Oberkochen, Germany). While the lift-out process was done with a 30 kV Ga beam, the shaping of the samples, as soon as they were smaller than 1 µm, was carried out under a reduced acceleration voltage of 10 kV. This eliminates spurious ion-irradiation damage within the investigated volumes caused by ion channelling. For the final milling step, the ion energy was reduced to 5 kV. A more precise description of the lift-out procedure is given in Refs. [36][37][38]. All APT measurements were performed using high-voltage pulses with a repetition frequency of 200 kHz to trigger field evaporation, with the sample bias voltage regulated to achieve an average detection rate of 2 kHz. The temperature of the tips was 40-45 K and the pulse amplitude was 20% of the bias voltage.
Creep Behaviour
The creep curves, showing the creep rate versus strain, of the investigated alloys in the as-cast condition can be seen in Figure 3. Although several samples (a minimum of two experiments each) have been tested, only one creep curve for each alloy is shown for the sake of clarity. The creep tests showed a reproducibility in the strain rate within a factor of two. The creep strength of the alloys significantly increases with an increase in Ca content, which is in accordance with the literature. The creep rate of the alloy MRI 230D is approximately the same as the creep rate of the alloy AXZ951, although the Ca content of the alloy MRI 230D is much lower than that of the alloy AXZ951.
Nanoindentation
In order to assess the mechanical properties of the individual intermetallic phases in the different alloys, we performed nanoindentation. The results of these nanoindentation measurements at room temperature are summarized in Figure 4. The room-temperature hardness of the α-Mg is similar for all alloys and all creep states. The measured hardness of the α-Mg in the as-cast condition increases slightly with the increasing Ca content of the alloy, so the hardness is lowest for AXZ911 and highest for AXZ951. However, the differences in hardness are rather small. The hardness of the α-Mg phase of the alloy MRI 230D is approximately equal to that in the alloy AXZ951. A comparison of the hardness of the α-Mg phase for the different creep states reveals no clear trend concerning the hardness evolution during creep testing in the case of the alloys AXZ911, AXZ931 and AXZ951. The differences in hardness for the different alloy conditions do not exceed the experimental scatter and there is no systematic evolution of the hardness with increasing creep-testing time of the samples. In the case of the alloy MRI 230D, one can observe that the hardness of the α-Mg phase decreases with increasing creep deformation, i.e., time at elevated temperature (see also Lamm et al. [21]).
In the case of the Ca-alloyed AZ91 variants, the hardness of the IP increases with the Ca content of the alloy, i.e., the room-temperature hardness of the IP of the alloy AXZ951 is the highest of all alloys investigated in this work. The hardness of the IP of the Ca-alloyed AZ91 variants AXZ911, AXZ931 and AXZ951 does not change significantly during creep deformation. Remarkably, this is different for MRI 230D, where the room-temperature hardness of the IP increases significantly during creep deformation. The hardness of the IP of MRI 230D is comparatively low in the as-cast condition, but in the sample crept to the minimum creep rate the hardness of the IP has increased significantly, and further creep deformation leads to a further increase. Although the IP of MRI 230D in the as-cast condition is significantly softer than the IP of the alloys AXZ911, AXZ931 and AXZ951, after creep deformation the IP of the alloy MRI 230D has approximately the same hardness as the IP of the alloys AXZ931 and AXZ951 and is significantly harder than the IP of the alloy AXZ911.
When comparing the nanoindentation results at room temperature with the measurements at creep temperature, a significant difference in behaviour between the alloy series emerges. The hardness of the IP of all AXZ alloys at 200 °C is significantly lower than at room temperature, by about 0.5 GPa (see Figure 5). This is completely different for MRI 230D, where the IP maintains its hardness. At room temperature, the hardness of the IP of the alloy MRI 230D is lower than that of the Ca-alloyed AZ91 variants (compare also Figure 4), but at 200 °C the measurements show a completely different picture, as the IP of the alloy MRI 230D is harder than the IP of the Ca-alloyed AZ91 variants.
APT Measurements
In order to assess the differences in composition, including minor elements, that may have led to the improved high-temperature hardness of the IP in MRI 230D, we carried out atom probe measurements on the different IPs. The results are shown in Figure 6. These measurements clearly show that the IP in MRI 230D contains not only Al, Ca and Mg but also a significant amount of Sr. Sr is present as an alloying element in MRI 230D but is absent in the AZ91-type alloys. This implies that Sr strengthens the IP of Mg-Al-Ca alloys. Even relatively small amounts of Sr seem to have a large effect on the mechanical properties of the IP, as the overall Sr content of the alloy MRI 230D is only approximately 0.5 wt.% [17].
The APT measurements suggest that the IP in the alloy MRI 230D has a rather uniform composition, with the elements Al, Ca, Sr, Zn and Mn segregating to the IP. A second APT measurement on the same alloy showed similar elemental concentrations in the IP. Reports in the literature describe Mg-Al-Sr-Ca phase compositions in the microstructure of Mg-Al-Ca-Sr alloys; see, for example, [32][33][34][35].
The results shown here indicate that Sr plays an important role in the mechanical properties of the IP of MRI 230D. Sr is absent in the alloys AXZ911, AXZ931 and AXZ951 (see also Table 1), and the strength of the IP of these alloys degrades at 200 °C, whereas the IP of the alloy MRI 230D nearly maintains its room-temperature strength at 200 °C.
Creep Behaviour
Interestingly, the alloy MRI 230D has approximately the same creep resistance as the alloy AXZ951, although the Ca content of MRI 230D, at only about 2 wt.%, is significantly lower than that of AXZ951. This has significant advantages, as the ductility of the alloys decreases with increasing Ca content, so that the elongation to fracture of the alloy AXZ951 is below 1% [22]. Even when the Sr content of approximately 0.5 wt.% is included, the total content of alkaline earth elements (Ca + Sr) in MRI 230D remains significantly below the 5 wt.% level of the alloy AXZ951. A comparison of the creep resistance of both alloys over a broader stress range [17] shows that the creep resistance of the alloy MRI 230D is even higher than that of the alloy AXZ951, especially in the regime of low strain rates. This is even more surprising when one considers the connectivity of the skeleton-like structure of the IP, as the interconnectivity of the IP of the alloy AXZ951 is significantly higher than that of the alloy MRI 230D [17]. A quantification of the interconnectivities of the IPs reveals that the interconnectivity of the IP of the alloy MRI 230D is approximately the same as that of the IP of the alloy AXZ931 [17]. This clearly implies that the creep resistance of the alloys is not governed by the interconnectivity of the IP alone; other factors are also important.
Hardness of the α-Mg
No significant differences in the hardness of the α-Mg could be found for any of the investigated alloys and conditions. The nanoindentation measurements in Figure 4 show a small decrease in hardness after creep exposure for about 250 h; however, this decrease is within the range of the experimental scatter. Lamm et al. [21] found clusters rich in Ca, Mn and Al in the α-Mg of the alloys AXZ931 and MRI 230D. These clusters coarsen during high-temperature exposure, e.g., during aging or creep testing. In the case of the MRI 230D alloy, the clusters stayed small during high-temperature exposure, whereas the clusters in the alloy AXZ931 grew significantly; see [21]. However, this did not seem to have an impact on the nanoindentation hardness of the α-Mg: in both alloys, the hardness of the α-Mg decreased when the samples were exposed to creep testing at 200 °C for approximately 250 h (see Figure 4). If the clusters found in [21] contributed significantly to the hardness of the α-Mg grains, no drop in hardness would be expected for the alloy MRI 230D after creep exposure, whereas a drop would be anticipated for the alloy AXZ931. Although it is not clear to what extent the clusters found in [21] contribute to the hardness of the α-Mg, it appears reasonable that these very small clusters do not have a pronounced effect on hardness. To what extent these clusters influence the mechanical properties under creep conditions, i.e., long-term loading at low deformation rates, cannot be determined from nanoindentation experiments.
Hardness of the IP
The hardness of the IP of the Ca-alloyed AZ91 variants does not change significantly during creep, as shown in Figure 4. On the other hand, an increase in the measurement temperature to 200 °C leads to a significant softening of the IP of these alloys; see Figure 5. In contrast, the IP of the alloy MRI 230D shows a small increase in hardness after creep. A direct comparison of MRI 230D and AXZ951, which show very comparable creep behaviour, makes it obvious that the higher Ca content of AXZ951 leads to a significantly higher strength of the IP at room temperature, both in the as-cast condition and after creep exposure to the minimum creep rate. However, since the Ca contents of MRI 230D and AXZ931 are rather similar, the strength of the IP of the alloy MRI 230D would be expected to be at the same level as that of the alloy AXZ931. Interestingly, this is not the case for the as-cast condition, where the hardness of the IP of the alloy MRI 230D is significantly lower than that of the alloy AXZ931. In contrast to the very small changes in the hardness of the IP in AXZ931 and AXZ951 between the different conditions, i.e., as-cast and creep-loaded, the IP of the MRI 230D alloy behaves completely differently: a pronounced increase in the room-temperature hardness with increasing creep exposure time is observed.
Additionally, the results of the hardness tests of the IPs at 200 °C have to be taken into account. While pronounced softening was observed for the AXZ alloys compared with the room-temperature hardness data, for the IP in MRI 230D no difference between the hardness at room temperature and at 200 °C was observed. Moreover, at 200 °C, the IP in MRI 230D was notably harder than the IP of the comparable AXZ931 alloy and showed a hardness comparable to that of the IP of AXZ951. Interestingly, the hardness measured at the creep minima follows the same trend. The hardness of the α-Mg obviously does not play an important role, as the hardness of AXZ931 and AXZ951 after exposure to the creep minimum is higher than that of MRI 230D, whereas their creep rates are equal or higher. Taking the APT results into account as well, we conclude that the small Sr additions in MRI 230D lead to an increase in the elevated-temperature strength of the IP compared with AXZ931, which contains the same Ca content. From Figure 4 we can also see that long-term exposure to 200 °C leads to a further increase in the hardness of the IP, which is not observed in the case of AXZ931.
Bringing all these results together, it becomes obvious that the creep behaviour of the Mg-Al-Ca alloys is governed not only by the morphology of the skeleton structure of the hard IPs, but also by the high-temperature strength and thermal stability of the IPs.
Conclusions
Nanoindentation measurements revealed that the IP of the alloy MRI 230D is significantly harder at 200 °C than the IP of the Ca-alloyed AZ91 variants, while the differences in hardness between the α-Mg grains of the respective alloys are small even at high temperatures. This enhanced hardness of the IP at creep temperatures is likely a significant factor contributing to the high creep strength of MRI 230D compared with the Ca-containing AZ91 variants. This is surprising considering that the IP of the MRI 230D alloy has a lower connectivity than that of the Ca-alloyed AZ91 variants. APT measurements show that the IP of MRI 230D contains Sr, an alloying element absent in the Ca-alloyed AZ91 variants. It is thus likely that Sr has a creep-strengthening effect in MRI 230D. | 6,137.6 | 2021-10-29T00:00:00.000 | [
"Materials Science"
] |
MicroRNAs as Therapeutic Targets in Nasopharyngeal Carcinoma
Nasopharyngeal carcinoma (NPC) is a malignancy of epithelial origin that is prone to local invasion and early distant metastasis. Although concurrent chemotherapy and radiotherapy improves the 5-year survival outcomes, persistent or recurrent disease still occurs. Therefore, novel therapeutic targets are needed for NPC patients. MicroRNAs (miRNAs) play important roles in normal cell homeostasis, and dysregulation of miRNA expression has been implicated in human cancers. In NPC, studies have revealed that miRNAs are dysregulated and involved in tumorigenesis, metastasis, invasion, resistance to chemo- and radiotherapy, and other disease- and treatment-related processes. The advantage of miRNA-based treatment approaches is that miRNAs can concurrently target multiple effectors of pathways involved in tumor cell differentiation and proliferation. Thus, miRNA-based cancer treatments, alone or combined with standard chemotherapy and/or radiotherapy, hold promise to improve treatment response and cure rates. In this review, we summarize the dysregulation of miRNAs in NPC initiation, progression, and treatment, as well as NPC-related signaling pathways, and we discuss the potential applications of miRNAs as biomarkers and therapeutic targets in NPC patients. We conclude that miRNAs might be promising therapeutic targets in nasopharyngeal carcinoma.
INTRODUCTION Nasopharyngeal Carcinoma (NPC)
Nasopharyngeal carcinoma is a non-lymphomatous squamous cell carcinoma that arises from the epithelial lining of the nasopharynx. Local invasion and early distant metastasis are common in NPC. Etiologic factors for NPC include Epstein-Barr virus (EBV) infection, genetic predisposition, and environmental factors (1, 2). NPC is extremely difficult to detect early because of its deep location and lack of obvious clinical signs in its early stages. Concurrent chemotherapy and radiotherapy is the standard treatment for late-stage NPC (3). Nevertheless, despite the effectiveness of concurrent chemotherapy and radiotherapy in treating NPC, local or regional failure in the form of persistent or recurrent disease occurs in some patients. Therefore, novel biomarkers and therapeutic strategies to improve treatment outcomes are urgently required for NPC patients.
MicroRNAs (miRNAs)
MiRNAs are a class of endogenous non-coding RNA molecules that are typically 22-25 nucleotides long (4, 5). They are transcribed from intragenic or intergenic regions by RNA polymerase II into pri-miRNAs (1-3 kb in length) (6) and are further processed in the nucleus by a complex of the RNase III ribonuclease Drosha and DiGeorge syndrome critical region gene 8 (DGCR8) into a hairpin intermediate, the pre-miRNA (a stem-loop structure of about 70 nucleotides) (7). The pre-miRNA is then transported from the nucleus to the cytoplasm by exportin 5 (8). After strand separation of the mature miRNA duplex, one strand, known as the guide strand, is incorporated into an RNA-induced silencing complex (RISC), while the passenger strand (miRNA*) is typically degraded. The RISC is the effector complex of the miRNA pathway and comprises the miRNA, Argonaute proteins (Argonaute 1 to Argonaute 4) and other proteins. The mature strand is important for target recognition and for the incorporation of specific target mRNAs into the RISC (8, 9). Each miRNA can potentially target many genes (about 500 on average), and about 60% of mRNAs have at least 1 evolutionarily conserved sequence that is believed to be targeted by miRNAs (10, 11).
Exosomal miRNAs
Exosomes are microvesicles 40-100 nm in diameter. They originate in the intracellular endosomal compartment and are secreted by cells into their microenvironment. Exosomes transport DNA fragments, proteins, mRNAs, and miRNAs from donor cells to recipient cells and are therefore crucial to intercellular communication. The exosomal miRNAs miR-21 and miR-29a are secreted by tumor cells and can bind to toll-like receptors on nearby immune cells, thus initiating an inflammatory response that promotes metastasis (19). Furthermore, miR-21 was observed at a higher level in exosomes from the serum of patients with esophageal squamous cell carcinoma than in serum from patients with benign diseases without systemic inflammation, and an association was found between exosomal miR-21 and the presence of metastasis with inflammation (20). In addition, exosomal miR-223 was reported to be elevated in breast cancer cells and to promote breast cancer invasion (21). In NPC, exosomal miR-9 was found to inhibit angiogenesis through regulation of the PDK/AKT pathway (22), and exosomal miR-24-3p serves as a potential biomarker for NPC prognosis (23). Therefore, exosomal miRNAs are involved in the initiation and progression of cancers, including NPC, and could serve as biomarkers for NPC patients.
EBV-Encoded miRNAs
EBV, a herpesvirus that asymptomatically infects the majority of the population worldwide (24), was the first human virus reported to encode miRNAs (25). More than 44 viral miRNAs are encoded by EBV. In NPC, EBV expresses EBNA1, LMP1, LMP2A, EBERs, and BARTs (26). miR-BARTs, which are EBV-encoded miRNAs derived from BamHI-A rightward transcripts, are highly expressed in NPC and promote its development. A recent study showed that EBV-encoded miR-BARTs, including BART5-5p, BART7-3p, BART9-3p, and BART14-3p, downregulate the expression of a key DNA double-strand break repair gene, ataxia telangiectasia mutated (ATM), by targeting several sites in its 3′-UTR (27). Thus, these 4 EBV-encoded miRNAs work cooperatively to suppress ATM activity in response to DNA damage, contributing to NPC tumorigenesis. These findings indicate that EBV-encoded miRNAs could be exploited as a novel therapeutic strategy for NPC.
THE MECHANISM OF miRNA REGULATION IN CANCERS
Genes encoding miRNAs are often located at or near fragile sites, in minimal regions of loss of heterozygosity, in minimal regions of amplification, and in common cancer-related breakpoints (28). Upregulated expression of miRNAs can be caused by genomic alterations such as translocations or amplification, and loss of function can be caused by alterations such as deletions, insertions, or mutations (29). For example, the miR-17-92 cluster, which is made up of miR-17, miR-18a, miR-19a, miR-19b-1, miR-20a, and miR-92a-1, resides in an 800-base-pair region of the non-coding gene MIR17HG (also called C13orf25), a genomic region known to be amplified in lymphomas (30). The miR-17-92 cluster is often overexpressed in hematological cancers (31, 32). In contrast, the miR-15a/miR-16-1 cluster, which resides in the chromosome 13q14 region (between exons 2 and 5 of the non-coding gene DLEU2), is often downregulated in patients with chronic lymphocytic leukemia due to genomic deletion of this region (31, 33). In addition to structural genetic alterations, epigenetic modulations, including DNA promoter hypermethylation and histone hypoacetylation, have been described in solid tumors (34). For example, miR-127 is downregulated because of promoter hypermethylation in human bladder cancer (34). Usually, hypermethylation of tumor-suppressive miRNAs leads to miRNA silencing, and hypomethylation of onco-miRNAs leads to their activation and to tumorigenesis (35). In addition, long non-coding RNAs (lncRNAs) can target miRNAs, resulting in tumorigenesis and chemo- and radioresistance. For example, lncRNA FTH1P3 promotes ATP binding cassette subfamily B member 1 (ABCB1) protein expression by targeting miR-206, acting as a miRNA "sponge," leading to paclitaxel resistance in breast cancer (36). Circular RNAs (circRNAs) can also act as miRNA sponges to regulate miRNA expression. For example, circNT5E was recently reported to directly bind to miR-422a and inhibit its activity, promoting glioblastoma tumorigenesis (37).
Aberrant miRNA expression in cancer can also be caused by alterations in downstream miRNA processing. Merritt et al. reported that miRNA expression could be globally suppressed by short hairpin RNAs against Dicer and Drosha, 2 critical ribonucleases involved in miRNA processing (38). This global miRNA suppression promotes cellular transformation and tumorigenesis.
The alteration of miRNA expression in cancers can also be caused by aberrant transcription factor activity, which leads to increased or decreased transcription from miRNA genes. The miR-34 family (comprising miR-34a, miR-34b, and miR-34c) is directly induced by the tumor suppressor p53. In cells with high levels of p53, miR-34 expression is elevated; furthermore, chromatin immunoprecipitation assays revealed that p53 can bind to the promoter of miR-34 (39, 40). The MYC oncoprotein downregulates transcription of tumor suppressor miRNAs such as let-7 and miR-29 family members. MYC can bind to conserved sequences in the promoters of the miRNAs it suppresses, and the suppression of miRNAs by MYC has been found to facilitate lymphomagenesis (41) (Figure 2).
miRNA DYSREGULATION IN NPC INITIATION
The function of miRNAs is largely influenced by the expression of their main targets. Some miRNAs promote tumorigenesis in some cell types and suppress it in others. The classification of a miRNA as an oncogene or a tumor suppressor therefore requires knowledge of the type of cell in which it acts. Typically, miRNAs do not cause a specific phenotype by acting on a single target. Instead, miRNAs target multiple mRNAs concurrently and engage in complex interactions with the machinery that controls the transcriptome. In cancers, miRNAs are often dysregulated and function collectively to mark differentiation states or individually as oncogenes or tumor suppressors. In NPC, miRNAs have been reported to be expressed aberrantly and to exert pivotal effects by altering the expression of their specific mRNA targets (42) (Table 1).
Onco-miRNAs in NPC
MiRNAs that directly inhibit tumor suppressor genes are considered onco-miRNAs. For example, the cell cycle inhibitor CDKN1A (also called P21) is believed to be directly targeted by miR-663 in NPC, leading to the promotion of NPC cell proliferation and tumorigenesis (43). The onco-miRNA miR-125b was found to be significantly upregulated in NPC tissue compared with healthy nasopharyngeal mucosa, and this upregulation was correlated with poor survival outcomes; furthermore, high expression of miR-125b was identified as an independent predictor of shorter survival in NPC patients (44). The same study found that miR-125b promoted proliferation and inhibited apoptosis in NPC cells. A direct target of miR-125b, the tumor necrosis factor alpha-induced protein 3 gene TNFAIP3 (formerly called A20), functions as a tumor suppressor in NPC and mediates miR-125b-promoted NPC tumorigenesis through activation of the nuclear factor κB (NF-κB) signaling pathway. Together, these findings demonstrate that onco-miRNAs target tumor suppressor genes, leading to NPC initiation.
Tumor-Suppressive miRNAs in NPC
Oncogenes can be directly suppressed by miRNAs, and such miRNAs are considered tumor-suppressive. For example, the C-X-C motif chemokine receptor 4 oncogene CXCR4 can be directly targeted by the tumor-suppressive miR-9, resulting in the inhibition of NPC pathogenesis; in NPC clinical specimens, miR-9 was observed to be downregulated (45). Interleukin-17 (IL-17), a proinflammatory cytokine, suppresses immune defense and immune surveillance while promoting tumor growth. A study showed that IL-17 is targeted by miR-135a, resulting in the inhibition of NPC cell proliferation (46). In another study, overexpression of miR-320b was shown to suppress NPC cell proliferation and to enhance mitochondrial fragmentation and apoptosis (47). In contrast, silencing miR-320b enhanced NPC tumor growth and inhibited cell apoptosis. The TP53-regulated inhibitor of apoptosis 1 gene, TRIAP1, has been found to be directly targeted by miR-320b, mediating the effects of miR-320b on NPC cell proliferation inhibition and apoptosis induction. It has also been reported that the miR-326/330-5p cluster can target the cyclin D1 gene, CCND1, exerting a tumor-suppressive role in NPC initiation (48). Some tumor-suppressive miRNAs target lncRNAs that function as oncogenes. For example, miR-25 expression was found to be downregulated in NPC cells, and its ectopic expression was shown to suppress NPC cell growth and motility by targeting metastasis-associated lung adenocarcinoma transcript-1 (MALAT1), a proto-oncogenic lncRNA (49). The tumor-suppressive miRNAs miR-451 and miR-539-5p inhibit NPC initiation by targeting the macrophage migration inhibitory factor (MIF) and Kruppel-like factor 12 (KLF12) genes, respectively (50, 51). Finally, lentivirus can be used as a delivery system to overexpress specific tumor-suppressive miRNAs in NPC, resulting in the inhibition of NPC initiation. For example, lenti-miR-26a was shown to significantly inhibit the tumorigenicity of NPC cells in nude mice, providing a useful strategy for treating NPC patients (52).
miRNA DYSREGULATION IN NPC PROGRESSION
NPC is an aggressive disease that tends to spread locally and metastasize to regional lymph nodes and distant organs. Distant metastasis is the principal mode of treatment failure (53). NPC metastasis is known to be associated with miRNA dysregulation. The functions of miRNAs often appear contradictory because they depend on the cellular environment and the stage of the metastatic process. Therefore, identifying which miRNAs promote or suppress the metastatic process of NPC could lead to the development of new, efficient therapeutic agents to prevent or delay metastasis (Table 1).
Metastasis Promoter miRNAs in NPC
Mounting evidence implicates miRNAs in the modulation of angiogenesis, which is essential to the metastatic process. For instance, it has been reported that exosomal miR-23a overexpression promotes angiogenesis in NPC by directly targeting the testis-specific 10 gene, TSGA10 (54). Furthermore, miR-23a overexpression in pre-metastatic NPC tissue was identified as a prognostic biomarker for early metastasis. In addition, EBV-encoded miR-BART1 has been shown to induce NPC metastasis by regulating pathways that depend on the phosphatase and tensin homolog gene, PTEN (55). Another EBV-encoded miRNA, miR-BART7-3p, was shown to promote the epithelial-mesenchymal transition (EMT) and metastasis of NPC cells by suppressing PTEN and consequently activating the PI3K/AKT/GSK-3β signaling pathway (56).
Metastasis Suppressor miRNAs in NPC
The tumor-suppressive miR-29c is also a metastasis suppressor that inhibits NPC cell migration, invasion, and metastasis by directly targeting the T cell lymphoma invasion and metastasis 1 gene, TIAM1 (57). In NPC patient samples and cell lines, miR-101 was found to be downregulated, and its ectopic expression significantly inhibited cell migration, invasion, and angiogenesis both in vitro and in vivo. The prometastatic gene integrin subunit alpha 3 (ITGA3) has been identified and validated as a target of miR-101 and shown to mediate the suppressive effects of miR-101 on NPC metastasis. Interestingly, systemic delivery of lentivirus-mediated miR-101 in NPC suppressed lung metastatic colony formation with no noticeable toxic effects (58). Also, miR-203a-3p was found to be dysregulated and to act as a tumor suppressor in NPC; this miRNA suppresses NPC metastasis by targeting the LIM and SH3 protein 1 gene, LASP1 (59). Metastasis suppressor miRNAs can also target, or be targeted by, lncRNAs in NPC. For example, the lncRNA H19 has been found to be overexpressed in NPC tissue, and H19 knockdown significantly suppressed the invasion of NPC cells. H19 knockdown downregulated the expression of the enhancer of zeste homolog 2 gene, EZH2, which is upregulated in NPC and promotes invasion. H19 does not bind directly to EZH2 but instead modulates its expression by suppressing the activity of miR-630, which inhibits EZH2 and interacts with H19 in a sequence-specific manner. H19 also suppresses E-cadherin expression and promotes invasion in NPC cells through the miR-630/EZH2 pathway (60). Furthermore, He et al. demonstrated that EBV-miR-BART6-3p suppresses EBV-associated cancer cell migration and invasion by targeting the lncRNA MIR3936HG (also known as LOC553103) and reversing the EMT process. MIR3936HG knockdown by specific siRNAs was shown to phenocopy the effect of EBV-miR-BART6-3p, while elevated MIR3936HG expression enhanced tumor cell migration and invasion, promoting EMT (61).
miRNA DYSREGULATION-RELATED SIGNALING PATHWAYS IN NPC
Several signaling pathways are involved in miRNA dysregulation-related processes in NPC. For example, miR-125b was shown to promote NPC tumorigenesis by activating the NF-κB signaling pathway, which plays a critical role in NPC tumorigenesis and progression (44, 62). In addition, miR-19b-3p was found to be upregulated and to be an independent predictor of poor survival outcomes in NPC patients. MiR-19b-3p increased NPC cell radioresistance by targeting TNFAIP3 and thereby activating the NF-κB signaling pathway (63).
The PTEN/AKT pathway plays an important role in NPC processes related to miRNA dysregulation. One study found that miR-141 was markedly elevated in NPC tissues and negatively correlated with both patient survival and the expression of the bromodomain containing 7 gene, BRD7; BRD7 overexpression activated the PTEN/AKT pathway, but restoring miR-141 expression suppressed this activation and partially restored NPC cell proliferation and tumor growth. The BRD7/miR-141/PTEN/AKT axis is therefore important to NPC progression and could provide new treatment targets and diagnostic markers (64). In addition, the EBV-encoded miRNAs miR-BART1 and miR-BART7-3p promote NPC metastasis by modulating the PTEN/PI3K/AKT signaling pathway (55, 56). PI3K signaling is also involved in miRNA dysregulation-related processes in NPC. A study of NPC tumor specimens found that the tumor-suppressing protein PDCD4 suppresses the p-PI3K/p-AKT/c-JUN signaling pathway, which in turn modulates miR-374a's binding to CCND1, resulting in dysregulation of NPC cell growth, metastasis, and chemoresistance (65). In this study, miR-374a expression was positively correlated with PDCD4 expression and negatively correlated with CCND1 expression. The PI3K/AKT/mTOR signaling pathway also significantly affects NPC tumorigenesis and development (66). For example, miR-3188 was shown to inhibit NPC cell cycle transition and proliferation, to sensitize cells to chemotherapy, and to extend survival in tumor-bearing mice; it inactivates p-PI3K/p-AKT/c-JUN signaling by targeting mTOR directly, further suppressing the cell cycle through the p-PI3K/p-AKT/p-mTOR pathway (67).
miRNA DYSREGULATION IN NPC THERAPIES
Radiotherapy and chemotherapy are the 2 main treatments for NPC. Mounting evidence shows that miRNAs are dysregulated during radio- or chemotherapy for NPC and may reduce or increase the sensitivity of NPC cells to radiotherapy or chemotherapy (Table 1).
miRNA Dysregulation in Radiotherapy
Radioresistance is the main reason for NPC treatment failure (68). Multiple studies have shown that miRNA expression in various cell types changes upon irradiation, as do the specific effects of various miRNAs on cellular radiosensitivity. It has been reported that miR-324-3p reduces NPC radioresistance by directly targeting the well-known oncogene Wnt family member 2B (WNT2B) and inhibiting its translation (69). Studies have also reported that miR-519d sensitizes NPC cells to radiation by directly targeting the 3′-UTR of PDRG1 (p53 and DNA damage regulated 1) mRNA (70) and that miR-24 increases radiosensitivity in NPC by targeting both COPS5 and SP1 (specificity protein 1) (15). In contrast, Huang et al. demonstrated that upregulation of miR-19b-3p decreases, and its downregulation increases, NPC sensitivity to radiation. The researchers also found that miR-19b-3p directly targets TNFAIP3, and upregulation of this gene reversed the suppressive effects of miR-19b-3p on NPC cell radiosensitivity. Thus, miR-19b-3p was shown to enhance radioresistance in NPC cells by activating the TNFAIP3/NF-κB pathway (63). Together, these studies indicate the potential use of miRNAs as radiosensitizing agents in NPC treatment.
miRNA Dysregulation in Chemotherapy
The importance of miRNAs in chemotherapy response has been demonstrated in multiple human cancers, including cancer of the tongue (71). In NPC, miR-3188 has been found to inhibit cell growth and resistance to fluorouracil by directly targeting the mechanistic target of rapamycin kinase gene, MTOR, and regulating the cell cycle (67). Another study showed that the metastasis suppressor miR-29c can also increase NPC cells' sensitivity to both radiotherapy and cisplatin-based chemotherapy (72). The above evidence shows that miRNAs mainly function as chemosensitizers in NPC.
miRNAS AS BIOMARKERS AND NOVEL THERAPEUTIC APPROACHES IN NPC
In the above sections, we showed that miRNAs are dysregulated during NPC initiation, progression, and therapy. In addition, several studies have reported that miRNA dysregulation is associated with the survival of NPC patients, and miRNAs may serve as independent biomarkers for NPC diagnosis, recurrence, and prognosis. Furthermore, a few molecularly targeted drugs have emerged as clinically active against advanced NPC in recent years (73), and the exploration of miRNAs as drugs or drug targets against other cancer types is already underway (29).
miRNAs as Biomarkers in NPC
Several miRNAs show potential as biomarkers in NPC. A recent meta-analysis indicated that increased expression of certain miRNAs was associated with poor overall survival and an increased likelihood of death in NPC patients (74). The tumor and metastasis suppressor miR-29c has been shown to be downregulated in both the serum and tumor tissue of NPC patients, indicating its promise as a biomarker for NPC diagnosis, prognosis, and recurrence (75). Also, NPC patients were shown to have significantly higher serum levels of miR-663 than healthy individuals, and high levels were associated with worse 5-year overall and relapse-free survival outcomes (76). In addition, chemotherapy significantly lowered NPC patients' serum miR-663 levels. These results suggest a critical role for miR-663 as a biomarker of NPC prognosis and response to chemotherapy. Recent studies showed that miR-31-5p is downregulated in NPC tissues and cell lines, acting as a tumor-suppressive miRNA, and circulating miR-31-5p was identified as a potential novel, non-invasive biomarker for the early diagnosis of NPC (77). The expression levels of tumor-educated platelet miR-34c-3p and miR-18a-5p are upregulated in NPC, making them promising novel liquid biopsy biomarkers for NPC diagnosis (78). In addition, miR-342-3p was significantly downregulated in NPC specimens, and its low expression was significantly correlated with reduced overall survival of NPC patients, indicating miR-342-3p as a biomarker of NPC prognosis (79). Altogether, these results suggest that individual miRNAs can serve as biomarkers for NPC diagnosis, prognosis, and response to therapy.
miRNAs as Therapeutic Approaches in NPC
One of the most appealing properties of miRNAs as therapeutic agents is their ability to simultaneously target more than 1 gene, making them extremely efficient at regulating distinct cellular processes relevant to normal and malignant cell homeostasis. In NPC, miRNAs for gene therapy have been delivered using lentiviral vectors. For example, a previous study found that the tumor suppressor miR-31-5p inhibited EBV-positive NPC tumorigenesis, and minicircle-oriP-miR-31, a novel EBNA1-specific miRNA delivery system, was constructed and shown to inhibit NPC cell proliferation and migration in vitro and to suppress xenograft growth and lung metastasis in vivo. The researchers also found that the WD repeat domain 5 gene, WDR5, is a target of miR-31-5p. The study demonstrated that targeted delivery of miR-31-5p using a non-viral minicircle vector could serve as a novel therapeutic approach for NPC, indicating a promising miRNA therapy for NPC patients (85). More studies are ongoing to bring miRNA therapy to patients with NPC.
CONCLUSIONS AND FUTURE DIRECTIONS
The need for novel therapeutic targets and agents for treating NPC patients is urgent, owing to NPC's anatomical location and its resistance to both radiotherapy and chemotherapy. Large numbers of miRNAs are dysregulated during NPC initiation, progression, and therapy; miRNAs have therefore been proposed as useful biomarkers to predict prognosis and as therapeutic approaches to cure patients with NPC. The advantage of using miRNAs as drugs against NPC is that 1 miRNA can act on multiple targets and the same target can be regulated by many miRNAs, giving them broad potential roles in the clinic. Based on the previous studies of miRNA dysregulation in NPC, how best to exploit miRNAs in the clinic remains to be determined. Because miRNAs can regulate many mRNAs, the potential for toxic phenotypes and other off-target effects of miRNA treatment approaches is a major concern. As a result, more studies focusing on the toxic effects of targeting miRNAs are required before such therapies can be used safely in NPC patients.
AUTHOR CONTRIBUTIONS
SW was responsible for the writing and editing of the manuscript. F-XC was responsible for modifying the manuscript. WW provided some critical useful suggestions. | 5,206.6 | 2019-08-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
Performance study of a 3×1×1 m3 dual phase liquid Argon Time Projection Chamber exposed to cosmic rays
We report the results of the analyses of the cosmic ray data collected with a 4 tonne active mass (3×1×1 m³ active volume) Liquid Argon Time-Projection Chamber (TPC) operated in dual-phase mode. We present a detailed study of the TPC's response, its main detector parameters and its performance. The results are important for the understanding and further development of the dual-phase technology, thanks to the verification of key aspects such as the extraction of electrons from liquid to gas and their amplification across the entire one-square-metre readout plane, gain stability, purity, and charge sharing between readout views.
Introduction
Dual phase (DP) Liquid Argon TPCs (LAr TPC) have been under development for more than a decade [1][2][3][4][5]. They combine the properties of liquid argon as the tracking medium with charge amplification in the gaseous argon phase above the liquid at the top of the detector, after extracting the drift electrons from the liquid into the gas. This provides full 3D imaging of charged particle interactions with high resolution and a low energy threshold. As such, they can be a potent tool to study neutrino interactions and other rare phenomena [6]. DP LAr TPCs at the 10 kt scale, located underground and coupled to powerful neutrino beams, offer a rich scientific portfolio that includes the study of leptonic Charge-Parity (CP) violation and the determination of the neutrino mass ordering. Thanks to the VUV scintillation of liquid argon, they can operate in self-triggering mode to study atmospheric and astrophysical neutrinos with very high statistics and to extend the sensitivity to proton decay.
The possibility to deploy and operate large scale DP LAr TPCs underground has been studied in detail in the context of LAGUNA-LBNO [7] and subsequently in DUNE [8]. Two prototype LAr TPCs (known as Dual Phase and Single Phase ProtoDUNE), with about 300 tonnes of active mass each, have been constructed and operated at CERN [9] by the DUNE collaboration. In 2017, a 4 tonne (3 × 1 × 1 m³) DP LAr TPC was operated, collecting over 300,000 cosmic ray interactions. A first look at these data, presented in ref. [10], clearly demonstrates the excellent imaging capabilities of the detector and provides a proof of principle of the DP technology at the tonne scale. Follow-up studies performed with the scintillation light in the detector have been published in [11].
A more detailed analysis of the ionisation charge signals produced in the pilot detector is the subject of this paper. Based on the acquired cosmic ray data, we evaluate the impact and relative importance of various parameters on quantities such as the signal-to-noise ratio, the hit finding efficiency and the uniformity of the response over a 3 × 1 m² area. The 3 × 1 × 1 m³ demonstrator is the first large scale prototype of a DP LAr TPC and a reduced-scale replica of upcoming larger detectors. In this context, a detailed study of its response and imaging capabilities is of utmost importance to better gauge the performance and provide more accurate simulations of large scale DP LAr TPCs.
In section 2, we briefly present an overview of the detector along with the response of the charge readout system. We also describe the key detector parameters that can have the largest impact on the quality of the data. In section 3, we present some details on the Monte Carlo and data samples, and the algorithms used for track reconstruction. Finally, we evaluate the detector response and performance in section 4.
Overview of the 3 × 1 × 1 m³ demonstrator detector
2.1 Experimental setup
The 3 × 1 × 1 m³ demonstrator detector is described in detail in ref. [10]. It consists of a 3 × 1 × 1 m³ liquid argon active volume, defined by a cathode at the bottom, a field cage, and the readout plane at the top. A charge amplification and anode readout stage is positioned in the gas phase a few millimetres above the liquid argon surface. Figure 1 shows several reconstructed cosmic muon track candidates from different events superimposed, along with a schematic of the signal amplification region in the right panel.
At the passage of an ionising particle through the liquid, electron-ion pairs are created. The electrons drift towards the surface at a velocity of about 1.6 mm/µs, thanks to the nominal electric field of 500 V/cm provided by the cathode and drift cage system. Scintillation photons with a wavelength spectrum peaked at 128 nm are also produced; they are detected by five PMTs placed in a line at the bottom of the cryostat and can provide the trigger and the reference time for the event with an accuracy of a few nanoseconds [11]. The produced charge drifts freely in the ultra-pure liquid argon up to the surface, where it is extracted into the gas phase. Once in the gas, the drifting electrons are multiplied by twelve 50 × 50 cm² Large Electron Multipliers (LEMs) and collected on a two-dimensional, finely segmented anode. The amplification in the LEMs produces signals with amplitudes significantly above the ambient noise level, resulting in a high-quality, well-resolved image of the interacting particles.
The electron extraction, amplification, and collection are performed inside a 3 × 1 m² structure called the Charge Readout Plane (CRP). The CRP is electrically and mechanically independent from the drift cage and can be remotely adjusted to the liquid level. The electrons are efficiently extracted from the liquid to the vapour by applying an electric field above 2 kV/cm in the liquid [12]. This electric field is provided by a 3 mm pitch extraction grid positioned 5 mm below the liquid argon surface. Once amplified inside the LEM holes, the charge is collected on a two-dimensional segmented anode which consists of a set of independent strips that provide the x and y coordinates of an event with a 3.125 mm pitch. The anodes are electrically bridged together so as to provide two orthogonal sets of three and one metre long readout strips called views. The anodes are carefully designed in such a way that the amplified charge is equally shared and collected on both views [13].
View 0 (view 1) consists of 320 three-metre-long (960 one-metre-long) strips running parallel to the x (y) coordinate axis. As an example, figure 2 shows a typical raw cosmic track recorded with the 3×1×1 m³ demonstrator. For each view, the charge recorded per channel is plotted as a function of the drift time, and the colour scale, proportional to the strength of the signal, has been chosen for optimal track visualisation. The readout pitch on the horizontal axis is 3.125 mm, and the drift time acquisition window of 667 µs is digitally sampled at 2.5 MHz, which results in 1667 time samples. At a 500 V/cm drift field, this leads to a ∼0.64 mm spatial resolution along the drift coordinate. Each readout view therefore provides a high resolution digitised image of the interaction products with a pixel unit size of ∼3.125 × 0.64 mm². A list of relevant parameters of the TPC charge readout section, along with properties of the liquid and gaseous argon, is provided in table 1.

Table 1. Parameters related to the imaging of the TPC along with a summary of coefficients used for the analysis in this paper. Unless otherwise indicated, all values are taken from ref. [18], computed for a liquid temperature of 87.3 K, a pressure of 1 atm, and a drift electric field of E_drift = 500 V/cm. For the first Townsend coefficient we use the value given in [19] for a pressure of 980 mbar and a temperature of 87 K.
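As a quick cross-check of the readout arithmetic quoted above, the short Python sketch below reproduces the number of time samples and the drift-coordinate resolution from the sampling frequency and drift velocity. All values are the ones quoted in the text; the script itself is purely illustrative.

```python
# Readout-geometry arithmetic using the values quoted in the text.
v_drift = 1.6                      # mm/us, drift velocity at 500 V/cm
f_sample = 2.5                     # MHz, digitisation frequency
t_window = 667.0                   # us, drift time acquisition window
pitch = 3.125                      # mm, anode strip pitch

dt = 1.0 / f_sample                # 0.4 us sampling period
n_samples = int(t_window / dt)     # -> 1667 time samples
dz = v_drift * dt                  # -> ~0.64 mm resolution along the drift
print(n_samples, f"pixel ~ {pitch} x {dz:.2f} mm^2")
```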
Effective gain and signal-to-noise ratio
An ionising particle depositing an energy ΔE_dep in the liquid produces an ionisation charge, free to drift in the liquid,

Q_0 = (ΔE_dep / W_ion) · R_Birk,

where W_ion is the work needed to produce an ionisation and R_Birk denotes the fraction of produced electrons that do not immediately recombine with the ions. The latter depends on the ionisation density and the drift field and is well parametrised by Birks' law [20]. The values of W_ion and R_Birk are reported in table 1. At a drift electric field E_drift of 500 V/cm, a minimum ionising particle (MIP) produces a charge yield of about 1 fC/mm after recombination. The charges drift for a duration t_drift, and electron attachment to impurities in the liquid attenuates the signal by a factor A = exp(−t_drift/τ_e), where τ_e is called the electron lifetime. Its value depends on the performance of the cryogenic purification and recirculation system. As shown in [10], a study of the data for this detector suggests a lifetime better than 4 ms, or an oxygen-equivalent impurity level of ∼75 ppt [O2]_eq. The amount of charge from a MIP collected on an anode strip of view i can therefore be written as

Q_i^MIP = (1/2) · G_eff · Q_proj^MIP · exp(−t_drift/τ_e),

where the effect of transverse and longitudinal diffusion (D_T, D_L) is neglected for the relatively short one-metre drift distance (see table 1); the corresponding spatial spreads are below the channel pitch. The factor 1/2 reflects the fact that the charge yield is equally split between the two collection views of the anode. The projected charge on one anode strip, Q_proj^MIP, depends on the azimuthal and polar angles of the track crossing the TPC fiducial volume. G_eff is the effective gain of the dual-phase chamber, which primarily relies on the multiplication of the electrons inside the LEM holes but also depends on the overall electron transparency (T ≤ 1) of the extraction grid, LEM and anode inside the CRP. The effective gain can be analytically decomposed as

G_eff = T · exp(α(E_LEM, ρ) · d),

where α is the first Townsend ionisation coefficient for an amplification field E_LEM inside the LEM hole and gas density ρ, and d denotes the effective amplification length; the parameters entering the parametrisation of α depend on the gas properties and are obtained from numerical calculations [19, 21]. The value of α, along with the nominal pressure and temperature of the argon gas during operation of the detector, is reported in table 1.
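As a rough numerical illustration of the collected-charge relation above, the following Python sketch evaluates Q_i^MIP for one strip. The simple geometric projection of the track onto one strip pitch and all parameter values are simplifying assumptions based on numbers quoted in the text; this is not the analysis code of the paper.

```python
import numpy as np

# Assumed inputs, taken from the values quoted in the text.
G_eff = 1.9          # effective gain of the Reference Run
tau_e = 4.0e3        # us, electron lifetime (lower bound quoted above)
v_drift = 1.6        # mm/us, drift velocity
q_mip_per_mm = 1.0   # fC/mm, MIP charge yield after recombination

def charge_on_strip(drift_mm, pitch_mm=3.125, theta_deg=90.0, phi_deg=45.0):
    """Charge (fC) collected on one strip of one view for a crossing MIP."""
    # Simple geometric estimate of the track length projected onto one pitch.
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    path = pitch_mm / (np.sin(theta) * np.cos(phi))
    q_proj = q_mip_per_mm * path                    # Q_proj^MIP
    attenuation = np.exp(-(drift_mm / v_drift) / tau_e)
    return 0.5 * G_eff * q_proj * attenuation       # 1/2: split between views

# ~2.2 fC before amplification (G_eff=1, no attenuation), cf. the 2.1 fC quoted:
print(f"{charge_on_strip(1000.0):.2f} fC after a 1 m drift at G_eff = 1.9")
```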
Stable effective gains of around 20 have been reported in litre-scale dual phase TPCs [13, 22] operated with the CRP at the electric field settings shown in figure 1. The 3 × 1 × 1 m³ demonstrator is, however, limited in effective gain due to the high voltage issues on the grid and LEMs reported in [10]. The results discussed in section 4.3 show that data were collected at G_eff = 1.9. A MIP crossing the TPC at polar and azimuthal angles of (θ, φ) = (90°, 45°) will produce on average Q_MIP ≈ 2.1 fC of charge on the 3 mm strips (without amplification and neglecting attenuation due to the electron lifetime). The effective gain drives the signal-to-noise ratio (S/N) of the charge readout. The ENC on a channel is the combination of the intrinsic noise due to the input capacitance of the anode strips and all other sources of coherent or incoherent noise which may be picked up. As shown in section 3.1, an ENC of around 2000 electrons on view 1 and 2500 electrons on view 0 is observed at the beginning of the TPC operation period; those values drop to around 1000 electrons for each view after the application of coherent noise removal. We note that the amount of collected charge per strip, and hence the quoted value of the S/N, depends on the track topology. For the rest of the paper, we quote the S/N per view for the specific case of a MIP track crossing the strips at (θ, φ) = (90°, 45°) before noise filtering algorithms are applied. This provides a conservative lower bound on the S/N since in general, and especially for cosmic particles, tracks are inclined and will thus have a larger polar angle.
The value of G_eff, as well as its uniformity over the readout surface, therefore directly impacts the imaging quality of the event and is hence a fundamental quantity used to benchmark the detector performance. The right side of figure 3 illustrates the expected S/N for a MIP crossing a dual phase detector with the operational parameters specified on the figure, as a function of G_eff and drift distance. For the rather modest one-metre drift of the 3 × 1 × 1 m³ demonstrator, a S/N of 11.2 can be expected even at an effective gain of 1.9. With similar assumptions for the values of purity and ENC, operating with G_eff = 10 would correspond to obtaining S/N = 59.3 for a one-metre drift and S/N = 11.2 for a TPC with a 12-metre drift. Twelve metres is the drift distance of the proposed DUNE 10 kt dual phase detector [23]. The plot illustrates the importance of the effective gain, especially if the targets in terms of noise performance or LAr purity are not met.
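A compact sketch of this scaling is given below. The ENC and lifetime values are assumptions chosen to roughly reproduce the S/N figures quoted above (and the lifetime is consistent with the >4 ms bound mentioned earlier); they are not the exact inputs of figure 3.

```python
import numpy as np

# Rough S/N estimate for a MIP at (theta, phi) = (90, 45) deg.
e_per_fC = 6241.5             # electrons per fC
q_strip = 2.1 * e_per_fC      # electrons on one strip before amplification
enc = 1900.0                  # ENC (electrons), assumed
v_drift = 1.6e-3              # m/us
tau_e = 4.125e3               # us, assumed electron lifetime (~4 ms)

def snr(g_eff, drift_m):
    t_drift = drift_m / v_drift
    return g_eff * q_strip * np.exp(-t_drift / tau_e) / enc

print(f"G=1.9,  1 m drift: S/N ~ {snr(1.9, 1.0):.1f}")    # ~11.2
print(f"G=10,   1 m drift: S/N ~ {snr(10.0, 1.0):.1f}")   # ~59.3
print(f"G=10,  12 m drift: S/N ~ {snr(10.0, 12.0):.1f}")  # ~11.2 (10 kt case)
```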
In order to have a uniform and stable effective gain, the electric field across the LEM, its thickness, as well as the density of the argon vapour must be carefully controlled. This is illustrated on the left side of figure 3: due to the exponential dependence of G_eff, variations in the thickness of the 50 × 50 cm² LEMs or fluctuations in the gas density may have a relatively large impact on its value. In the 3 × 1 × 1 m³ demonstrator, the thickness uniformity of all the 50 × 50 cm² LEM plates was carefully checked and the argon gas density was maintained at a stable value; both were uniform within typically one percent (see ref. [10] and table 1). The obtained uniformity of the effective gain in the 3 × 1 × 1 m³ demonstrator is discussed in the technical paper (ref. [10]), with some more details on the improvements made since that result given in section 4.3. In all these studies, diffusion is not accounted for, since over such short drifts its effects are negligible and not measurable. The effects of diffusion, however, should be properly considered in large detectors.
Charge readout detector response
By design, the dual phase TPC, and specifically the 3×1×1 m³, allows the front-end electronics to be placed close to the readout strips, profiting from the cryogenic temperatures while at the same time ensuring accessibility during detector operation. The front-end electronic boards are described in [10]: they have a DC decoupling stage, high voltage surge protection components and an ASIC chip which contains the CMOS pre-amplifiers and signal shaping. The pre-amplifiers feature a double-slope response with a linear gain for input charges of up to 400 fC and a logarithmic response in the 400-1200 fC range. The double-slope feature extends the dynamic range of the detector and allows events with a high ionisation yield to be acquired. The data presented in this publication were mainly collected at effective gains of around 2, and therefore we only consider the linear regime of the pre-amplifier response.
The current produced by the cloud of drifting charge is extracted and multiplied in the LEMs, collected on the anode and pre-amplified. The recorded signal to be digitised, s_out(t), is the convolution of the current i(t) induced on the anode strip with the response r(t) of the pre-amplifier after shaping:

s_out(t) = (i ∗ r)(t),

where the amplifier response function r(t) is characterised by the two time constants τ_1 and τ_2 reported in table 1. The signal s_out(t), after digitisation, called the waveform, features a standard shape with peaking and falling times given by the internal RC-CR shaping performed in the ASIC. The waveform is characterised by its amplitude, integral and width (defined as the full width at half maximum of the peak). The integral is proportional to the amount of collected charge. Typical waveforms recorded on one channel of view 1 and of view 0 for the same amount of injected charge are shown on the left side of figure 4. They are acquired with the system described in [10], which allows calibrated pulses to be injected on groups of 32 strips from each view separately. Since an anode strip has a capacitance to ground of ∼160 pF/m [10, 13], the front-end pre-amplifiers connected to the channels of view 0 and view 1 have different capacitive loads. This results in a non-identical pre-amplifier response between the two views for an equal amount of injected charge.
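To make the signal-formation step concrete, the sketch below convolves a toy induced current with a shaping response. The double-exponential form of r(t) is a generic CR-RC stand-in (an assumption, since the exact response function and the τ_1, τ_2 values are only given in table 1 of the paper), and the pulse shape is illustrative.

```python
import numpy as np

# Toy signal formation: waveform = (induced current) convolved with r(t).
dt = 0.4                                   # us, 2.5 MHz sampling period
t = np.arange(0.0, 40.0, dt)               # us
tau1, tau2 = 0.5, 5.0                      # us, assumed shaping time constants

r = np.exp(-t / tau2) - np.exp(-t / tau1)  # generic CR-RC impulse response
r /= r.max()                               # normalise to unit peak

i_anode = np.zeros_like(t)
i_anode[10:15] = 1.0                       # box-like current from a track segment

waveform = np.convolve(i_anode, r)[: t.size] * dt   # s_out = (i * r)(t)
peak = waveform.max()
fwhm = dt * np.count_nonzero(waveform > 0.5 * peak)
print(f"amplitude {peak:.2f} a.u., width (FWHM) {fwhm:.1f} us")
```

Because the 3 m strips of view 0 present roughly three times the capacitive load of the 1 m strips, feeding the same i(t) through the two (different) responses would yield the non-identical waveforms shown in figure 4.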
Charge extraction and electron transparency
For a given LEM amplification field, the electron transparency T ≤ 1 is defined as the product of the efficiencies in the extraction and induction regions:

T = ε_trans × ε_ind,

where ε_trans denotes the probability for a given charge to be extracted from the liquid and transmitted into the LEM holes, and ε_ind represents the efficiency of collecting the amplified charges from the top of the LEMs on the anode strips. In this paper, we define the transmission efficiency as ε_trans ≡ ε_extr^liq × ε_extr^LEM, which includes the contributions from the electron transmission from the liquid to the gas phase (ε_extr^liq) and from the collection of the electrons entering the LEM from the bottom electrode (ε_extr^LEM). While the latter is estimated from simulations, the former is taken from previous measurements performed under similar conditions by Gushchin et al. [12], shown by the black data points on the left plot of figure 5.
Above an extraction field of 2 kV/cm in the liquid (E_extr^liq), most of the charge is extracted within a timescale of less than 100 nanoseconds ("fast extraction", red markers in figure 5); as the electric field decreases, a growing fraction of the charge is transmitted via thermionic emission [24], with characteristic times at the tens-of-microseconds scale [5]. The impact of the extraction field at which the TPC is operated is noticeable in the shape of the signal, as shown in the right panel of figure 5, where waveforms from data collected at different extraction fields, with similar track topologies, amplification and induction fields, are shown. The increase of the width at lower extraction fields, due to the slow extraction, is clearly visible.
It is therefore important not only to operate the TPC at the maximal possible effective gain but also to understand the interplay between the electric fields and estimate the best possible settings in terms of detector transparency. The optimal value of the transparency can be estimated from simulations of electron amplification and transport in the vapour phase. In figure 6, we show an illustration of a simulated electron avalanche performed with the Garfield++ software package [25]. The electric field maps are imported from ANSYS [26], and the gas properties (argon vapour at 90 K) are computed using MAGBOLTZ [21]. The 3D simulation is performed over a 3×3 mm² area. Performing such simulations for multiple combinations of field settings allows maps of ε_trans and ε_ind to be generated as a function of the amplification electric field (E_amp). The results are shown in figure 7.

Figure 7. Simulated values of ε_trans (left) and ε_ind (right). The red dots correspond to the best operating fields at which the 3 × 1 × 1 m³ demonstrator was operated for the data-taking run. The black dots indicate the field values at which the previous dual phase TPC [13, 19, 22] was operated under stable conditions at G_eff = 20.

The induction efficiency increases with the induction electric field (E_ind) and tends to plateau at values near 5 kV/cm. The value of E_ind at which this plateau is reached increases with the amplification field. The optimal setting for the extraction electric field in the liquid (E_extr^liq) is reached at around 2 kV/cm. Above this value, the fraction of electrons extracted from the liquid and entering the holes decreases, since more electrons are lost on the bottom LEM electrode. Also shown in the figure are markers which indicate the best operating points of the 3 × 1 × 1 m³ demonstrator and of a smaller prototype equipped with a 10 × 10 cm² LEM, referred to as the 3L TPC [13, 19, 22]. The latter was operated stably at G_eff ≈ 20 with electric fields of 5, 2 and 33 kV/cm for E_ind, E_extr^liq and E_amp, respectively. These field settings are those reported in figure 1 and are referred to as the nominal settings. The 3 × 1 × 1 m³ demonstrator is operated at lower field settings of 1.5, 2.0 and 28 kV/cm for E_ind, E_extr^liq and E_amp, which, according to the simulated maps, corresponds to a transparency of 0.4 × 0.6 = 24%.
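The transparency arithmetic at the demonstrator's operating point is a one-liner; the short sketch below spells it out using the approximate efficiencies read off the simulated maps.

```python
# Overall electron transparency at the 3x1x1 operating point: the product of
# the transmission and induction efficiencies (approximate values from the
# simulated maps quoted in the text).
eps_trans, eps_ind = 0.4, 0.6
T = eps_trans * eps_ind
print(f"electron transparency T ~ {T:.0%}")   # -> 24%
```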
Summary of the collected data
The TPC operation spanned a period of about 5 months. As detailed in [10], technical issues with the absolute high voltage of the extraction grid and limitations on the maximum applied voltage of the 50 × 50 cm² LEMs prevented us from operating the TPC for a long duration at the nominal field settings. Most of the run period was therefore dedicated to detector optimisation, and in this paper we discuss the data from approximately 300k triggered events. Results are shown for data collected at electric field settings which optimise the S/N (see table 2 for details), taking into account the above-mentioned limitations. For the rest of this paper, the long (about 5 hours) stable run used for detector performance studies under stable conditions is referred to as the Reference Run.
In table 2, we show the settings and some properties of the analysed data-sets, along with those of the detector Monte Carlo (MC) simulation, which is tuned to match the effective gain and electron lifetime of the Reference Run as well as the detector acceptance.
The four corner LEMs are operated at 24 kV/cm [10].
Table 2. Summary of data taking conditions for the data-sets used in this paper. The S/N ratio is given before the application of the coherent noise filter (see text for definitions).

The evolution of the RMS of the noise on the charge readout during the operation period of the TPC is shown on the left side of figure 8. The periods corresponding to the acquisition of the data discussed in this paper are also indicated. The dashed bars indicate the two shutdown periods of approximately 1 month when maintenance was performed on the LEM high-voltage power supplies. As can be seen, the noise RMS level is significantly reduced (by more than a factor of two) by the coherent noise filter, reaching a stable value of around 1.1 ADC counts (1000 electrons) in both views over the entire operation period of the chamber. The origin of the increase of the coherent noise after both shutdown periods is not well understood. It is most likely due to ground loops created during the maintenance on the CAEN HV supply and the re-connection of sets of slow control cables.
A detailed fast Fourier analysis of the noise frequencies, discussed in [27], shows a prominent peak at around 900 kHz. A good stability is nevertheless observed for each period. The right side of figure 8 shows the electron lifetime measured at different dates. The measured purity was found to be stable throughout the whole data-taking period. Further information on how the value given in figure 8 is computed is provided in section 4.3.
All the data are acquired with a trigger requiring a coincidence of all 5 PMTs. Their thresholds are set to maintain a data acquisition rate at the level of 3 Hz [10,11]. At those settings, our event sample predominantly consists of electromagnetically or hadronically induced cascades (showers), emitting large amounts of scintillation light, or MIPs, which cross most of the fiducial volume at a relatively shallow angle.
Detector simulation
A detailed Monte Carlo simulation of the detector was developed to reproduce the detector acceptance and track topologies. Typical events for both data and MC at Reference Run settings are shown in figure 9. These include a muon decaying in the active volume, where the Michel electron is clearly visible, a hadronic shower, and an electromagnetic shower. The data are shown with a colour scale representing the hit amplitude, while reconstructed MC hits are coloured according to the particle species. The simulation is used to check the energy range of the cosmic muon sample, verify the muon selection purity and quantify the track reconstruction efficiency, using a method explained in section 3.4.
Since comparison with data requires a reproduction of the atmospheric flux and a simulation of the detector acceptance, an event sample simulating the atmospheric flux was generated with the CORSIKA [28] package. Primary particles from CORSIKA were uniformly generated on a plane located 3 metres above the top of the TPC fiducial volume, to simulate events produced by the interactions of these particles with the cryostat or detector materials.
The trigger was simulated to reproduce the event rate, the event topologies and the relative amount of tracks and showers observed in our data sample. The detector was divided into cubic volumes of 25 × 25 × 25 cm³ each. From the centre of each volume, a light map was built by simulating 100 million photons to compute the probability of, and the arrival time for, a photon produced at the centre of that volume to reach each photomultiplier. The probability and arrival time for each position in the detector were obtained by a linear extrapolation from the simulated values at the centres of the volumes, as described in [11]. A simplified trigger simulation was applied, which required all five PMTs to measure at least 1750 photons in a time window of 80 ns. Figure 10 shows the momentum distribution of the generated particles entering the TPC and of those which pass the simulated trigger condition. The polar angle distribution of the triggered muons is also shown. The particles are grouped according to the topology of the events they produced when interacting in the 3 m³ of liquid argon target. Given the 15 cm radiation and 84 cm interaction lengths of LAr, both electromagnetic (EM) and hadronic showers, initiated by electrons/gammas and hadrons respectively, will generally be contained in the detector volume. As seen in the figure, the triggered event sample consists of muons with a broad momentum distribution peaking at around 10 GeV and with polar angles of ∼120 degrees, and a quite substantial content of showering events in the 1-10 GeV range. Triggered muons in those energy ranges are MIPs and, after careful track selection, can be used to characterise the TPC effective gain.
Waveform generation and noise simulation
The energy deposited in the liquid argon is converted into ionisation charge. The fraction of charge lost due to electron-ion recombination is estimated with Birks' law, and the drift of the charge is performed following an electric field map which is extracted from electrostatic simulations of the detector [29]. Charge attenuation due to electron attachment, as well as the spatial spread of the electron cloud from transverse and longitudinal diffusion along the drift, are also included in the simulation according to the values shown in table 1.
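As an illustration of the recombination step, here is a hedged sketch of a Birks-law survival factor in the common ICARUS-style parametrisation; the constants, and the example drift field, are illustrative assumptions rather than the values used in the detector simulation described above.

```python
# Hedged sketch of a Birks-law recombination factor (textbook ICARUS-style
# parametrisation); the constants below are illustrative assumptions.
LAR_DENSITY = 1.396  # g/cm^3, liquid argon
A_BIRKS = 0.800      # dimensionless (assumed)
K_BIRKS = 0.0486     # (kV/cm)(g/cm^2)/MeV (assumed)

def recombination_survival(dedx_mev_cm: float, efield_kv_cm: float) -> float:
    """Fraction of ionisation electrons surviving recombination."""
    return A_BIRKS / (1.0 + K_BIRKS * dedx_mev_cm / (LAR_DENSITY * efield_kv_cm))

# A MIP (dE/dx ~ 2.1 MeV/cm) in an example 0.5 kV/cm drift field:
print(f"{recombination_survival(2.1, 0.5):.2f}")  # ~0.70
```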
Once the electron cloud has reached the anode, the waveforms on each view are generated by convolving the corresponding current with the charge readout detector response function described in section 2.3. The effective gain is taken into account by multiplying the number of electrons inside the cloud by the desired gain factor. In addition, the broadening of the signal width and the reduction of its amplitude at low extraction fields are included in the simulation by smearing the arrival times of the electrons in the slow component according to an exponential distribution (see figure 5). The time constant of the exponential is chosen so that the width of the generated waveform matches those from the extraction field scan data.
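A minimal sketch of this waveform generation step is given below; the response kernel, binning, and numerical values are assumptions for illustration, intended only to show the exponential smearing of the slow component followed by the convolution with a response function.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_waveform(n_electrons, t0_us, slow_fraction, tau_us,
                  response, dt_us=0.4):
    """Histogram electron arrival times and convolve with the response."""
    n_slow = int(n_electrons * slow_fraction)
    t_fast = np.full(n_electrons - n_slow, t0_us)        # prompt extraction
    t_slow = t0_us + rng.exponential(tau_us, n_slow)     # thermionic tail
    times = np.concatenate([t_fast, t_slow])
    bins = np.arange(0.0, times.max() + 50 * dt_us, dt_us)
    current, _ = np.histogram(times, bins=bins)
    return np.convolve(current, response, mode="full")[: len(bins) - 1]

# Toy response kernel (assumed shape, not the measured response function).
response = np.exp(-np.arange(20) / 3.0)
wf = make_waveform(10_000, t0_us=100.0, slow_fraction=0.3,
                   tau_us=30.0, response=response)
```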
A simulation of the coherent noise is also included. This is done by using an inverse Fourier transform to produce a waveform based on the frequency spectra of the noise in data. An identical random phase is assigned to groups of channels, following the same correlation scheme that is observed in the data. Details on the waveform simulation at low extraction fields can be found in [30], and on the simulation of the noise in [27]. In figure 11 we show a simulated muon event (view 1) and a MIP candidate from data at the settings of the Reference Run. The noise pattern, the dead channels and the waveforms are well reproduced in the MC. The detector simulation features a data-driven approach to describe the waveform shape and the noise. This allows a simulation of the TPC response at any field settings and effective gain, in good agreement with data. A comparison at the reconstructed track level between MC and data for the specific case of the Reference Run is provided in section 4.

Figure 11. Event display showing a MIP candidate from data (left) and a simulated muon in the data-driven simulation (centre), in the time versus channel number plane. The continuous line is due to a dead channel, while the dashed line indicates the channel from view 1 of the waveform shown in the right plot.
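The coherent-noise generation can be sketched as follows; the amplitude spectrum and the channel grouping below are placeholders rather than the measured ones, and only the shared-random-phase mechanism is meant to match the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def coherent_noise(amplitude_spectrum, n_channels, group_size):
    """One noise waveform per channel; phases are shared within a group."""
    n_freq = len(amplitude_spectrum)
    waveforms = []
    for start in range(0, n_channels, group_size):
        phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)  # same for the group
        spec = amplitude_spectrum * np.exp(1j * phases)
        wf = np.fft.irfft(spec)
        waveforms += [wf] * min(group_size, n_channels - start)
    return np.array(waveforms)

# Placeholder spectrum with a single line near one frequency bin
# (cf. the ~900 kHz peak reported for the data).
spectrum = np.ones(513)
spectrum[450] = 50.0
noise = coherent_noise(spectrum, n_channels=32, group_size=8)
```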
Cosmic track reconstruction
The off-line reconstruction of tracks from both simulation and acquired data is performed within the LArSoft software package [31], with the same procedure applied to data and MC events. The entire reconstruction procedure is illustrated in figure 12 for a muon track candidate. The first step of the event reconstruction consists of identifying so-called regions of interest (ROIs), where the signal exceeds a certain threshold above the baseline pedestal. This is performed by standard threshold discrimination: the waveform is scanned for peaks above a certain average pedestal. This initial pedestal is computed in order to optimise signal over background discrimination, and the subsequent iterative procedure guarantees that its initial value has a very small effect on the final S/N ratio. To avoid any bias due to physical signals, the ROIs are excluded from the subsequent pedestal subtraction and noise filtering process. The absolute value and slow fluctuations of the pedestal are subtracted by applying a polynomial fit to the waveform. Noise with a periodic behaviour is removed by the successive application of fast Fourier transform algorithms. A coherent noise filtering algorithm is also applied to reduce the amplitude of noise patterns which are common to groups of channels at a given time. The entire process is repeated in an iterative manner, since at each step the image of the event is improved and new ROIs may be identified.
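A minimal sketch of the ROI search by threshold discrimination might look as follows; the threshold, the padding around each peak, and the toy pulse are illustrative assumptions, not the LArSoft implementation.

```python
import numpy as np

def find_rois(waveform, pedestal, threshold_adc, pad=5):
    """Return (start, stop) sample ranges where the signal exceeds threshold."""
    above = waveform - pedestal > threshold_adc
    rois, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            rois.append((max(0, start - pad), min(len(waveform), i + pad)))
            start = None
    if start is not None:  # pulse extends to the end of the waveform
        rois.append((max(0, start - pad), len(waveform)))
    return rois

wf = np.zeros(1000)
wf[400:420] = 30.0  # injected toy pulse
print(find_rois(wf, pedestal=0.0, threshold_adc=10.0))  # [(395, 425)]
```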
At the end of this process the baseline has fewer fluctuations, and physical hits can be extracted from the waveforms by means of standard threshold discrimination. Once hits have been identified on both views, neighbouring hits are merged together to form 2-dimensional clusters. The clusters from each view are then matched together, allowing 3D track objects to be built by assigning a trajectory to the corresponding group of hits. On some occasions, clusters may be interrupted because of a malfunctioning or dead readout channel. During the operation of the TPC, about 14 channels (∼1% of the total) were found to be problematic. In addition, each strip immediately above the 2 mm copper guard ring surrounding the LEM borders [10] does not register a signal (four strips on view 0 and twelve on view 1). Those channels are removed from the track reconstruction algorithm to ensure that the missing hits do not interrupt the track fitting procedure.
While this approach permits reconstructing straight tracks with high efficiency, it is not designed to provide an accurate 3D representation of showering events which most of the time will be reconstructed as multiple small clusters. Since we have no a priori knowledge of the event topology, the procedure described above is applied on all data and shower-like events are separated from muon-track candidates by off-line tools (see section 4.1). The same procedure is applied to the MC sample where the noise and detector response are simulated according to data raw events.
Track reconstruction efficiency
The efficiency to reconstruct tracks in 3D, ε_rec, is estimated with the use of the flat phase space MC, where primary particles, referred to as MC-Particles, are generated over the 4π phase space. The noise and electron lifetimes are set to the Reference Run settings shown in table 2. The quality of the reconstruction in the simulation is computed by associating the reconstructed hits inside a track with the original energy deposit from the related GEANT trajectory. Tracks with a matched energy deposit greater than 50% are considered well reconstructed. More details on the method to compute the reconstruction efficiency are provided in [27]. The efficiency maps as a function of the polar and azimuthal angles θ and φ for a TPC operated at G_eff of 2 and 1.5 are shown in figure 13. The former corresponds to the Reference Run setting, and for the latter the lower effective gain is obtained by reducing the extraction field to 1.5 kV/cm. As can be seen, for a TPC operated at G_eff = 2, ε_rec is in general close to 100%. Specific geometrical topologies of near-vertical tracks (θ = 180°) or, to a lesser extent, tracks parallel to the strips of one view (φ = 180°, 90°, 0°, ...) are not efficiently reconstructed in 3D, because the matching of the 2D tracks fails due to an insufficient number of hits on one or both of the views. This is a purely geometrical effect for which the effective gain at which the TPC is operated has a minimal impact. Since the 2D hits are identified, an improvement in the 2D matching procedure could increase ε_rec in those regions. A 25% reduction in the effective gain translates into the same S/N reduction. As a consequence, the efficiency for near-horizontal tracks is greatly reduced, as visible on the right map of figure 13.
In the 3×1×1 m³ demonstrator, even though it was operated at a sub-optimal effective gain of around 2, cosmic tracks are therefore reconstructed with high efficiency regardless of their topology. At G_eff = 1.5, the efficiency begins to drop for certain track orientations but remains near 100% for the average polar angle of the MIPs selected by the trigger (θ = 120°). We note that those maps are generated for the specific case of the 3×1×1 m³ demonstrator, and clearly higher effective gains are needed for larger TPCs aiming to study neutrinos, with longer drifts and possibly larger noise RMS. We also do not discuss the impact of the gain on other fundamental features such as particle identification or the reconstruction of low-energy neutrinos.
Detector performance studies
In this section the detector performance in terms of effective gain and charge sharing for the Reference Run is discussed, taking into account the technical limitations encountered during data taking. Qualitative comparisons of data collected at various LEM fields with data from the 3L TPC are also discussed. As a reminder, the four corner LEMs of the TPC could only be operated stably at a maximum electric field of 24 kV/cm [10]. Unless otherwise specified, the hits belonging to channels of these LEMs are excluded from the analyses.
Muon track selection
Through-going muons are MIPs which deposit a known dE/dx of about 2.1 MeV/cm in liquid argon and are used to characterise the performance of the TPC in terms of effective gain. As explained in section 3.3, showers will often be reconstructed as many independent clusters which are subsequently fitted to straight lines, producing multiple short tracks. The distribution of the lengths of the reconstructed tracks is shown for both MC and data in the left panel of figure 14, where it can be seen that most of the tracks associated to showers have lengths in the tens of centimetres range. The MC sample is subdivided according to the particle type that belongs to the reconstructed cluster. A selection based on the length of the reconstructed track is thus an effective first step to distinguish showers from through-going muons. A cut requiring a track length ≥ 50 cm is applied, reflecting the point where an excess in data is observed over the main background due to electromagnetic (EM) showers.
To further clean up the sample, a second cut based on the amount of charge deposited transversely along the reconstructed 3D track is applied. While an EM shower may occasionally be reconstructed as a single track, since its Molière radius in LAr is about 9 cm [16], the transverse spread of the charge will be relatively large compared to a MIP track. In order to take advantage of this property of an EM shower, a new variable called the Charge Box Ratio (CBR) is defined using the fractional difference between the charge deposits in two boxes around the reconstructed track.
The amount of charge contained in the large white box in figure 15, Q_out, and the charge in the orange dashed box immediately adjacent to the reconstructed track, Q_in, are computed for each view. The CBR is then defined as the fractional difference between these two quantities with respect to the amount in the adjacent box, CBR = (Q_out − Q_in)/Q_in. A CBR of less than 20% in both views is required to tag the cluster as a MIP track. This value is based on detailed Monte Carlo simulation studies and verified through visual scans of the event displays of candidate tracks, so as to maximise the track selection efficiency and the muon sample purity.

Figure 14. Distribution of reconstructed track lengths and polar angles for all triggered events and selected MIPs from data, compared to the Monte Carlo sample at the Reference Run settings. The discrepancies between data and Monte Carlo for angles larger than 120 degrees are most likely coming from an imperfect simulation of the photomultiplier trigger.

Figure 15. Illustration of the algorithm which computes the charge box ratio on an EM shower reconstructed as a single track (left) and a muon candidate (right). The CBR is equal to 56% for the EM shower and 13% for the muon candidate. These event displays are from the Reference Run and only view 0 is shown. The inner dashed box has a transverse dimension of 3.5 cm and the outer one 9 cm.

Table 3 shows the effect of the track length and CBR cuts on a MC sample that reproduces the fraction of event categories expected in data. Of the total number of reconstructed tracks in the sample, only about 5.6% are from muons before the selection criteria are applied. When the track length cut of ≥ 50 cm is applied on the whole data set, about 93% of the EM showers and 85% of the hadronic showers are eliminated, while over 93% of the MIP-like tracks are preserved.
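A hedged sketch of such a CBR computation is shown below; the hit format and the use of the transverse distance from the track axis to emulate the boxes are simplifying assumptions, with the 3.5 cm and 9 cm sizes taken from the caption of figure 15.

```python
import numpy as np

def charge_box_ratio(hit_positions, hit_charges, track_start, track_end,
                     inner_cm=3.5, outer_cm=9.0):
    """CBR = (Q_out - Q_in) / Q_in, with hits classified by their
    transverse distance from the reconstructed track axis."""
    p0 = np.asarray(track_start, float)
    p1 = np.asarray(track_end, float)
    axis = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = np.asarray(hit_positions, float) - p0
    # Transverse distance of each hit from the track axis.
    transverse = np.linalg.norm(rel - np.outer(rel @ axis, axis), axis=1)
    q = np.asarray(hit_charges, float)
    q_in = q[transverse < inner_cm].sum()
    q_out = q[transverse < outer_cm].sum()
    return (q_out - q_in) / q_in

# A MIP-like track gives CBR close to 0; an EM shower spreads charge into
# the outer box and yields a large CBR (cf. 13% vs 56% in figure 15).
```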
When the cut CBR ≤ 20% is applied, only 0.4% and 1.9% of the tracks from EM and hadronic showers are left, respectively. These correspond to only 6% and 5.5% of the remaining tracks resulting from EM and hadronic showers respectively, with about 88.5% coming from MIPs. This means that, given the initial sample composition shown in the table, the muon purity increases from 5.5% of all tracks to 88.5% after the two selection cuts are applied. This also corresponds to an overall selection efficiency of 61% for muons and rejection efficiencies of over 99% and 98% for EM and hadronic showers, respectively, clearly demonstrating the effectiveness of the selection criteria. As a potential improvement for future analyses, a more sophisticated alternative selection method based on multivariate classifiers has also been developed to take advantage of the correlations between track and shower variables. The distribution of the selected MIP track candidates from data with the cut-based method described above is also shown in yellow in figure 14. As can be seen, the distributions of all MC and data reconstructed tracks are in relatively good agreement before and after the selection criteria are applied, illustrating that the detector acceptance is well reproduced by the simulated trigger. The distribution of selected MIP candidates in data is close to that of the reconstructed muons in the MC.
Space charge and field distortion
A non-uniform electric field applied on the active LAr volume of the detector worsens the energy and position resolutions and makes straight tracks appear bent and distorted. The drifting ionisation electrons would not travel upwards along a straight line with a uniform speed and could gain a horizontal drift component in a non-uniform field. In a LAr TPC, a non-uniform field could be caused by positive argon ions produced in the ionisation by a traversing charged track. Due to their large mass, the ionised argon atoms drift towards the cathode with much lower speed than the ionisation electrons and remain in the liquid for a much longer time. These positive ions can accumulate within the detector active volume and create a non-uniform high charge distribution that could locally distort the drift electric field.
In addition, in a DP LAr TPC, for every electron created in the LEM avalanche, a corresponding argon ion is created. These ions drift towards the gas-liquid boundary, and a fraction of them can enter the liquid, adding to the space charge already residing in the active volume. The rest remains trapped on the surface of the liquid, distorting the extraction field. We define the fraction of ions crossing the liquid-gas boundary as the ion back-flow. It is important to estimate the ion back-flow precisely, in order to understand how large an impact the ions created in the gas phase have on the space charge effect in the active liquid volume.
In order to estimate the magnitude of the space charge in the 3×1×1 m³ demonstrator, a MIP sample was selected in the data using the cuts described in section 4.1. Along the reconstructed path of each track, the local 3D distortion is computed as the distance with respect to a straight line connecting the start and end points of the track, as illustrated in the top left of figure 16. To determine the effect of the drift field distortions due to space charge, drift field maps with various fractions of the ion back-flow (from 0% to 90% in steps of 10%) are computed using the COMSOL software [29], with the detector technical parameters given in [10]. The simulations consider both the space charge effect of ions coming from the gas and that of ions produced in the liquid. From each of the obtained drift field computations, the time and space displacements of drifting electrons throughout the active volume are extracted and stored in a map used as an input to the GEANT4-based simulation. Figure 16 shows the average distortion of MIP-like tracks in data (bottom left), which displays a pattern similar to the shape of butterfly wings, compared to three different Monte Carlo simulations in the three plots on the right. The top right plot shows the case in which no ions, produced either in gas or in liquid, flow towards the cathode; hence only multiple scattering can affect the track path. As the resulting distortion averages to zero, multiple scattering alone cannot explain the observation of bent tracks in the data. The two remaining plots show 10% (middle right) and 90% (bottom right) ion back-flow fractions. As can be clearly seen, the higher the ion back-flow fraction, the better the MC samples mimic the field distortion observed in the data. While the 90% ion back-flow MC shows the clear emergence of a pattern similar to the data, the degree of the distortion and the detailed features observed in the data cannot be fully described by the ion back-flow fraction alone. It is clear from the data plot that additional factors, such as geometry and fringe effects, need to be investigated to fully determine the impact of the space charge.
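The local 3D distortion measure described above reduces to a point-to-line distance; a minimal sketch (with an assumed point format) is:

```python
import numpy as np

def local_distortion(points, start, end):
    """Distance of each trajectory point from the straight line
    joining the track's start and end points."""
    p0 = np.asarray(start, float)
    p1 = np.asarray(end, float)
    axis = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = np.asarray(points, float) - p0
    return np.linalg.norm(rel - np.outer(rel @ axis, axis), axis=1)

# A perfectly straight track gives zero everywhere; space-charge bending
# shows up as a non-zero bow along the track.
pts = [(0, 0, 0), (50, 1.2, 0), (100, 0, 0)]   # cm, toy sagitta of 1.2 cm
print(local_distortion(pts, (0, 0, 0), (100, 0, 0)))  # [0., 1.2, 0.]
```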
Since the space charge can affect the 3D reconstruction of tracks, adversely impacting the imaging capability of the detector, this effect should be further investigated, especially for large-scale future detectors with long drift distances and higher gains. In large volumes, ions will remain in the liquid for an even longer time. Additionally, the targeted LEM gain in future detectors is an order of magnitude higher than the one achieved in the 3×1×1 m³ detector; a high ion back-flow would thus add a significant amount of ions to the liquid. Both effects combined might therefore lead to significant distortions of the electric drift field. Due to the lack of data statistics, however, only qualitative measurements of space charge and ion back-flow effects could be made with the 3×1×1 m³ demonstrator. Therefore, in the following results, no corrections for the drift field non-uniformity are applied.
Effective gain, uniformity and charge sharing asymmetry
The quantity ΔQ/Δs is the charge locally deposited by the track in liquid argon per unit length, for each readout view, and is used to estimate the effective gain of the chamber. Figure 17 shows ΔQ₀/Δs₀ (left panel) and ΔQ₁/Δs₁ (right panel) for view 0 and view 1, respectively, from the sample of 3D reconstructed MIPs collected during the Reference Run. The effective gain is defined as the sum of the charge collected per unit length in each view divided by the average charge deposit of a MIP predicted by the Bethe-Bloch formula:

G_eff = (ΔQ₀/Δs₀ + ΔQ₁/Δs₁) / (ΔQ/Δs)_MIP.

Taking into account the electron-ion recombination, (ΔQ/Δs)_MIP ∼ 10 fC/cm. The distributions shown in figure 17 correspond to a chamber operated at G_eff = 1.9 at the field settings and purity conditions of the Reference Run, and lead to S/N = 12.0 on each view for MIPs at (θ, φ) = (90°, 45°). This result improves on our previous measurement presented in [10], thanks to the use of the charge readout response function described in section 2.3 and a more accurate muon selection. For the same reasons, the electron lifetime quoted in this paper is also improved with respect to the previous result. The CRP is composed of 50 × 50 cm² units, so we should ensure that the effective gain is uniform over the 3 × 1 m² area. The uniformity of the effective gain is illustrated in figure 18, which shows the collected charge per unit length in both views (ΔQ/Δs) averaged over each LEM as a function of the coordinates of the reconstructed MIPs. The fluctuation of the effective gain in the area corresponding to the 8 central LEMs is contained within a 5% tolerance.
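As a worked illustration of this definition (the per-view numbers below are assumed, chosen only to reproduce the quoted G_eff = 1.9):

```python
# MIP expectation after recombination, from the text: (dQ/ds)_MIP ~ 10 fC/cm.
DQDS_MIP_FC_CM = 10.0

def effective_gain_from_views(dqds_view0: float, dqds_view1: float) -> float:
    """G_eff = (dQ0/ds0 + dQ1/ds1) / (dQ/ds)_MIP, all in fC/cm."""
    return (dqds_view0 + dqds_view1) / DQDS_MIP_FC_CM

# Hypothetical per-view charge densities of 9.5 fC/cm each:
print(effective_gain_from_views(9.5, 9.5))  # -> 1.9, the Reference Run value
```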
In addition to demonstrating a good uniformity of the charge readout over a large area, it is important to verify the good charge sharing between the two views. The charge sharing asymmetry coefficient is defined as

A = (ΔQ₀/Δs₀ − ΔQ₁/Δs₁) / (ΔQ₀/Δs₀ + ΔQ₁/Δs₁).

This coefficient depends on the azimuthal angle φ and varies within ±1, as shown in figure 19 (right).
In particular, when φ = ±45°, ±135°, the track unit length Δs is the same in both views. In this case, the collected charge in each view should be equivalent, and A should therefore be equal to 0 (see figure 19 left). Despite the selection operated by the trigger conditions and the difficulties in the reconstruction of tracks parallel to the strips, the asymmetry remains within 15%. A more careful evaluation of the systematic uncertainties on the effective gain measurement and its stability, as well as on the uniformity of the CRP response, will be carried out in the ProtoDUNE-DP detector, to ensure that the dual-phase LAr TPC technology fits within the requirements set by the DUNE experiment.
Scans of extraction and LEM fields
Field scans enable the study of the detector response variations as a function of the high-voltage settings and of the interplay of the different fields in the final effective gain described in section 2.2. We consider here the scans of the extraction and the amplification fields summarised in table 2. The results from these scans, even if performed at sub-optimal settings and in restricted electric field ranges, can be used to qualitatively confirm the electron transmission efficiency at the liquid-gas interface shown in figure 5 and to quantify G_LEM as a function of the amplification field. The conditions of the field scans do not allow each of the fields to be studied independently. The combination of the field scans together with the simulated maps from figure 7 allows us to extract the extraction efficiency and the amplification inside the LEMs in the detector.

Figure 20. Left: liquid-to-gas transmission and extraction efficiencies as a function of the extraction field in LAr measured in the 3×1×1 m³ demonstrator. The measured points at 28 kV/cm obtained from the extraction scan are superimposed on the transmission prediction (yellow band), the convolution of the extraction efficiency and the LEM transparency. Right: G_LEM as a function of the amplification field (solid circles) measured in the 3×1×1 m³ demonstrator is compared to the best fit of the 3L TPC data (dotted line), which was equipped with a single 10×10 cm² LEM, both normalised to the transparency of each detector.
The effective gain is defined as G_eff = T · G_LEM (see eq. (2.2) in section 2.2), where the transparency T is defined as in eq. (2.8) by the product of the transmission and induction efficiencies. Due to the high voltage limitations, we could not operate the detector at induction fields above 1.5 kV/cm. However, we were able to study the transmission efficiency, the product of the extraction efficiency of electrons from the liquid to the gas phase and the efficiency of the electrons successfully passing through the LEM holes, ε_extr^LEM. The blue region of figure 20 (left) shows the expected evolution of the extraction efficiency from the liquid to the gas, ε_extr^liq, as a function of the extraction field in the liquid according to [12], as explained in section 2.4. The yellow band is the product of this efficiency (ε_extr^liq) with the ε_extr^LEM obtained from the simulated maps described in figure 7 for 28 kV/cm. The data from the extraction scan show a good agreement between the condition at which the demonstrator was operated and our prediction.
A scan of the amplification field in the limited high voltage region available was performed in order to study the amplification inside the LEM holes and the resulting G_LEM. The right panel of figure 20 shows G_LEM as a function of the amplification field for the scan performed in the 3×1×1 m³ demonstrator, compared to the results from the smaller 3L TPC scan in ref. [19]. G_LEM is extracted by using eq. (2.2). The transparency is computed from figure 7 in the case of the 3×1×1 m³ demonstrator data, and the value for the 3L TPC is obtained from [19]. The two results show the same trend as a function of the amplification field. Since the two detectors were operated at different pressures and temperatures (the 3×1×1 m³ demonstrator was operated at 1000 mbar, while the 3L TPC ran at 980 mbar), as well as at different extraction and collection fields, we do not expect the two fitted curves for the LEM gains to fully overlap.
The relative gain difference between the two curves is ≈ 20% (40%) at 28 kV/cm (33 kV/cm). It can be partially explained by a 2% residual variation in density between the two detectors according to figure 3. Another contribution could come from the charging up of the dielectric layer of the LEMs. While the data from the 3L TPC scan were collected before the dielectric layer of the LEM was fully charged up, we cannot conclude the same for the 3×1×1 m³ demonstrator. The high voltage stability issues prevented operating the 3×1×1 m³ demonstrator for a sufficiently long time to properly study the charging-up effect of the 50 × 50 cm² LEMs. It was observed in 10 × 10 cm² LEM modules that the effective gain reduces by a factor of about 3 after a characteristic time of about 1.5 days when operated at G_eff ≈ 100 [19,22,32].
The results from these scans suggest that the electron extraction from the liquid to the gas is similar to previous measurements. The electron avalanche in the LEMs appears to be similar to that measured with smaller 10 × 10 cm² devices. Additional field scans in future prototypes, such as the ProtoDUNE-DP LAr TPC demonstrator, would bring the opportunity for a better evaluation of the performance of the 50 × 50 cm² LEMs.
Conclusion
This paper presents the performance of the 3×1×1 m³ demonstrator during its cosmic-ray data-taking operation in summer 2017. While the high voltage limitations prevented reaching the optimal expected amplification performance, many properties of the detector could be studied.
A data-driven detector simulation including a realistic trigger based on the photo-detection system has been developed to reproduce the topology of the recorded tracks. The simulation has been used to tune an algorithm for track-shower separation, which is a crucial task for measuring the most critical performance metrics of the detector. The algorithm is based on the fractional charge difference deposited in two boxes surrounding the reconstructed track, and yields a muon selection purity of around 90% with respect to the cosmic-ray sample. Multivariate analysis techniques can be used in the future to further improve the purity of the selected sample. The data-driven Monte Carlo simulation was also used to study the impact of both signal-to-noise and geometrical effects on the efficiency of cosmic muon track reconstruction.
The study of straight muon-like tracks allowed us to measure the drift electron lifetime, which is connected to the LAr purity. We obtain a weighted average lifetime of 7 ms, consistent with that in [10] and stable over time. From the selected tracks, an effective gain of 1.9 was measured. The differences with respect to the number quoted in [10] are due to the improved track selection and the use of a different electronic response. The charge sharing asymmetry between the two views has been studied as a function of the azimuthal angle. In the case of φ = 45°, the asymmetry is centred at 0, indicating equal charge sharing between the two views. As described in [10], we confirm the extraction of electrons from the liquid to the gas phase over large areas of 3 m². Despite the lack of statistics, we address qualitatively the interplay among the different fields involved in a dual-phase detector, especially between the extraction and amplification fields.
The amplification gain in the LEMs has been measured and compared to that of the 3L TPC. A similar trend in the gain variation as a function of the amplification field is observed. However, the 3×1×1 m³ results have lower values, which could be explained by a 2% density variation between the two setups.
Despite the limitations, the results shown in this paper seem to confirm qualitatively the key aspects of the dual-phase technology, such as the extraction of the electrons from liquid to gas and their amplification through the entire CRP. The gain stability, purity and charge sharing between readout views also look in reasonable agreement with design expectations. However, an extensive study of the amplification process and of its stability over a large readout surface during an extended period of time is still necessary to ensure that all the requirements for a multi-kilotonne neutrino detector are fulfilled. Although these results have set another milestone towards the understanding of the dual-phase technology, future prototyping efforts are expected to confirm them under more controlled experimental conditions and with improved reconstruction techniques.
"Physics"
] |
Application of a new scheme of cloud base droplet nucleation in a spectral (bin) microphysics cloud model: sensitivity to aerosol size distribution
A new scheme of droplet nucleation at cloud base is implemented into the Hebrew University Cloud Model (HUCM) with spectral (bin) microphysics. In this scheme, the supersaturation maximum S_max near cloud base is calculated using theoretical results according to which S_max ∼ w^{3/4} N_d^{-1/2}, where w is the vertical velocity at cloud base and N_d is the droplet concentration. The microphysical cloud structure obtained in simulations of a midlatitude hail storm using the new scheme is compared with that obtained in the standard approach, in which droplet nucleation is calculated using the supersaturation computed at grid points. The simulations were performed with different concentrations of cloud condensation nuclei (CCN) and with different shapes of CCN size spectra. It is shown that the new nucleation scheme substantially improves the vertical profile of droplet concentration, shifting the concentration maximum to cloud base. It is also shown that the effect of the CCN size distribution shape on cloud microphysics is no less important than the effect of the total CCN concentration, and that the smallest CCN, with diameters less than about 0.015 μm, have a substantial effect on the mixed-phase and ice microphysics of deep convective clouds. Such CCN are not measured by standard CCN probes, which hinders the understanding of cold microphysical processes.
Introduction
Droplet concentration is the key microphysical parameter that affects precipitation formation and radiative cloud properties (Pruppacher and Klett, 1997). The droplet concentration determines major microphysical cloud properties such as the height of precipitation onset and the type of precipitation (liquid, mixed-phase, and ice) (Khain et al., 2008; Khain, 2009; Freud and Rosenfeld, 2012; Tao et al., 2012). Droplet concentration is determined by the concentration and size distribution of aerosol particles (APs) and by the maximum value of supersaturation near cloud base, S_max. S_max is reached a few tens of meters above cloud base (Rogers and Yau, 1996). The vertical grid spacing of most cloud-resolving models is too coarse to resolve this maximum, which can lead to errors in the determination of droplet concentration. It is therefore desirable to parameterize the process of droplet nucleation near cloud base. One approach to the parameterization is based on lookup tables developed using precise 1-D parcel models (e.g., Segal and Khain, 2006). The other approach is based on the analytical calculation of the supersaturation maximum, S_max, near cloud base. This approach has been developed in several studies using various assumptions concerning cloud condensation nuclei (CCN) activity spectra (Ghan et al., 1993, 1997; Bedos et al., 1996; Abdul-Razzak et al., 1998; Khvorostyanov and Curry, 2006; Cohard et al., 1998; Abdul-Razzak and Ghan, 2000; Fountoukis, 2005; Shipway and Abel, 2010). In these studies the calculation of the supersaturation maximum is reduced to solving a complicated integro-differential equation assuming different expressions for the CCN activation spectra. The parameters of the CCN activation spectra, as well as the concentration and shape of the CCN size distributions, are often prescribed in atmospheric models and assumed to be invariant over time. The results and a comparison of these approaches are presented by Ghan et al. (2011).
In cloud models with a comparatively high resolution (Kogan, 1991; Khain et al., 2015), the supersaturation S_w is calculated explicitly at each grid point. In these bin microphysics models, APs playing the role of CCN are described using aerosol size distribution functions containing several tens of size bins. The value of supersaturation is used to calculate the critical radius of APs using the Köhler theory. All CCN with sizes exceeding this critical value are activated to droplets. This approach will be referred to as the standard approach (ST); in it, the supersaturation maximum near cloud base is not resolved, and the vertical profile of supersaturation may not contain such a maximum. This leads to an underestimation of droplet concentration in clouds, at least in their lower part.
In a set of studies by Pinsky et al. (2012, 2013, 2014), the formation of profiles of supersaturation and droplet concentration was investigated both analytically and by means of a high-precision model of an ascending adiabatic parcel. Pinsky et al. (2012) proposed a simple method of calculating S_max near cloud base for a monodisperse aerosol size distribution. Detailed tests showed that the method can be applied to any CCN spectrum. Pinsky et al. (2014) gave a theoretical basis for this conclusion by calculating droplet concentrations using multidisperse size spectra of APs. The method of calculating droplet concentration near cloud base using S_max will be referred to as the new approach (NA).
In this study we investigate the effects of the application of NA on the microphysics of midlatitude deep convective clouds (a hail storm) using the Hebrew University Cloud Model (HUCM) with spectral-bin microphysics (SBM). The effect of the new approach is investigated in simulations with different parameters of the CCN activation spectra.
Model description
The HUCM is a 2-D, nonhydrostatic SBM model with microphysics based on solving a system of equations for the size distributions of liquid drops, three types of pristine ice crystals (plates, columns, and dendrites), snow/aggregates, graupel, hail, and partially frozen or "freezing" drops. Each size distribution is discretized into 43 mass-doubling bins, with the smallest bin equivalent to the mass of a liquid droplet of radius 2 µm. APs playing the role of CCN are also defined on a mass grid containing 43 mass bins. The size of dry CCN ranges from 0.005 to 2 µm.
Primary nucleation of each ice crystal type is described using the Meyers et al. (1992) parameterization. The type of ice crystal is determined by the temperature range in which the particles arise (Takahashi et al., 1991). Secondary ice generation is taken into account during riming (Hallett and Mossop, 1974). Collisions are described by solving the stochastic collection equations for the corresponding size distributions using the Bott (1998) method. Height-dependent, gravitational collision kernels for drop-drop and drop-graupel interactions are taken from Pinsky et al. (2001) and Khain et al. (2001); those for collisions between ice crystals are taken from Khain and Sednev (1995) and Khain et al. (2004). The latter studies include the dependence of particle mass on the ice crystal cross section. The effects of turbulence on collisions between cloud drops are included (Benmoshe et al., 2012). The collision kernels depend on the turbulence intensity and change over time and space.
The time-dependent melting of snow, graupel, and hail, as well as the shedding of water from hail, follows the approach suggested by Phillips et al. (2007). We have implemented the liquid water mass in these hydrometeor particles, which is advected and settles similarly to the mass of the corresponding particles. As a result, these particles are characterized by their total mass and by the mass of liquid water (i.e., the liquid water mass fraction). The liquid water fraction increases during melting. As soon as it exceeds ∼95%, the melting particles are converted to raindrops. The process of time-dependent freezing is described according to Phillips et al. (2014, 2015). The freezing process consists of two stages. The first, nucleation stage is described using the parameterization of immersion drop freezing proposed by Vali (1994) and Bigg (1953). Freezing drops with radii below 80 µm are assigned to plates, whereas larger drops undergoing freezing are assigned to freezing drops. The freezing drops consist of a core of liquid water surrounded by an ice envelope. Time-dependent freezing of liquid within freezing drops is calculated by solving heat balance equations that take into account the effects of accretion of supercooled drops and ice particles. A collision between a freezing drop and another hydrometeor leads to the freezing drops category if the freezing drop is larger than its counterpart; otherwise, the resulting particle is assigned to the type of the counterpart. Once the liquid water fraction in a freezing drop becomes less than some minimal value (< 1%), it is converted to a hailstone. Hail can grow either by dry growth or by wet growth (Phillips et al., 2014, 2015). Accordingly, liquid water is allowed in hail and graupel particles at both positive and negative temperatures. The shedding of water during wet growth is also included.
Water accreted onto aggregates (snow) freezes immediately at temperatures below 0 °C, where it then contributes to the rimed fraction. This rimed mass distribution is advected and settles similarly to the snow masses. The rimed mass increases the density of the aggregates. As the bulk density of snow in a certain mass bin exceeds a critical value (0.2 g cm⁻³), the snow from this bin is converted into graupel. The appearance of water on the surface of hailstones, as well as an increase in the rimed fraction of snowflakes, affects the particle fall velocities and coalescence efficiencies.
The initial size distribution of CCN (at t = 0) is calculated using the empirical dependence (i.e., the Twomey formula) of the concentration N_CCN of activated CCN on supersaturation S_w (in %): N_CCN = N_0 S_w^k, where N_0 and k are measured constants (Khain et al., 2000). The obtained aerosol size distribution is corrected in the zones of very small and very large CCN, that is, in the size ranges where the Twomey formula is invalid. At t > 0 a prognostic equation for the size distribution of non-activated CCN is solved. Using the value of S_w calculated at each time step and at each grid point, the critical radius of CCN particles is determined according to the Köhler theory. The CCN with radii exceeding the critical value are activated, and new droplets are nucleated. The corresponding bins of the CCN size distributions become empty. In ST, this procedure is used at all cloud grid points.
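A hedged sketch of this standard activation step is given below; the Köhler coefficients are placeholders, and only the logic (Twomey activation spectrum, critical radius, emptying of activated bins) is meant to follow the description above.

```python
import numpy as np

def twomey_activated(n0_cm3, s_percent, k):
    """Twomey law N_CCN = N0 * S^k (S in %), as used for the initial spectrum."""
    return n0_cm3 * s_percent**k

def koehler_critical_radius(s_max, A, B):
    """Dry critical radius from S_cr = (4A^3 / (27 B r_n^3))^(1/2), inverted.
    A and B are placeholder Koehler coefficients, not the model's values."""
    return (4.0 * A**3 / (27.0 * B * s_max**2)) ** (1.0 / 3.0)

def activate_bins(bin_radii, bin_conc, r_crit):
    """Empty all dry-CCN bins whose radius exceeds the critical radius."""
    bin_radii = np.asarray(bin_radii, float)
    bin_conc = np.asarray(bin_conc, float)
    activated = bin_radii >= r_crit
    n_new_droplets = bin_conc[activated].sum()
    return n_new_droplets, np.where(activated, 0.0, bin_conc)
```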
In NA, the droplet concentration at cloud base is calculated using the formula for S_max derived by Pinsky et al. (2012):

S_max = C w^{3/4} N_d^{-1/2},   (1)

where w is the vertical velocity at cloud base, N_d is the droplet concentration, and the coefficient C depends only slightly on the thermodynamical parameters (see Table 1 for notations). A brief derivation of Eq. (1) is presented in Appendix A. Since the droplet concentration at cloud base is equal to the concentration of CCN activated at S_w = S_max, the droplet concentration at cloud base can be calculated as

N_d = ∫_{r_n_cr}^{∞} f(r_n) dr_n,   (2)

where f(r_n) is the size distribution of dry APs and r_n_cr is the critical radius of CCN activated at S_max. According to the Köhler theory, the critical radius relates to S_max as

r_n_cr = (4A³ / (27 B S_max²))^{1/3},   (3)

where A and B are the coefficients of the Köhler equation for equilibrium supersaturation (see Table 1 for notations). Substituting Eq. (2) into Eq. (1), one obtains the following equation for S_max:

S_max = C w^{3/4} [∫_{r_n_cr(S_max)}^{∞} f(r_n) dr_n]^{-1/2}.   (4)

Taking into account the relationship in Eq. (3), Eq. (4) contains only one unknown, S_max. This equation is easily solved by iteration, calculating S_max, r_n_cr(S_max), and the concentration of nucleated droplets at cloud base at each time step.
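The iteration can be sketched as follows; the initial guess, the convergence strategy, and the discretised spectrum are illustrative assumptions, and the three numbered steps mirror Eqs. (1)-(3):

```python
import numpy as np

def solve_smax(w_ms, C, A, B, bin_radii, bin_conc, n_iter=50):
    """Fixed-point iteration for Eq. (4): S_max -> r_n_cr -> N_d -> S_max."""
    bin_radii = np.asarray(bin_radii, float)
    bin_conc = np.asarray(bin_conc, float)
    s_max = 1.0  # initial guess (illustrative)
    n_d = 0.0
    for _ in range(n_iter):
        r_crit = (4.0 * A**3 / (27.0 * B * s_max**2)) ** (1.0 / 3.0)  # Eq. (3)
        n_d = bin_conc[bin_radii >= r_crit].sum()                     # Eq. (2)
        if n_d <= 0.0:
            break  # no CCN large enough to activate at this supersaturation
        s_max = C * w_ms**0.75 / np.sqrt(n_d)                         # Eq. (1)
    return s_max, n_d
```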
The values of S_max were calculated at all grid points corresponding to cloud base, which is determined as the first grid point from below at which S_w ≥ 0.
Design of simulations
All simulations were performed within a computational domain of 153.9 km × 19.2 km, with a grid spacing of 300 m in the horizontal direction and 100 m in the vertical direction. Effects of NA on cloud microphysics were tested in simulations of a thunderstorm observed in Villingen-Schwenningen, southwest Germany, on 28 June 2006. The meteorological conditions (including the sounding) of this storm are described by Khain et al. (2011). The background wind was quasi-2-D, which simplified the prescription of the background wind profile in the 2-D model. The wind speed increased with height from ∼10 m s⁻¹ in the lower atmosphere to about 20 m s⁻¹ at levels of 100-200 mb. The surface temperature was 22.9 °C, and the relative humidity near the ground was high (∼85%), which led to a low lifting condensation level (LCL) of about 890 m. The freezing level was located at around 3.5 km. The observed maximum diameter of hailstones was about 5 cm. The convection was triggered by a cold pool, which is typical in simulations of long-lasting convection (Rotunno and Klemp, 1985).

Table 1. Notations.
Symbol   Description                                    Units
q_v      water vapor mixing ratio (air)                 kg kg⁻¹
q_w      liquid water mixing ratio                      kg kg⁻¹
r_max    drop radius at z = z_max                       m
S_w      S_w = e/e_w − 1, supersaturation over water    -
ρ_n      density of a dry aerosol particle              kg m⁻³
ρ_w      density of liquid water                        kg m⁻³
σ_w      surface tension of the water-air interface     N m⁻¹
ν_n      van't Hoff factor                              -
Three sets of simulations were performed, each simulation in two versions: according to ST, where the critical CCN radius was calculated using the supersaturation computed at the grid points from the values of temperature and humidity, and according to NA, where the critical CCN radius and S_max were determined from Eq. (4).
The first set of simulations aims at comparing the microphysics obtained with NA and ST in cases of high (N_0 = 3500 cm⁻³) and low (N_0 = 100 cm⁻³) CCN concentrations. The minimum CCN radii were set equal to 0.015 and 0.0125 µm, respectively. These values correspond to data according to which the nuclei mode (the smallest CCN) in marine aerosol size distributions contains aerosols smaller than the nuclei mode in continental or even urban cases (Ghan et al., 2011). Similar CCN size distributions were used by Khain et al. (2011). These simulations are referred to as E3500 and E100 (ST) and EN3500 and EN100 (NA), respectively.
In the second set of simulations the smallest CCN were added to the AP spectra. The large impact of the smallest CCN on the formation of ice crystals in cloud anvils was shown by Khain et al. (2012). The minimum CCN radii were taken equal to 0.006 and 0.003 µm in the cases of high and low CCN concentrations, respectively. These simulations are referred to as E3500-S, EN3500-S, E100-S, and EN100-S, where the symbol S denotes small APs.
In the first and the second sets of simulations the slope parameter k was assumed equal to 0.9.
The third set of simulations was similar to the second one but with the slope parameter k = 0.5. In many studies investigating the effects of aerosols on cloud microphysics, only the parameter N_0 is changed. However, the slope parameter determines the relationship between the concentrations of smaller and larger CCN, so the concentration of nucleated droplets also depends on the slope parameter. The simulations of the third set are referred to as E3500-S-05, EN3500-S-05, E100-S-05, and EN100-S-05. The size distributions of CCN in the simulations are shown in Fig. 1.
The CCN concentrations in the simulations are presented in Table 2. Although the difference between the total aerosol concentrations in the cases of k = 0.5 and 0.9 is not large, in the case of k = 0.5 the CCN size distribution contains more large CCN and fewer small CCN. These size distributions were assumed within the lower 2 km layer. Above this level, the CCN concentration in each mass bin was decreased exponentially with height. The model calculates supersaturation at the model grid points, which typically do not exactly coincide with the cloud base level where the supersaturation S_w = 0. We consider the first level where S_w ≥ 0 as the cloud base. Since the supersaturation maximum is reached not far from the cloud base level, especially in high-AP-concentration cases (Pinsky et al., 2012), we attribute the values of S_max to this level. Correspondingly, the difference between the droplet concentrations in NA and ST is also attributed to this level. Figure 2 shows vertical profiles of supersaturation calculated in ST and NA simulations in atmospheric columns where the velocity at cloud base was equal to 1 m s⁻¹. It is natural that the values of S_max are larger in the case of low CCN concentration as compared to the case of high CCN concentration. For the purposes of the present study, a more interesting finding is that the values of S_max calculated using NA are substantially larger than the S_w calculated at the model level associated with the cloud base in ST. The difference between NA and ST in the supersaturation values leads to a substantial difference in the droplet concentrations, especially in cases of high CCN concentration. Calculation of S_max at cloud base changes the vertical profile of supersaturation above it. While in ST the supersaturation changes only slightly or even increases with height within 100-200 m above cloud base, in NA the supersaturation decreases within this layer above the supersaturation maximum, in agreement with the theory (Rogers and Yau, 1989; Pinsky et al., 2012, 2013). To justify the values of supersaturation and droplet concentration obtained in NA, benchmark simulations using a parcel model were performed. The parcel model describes APs and drops using a drop size distribution defined on a mass grid containing 2000 mass bins (Pinsky and Khain, 2002). It calculates the growth of APs and droplets by solving the equation for diffusional growth written in its most general form, without using a parameterization of droplet nucleation. The time step used for solving the diffusional growth equation was 0.001 s. The model was used earlier for developing lookup tables relating the parameters of APs and the vertical velocity to droplet concentration (Segal and Khain, 2006). Simulations with the parcel model were performed for the same vertical velocity at cloud base, temperature, and CCN distributions as in the HUCM simulations. As can be seen from Fig. 2, the values of supersaturation and droplet concentration calculated using NA are much closer to those calculated using the parcel model than the values calculated using ST.
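For reference, the diffusional-growth step at the core of such parcel models can be sketched in the standard textbook form r dr/dt = G S (Rogers and Yau), where G lumps the thermodynamic factors; the value of G below is a placeholder, not the one used in the benchmark model.

```python
def grow_radius(r_m: float, s: float, dt_s: float = 0.001,
                G: float = 1.0e-10) -> float:
    """One step of dr/dt = G * S / r, via the exact update of r^2.
    G (m^2/s) is an assumed placeholder for 1/(F_k + F_d)."""
    return (r_m**2 + 2.0 * G * s * dt_s) ** 0.5

# Updating r^2 analytically keeps the step stable even for the smallest
# bins, which is one reason such models can afford a 0.001 s time step.
r = 1.0e-6             # 1 micron droplet
for _ in range(1000):  # one second of growth at S = 1% (fractional 0.01)
    r = grow_radius(r, 0.01)
```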
The model level associated with the cloud base (where S_w ≥ 0) is slightly higher than the LCL, where S_w = 0. At the same time, the calculations performed according to Pinsky et al. (2012) show that the level where S_w = S_max is located from about 20 m (for high CCN concentration) to about 60 m (for low CCN concentration) above the LCL. The estimations show, therefore, that the level where S_w = S_max is quite close to the model cloud base level. Accordingly, the droplet concentration determined at S_w = S_max is assigned to the corresponding grid point at the model cloud base.
High CCN concentration
In this section we compare the results for three pairs of simulations of clouds that developed in a highly polluted atmosphere. In NA the N_d maximum is reached near cloud base, and the droplet concentration decreases with height. This behavior of N_d(z) is more realistic than in ST, where N_d increases with height up to altitudes of 2-4 km, depending on the stage of storm evolution. This increase in N_d in ST is caused by the in-cloud activation of mid-size CCN which were not activated at cloud base in the standard approach. In NA, these CCN were activated at cloud base. There is, therefore, a negative feedback in the supersaturation-droplet concentration relationship: an underestimation of supersaturation at low levels in the ST simulations leads to the underestimation of droplet concentration and to a corresponding increase in supersaturation at comparatively small distances above cloud base. These results indicate that, in models where droplet nucleation is calculated only at cloud base, the correct calculation of S_max at cloud base is strictly necessary to obtain reasonable values of N_d in clouds. At a height of about 4-5 km, the droplet concentrations in ST and NA become nearly the same. Figure 4a and c also show that N_d is very sensitive to the slope parameter of the CCN activation spectrum. The maximum N_d reached at cloud base is about 1100 cm⁻³ in EN3500-S-05 (k = 0.5) as compared to ∼550 cm⁻³ in EN3500-S (k = 0.9). This difference is caused by the fact that in the case of k = 0.5 the concentration of CCN with sizes exceeding ∼0.015 µm (which are activated at cloud base) is larger than in the case of k = 0.9 (see Fig. 1).
The effect of the smallest CCN on N_d (and on the entire ice microphysical structure) becomes very important above 6 km. In simulations containing the smallest CCN, these CCN are activated, producing new small droplets at heights of around 6.5-8 km. The increase in N_d is shown in Fig. 4a and c by red arrows. These smallest CCN are not activated at cloud base even in NA (where S_max is larger than the S_w in ST). This in-cloud nucleation is caused by an increase in supersaturation at these levels due to a decrease in CWC (Fig. 4b, d) and an increase in vertical velocity (not shown). The increase in N_d by activation at high levels and its effect on the concentration of ice crystals in the anvils of deep convective clouds were also reported by Khain et al. (2012).
Since the slope parameter determines the concentrations of both the larger CCN and the smallest CCN, it also affects the concentration of droplets nucleated at high levels.
Vertical profiles of CWC (Fig. 4b, d) are typical of deep convective clouds developing in a highly polluted environment: the CWC is large and has its maximum at about 5 km, i.e., at a quite high altitude. Figure 5a shows the vertical profiles of the maximum concentration of plate crystals (in the HUCM, homogeneous freezing leads to the formation of plates) averaged over the mature stage of cloud evolution (from 4860 to 5460 s). The number concentration of ice crystals in E3500 and EN3500 (in which there are no smallest CCN in the initial CCN spectrum) is a factor of 5 lower than in the simulations with CCN spectra containing the smallest CCN. The results show that the ice crystal concentration in NA is only slightly higher than in ST. Thus, the concentration of ice crystals in cloud anvils is determined to a large extent by the concentration of the smallest CCN in the CCN spectra and is substantially less sensitive to the larger CCN, which are activated at cloud base. Figure 5b shows that this conclusion is valid for the entire period of the simulation. In agreement with Fig. 4c, the concentration of plates increased when NA was used (Fig. 5b). The comparative contributions of the smallest CCN and of the CCN additionally activated at cloud base in NA (as compared to ST) are shown in Fig. 5b by arrows.
Figure 6 shows the vertical profiles of the time-averaged maximum mass contents of ice crystals, snow, graupel, and hail + freezing drops at the mature stage of the storm. The maximum difference between the ice crystal mass contents takes place at ∼10-11 km, where ice crystals are produced by homogeneous freezing.
The most pronounced effect of NA is an increase in the accretion rate. In agreement with the results of simulations of aerosol effects on the ice microstructure of deep convective clouds (Khain, 2009; Tao et al., 2012; Khain et al., 2016), the intensification of riming leads to a decrease in the snow mass content and to an increase in the mass content of graupel (Fig. 6b, c). The presence of the smallest CCN leads to a further decrease in the snow mass content and to an increase in the graupel mass content. The smallest CCN lead to a higher supercooled droplet concentration and to an increase in the liquid mass available for riming (Fig. 4d, e).
Low CCN concentration
In this section we compare the results for three pairs of simulations in which clouds developed in an atmosphere with low CCN concentration: (a) E100 and EN100, (b) E100- The difference in droplet concentrations between the ST and NA simulations decreases with height. Although the relative difference in N d between NA and ST is very pronounced, the absolute difference is not large (about 20 cm −3 ). This low N d determines a typical maritime microphysical structure of clouds in both the NA and ST cases.
Figure 8 shows vertical profiles of the maximum values of droplet concentration and CWC averaged over the time period of 3420-4020 s (mature stage). One can see a dramatic difference in the profiles of droplet concentration and CWC at low CCN concentration as compared to high CCN concentration (Fig. 4). At low CCN concentration, droplet collisions are efficient and the droplet concentration decreases with height much faster than in polluted air. As a result, the CWC maximum at low CCN concentration is located at a height of 2 km, as compared to 5 km in the case of high CCN concentration. These differences determine the huge difference in the ice microphysics.
Figure 8 shows that both the droplet concentration and CWC are larger in NA than in ST. The main differences between droplet concentrations near cloud base are, however, determined by the difference in the slope parameter value: at k = 0.5 there are more CCN with sizes exceeding 0.015 µm than at k = 0.9 (Fig. 1). These CCN are activated at cloud base, leading to a higher concentration in simulations with k = 0.5, especially when NA is applied.
Efficient collisions (seen in the sharp decrease in CWC above z = 2 km) and rainfall decrease the droplet concentration. As a result, supersaturation increases and leads to in-cloud nucleation and an increase in droplet concentration already at distances of a few hundred meters above cloud base. However, since the concentration of CCN is low, the number of newly nucleated droplets in the simulations was only about 5-10 cm −3 . A second layer of intense in-cloud nucleation, caused by activation of the smallest CCN, is seen within the altitude layer from 4 to 8 km. The difference in droplet concentration within this layer is fully related to the existence/absence of the smallest CCN in the CCN size spectrum. The differences between droplet concentrations in the ST and NA simulations are not significant at these levels.
This result agrees with the case of high CCN concentration, in which the droplet concentration at higher levels is to a large extent determined by the smallest CCN in the CCN size spectrum.
Figure 9 presents the vertical profiles of maximum mass contents of ice crystals, snow, graupel, and hail + freezing drops at the mature stage of cloud evolution. Comparison with Fig. 6 shows that, with the exception of snow, the mass contents of the different ice hydrometeors at low CCN concentration are substantially lower than at high CCN concentration. The main reason for this difference is the lower CWC at low CCN concentration, which leads to less intense riming and, consequently, to slower growth of ice particles.
Figure 9 shows that the profiles of ice hydrometeors in NA and ST are similar. This means that the ice microphysics is to a large extent determined by the mass of supercooled droplets at high levels, which in turn is determined by the smallest CCN in the CCN size spectrum. The effects of the smallest CCN and of the shape of the CCN size spectrum on droplet concentration, and hence on ice microphysics, are much stronger than the effect of additional droplets nucleating at cloud base in NA. The reason for this was explained above.
The increase in the concentration of the smallest CCN and in droplet concentration leads to an increase in the ice crystal mass content at about the level of homogeneous freezing (Fig. 9a).
The mass content of snow decreases with an increase in the smallest CCN concentration, because intensification of riming of snow leads to its conversion to graupel (Fig. 9b). Consequently, the graupel mass content increases (Fig. 9c). With regard to the mass content of hail, the increase in the smallest CCN concentration leads to a decrease in the hail content above 6 km and to its increase below this level (Fig. 9d). The higher hail mass content above the 6 km level in the absence of the smallest CCN is likely related to the fact that the low droplet concentration leads to the formation of raindrops in high concentration. Although these raindrops are of comparatively small size, the total raindrop mass content is larger than in the case of higher drop concentration. These raindrops rapidly freeze above the freezing level, producing hail (actually frozen drops) with a total mass larger than at high CCN concentration. This effect is discussed in detail by Ilotoviz et al. (2016). In HUCM, frozen raindrops are assigned to the hail category due to their high density. When hail is defined as particles with sizes exceeding 1 cm, the amount of hail at low CCN concentration is negligible.
The higher hail mass content below 6 km in the presence of the smallest CCN can be attributed to intense conversion of heavily rimed graupel to hail, as well as to more efficient hail growth by riming. Note that hail particles forming in a deep convective cloud developing in a polluted atmosphere are larger than hail forming in a cloud developing in clean air (Ilotoviz et al., 2016). Due to their larger size, hail particles in the polluted case fall to the surface (Fig. 6d), while in clean air hail melts at 1.5 km in the absence of the smallest CCN, and in the vicinity of the surface if the CCN size spectrum contains the smallest CCN.
The impact on precipitation
Figure 10a shows the accumulated rain at the surface in polluted air. Accumulated rain reaches its maximum in EN3500-S-0.5, where the effect of the smallest CCN is combined with the effect of a comparatively large amount of large CCN. This synergetic effect of the smallest and large CCN is described by Khain et al. (2011). In most simulations, the masses of accumulated rain are quite similar.
Comparison of Fig. 10a and b shows that the accumulated rain at low aerosol concentration is lower than at high CCN concentration, in agreement with many previous studies. Accumulated rain in NA was found to be quite close to that in ST. The main differences in the values of accumulated rain at low CCN concentration are caused by the effects of the smallest aerosols, which increase the mass of precipitating ice particles.
The amount of hail at the surface in polluted air (Fig. 10c) is substantially larger than in clean air (Fig. 10d) due to the smaller sizes and faster melting of hail particles when the CCN concentration is low. The effect of APs on the size and amount of hail at the surface was investigated in detail by Ilotoviz et al. (2016).
The amount of hail at the surface in polluted air is slightly higher in EN3500-S-0.5 than in E3500-S-0.5 (Fig. 10c). We attribute this effect to a higher rate of riming in EN3500-S-0.5 due to a higher amount of supercooled water (Fig. 4b, d). There are no significant differences in the other polluted-air cases.
The main factor determining the differences in the amount of hail falling to the surface at low CCN concentration is the effect of the smallest CCN. The increase in the concentration of the smallest CCN leads to an increase in hail growth by riming.
With regard to the ratio of hail amounts in the experiments with and without the smallest APs, the more or less random intensification of convective cells may sooner or later affect this ratio. Since the mass of hail falling to the surface in clean air is very low, a larger computational area is required to obtain reliable statistics.
Conclusions
The sensitivity of the microphysics of deep convective clouds to the concentration of aerosols and to the shape of the aerosol size distribution is investigated using a new version of a 2-D spectral (bin) microphysics cloud model (HUCM). A new component of the model is the calculation of the maximum supersaturation at cloud base using the analytical expression derived by Pinsky et al. (2012). The cloud microphysical structure obtained using this expression is compared with that obtained with supersaturation calculated at model grid points.
The goal of the study was twofold: (a) to test the effects of the improved calculation of the supersaturation maximum near cloud base (new approach vs. standard approach) at different aerosol loadings and (b) to evaluate the sensitivity of cloud microphysics to the concentration and the shape of the size distribution of aerosol particles. In the simulations, the shape of the CCN size distributions was changed by changing the value of the slope parameter in the expression for the activation spectrum (the values k = 0.5 and k = 0.9 were used) and by adding the smallest CCN, with radii below 0.015 µm.
The values of S max near cloud base calculated by the theoretical analysis were found to be substantially larger than the supersaturation values calculated explicitly at the model grid points associated with cloud base. A comparison of the values of supersaturation at cloud base and droplet concentration in the model simulations with the corresponding values calculated using a benchmark parcel model showed that NA simulates cloud-base supersaturation and droplet concentration much more accurately than ST. Thus, the first main conclusion of the study is that the droplet concentration field in NA is substantially more realistic than in ST, with the maximum of droplet concentration in NA located near cloud base in agreement with classical results (Rogers and Yau, 1996). The increased droplet concentration makes the cloud base more pronounced. The improvement in the representation of the vertical profile of droplet concentration is especially significant in the case of high CCN concentration, where utilization of S max leads to a substantial increase in the concentration of droplets near cloud base. Thus, even at 100 m vertical resolution, it is necessary to use analytical expressions for S max . At low CCN concentration, the improved representation of droplet concentration above cloud base has a comparatively weak effect on cloud microphysics. This can be attributed to the fact that, since the available CCN concentration is low, the more accurate calculation increases the droplet concentration only slightly. As a result, intense warm rain rapidly arises in both NA and ST.
The error in the calculation of droplet concentration near cloud base in ST is compensated to a significant extent by in-cloud nucleation above cloud base. Indeed, in ST the droplet concentration increases with height up to a level of 4 km (Fig. 4a). The only reason for such an increase is the in-cloud nucleation of comparatively large CCN.
Models with microphysical schemes that do not describe in-cloud droplet nucleation should include calculation of S max at cloud base to avoid large errors in simulation of the microphysical cloud structure.
The second main conclusion is the high importance of the shape of the CCN size distribution. Cloud microphysics was found to be highly sensitive to the slope parameter of the CCN activation spectrum. The effect is comparable to that of changing the total CCN concentration via the intercept parameter N 0 . The utilization of k = 0.5 instead of k = 0.9 nearly doubled the droplet concentration near cloud base, with corresponding effects on cloud microphysics, in particular an increase in accumulated rain.
The third main conclusion is the high sensitivity of ice microphysics to the existence of the smallest CCN in the CCN size spectrum. In cases of both low and high CCN concentration, the differences in ice microphysics are determined to a large extent by the concentration of the smallest aerosols in the CCN spectra. In cases of high CCN concentration, the effect of the smallest CCN in NA becomes important above 5-6 km altitude, where they are activated, producing additional supercooled liquid droplets. The latter leads to an increase in the concentration of ice crystals above the level of homogeneous freezing by a factor of about 5 and to a doubling of the graupel mass maximum. The smallest CCN also influence hail size and mass content.
In the case of low CCN concentration, the smallest CCN also lead to an increase in the concentration and mass contents of ice crystals and to a significant increase in graupel and hail mass contents. Note that many CCN probes measure the concentration of CCN at supersaturations not exceeding 0.6 %. In this case the concentration of the smallest CCN, which remain non-activated at this supersaturation, remains unknown. Such measurements do not provide the information necessary for investigating mixed-phase and ice microphysics.
The accumulated rain amount in the case of high CCN concentration turned out to be higher than in the case of low CCN concentration. This result was discussed by Khain (2009) and Ilotoviz et al. (2016), who showed that the formation of hail increases the precipitation efficiency of midlatitude storms.
Ice precipitation (calculated in millimeters of melted hail) at the surface is much lower than liquid precipitation. Nevertheless, hail precipitation at the surface in the case of high CCN concentration is higher than in the case of low CCN concentration by an order of magnitude, in agreement with results by Khain et al. (2011) and Ilotoviz et al. (2016). This effect can be attributed to the formation of larger hail particles in the case of high CCN concentration (high supercooled mass content). The large hail particles reach the surface, while the smaller hail forming in the case of low CCN concentration melts without reaching the surface.
The concentrations of drops and ice crystals are important parameters determining cloud radiative properties. In this context, more accurate calculation of these concentrations using NA, as well as taking into account the effects of the smallest CCN, should improve the accuracy of the evaluation of cloud radiative properties. The proposed approach to calculating droplet nucleation at cloud base is simple to use and computationally efficient. It can be used in cloud-resolving models with different vertical grid spacings. The utilization of coarser vertical model resolution may lead to larger errors in cases when the droplet concentration at cloud base is calculated using supersaturations calculated at model grid points.
Data availability
Numerical codes of the model are available upon request.
Figure 1 .
Figure 1. The initial size distributions of aerosols near the surface in different simulations.
Figure 2 .
Figure 2. Examples of vertical profiles of the supersaturation above cloud base calculated using HUCM and a benchmark parcel model. Columns with w close to 1 m s −1 at cloud base were chosen for comparison. The values of S max in HUCM were calculated according to Pinsky et al. (2012). The values of droplet concentration calculated at cloud base in the different simulations are shown as well (see legend box).
Figure 3 shows the fields of droplet concentration N d at the developing stage of the cloud evolution in E3500-S-0.5 (a), EN3500-S-0.5 (b), E3500-S (c), and EN3500-S (d). The maximum N d in NA is reached at cloud base, which makes the cloud base well pronounced. The difference between droplet concentrations in the ST and NA experiments decreases with height. The highest droplet concentration is reached in simulations where the CCN activation spectrum was characterized by the slope parameter k = 0.5. This can be attributed to the fact that at k = 0.5 the aerosol spectrum contains more CCN that are activated at cloud base than at k = 0.9. Vertical profiles of the maximum values of droplet concentration and of cloud water content (CWC), averaged over time periods of storm development (panels a and b) and over the mature stage (panels c and d), are presented in Fig. 4.
Figure 4 .
Figure 4. Vertical profiles of the maximum values of droplet concentration (a, c) and CWC (b, d) in simulations with high CCN concentration. The profiles are obtained by averaging over the time period of 2400-3000 s (upper row) and over the time period of 4860-5460 s (bottom row). Panel (c) shows a zoom of panel (b) for large CWC.
Figure 5 .
Figure 5. Vertical profiles of (a) the maximum values of plate concentration and (b) time dependencies of the averaged plate concentration. The profiles are obtained by averaging over the time period of 4860-5460 s. The lower and upper arrows in panel (b) show the approximate contributions of the smallest CCN and of the additional CCN activated in NA, respectively.
Figure 6 .
Figure 6. Vertical profiles of the maximum values of mass content: (a) total ice crystals, (b) snow, (c) graupel, and (d) total hail and freezing drops in simulations with high CCN concentration. The profiles are obtained by averaging over the time period of 4860-5460 s.
Figure 8 .
Figure 8. Vertical profiles of the maximum values of droplet concentration (a) and CWC (b) in simulations with low CCN concentration (N 0 = 100 cm −3 ). The profiles are obtained by averaging over the time period of 3420-4020 s. The red arrow shows the increase in droplet concentration due to in-cloud nucleation in simulations with CCN spectra containing small CCN.
Figure 9 .
Figure 9. Vertical profiles of the maximum values of mass content: (a) total ice crystals, (b) snow, (c) graupel, and (d) total hail and freezing drops in the simulations with low CCN concentration. The profiles are obtained by averaging over the time period of 3420-4020 s.
Figure 10 .
Figure 10. Time dependencies of accumulated rain at the surface for polluted (a) and clean (b) air, and of accumulated hail at the surface for polluted (c) and clean (d) air in the different simulations.
Table 1 .
List of symbols.
Table 2 .
CCN concentrations in different experiments in the boundary layer. Columns: slope parameter; high CCN concentration (cm −3 ), without and with the smallest CCN; low CCN concentration (cm −3 ), without and with the smallest CCN.
"Environmental Science",
"Physics"
] |
Resveratrol prevents age‐related heart impairment through inhibiting the Notch/NF‐κB pathway
Abstract Resveratrol (RSV) is a natural polyphenol compound found in various plants that has been shown to have potential benefits for preventing aging and supporting cardiovascular health. However, the specific signal pathway by which RSV protects the aging heart is not yet well understood. This study aimed to explore the protective effects of RSV against age‐related heart injury and investigate the underlying mechanisms using a D‐galactose‐induced aging model. The results of the study indicated that RSV provided protection against age‐related heart impairment in mice. This was evidenced by the reduction of cardiac histopathological changes as well as the attenuation of apoptosis. RSV‐induced cardioprotection was linked to a significant increase in antioxidant activity and mitochondrial transmembrane potential, as well as a reduction in oxidative damage. Additionally, RSV inhibited the production of pro‐inflammatory cytokines such as interleukin‐1β (IL‐1β) and tumor necrosis factor‐α (TNF‐α). Furthermore, the expression of toll‐like receptor 4 (TLR4), nuclear factor kappa‐B p65 (NF‐κB p65), and Notch1 proteins was inhibited by RSV, indicating that inhibiting the Notch/NF‐κB pathway played a critical role in RSV‐triggered heart protection in aging mice. Moreover, further data on intestinal function demonstrated that RSV significantly increased short‐chain fatty acids (SCFAs) in intestinal contents and reduced the pH value in the feces of aging mice. RSV alleviated aging‐induced cardiac dysfunction through the suppression of oxidative stress and inflammation via the Notch/NF‐κB pathway in heart tissue. Furthermore, this therapeutic effect was found to be associated with its protective roles in the intestine.
responses, causing protein oxidation, lipid peroxidation, and DNA damage. Therefore, ROS can affect normal cell function and even lead to apoptosis (Wang et al., 2021). Fortunately, antioxidant agents can remove ROS, which slows down aging and reduces the risk of related diseases. As a result, some natural antioxidants have been used to prevent damage caused by oxidative stress in age-related diseases (Chen et al., 2023).
Resveratrol (RSV), a plant-derived polyphenol that is found in human foods such as peanuts, grapes, red wine, and berries, has attracted considerable attention due to its beneficial role in preventing aging by repairing the redox system (Koushki et al., 2018).
In addition to its anti-aging capacities, RSV has also been shown to possess protective effects against CVD by scavenging ROS and promoting the activities of various antioxidant enzymes (Carrizzo et al., 2013). Apart from the oxidative stress theory, inflammation has been strongly implicated in the pathogenesis of aging. In recent decades, accumulating studies have revealed that inflammation plays a critical role in causing age-related diseases, including cardiovascular diseases (Alemany et al., 2014; Steven et al., 2019).
It has been suggested that the nuclear factor kappa-B p65 (NF-κB p65) signaling pathway is involved in the occurrence and development of aging and its complications (Kanigur Sultuybek et al., 2019; Salminen & Kaarniranta, 2009). A growing number of investigations support Notch as an important upstream element that triggers NF-κB: Notch expression is upregulated in response to aberrant inflammation and then affects the downstream NF-κB pathway (Xiao et al., 2018). In recent years, scientific investigations have demonstrated the advantages of short-chain fatty acids (SCFAs) for human health. SCFAs are small organic monocarboxylic acids with carbon chain lengths ranging from two to six (Tain et al., 2021). SCFAs possess the ability to mitigate systemic inflammatory and metabolic illnesses (Li et al., 2020). Currently, there is limited evidence available to support the notion that RSV can prevent age-related heart dysfunction via blockade of the Notch/NF-κB pathway.
Additionally, in a D-galactose (D-gal)-induced aging model, it is currently unknown if RSV-induced heart protection is associated with SCFAs.
Therefore, we hypothesized that RSV mediates its cardioprotective effects by regulating the Notch/NF-κB pathway and SCFAs. In animal studies, D-gal is a common modeling agent used to simulate the natural aging process (Qian et al., 2021). It can induce metabolic disturbances in cells, alter oxidative enzyme activity, increase superoxide anion and oxidative product production, and cause damage to proteins, lipids, and DNA. Furthermore, animal aging models generated with D-gal can simulate the normal aging process (El-Far et al., 2021), resulting in apoptosis and inflammation, which ultimately contribute to aging. Therefore, this study aimed to investigate whether RSV could protect against D-gal-induced cardiac aging and to elucidate its underlying mechanisms using various parameters related to oxidative stress, inflammation, the Notch/NF-κB pathway, and SCFAs.

A protein isolation kit was procured from Kaiji Biological Co., Ltd.
A 10% neutral formalin fixative solution was supplied by Solaibao Technology Co., Ltd.
| Animals
A total of 60 female/male BALB/c Kunming mice aged 6-8 weeks (clean grade), with an average weight of 30.00 ± 2.00 g, were procured from Jiangxi University of Traditional Chinese Medicine in Jiangxi, China [Certificate Number SCXK (gan) 2018-0003]. The mice were housed in groups in an animal room under controlled temperature (18-22°C) and relative humidity (50%-60%) and maintained under standard laboratory conditions with natural dark and light cycles. They were allowed ad libitum access to tap water and food, and the litter, feed, and drinking water all met the relevant use standards. The guidelines for the Care and Use of Laboratory Animals (NIH Publication No. 85-23, revised 1996) were followed during all the experiments. All procedures described were reviewed and approved by the Animal Care Review Committee (approval number 0064257) at Nanchang University, China.
| Experimental protocols
Mice were randomly divided into five groups (n = 12): the control group, the D-gal group, the D-gal+RSV-low-dose group (RSVL, 6.25 mg/kg), the D-gal+RSV-medium-dose group (RSVM, 12.5 mg/kg), and the D-gal+RSV-high-dose group (RSVH, 25 mg/kg). All of the groups except the control group were intraperitoneally injected with D-gal at a dose of 100 mg/kg of body weight (BW) daily for 10 weeks. Starting from the seventh week, RSV was dissolved in physiological saline and administered to the three RSV-treated groups by intragastric administration at doses of 6.25, 12.5, and 25 mg/kg. The remaining two groups received the same volume of normal saline in a similar manner.
At the end of the 10th week, on the 70th day, all the mice were euthanized by cervical dislocation.
| H&E staining
Hearts were harvested for H&E staining analysis and fixed in a 10% formalin solution for 24 h. The tissues were then embedded in paraffin, cut into slices, and stained with hematoxylin-eosin. The tissue sections were observed under a light microscope (Nikon), photographed, and analyzed.
| Determination of myocardial apoptosis
Heart tissues were cut into pieces, and the tissue fragments were incubated with 250 μL of 1.25 mg/mL trypsin at room temperature for 10 min. Digestion was terminated with Dulbecco's Modified Eagle Medium (DMEM), and the digestive fluid was filtered, placed in a centrifuge tube, and centrifuged at 2000 r/min for 5 min. After discarding the supernatant, the precipitate was resuspended to yield cardiomyocytes and washed twice with PBS. Subsequently, Annexin V-FITC (AV) and propidium iodide (PI) staining solutions were added to the cardiomyocytes, which were incubated on ice for 10 min and in the dark for an additional 5 min. The apoptosis rate of cardiomyocytes in each group was determined by flow cytometry (Becton Dickinson).
| Determination of mitochondrial membrane potential (MMP) and ROS in cardiomyocytes
After digestion of heart tissue with collagenase II, mouse cardiomyocytes were isolated as described (Alam et al., 2020). Rho123 and DCFH-DA were employed to determine MMP and ROS, respectively. Mitochondria emit green fluorescence after taking up the Rho123 fluorescent dye, and the fluorescence intensity is positively correlated with MMP. DCFH-DA was used as a ROS-specific fluorescent probe, with the fluorescence intensity of DCF indicating the content of intracellular ROS, to which it is proportional (Zhou et al., 2017). In brief, well-grown cardiomyocytes were incubated with 10 mg/mL Rho123 or 20 μmol/L DCFH-DA at 37°C for 20 min in the dark; the medium was then removed, and the cells were rinsed with PBS three times. Finally, the samples were analyzed by flow cytometry (Becton Dickinson).
| Determination of cardiac antioxidant enzyme activity and MDA content
A 10% homogenate of heart tissue was prepared and used to measure the contents of T-SOD, GSH, CAT, and MDA using commercial kits with a microplate reader (Thermo Multiskan MK3 Microplate Reader; Thermo Fisher Scientific), following the manufacturer's instructions (Huang et al., 2018).
| Determination of cardiac cytokine secretion levels by ELISA
An appropriate amount of myocardial tissue was weighed and added to pre-cooled PBS buffer. The tissue was then homogenized at low temperature with a tissue homogenizer to obtain a 10% tissue homogenate. The homogenate was centrifuged, and the supernatant was aliquoted and stored at −80°C for subsequent use. IL-1β and TNF-α levels were ultimately measured according to the corresponding ELISA kit instructions. The absorbance was measured at 570 nm using a Multiskan MK3 microplate reader (Thermo Fisher), and the cytokine concentrations were determined from the standard curve.
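Reading concentrations "from the standard curve" is commonly implemented as a four-parameter logistic (4PL) fit of absorbance against the kit's standards, inverted for the unknowns; a hypothetical sketch, in which the standard concentrations and absorbance values are invented for illustration:

```python
# Hypothetical 4PL standard-curve fit for an ELISA readout; the standard
# concentrations and absorbance values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: lower asymptote, d: upper asymptote, c: inflection point, b: slope
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0])  # pg/mL (assumed)
std_abs = np.array([0.08, 0.15, 0.27, 0.48, 0.80, 1.20])     # A570 (assumed)

params, _ = curve_fit(four_pl, std_conc, std_abs,
                      p0=[0.05, 1.0, 60.0, 1.5], maxfev=10000)

def conc_from_abs(y, a, b, c, d):
    """Invert the 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

sample_abs = 0.62  # absorbance of a hypothetical unknown sample
print(conc_from_abs(sample_abs, *params), "pg/mL")
```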
| Western blot analysis
Heart tissue was harvested and homogenized, and proteins were extracted using lysis buffer. Protein content was determined by the Bradford Protein Assay (Bio-Rad Laboratories). Meanwhile, 12% or 15% separation gels and 5% stacking gels were prepared in strict accordance with the standard procedure. Protein samples were mixed with Laemmli sample buffer and boiled for 5 min. To ensure equivalent protein loading, 20 μg of protein was added to each well, and samples were electrophoresed for 15 min at 150 V and then at 100 V for 60, 90, or 120 min. After incubation with 5% nonfat milk powder in TBST buffer for 1 h at room temperature, the membranes were immunoblotted overnight at 4°C with primary antibodies (1:1000 dilution). Detection of primary antibodies was performed using HRP-conjugated secondary antibodies (1:3000 dilution). The resulting immune complexes were detected with ECL reagents, and images were analyzed on a Molecular Imager® system (Nikon).
Meanwhile, we used the anti-β-actin antibody as an internal control.
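The β-actin normalization implied here, with expression reported as a fold of a reference group (e.g. the "0.57-fold that of the D-gal group" in the Results), reduces to a ratio of ratios; a minimal sketch with invented densitometry values:

```python
# Sketch of beta-actin normalization and fold-change calculation for Western
# blot densitometry; the band intensities below are invented placeholders.

def fold_change(target, actin, target_ref, actin_ref):
    """Target band normalized to beta-actin, as a fold of a reference lane."""
    return (target / actin) / (target_ref / actin_ref)

# Hypothetical TLR4 / beta-actin intensities (arbitrary densitometry units)
dgal = {"tlr4": 1800.0, "actin": 1000.0}
rsvh = {"tlr4": 1080.0, "actin": 1050.0}

print(fold_change(rsvh["tlr4"], rsvh["actin"],
                  dgal["tlr4"], dgal["actin"]))   # ~0.57-fold of D-gal
```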
| Determination of colon contents, SCFAs, and pH in aging mice
Fecal samples (100 mg) were diluted in EP tubes at a ratio of 1:7 (w/v) with deionized water. The tubes were then exposed to ultrasonication for 5 min and placed in an ice bath for 20 min, and the samples were centrifuged at 4800 g for 20 min at 4°C. Finally, the supernatant was used for SCFA measurement by gas chromatography (GC; HP-5MS capillary column, 30 m × 0.32 mm, 0.25 μm) and for pH measurement using a pH meter (Tianjing analytic corp.). The analytical conditions used with the Agilent 6890N gas chromatograph were based on the method of Hu et al. (2012).
| Statistical analysis
Statistical analysis was performed using IBM SPSS Statistics software, version 23.0. The mean values of different experimental groups were compared using one-way ANOVA, followed by the Dunnett method for between-group comparisons. Data were expressed as mean ± SEM. p < .05 was regarded as statistically significant.
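The same analysis (one-way ANOVA followed by Dunnett's test against a single reference group) can be reproduced outside SPSS; a sketch using SciPy (scipy.stats.dunnett requires SciPy ≥ 1.11; the group values below are invented placeholders, not study data):

```python
# Sketch of one-way ANOVA followed by Dunnett's test, mirroring the SPSS
# analysis described in the text; group values are invented placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(4.0, 0.8, 5)   # e.g. apoptosis %, n = 5 per group
dgal    = rng.normal(17.9, 3.3, 5)
rsvh    = rng.normal(1.8, 0.5, 5)

f_stat, p_anova = stats.f_oneway(control, dgal, rsvh)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test compares each treatment group with the reference group
# (available in SciPy >= 1.11).
res = stats.dunnett(dgal, rsvh, control=control)
print(res.pvalue)  # one adjusted p-value per comparison vs. the control
```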
| Effects of RSV on the cardiac histopathological changes of aging mice
Figure 1a-e shows the cardiac tissue sections stained with H&E to observe the pathological structure in the current experiment. The control group (Figure 1a) exhibited a neat and intact heart structure, with well-arranged myocardial fibers and normal cardiac structure.
In contrast, myocardial fibers in the D-gal group were obviously fractured, as shown in Figure 1b, and the space between myofibrils was enlarged. Compared with the D-gal group, the RSV treatment groups exhibited a significantly more regular arrangement of myocardial fibers and ameliorated fiber breakage, and the cardiac structure tended to be intact, as shown in Figure 1c-e. In conclusion, these results suggest that RSV can ameliorate swollen and vacuolated cardiomyocytes and the disorganization of myofibrils in the mouse heart.
| Effects of RSV on apoptosis in myocardial cells
Apoptosis, which maintains the homeostatic balance between cell proliferation and programmed cell death, is essential for the development and maintenance of multicellular organisms (Voss & Strasser, 2020). Excessive apoptosis may be equally harmful to the organism, indicating that tight regulation of the apoptotic machinery is essential for survival. Hence, to investigate the effect of RSV on myocardial injury in D-gal-induced aging mice, the apoptosis level of cardiomyocytes was evaluated. As shown in Figure 1f,g, the percentage of cardiomyocyte apoptosis increased from 3.95 ± 0.77% in the control group to 17.88 ± 3.34% in the D-gal group, suggesting that D-gal enhances cardiomyocyte apoptosis. The percentage of cardiomyocyte apoptosis in the RSV-treated groups was notably reduced, and the effect grew with increasing RSV dosage (p < .001). In the RSVH group, the percentage of cardiomyocyte apoptosis was 1.84 ± 0.54%. These results demonstrated that RSV could attenuate the apoptosis of cardiomyocytes in aging mice and promote cardiomyocyte survival.
| Effects of RSV on MMP and ROS generation in myocardial cells
Mitochondria are known to be largely implicated in apoptosis activation, and the collapse of the MMP is an important early event of mitochondrial injury, leading to ROS overproduction and the release of apoptotic factors that finally induce apoptosis (Yi et al., 2022). After treatment with D-gal for 10 weeks, low Rho123 staining in cardiomyocytes reflected a markedly elevated loss of MMP when compared to the control group (p < .001; Figure 2a,c), and the collapse of MMP appeared in parallel with enhanced ROS generation (p < .001; Figure 2b,d). Treatment of D-gal-induced aging mice with RSV significantly alleviated the collapse of MMP and the enhancement of ROS compared with D-gal treatment alone (p < .001). These results indicated that RSV could potentiate MMP and inhibit ROS accumulation in myocardial cells of D-gal-induced aging mice, thereby protecting cardiomyocytes.
| Effects of RSV on antioxidant activities and lipid oxidation in aging mouse hearts
To investigate the role of RSV in antioxidant activities and lipid oxidation, T-SOD, CAT, GSH, and MDA were measured. As shown in Figure 3a, T-SOD and CAT are cellular antioxidant enzymes, and treatment of mice with D-gal alone for 10 weeks significantly reduced their activities compared to the control group (p < .05). Similarly, GSH acts as a nonenzymatic antioxidant, and the data showed that its level was significantly decreased in the D-gal group. Meanwhile, MDA, an important product of lipid peroxidation, was significantly increased in the D-gal group, confirming that oxidative stress is implicated in the pathogenesis of aging and CVD. Correspondingly, RSV inhibited these changes in T-SOD, CAT, GSH, and MDA in D-gal-treated mice (p < .05), and the beneficial effects were most pronounced in the RSVH group.
These findings indicated that RSV protected mice from age-related heart impairment by modifying the redox system in vivo.
| Effects of RSV on the expression of cytokines in aging mouse hearts
IL-1β plays a key role in immune response, autoinflammation, and tissue repair (Yi et al., 2022). TNF-α is a pleiotropic cytokine that induces a variety of cellular responses, ranging from inflammatory cytokine production to cell survival, cell proliferation, and even cell death (Karki et al., 2021). As shown in Figure 4a, the levels of IL-1β and TNF-α in the D-gal group were significantly higher than those in the control group (p < .01). Compared with the D-gal group, the concentrations of IL-1β and TNF-α were significantly reduced in the RSV-treated groups (RSVL group vs. D-gal group, p < .05; RSVM and RSVH groups vs. D-gal group, p < .01). In the RSVH group, the concentrations of the proinflammatory cytokines IL-1β and TNF-α were 116.44 ± 5.7 pg/mL and 58.91 ± 5.45 pg/mL, respectively. These findings provide evidence that RSV inhibits the expression of IL-1β and TNF-α in the hearts of aging mice.
| Effects of RSV on the expression of TLR4 and NF-κB p65 proteins in aging mouse hearts
The NF-κB pathway encompasses an important family of inducible transcription factors, which are critical for the regulation of gene expression in response to injury and inflammatory stimuli. Accordingly, TLR4 may propagate the NF-κB pathway and facilitate the signal cascade, thereby contributing to the expression of inflammatory mediators and ultimately mediating myocardial injury (Qi et al., 2019). Thus, TLR4 and NF-κB expression were investigated (Figure 4c). The expression of TLR4 in the D-gal group was higher than that of the control group (p < .001). Compared with the D-gal group, the expression of TLR4 in the RSV-treated groups was diminished (p < .001), and these beneficial effects were proportional to the RSV dosage. In the RSVH group, the expression of TLR4 was 0.57-fold that of the D-gal group, indicating that TLR4 expression was dramatically blocked in the hearts of aging mice. Furthermore, we also analyzed the content of NF-κB p65 (Figure 4b). Likewise, the data showed that the expression of NF-κB p65 was significantly boosted compared to the control group (p < .001), and this change was significantly reduced in aging mice by RSV treatment (p < .001). The expression of NF-κB p65 was lowest in the RSVH group. Therefore, it was concluded that RSV possesses cardioprotective effects involving inhibition of the TLR4-mediated NF-κB pathway in aging mice.
| Effects of RSV on the expression of Notch1 protein in aging mouse hearts
Notch signaling has recently attracted much attention due to its potential involvement in the onset and progression of CVD. It is believed to interact with the NF-κB pathway, but the exact mechanisms are yet to be fully understood. In this study, the expression of Notch1 was found to be significantly higher in the D-gal group than in the control group (p < .001; Figure 4d). However, the expression of Notch1 was significantly reduced after treatment with RSV in the heart tissue of aging mice (p < .001). Our findings suggest that RSV-mediated downregulation of Notch1 expression was associated with the inhibition of the TLR4 and NF-κB p65 proteins.
These findings support a crosstalk mechanism between Notch and the NF-κB pathway and indicate that the cardioprotective effects are related to the Notch/NF-κB pathway.
| Effects of RSV on colon content SCFAs and pH in a D-gal-induced cardiac aging mouse model
SCFAs serve as a link between the microbiota and host cells. As shown in Figure 5a, we investigated colon content SCFAs and pH in a D-gal-induced cardiac aging mouse model. The results showed a significant decrease in six SCFAs (acetic acid, propionic acid, n-butyric acid, i-butyric acid, n-valeric acid, and i-valeric acid) in the colon content of the D-gal group compared to the control group (p < .05). In contrast, treatment of D-gal-induced aging mice with RSV showed certain activity in recovering and increasing these SCFAs. Specifically, compared with the D-gal group, treatment with RSVM and RSVH significantly enhanced iso-butyric acid in the colon (p < .05). As for the other five SCFAs, RSV displayed a significant difference when compared to the D-gal group. We subsequently investigated the impact of RSV on intestinal pH. As shown in Figure 5b, the pH in the D-gal group was appreciably higher than that in the control group (p < .05). Compared with the D-gal group, the RSV treatment groups showed a decline in pH, among which the high-dose group manifested the most evident and statistically significant decrease (p < .05). Collectively, these data show that RSV can regulate SCFAs and pH in D-gal-induced cardiac aging mice.
| DISCUSSION
The incidence of cardiac dysfunction among the elderly is increasing, leading to a reduction in their quality of life and placing a huge burden on families and society (Zhang et al., 2019). RSV, a naturally occurring phytoalexin, has demonstrated various bioactivities, including antioxidant effects, anti-aging effects, and cardioprotective effects (Xueyan et al., 2018). In this study, we confirmed the beneficial effects of RSV against D-gal-induced heart impairment in mice. Furthermore, we addressed the multiple mechanisms responsible for its cardioprotective effects, including its influence on oxidative stress, inflammatory, and apoptotic markers, as well as intestinal status.
ROS plays a significant role in the pathogenesis of aging in mammalian cells, and many age-related chronic diseases, such as CVD, may result from ROS damage to biological macromolecules such as lipids, proteins, or DNA (Dubois-Deruy et al., 2020). To eliminate ROS, the body is equipped with SOD, CAT, and GSH (Yi et al., 2020).
Typically, MDA serves as a biomarker for ROS-induced lipid peroxidation (Razi & Malekinejad, 2015). Apoptosis is largely induced by the accumulation of ROS, which damages DNA and leads to a collapse of MMP and cytochrome c leakage, triggering caspase activation (Kim et al., 2020). In our study, we observed increased levels of the cellular antioxidant (GSH) and the antioxidant enzymes T-SOD and CAT in the heart tissues of aging mice treated with RSV. Additionally, MDA was significantly reduced in the heart tissue of aging mice that received RSV treatment. RSV treatment also attenuated apoptosis, the loss of MMP of cardiomyocytes, and the ROS content in aging mouse cardiomyocytes, suggesting that RSV exerted cardioprotective effects by inhibiting apoptosis. Inflammation plays a significant role in the pathogenesis of CVD.
RSV has been reported to exert multiple biological activities against inflammation, oxidative stress, and aging (Li et al., 2018). Our results demonstrated that in the D-gal-induced group, the levels of IL-1β, TNF-α, TLR4, and NF-κB p65 and the apoptotic responses were significantly enhanced. Furthermore, RSV markedly alleviated these phenomena through the NF-κB pathway, suggesting that RSV can regulate inflammation and promote cardiac health in aging mice. Recently, it has been reported that Notch and NF-κB play a key role in contact-dependent activation that regulates cell fate decisions. This constitutes an exciting new field for illustrating the complex mechanisms underlying the cardioprotection of RSV (Osipo et al., 2008).
In many physiological processes, such as cell growth, proliferation, differentiation, and organ development, activation of the Notch signaling pathway is critical (Samarajeewa et al., 2019). Furthermore, it is involved in various pathological processes, such as oxidative stress and inflammation, as well as in cardiovascular development and differentiation (Nistri et al., 2017). Mammalian Notch receptors are classified as Notch 1-4, together with five transmembrane ligands (Jagged1, Jagged2, Delta-like1, Delta-like3, and Delta-like4; Borggrefe & Oswald, 2009). Studies have shown that adenoviral delivery of the Notch1 intracellular domain (NICD) after myocardial infarction may improve hemodynamics, suggesting that Notch gene re-expression is an adaptive response secondary to myocardial injury and is closely associated with the repair and regeneration of viable myocardium (Kasahara et al., 2013).
Notch also plays a cardioprotective role by upregulating Hes-1 (Fang et al., 2020). Reports have shown that almost all subtypes of Notch receptors and ligands are upregulated to different degrees during myocardial remodeling (Anbara et al., 2020).
Notch is connected with the NF-κB signaling pathway at various levels. For instance, NF-κB upregulates the transcription of one of the ligands of the Notch receptor (Jagged-1; Moore et al., 2020).
Additionally, NF-κB stimulates the expression of hes-5 and Deltex-1, both of which are targets of Notch (Osipo et al., 2008). Our results demonstrated a significant increase in the expression of Notch1 protein in the D-galactose-induced group, which was significantly reduced by RSV treatment. Therefore, RSV may exert its cardiac protective effects by inhibiting the Notch/NF-κB signaling pathway.
Currently, the relationship between gut microbes and aging has been attracting increased interest from researchers. SCFAs, metabolites of intestinal microorganisms, are important physiologically functional substances in the intestinal tract. Acetate, a predominant SCFA found in both the gut and the systemic circulation, has been observed to attenuate CVD (George et al., 2020). In addition, a possible mechanism underlying the anti-inflammatory effects of SCFAs is their utilization as metabolic substrates. In comparison to lipids, SCFAs bypass multiple β-oxidation cycles prior to their entry into the tricarboxylic acid cycle, thereby reducing oxygen consumption and the production of ROS (Evans et al., 2020).
ROS accumulation is positively associated with inflammation (Rizwan et al., 2014). Our results showed that SCFAs were notably reduced in D-gal-induced aging mice, while SCFA species and content increased after RSV treatment. Strikingly, RSV improved the level of intestinal SCFAs in the D-gal-induced aging mice.
Moreover, the present study also uncovered that RSV can reduce intestinal pH, which facilitates the growth and proliferation of probiotics and thereby has a protective effect against the body's aging process. Therefore, our findings suggest that the cardioprotective effects of RSV are linked to its role in increasing the level of intestinal SCFAs and decreasing intestinal pH. However, direct evidence of this gut-heart crosstalk has not been confirmed in the present study, since we did not investigate the gut microbiota community and its metabolites.
F I G U R E 1 Pathological observation of heart tissue sections in mice and flow cytometric analysis of cardiomyocytes in aging mice. H&E staining showed that RSV protected against the structural damage caused by D-gal; scale bar = 200 μm. (a) H&E stain of the cardiac tissue in the control group. (b) H&E stain of the cardiac tissue in the D-gal group. (c) The RSVL group's cardiac tissues were stained with H&E. (d) The RSVM group's cardiac tissues were stained with H&E. (e) Cardiac tissues in the RSVH group were stained with H&E. (f) A flow cytometry assay showed that RSV attenuated apoptosis of cardiomyocytes in mice. (g) Quantitative analysis of cardiomyocyte apoptosis percentages in aging mice. Control group: treatment with normal saline; D-gal group: 100 mg/kg, D-gal-treated alone; RSVL group (D-gal+RSV-low-dose group): treatment with D-gal (100 mg/kg) and RSV (6.25 mg/kg); RSVM group (D-gal+RSV-medium-dose group): treatment with D-gal (100 mg/kg) and RSV (12.5 mg/kg); RSVH group (D-gal+RSV-high-dose group): treatment with D-gal (100 mg/kg) and RSV (25 mg/kg). Data are presented as mean ± SEM (n = 5 for each group). ### p < .001 versus control group; ***p < .001 versus D-gal group.
F I G U R E 2 Flow cytometric analysis of mitochondrial membrane potential and ROS in aging mouse cardiomyocytes. (a) Flow cytometric analysis of mitochondrial membrane potential was performed with Rho123. RSV increased the levels of MMP in aging mouse cardiomyocytes compared to the D-gal group. (b) ROS distribution in aging mouse cardiomyocytes was measured by flow cytometry. Compared with the D-gal group, RSV reduced the level of ROS in the cardiomyocytes of aging mice. (c) Quantitative analysis of the fluorescence intensity of Rho123 in aging mice. (d) Quantitative analysis of the fluorescence intensity of DCF in aging mice. Control group: treatment with normal saline; D-gal group: 100 mg/kg, D-gal-treated alone; RSVL group (D-gal+RSV-low-dose group): treatment with D-gal (100 mg/kg) and RSV (6.25 mg/kg); RSVM group (D-gal+RSV-medium-dose group): treatment with D-gal (100 mg/kg) and RSV (12.5 mg/kg); RSVH group (D-gal+RSV-high-dose group): treatment with D-gal (100 mg/kg) and RSV (25 mg/kg). Data are presented as mean ± SEM (n = 5 for each group). ### p < .001 versus control group; ***p < .001 versus D-gal group.
F I G U R E 3 Cardiac antioxidant enzyme levels and MDA content in mice. The activities of T-SOD, CAT, and GSH were significantly increased in the RSV groups compared with the D-gal group. RSV significantly reduced the MDA content when compared with D-gal. (a) The effects of RSV on T-SOD in aging mouse hearts. (b) The effects of RSV on GSH in aging mouse hearts. (c) The effects of RSV on CAT in aging mouse hearts. (d) The effects of RSV on MDA in aging mouse hearts. Control group: treatment with normal saline; D-gal group: 100 mg/kg, D-gal-treated alone; RSVL group (D-gal+RSV-low-dose group): treatment with D-gal (100 mg/kg) and RSV (6.25 mg/kg); RSVM group (D-gal+RSV-medium-dose group): treatment with D-gal (100 mg/kg) and RSV (12.5 mg/kg); RSVH group (D-gal+RSV-high-dose group): treatment with D-gal (100 mg/kg) and RSV (25 mg/kg). Data are presented as mean ± SEM (n = 5 for each group). # p < .05 versus control group; *p < .05 versus D-gal group.
F I G U R E 4 Effect of RSV on cytokine expression and Notch/NF-κB pathway-related protein expression in the heart tissue of mice. (a) Both IL-1β and TNF-α were significantly lower in the RSV-treated groups than in the D-gal group. (b) When compared to the D-gal group, RSV treatment substantially reduced the expression of NF-κB p65. (c) RSV treatment reduced TLR4 expression significantly compared to the D-gal group. (d) RSV treatment significantly reduced Notch1 expression compared to D-gal treatment. Control group: treatment with normal saline; D-gal group: 100 mg/kg, D-gal-treated alone; RSVL group (D-gal+RSV-low-dose group): treatment with D-gal (100 mg/kg) and RSV (6.25 mg/kg); RSVM group (D-gal+RSV-medium-dose group): treatment with D-gal (100 mg/kg) and RSV (12.5 mg/kg); RSVH group (D-gal+RSV-high-dose group): treatment with D-gal (100 mg/kg) and RSV (25 mg/kg). Data are presented as mean ± SEM (n = 5 for each group). ## p < .01 versus control group; ### p < .001 versus control group; *p < .05 versus D-gal group; **p < .01 versus D-gal group; ***p < .001 versus D-gal group.
Overall, our data indicated that RSV protected against age-related heart impairment by reducing histopathological changes, attenuating apoptosis, and suppressing oxidative stress and inflammation via the Notch/NF-κB pathway.

F I G U R E 5 The effect of RSV on SCFAs and pH in aging mice. (a) Secretion levels of SCFAs in the colonic contents of mice. RSV treatment increased the secretion levels of acetic acid, propionic acid, n-butyric acid, iso-butyric acid, n-valeric acid, and i-valeric acid. (b) The pH of colon contents in mice. In comparison with the D-gal group, the RSV groups had a lower pH. Control group: treatment with normal saline; D-gal group: 100 mg/kg, D-gal-treated alone; RSVL group (D-gal+RSV-low-dose group): treatment with D-gal (100 mg/kg) and RSV (6.25 mg/kg); RSVM group (D-gal+RSV-medium-dose group): treatment with D-gal (100 mg/kg) and RSV (12.5 mg/kg); RSVH group (D-gal+RSV-high-dose group): treatment with D-gal (100 mg/kg) and RSV (25 mg/kg). Data are presented as mean ± SEM (n = 5 for each group). # p < .05 versus control group; *p < .05 versus D-gal group.
"Biology",
"Medicine"
] |
Diagnosing dominance: Problematic sandhi types in the Chinese Wu dialect of Jinshan
The concept of phonological dominance plays an important role in typologizing tone sandhi behavior in Sinitic languages. We confront the diagnostic criteria for dominance with data from Jinshan, a northern Wu dialect with complex disyllabic lexical tone sandhi. Auditory and acoustic data for the seven Jinshan tones in monosyllabic words are presented and compared with their realization in several different types of disyllabic word tone. It is argued that the current criteria for deciding the dominance of tone sandhi require refinement; the refined criteria, when applied, reveal examples of both left and right dominance within the same lexical tone sandhi system.
Introduction.
Language likes to exploit the polarity of metrical strength, one striking example being the typological difference found in the highly complex morphotonemics of China's eastern coastal provinces (Ballard 1984, Zhang 2007). In the northern Wu dialects, in the north of what Norman (1988: 202) called the sandhi zone, the sandhi shape of words is said to be determined by the tones of the morphemes on syllables at the beginning of a word. The word-initial morphotoneme often spreads onto following syllables, thus obliterating any non-initial tonal contrast. This tone spreading was first documented in the northern Wu dialect of Tangxi (Kennedy 1953), long before autosegmental phonology. Shanghai is a canonical example (e.g. Zee & Maddieson 1979). This type of tone sandhi is often called left-dominant. In the southern Wu and northern Min dialects in the middle of the sandhi zone, lexical sandhi behavior is said to be almost exactly the opposite (Pan 1991: 287): it is the morphotonemes on the word-final syllables which determine the word's tonal shape. The tone on the word-final syllable is said to be 'preserved', 'unchanged' or 'in agreement with' the citation tone, and tonal contrasts on the preceding syllables tend to be neutralised, although the neutralisation groupings are often bewilderingly complicated. This type is often called right-dominant.
Although northern Wu varieties are described as typically left-dominant, and southern Wu as right-dominant, both types of sandhi can in fact be found in a single variety, where the difference is usually associated with different morpho-syntactic structures like words and phrases. One of the things we show in this paper is differences in dominance within a single system of lexical tone sandhi. But our paper's main aim is to assay the diagnostic criteria for dominance in greater depth using citation tone and sandhi data from the Chinese northern Wu dialect of Jinshan (Rose & Yang 2022), and to focus on some of the problems in applying the concept to Jinshan tone sandhi. Jinshan is a good choice: its tone sandhi is much more complex than that of its immediate neighbor Shanghai, which is often cited as an archetypical left-dominant variety. Jinshan thus provides much better data for testing the notion of dominance. As will be seen from our multispeaker tone acoustics, we also take seriously Zhang's (2014: 457) comment on the need for "careful phonetic descriptions of the acoustic correlates for stress and tone" to reduce the uncertainty inherent in the typically impressionistic descriptions upon which much of modern tonology is still based.
Previous work.
The typological dominance distinction in Sinitic varieties was, to the best of our knowledge, first proposed in Ballard (1984), who used the term focus for the feature. Ballard's focus feature is binary, with two values relevant to the beginning and ending of words: left and right. Ballard mentions several diagnostic features for focus, including segmental contrasts in Onset and Coda position, phonetic features of stress and spreading, and preservation of tone. However, from the number of times it is mentioned, preservation of tonal contrasts (p. 5 et pass.) is for him the most important. Left focus is associated with preservation in the first position in lexical tone sandhi groups; right focus with preservation in the last position. Ballard does not specify what he means by tone, although it is likely from his use of tonal contrast that he has toneme in mind.
Later, Yue-Hashimoto (1987) used the notion of dominance in her typology of tone sandhi patterns in Chinese dialects. She recognises four types of sandhi, which she calls (pp. 451, 452) first-syllable dominant, last-syllable dominant, merging and local modification. Dominance is involved in the first three types. Her single criterion for dominance (p. 451) is the relationship between sandhi tones and citation tones (i.e. the tone used when reading a Chinese character out). Dominance can be identified by identity or similarity between citation tone and sandhi tone on the first or last syllable of the sandhi domain (p. 451). The merging type is the absence of similarity or identity between sandhi form and citation form.
In describing the Wu tone sandhi types he surveyed, Qian (1997: 613) classifies the most common types under two headings: 前主后附 (lit. front dominant, rear appended) and 后主前附 (lit. rear dominant, front appended). In the first type, it is the initial tone which determines the sandhi and the final tone which is appended. The second type he is unable to fully characterise due to the relative scarcity of data. Although he conceives of the sandhi as involving a relationship between a monosyllabic tone 单音调 and a sandhi tone 连读调, Qian makes an important observation (p. 612) as to the typical 'imbalance' 不平衡 between them in Wu. Because of this 'imbalance', he prefers to represent the monosyllabic tone with its Middle Chinese tonal category.
In the major work on Chinese tone sandhi (Chen 2002), dominance is not actually explicitly mentioned. However, the main tonal properties Chen associates with metrical strength (tonal stability, tone modification, tonal neutralisation, tonal loss) are very similar to those associated in previous studies with dominance, and thus it appears that dominance may be taken as synonymous with metrical prominence, an interpretation reinforced by Chen's treatment of tone sandhi in several Wu dialects including Shanghai. The arguments in Chen (2002) are couched in a GP/OT framework, from which one can assume that for him the comparison is also between a sandhi tone and a unique underlying morphotoneme.
Note that Chen emphasises (p. 294) that one can infer the metrical structure from the tonal behaviour: "The fundamental insight implicit in virtually all studies on tones in context is that tonal behaviour is diagnostic of metrical prominence in that tonal stability is a characteristic of accentual prominence, while tonal modification, neutralisation, or complete loss typically affect syllables in metrically recessive position." Furthermore, he argues for a metrical solution to the tone sandhi. In other words, he says the sandhi makes more phonological and phonetic sense if it is considered part of the realisation of metrical structure, rather than just an isolated system of tone rules.
A detailed study of dominance is found in Zhang (2007), who is concerned with accounting for the asymmetry between left- and right-dominant systems in the tonological processes involved. Zhang also mentions dominance in his (2014) chapter on the typology of Chinese tone and tone sandhi patterns and their contribution to tonology, although he characterizes the distinction as crude. Zhang makes use of a GP/OT framework in his arguments, so for him the comparison is probably between a sandhi tone and a unique underlying morphotoneme.
All the researchers above observe that both left and right dominance can be found in the same dialect; however, the difference usually correlates with different morpho-syntactic behavior. For example, in Shanghai dialect, verb + object phrases are right-dominant, whereas most of the lexical sandhi is left-dominant (Zhang 2007: 267; Ling & Liang 2019).
Common to this research on dominance are two ideas. The first is that dominance involves the relationship between some kind of basic tone (tone, toneme, citation tone, monosyllabic tone, unique underlying morphotoneme) and its realisation in sandhi. The second is that dominance is shown by the extent to which this basic tone is preserved. Lack of dominance, on the other hand, is shown in the extent to which the reference tone is modified, neutralised, or in some other way lost. As mentioned, different researchers have different ideas on the phonological status of their basic tone. For several reasons, we think it best for this investigation to initially define our basic tone as the tone observed on monosyllabic words, i.e. free morphemes. This avoids several thorny epistemological issues (Burton-Roberts et al. 2000: 1-18), including Kennedy's (1953) nightmare scenario: when a bound constituent morpheme occurs in a position which is always affected by tone sandhi, thus making it impossible to know the morpheme's underlying tone.
Our approach can be illustrated with the Jinshan word pig liver: [ʦɿ kʉ 34.51] 猪肝. This is a compound word built from two free morphemes {ʦɿ 猪 PIG} and {kʉ 肝 LIVER}, both of which have /51/ tones when they occur as monosyllabic words. One can reasonably argue from this that the native speaker knows that, in the word pig liver, the word pitch of [34.51] corresponds to a combination of /51/ and /51/ monosyllabic tones. We can then reasonably point to preservation of /51/ on the word-final syllable, and modification on the word-initial syllable, inferring therefrom an instance of right dominance.
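To make the diagnostic concrete, here is a toy sketch (our illustration, not part of the original analysis; the tone values are taken from the pig liver example above) of how preservation at one edge of the word can be checked mechanically:

```python
# Toy dominance check: compare each syllable's sandhi pitch with the
# monosyllabic tone of its constituent morpheme (values from the text).
MONO = {"猪": "51", "肝": "51"}           # monosyllabic (free-morpheme) tones
SANDHI = {("猪", "肝"): ("34", "51")}     # observed word pitch [34.51]

def dominance(morphemes):
    first, last = SANDHI[morphemes]
    kept_first = first == MONO[morphemes[0]]
    kept_last = last == MONO[morphemes[-1]]
    if kept_last and not kept_first:
        return "right-dominant"
    if kept_first and not kept_last:
        return "left-dominant"
    return "indeterminate"

print(dominance(("猪", "肝")))            # -> right-dominant
```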
Procedure.
Because of recently documented changes in the speech of younger Shanghai speakers (Gao & Hallé 2013), it was considered advisable to collect data from older Jinshan speakers, and so nine locals over 60 years old were selected and recorded by the second author, who is a native Jinshan speaker (albeit a youngish one). All participants self-reported to be native speakers of Jinshan who speak the dialect in their daily lives.
Informants were given a list of basic words for exemplifying Chinese dialect lexicon (HYFYCH 1964: 18-26) and asked to read out the equivalent Jinshan word. Recordings were made at 48 kHz, 24-bit on a computer in an acoustically absorbent recording studio using Adobe Audition 2021. Each speaker produced some 270 monosyllabic words, comprising mostly nouns, stative verbs (adjectives), and functive verbs, and some 500 disyllabic lexical items. Together they provide a good idea of the nature of the monosyllabic and sandhi tones involved. Consent from all participants was acquired before the experiment.
The recordings were phonetically transcribed and manually labelled in Praat. Transcription is an essential part of the process: it enables one to become familiar with a voice and note features of possible phonetic and/or phonological importance (to take an actual example from the recordings, between-speaker variation in the use of labio-dental vs. labial-velar approximants, [ʋ] vs. [w]). Tone acoustics (F0 and duration) were quantified with the same method used in previous studies of Wu varieties, e.g. Rose (2016a).
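As an illustration of the kind of quantification meant here, the sketch below extracts F0 and a duration measure from one recording via the parselmouth interface to Praat (a minimal sketch; the file name and the 5 ms analysis step are our assumptions, not the study's actual settings):

```python
import numpy as np
import parselmouth  # Python interface to Praat

snd = parselmouth.Sound("jinshan_f1_word042.wav")  # hypothetical file name
pitch = snd.to_pitch(time_step=0.005)              # 5 ms pitch frames (assumed)
f0 = pitch.selected_array['frequency']             # Hz; 0 where unvoiced
t = pitch.xs()                                     # frame times in seconds
voiced = f0 > 0
print("F0 range: %.0f-%.0f Hz" % (f0[voiced].min(), f0[voiced].max()))
print("voiced duration: %.1f csec" % (100 * (t[voiced][-1] - t[voiced][0])))
```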
Jinshan monosyllabic word tones.
Jinshan has seven tones on monosyllabic words (Rose & Yang 2022). They can be named after their pitch features, and represented using Chao tone letters, as follows: high fall 51, mid rise-fall 341, high level 44, delayed mid rise 224, delayed low rise 112, short high 4 and short mid rise 34 (underscoring indicates a short pitch when the Chao tone letter is greater than one). Figure 1 shows normalised acoustics from five speakers. F0 was z-score normalised, as this has been shown to be the best method for reducing between-speaker variation (Rose 2016b). Duration was normalised as percent of mean tonal duration. The arrangement of the panels in figure 1 shows how the seven tones are cross-classified in the typical Northern Wu manner by dimensions of truncation and register. The third important feature is pitch target. These three dimensions are now briefly discussed. Another important realization of register is in phonation type. For several speakers, although not all, lower register is extrinsically non-modal. The female speaker described in Rose & Yang (2022), for example, had very clear whispery voice for her lower register tones; another, male, had both growl (aryepiglottic trill) and whispery voice. The non-modal phonation types occur with all types of syllable Onsets (obstruent, sonorant and zero) and are thus clearly not a function of syllable-initial obstruents.
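A minimal sketch of the two normalisations just described, assuming F0 contours have already been extracted and grouped by tone (our code, with hypothetical input shapes):

```python
import numpy as np

def z_normalise_f0(f0_by_tone):
    """f0_by_tone: dict mapping tone label -> (tokens x samples) F0 array in Hz.
    Z-score normalise against the speaker's pooled F0 statistics."""
    pooled = np.concatenate([a.ravel() for a in f0_by_tone.values()])
    mu, sd = pooled.mean(), pooled.std(ddof=1)
    return {tone: (a - mu) / sd for tone, a in f0_by_tone.items()}

def normalise_duration(durations):
    """Express each token's duration as percent of the mean tonal duration."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d / d.mean()
```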
Register also correlates to a certain extent with overall pitch height: upper register tones have pitch contours mostly in the upper half of the pitch range and lower register tones have pitch contours mostly in the lower half. But figure 1 clearly shows that there is considerable overlap between the high and low register tones' normalised F0 values, whichever value one chooses to demarcate upper from lower (the middle of the z-score normalised F0 range lies of course at 0). This means it is not possible to define Jinshan register in terms of location of pitch/F0 in the upper or lower half of the pitch/F0 range, as is commonly assumed for Asian and African tone languages (Yip 2002; Hyman 1993: 77).
Since register is realized in many different ways, and therefore represents a higher level of abstraction, there is a problem with specifying it in terms of a single feature (unless of course the different realisations can all be related intrinsic-phonetically to the putative feature). In the tonological representations of figure 1, we have used phonation type (modal vs breathy) as the implementation of the upper vs lower register features. A possible articulatory characterization, following Moisik & Esling (2011), would be [+/-CET] (constricted epilaryngeal tube). For those speakers whose low register tones lack non-modal phonation, we would implement low register with a L tone, thus making it a kind of depressor. Below we have, for convenience, stuck to the relatively anodyne terms upper and lower register tones, and a phonatory register implementation. It is not crucial to the central argument of the paper.

4.2. PITCH TARGET. Factoring out register leaves us with just three tonal pitch targets: HL, MH and H. This insight is incorporated formally in the tonological representations in figure 1. It is important to note that the vertical depressed and non-depressed tonal pairs do not only minimally differ in register; as will be seen later in the paper, they also behave as natural classes in the tone sandhi. Thus, for example, the high fall and the mid rise-fall undergo and condition the same tone sandhi processes. Representing them with the same tonal pitch target gives formal recognition to their natural class relationship as well as, of course, formalizing the effect of depression.
4.3. TRUNCATION. The dimension of Truncation also partitions the tones into two sets: long and short. Truncation is realized in duration, phonation offset and vowel inventory and quality. Short tones (short high, short mid rise) have shorter Rhymes which are often terminated in a glottal stop word-finally. The relative difference in duration can be seen in figure 1 to be about 2 : 1 between the high level and short high tones, which have comparable F0. Truncation is also realized segmentally in a reduced number of Rhyme contrasts, with slightly different vocalic quality to those in long tones. As shown in figure 1, truncation can be represented straightforwardly in moraic structure: short tones have a single mora, long tones are bimoraic.
Examples of a conventional autosegmental derivation are given at (1) for the two monosyllabic words: [p 51] 冰 ice, with a high fall tone; and [b 341] 平 flat, with a mid rise-fall tone. Their tonal-geometric underlying forms are given to the left and centre, with the realization rules to the right. As can be seen, both words have the same HL tonal pitch target, but differ in register. The register then conditions the realization of the HL as [51] in modal register and [341] in breathy. The realization of the phonation type is also, of course, conditioned by register. It is shown for the nonce as part of the realization of the nucleus /ia/: one typically hears a breathy vowel rather than a breathy pitch. The phonotactics of the Onset are an additional segmental effect conditioned by register, as /b/, along with other voiced obstruent phonemes, can only occur in low register morphemes, while /p/ and its ilk are restricted to upper register morphemes (the separate phonemic status of /p/ and /b/ can be demonstrated elsewhere in the phonology). The voiceless lenis realization of the /b/ as [b̥] is a word-initial allophone, but may also be explicable in terms of phonatory register aerodynamics. The derivation of the nasalized vowel is one often found in northern Wu, where an independently motivated nasal coda nasalizes a preceding vowel before dropping off.
We are now in a position to examine the tone sandhi in disyllabic words for several combinations of monosyllabic tone, and see how well they can be assigned a dominance value according to the customary criteria of preservation vs. modification / neutralisation / loss.
Disyllabic words with short morphotonemes on the word-initial syllable.
The first pattern we present is in disyllabic words where the initial syllable has one of the two short morphotonemes /4/ and /34/. Table 1 gives examples of these morphotonemic combinations and their corresponding word pitches in words said by one of our female subjects. Non-modal phonation type is shown with a superscript ɦ. Underscoring indicates monomoraicity in tones with two Chao letters. Thus, for example, a disyllabic word with a short high /4/ morphotoneme on the word-initial syllable and a high fall /51/ morphotoneme on the word-final syllable has a [2.51] word pitch: short lower-mid pitch on the first syllable and high falling pitch on the second. A disyllabic word with a short mid-rise /34/ morphotoneme on the word-initial syllable and a mid rise-fall /341/ morphotoneme on the word-final syllable has a [ɦ1.ɦ341] word pitch: short low pitch with breathy voice on the word-initial syllable and mid rise-fall pitch with breathy voice on the word-final syllable.

Figure 2. Mean F0 and duration for female speaker's disyllabic words with short tones on the word-initial syllable and long tones on the word-final syllable. X-axis = duration (csec.), y-axis = F0 (Hz).

Table 1 shows two different behaviors depending on whether the word-final tone is long or short. These will be treated separately, as they differ in dominance.
WORDS WITH FINAL LONG TONES.
In words with long tones on the word-final syllable, the word-final syllable tone shows a clear resemblance to its monosyllabic tone. Thus the high fall /51/ and mid rise-fall /341/ tones do not change; the high level /44/ is a slightly lower [33] after a preceding [23]; and the delayed rising tones have a slightly higher onset. The phonation type is also preserved on the word-final syllable. The truncation and phonation type of the word-initial tones is preserved, but their pitch changes. The short high /4/ assumes a lower-mid [2] pitch before all tones except the high level /44/, when it has the same pitch as in monosyllabic words.
The short mid-rise /34/ is realized as short low [1] before all tones except the high level /44/, when it has a slightly lower pitch than in monosyllabic words. The speaker's mean F0 and duration acoustics corresponding to this pattern are shown in Figure 2. The left column shows acoustics for words with a word-initial short high /4/ morphotoneme; the right column shows words with a word-initial mid rise /34/ morphotoneme. Each row corresponds to a different word-final long morphotoneme. When the word-final morphotoneme is low register (in the even rows), its Onset is voiced, and the F0 corresponding to the voicing is then shown with a thin dotted line. In order to show how the disyllabic word tones relate to their monosyllabic tone, the mean F0 and duration of the latter are plotted with thicker dotted lines, with the monosyllabic and disyllabic F0 trajectories aligned at corresponding Rhyme onset. In the top left panel, the relationship between the monosyllabic tones and their disyllabic shapes is further suggested by arrows pointing from the monosyllabic tones to their disyllabic counterparts. Although the similarity between the F0 shapes on monosyllabic words and the word-final syllable is clear, they are of course not the same, and it remains a matter of faith to claim preservation of tone on the basis of eyeballing the acoustics. In order to properly assess the extent to which the tone is preserved one needs the arithmetical functions relating the monosyllabic to the disyllabic forms which make clear the factors conditioning the realization of the F0. This is, unfortunately, beyond the scope of this paper.

WORDS WITH FINAL SHORT TONES.

Table 1 shows that the word-initial short tones in these combinations have pitches similar to their corresponding tones in monosyllabic words, with phonation type and short duration also preserved. When the word-final syllable has a short morphotoneme, however, its pitch is uniformly low falling and unlike its corresponding tone in monosyllabic words. Its phonation type is also often creaky and thus also unlike either the modal or breathy phonation types of the corresponding monosyllabic tone. Short duration is the only monosyllabic tonal property preserved.
Figure 3 shows the acoustics of this pattern. Unlike figure 2, each panel shows F0 traces for words with both short high and short mid rise morphotonemes on the word-final syllable. The trajectories in red are for words with the high morphotoneme; blue shows the mid rise morphotoneme. F0 for the voiced Onset that occurs with the mid morphotoneme is shown with a thin dotted line. The high morphotoneme has voiceless Onsets, so there is no F0. It can be seen that the F0 of the word-initial tones is similar to their monosyllabic form, although considerably shorter, being in a disyllabic word. All four word-final shapes, however, are very similar, with any F0/duration differences being conditioned by the voicing of the Onset. Note that the F0 on the word-final syllable falls well below the speaker's F0 floor, which figure 2 shows to be a little below 150 Hz. This is a realization of the word-final creak.
It is clear from the impressionistic and acoustic descriptions above that we are looking at the preservation of long word-final tones, and the preservation of short word-initial tones before short word-final tones.No change is thus required in the tonal phonology of these forms; only in their tonetics.
On the other hand it is also clear that we are looking at tonological changes in the case of short word-initial tones before long tones. The lowering observed in both word-initial short high and short mid rise tones (which both have a H pitch target; see figure 1) can be represented as a change from H to L. Since phonation type is preserved, the resulting difference in pitch between [2] and [ɦ1] is easily accounted for by the difference in phonatory register: [2] is the realization of L in modal register, [ɦ1] is the realization of L in breathy register. The lowering change of H to L is blocked when short tones occur before the high level tone: a plausible inhibitory phonetic conditioning.
Changes also occur to word-final short tones after short tones. There is no register difference in these word-final forms, and any residual difference in pitch is easily related to the effect of voicing on the syllable Onset. Thus the contrast between the short /4/ and /34/ tones is neutralised. The result of the neutralisation is a pitch at the bottom of the speakers' range which often becomes creaky, and is unlike the corresponding monosyllabic word tones. This marks this neutralisation typologically as one where "… NEITHER member of the opposition appears, … but some third … segment sharing properties of the others, but with some of its own" (Lass 1984: 50).
It is clear that these tonal combinations with short word-initial tones, with their tonological changes and neutralisation on the one hand, and tonal preservation on the other, instantiate a canonical example of dominance. Words with long word-final tones are examples of right dominance, or a weak-strong metrical structure; words with short word-final tones are examples of left dominance, or a strong-weak metrical structure. Moreover, it looks as if we can go a step further and link the metrical structure to moraic structure. As far as these combinations are concerned, bimoraic tones are strong. This leaves words where both tones are monomoraic, in which case the word-initial tone is strong.
Word-final high falling pitch preceded by either mid rising or low rising pitch.
The vast majority of words with a high fall /51/ morphotoneme on the word-initial syllable and either a high fall /51/, mid rise-fall /341/ or high level /44/ morphotoneme on the word-final syllable were said with a mid-rising pitch on the word-initial syllable followed by high falling pitch on the word-final syllable: [34.51]. Words with a mid rise-fall /341/ morphotoneme on the word-initial syllable and the same three word-final morphotonemes have a low rising pitch on the word-initial syllable followed by a high falling pitch on the word-final syllable: [23.51] (examples are in Table 2). Figure 4 shows the mean acoustics of these two word pitches for four speakers. [34.51] is shown on the left; [23.51] on the right. For each panel, two F0 trajectories are shown: one in red for words with a voiceless intervocalic consonant and one in blue for voiced. The examples with a voiced intervocalic consonant are those with a lower register mid rise-fall morphotoneme on the second syllable. The F0 is also shown of the speakers' monosyllabic tone on the corresponding word-initial morphotoneme: high fall on the left; mid rise-fall on the right.
It is clear from figure 4 that all word-final F0 shapes are the realization of the same high falling pitch target as in the high fall monosyllabic tone, with differences due to expected intrinsic effects from the voicing of the intervocalic consonant. The F0 corresponding to the higher and lower rising pitches on the initial syllable is also clear (the higher rise corresponds to the high fall morphotoneme, the lower rise to the mid rise-fall morphotoneme). The difference between the higher and lower rising F0 shapes diminishes throughout the Rhyme, so that the lower rise has only a slightly lower F0 peak than the higher. Most examples show expected intrinsic effects on Rhyme duration from the intervocalic consonant.
This case appears prima facie to involve quite a lot of tone change, and consequently very little preservation of tone on either syllable. The word-initial rising tones are clearly very dissimilar from their high fall and mid rise-fall morphotonemes. As for the word-final tone, its high falling pitch corresponds to three different monosyllabic morphotonemes: /high fall/, /mid rise-fall/ and /high level/, so the one case in which tone preservation appears clear is when the word-final morphotoneme is /high fall/. However, recall from figure 1 that both mid rise-fall and high fall morphotonemes share an underlying HL pitch target; and furthermore that the rise-fall pitch of the monosyllabic mid rise-fall tone results from word-initial depression, itself part of the word-initial realization of breathy register. It is therefore clear that, in the case of a word-final mid rise-fall morphotoneme, its high fall realization needs no explanation: there is no depression because the tone is not word-initial. This means that in two out of the three word-final cases, there is clear preservation of tone on the word-final syllable, but only in the sense of tonal pitch target, as there is no register contrast on the word-final syllable. In the remaining case, when the word-final morphotoneme is high level, there is clearly no preservation, either of tone or of tonal pitch target.
To what extent can dominance be diagnosed with these word tones? On the face of it, obviously in no absolute sense: there is modification, and thus lack of preservation of tone, on both syllables, but less on the word-final syllable. This points weakly to a right-dominant pattern. However, the high fall word-final tone appears to involve a case of neutralisation with the high level tone, and neutralisation is supposedly diagnostic of lack of dominance! This indeterminacy can again be resolved by recalling that there are different types of neutralisation (Lass 1984: 49ff.), depending on the realization involved. In the neutralisation of the word-final /4/ and /34/ short tones after short tones discussed in the previous section, the [21] realization resembled neither of the tones being neutralised, and could legitimately be considered therefore an example of a weak position. In the case in this section, however, one member of the neutralised opposition, high fall, appears to the complete exclusion of the other, high level. One might see in the persistence of the high fall an indication of the strength of the position, hanging on, so to speak, to at least one morphotoneme. There is, however, another way of interpreting the result of the neutralisation that is less metaphorical. It can, namely, be seen as an example of tone-to-stress attraction, a well-known indicator of metrical strength (Chen 2002: 69, 291 ff.). Suppose the word-initial HL tone target in these words is attracted to a metrically strong word-final position. This would account for the apparent neutralisation between the high level and high fall tones. Interestingly, very similar tonal behavior has been demonstrated for the dialect of Zhenhai, across the bay from Jinshan (Rose 1990, Chen 2002: 69 ff.). Note, however, that this kind of argument goes beyond simply eyeballing the criteria for dominance listed above. Rather, it involves an analytic choice which maximizes the phonetic naturalness of the derivation, i.e.
we are asking "which kind of dominance will better motivate our derivation?" To make this kind of judgment one needs already to have derivations at hand. In any case, we thus end up, after some refinement of the criteria for dominance, evaluating these [34.51] and [23.51] word pitches as instances of right dominance. Given this, it is then possible to include the metrical structure to write more plausible rules to derive the word pitch from underlying forms. An example is shown at (2): an autosegmental derivation of these [34.51] and [23.51] word pitches. In the left-hand column are shown the underlying representations for the relevant tonal combinations, which have either high fall or mid rise-fall morphotonemes on the word-initial syllable, followed by either high fall, mid rise-fall or high level on the word-final syllable (see table 2). P stands for either breathy or modal register, so [P, HL] represents both high fall and mid rise-fall morphotonemes, and [modal, H] represents the high level morphotoneme. The tonological changes required to derive the surface forms are shown at top centre. (2a) formalizes an initial metrical strength assignment based on the tone target: any disyllabic sequence of HL and HL/H is deemed weak-strong. (2b) is a tone-to-stress association rule moving the word-initial HL to the metrically strong word-final position; (2c) is a natural tonal assimilation rule changing a weak fall (HL) to a rise (MH) before a strong fall (HL); and (2d) makes register on the word-final syllable modal. Some of the changes will apply vacuously, of course. The output of these rules is shown on the right. (2e) are the realization rules for tonal pitch conditioned by register: MH is realized as [34] when register is modal and as [23] when register is breathy. Modal HL is realized as [51].
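The derivation at (2) can also be phrased procedurally. The sketch below is our reading of rules (2a-e); the breathy-register realization of MH is assumed to be [23], since that value is garbled in the source:

```python
# Tones are (register, pitch-target) pairs; a word is a two-syllable list.
REALIZE = {("modal", "MH"): "34", ("breathy", "MH"): "23",   # (2e)
           ("modal", "HL"): "51"}

def derive(word):
    (r1, t1), (r2, t2) = word
    if t1 == "HL" and t2 in ("HL", "H"):   # (2a) weak-strong assignment
        t2 = "HL"                          # (2b) tone-to-stress: HL moves right
        t1 = "MH"                          # (2c) weak fall -> rise before strong fall
        r2 = "modal"                       # (2d) word-final register is modal
    return REALIZE[(r1, t1)] + "." + REALIZE[(r2, t2)]

print(derive([("modal", "HL"), ("breathy", "HL")]))   # -> '34.51'
print(derive([("breathy", "HL"), ("modal", "H")]))    # -> '23.51'
```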
7. Words with high level pitch on both syllables.

Some Jinshan disyllabic words have high level pitch on both syllables. This happens mostly when the word-initial morphotoneme is either high level /44/ or delayed mid rise /224/, and the word-final morphotoneme is either delayed mid rise /224/ or delayed low rise /114/. Table 3 gives examples. Note that the word-final syllables in these combinations have modal phonation, irrespective of whether they come from upper or lower register morphotonemes, i.e. the low register morphemes on the word-final syllable do not retain the breathy voice of their monosyllabic tone.
The acoustics of four speakers' disyllabic words with high level pitch are shown in figure 5. The F0 of their monosyllabic high level and delayed mid rise tones is also shown for reference. The four speakers' acoustics are all very similar and unremarkable for such a pitch, with F0 declining slowly over both syllables. Intrinsic duration and F0 perturbation effects due to the voicing of the second syllable Onset are also as expected, viz. longer Rhymes before a voiced consonant and falling onset F0 perturbation after a voiceless consonant. It can be seen that the F0 on the word-initial syllable is very similar to that of the speaker's high level monosyllabic tone, but very different, of course, from their delayed mid rising monosyllabic tone.
For words like [s ʉ tʰɔ 44.44] glove, with a high level /44/ morphotoneme on the first syllable, the high level word pitch looks very much like the result of a conventional spreading of a high level word-initial morphotoneme after the following morphotonemes have been deleted. This example would therefore seem unequivocally to qualify as left dominant by virtue of tone preservation, albeit in its spread realization. The problem of course is with those [44.44] words like [ɕi pʰ ɔ 44.44] movie ticket which have a word-initial mid rise /224/ morphotoneme, since in this case there is no motivation for the origin of the high level word pitch. Once again, for words like [ɕi pʰ ɔ 44.44] movie ticket, one cannot invoke tonal preservation to diagnose dominance, since clearly no tone has been preserved. In the previous section, the type of neutralisation was invoked as a diagnostic aid, and this seems to help in this case also: the neutralisation on the word-initial syllable is in favour of one member of the /44/ and /224/ pair being neutralised. Note also that the neutralisation on the word-final syllable in favour of a high level pitch is of the type where the result resembles neither of the /224/ and /114/ pair being neutralised, and thus is indicative of lack of dominance.
8. Summary. This paper has described monosyllabic tones and several examples of disyllabic lexical tone sandhi from the Northern Wu dialect of Jinshan in order to illustrate, from the relationship between their sandhi form and the monosyllabic tone of their constituent morphemes, some of the complexities in diagnosing whether the lexical sandhi is left- or right-dominant. After suggesting that dominance is a manifestation of metrical strength, we have gone from cases where the dominance is clear from preservation of tone (in words with short morphotonemes on the initial syllable and long morphotonemes on the final syllable) to cases where there is no preservation of tone, as in words where both constituent morphotonemes have a delayed mid rising tone.
We have shown that preservation of tone, at least in Wu, is not a straightforward notion, because it is sometimes the tonal pitch target that is preserved and not the whole tone, including phonation type. We have demonstrated that neutralisation is not automatically an indication of lack of dominance, as is generally assumed, but, depending on the type of neutralisation, can also be indicative of dominance. We have also suggested that dominance emerges from a deeper consideration of the implications of different derivational possibilities rather than just evaluating the customary criteria for dominance. It may also be the case that acoustic-phonetic experimentation with differential effects of speech rate and focus on F0, duration and intensity, as has been shown for Shanghai by Ling & Jiang (2019), can help in the diagnosis.
This paper demonstrates, adding to Rose & Shen (2016), that one can find both left and right dominance for non-truncated tones operating within the same lexical tone sandhi system, and we have hinted above that the metrical strength of a tonal combination within a word may be predicted, at least for Jinshan, from the tonal identity of the constituent morphemes. These two findings have implications for Chen's (2002) suggestion to have tone sandhi rules conditioned by metrical strength. We certainly agree with the idea of stress phonetically conditioning tone, as shown for example in Kratochvil (1968: 35-47). However, first assigning metrical strength on the basis of tone, and then having the tone conditioned by metrical strength, looks circular. Perhaps it is the case that the different strong-weak and weak-strong metrical strengths we have seen in Jinshan have to be considered as underlying?
Figure 1. Normalized acoustics and tonological representation of the seven Jinshan tones on monosyllabic words. Thick red lines = mean normalized values. Thin lines = normalized values of five individuals. X-axis = normalised duration (%). Y-axis = z-score normalised F0 (sds around mean).

4.1. REGISTER. The dimension of Register partitions the seven tones into two sets. High fall, high level, delayed mid rise and short high belong to the upper register, and mid rise-fall, delayed low rise and short mid rise are lower register. Register governs many phonetic and phonological features, both segmental and supra-segmental. For example, voiced obstruent phoneme Onsets are only found in the low register tones. One important realization of register is depression: lower register tones have depressed pitch onsets. Figure 1 has been arranged to demonstrate this, with the vertical pairs of tones differing in depression: the mid rise-fall tone is the depressed version of the high fall; the delayed low rise the depressed version of the delayed mid rise.
Figure 3. Mean F0 and duration for female speaker's disyllabic words with truncated tones on both syllables. Left panel = words with short high morphotoneme on word-initial syllable. Right panel = words with short mid rise morphotoneme on word-initial syllable. X-axis = duration (csec.), y-axis = F0 (Hz).
Table 2. Disyllabic Jinshan words with mid rise + high fall and low rise + high fall word pitch. Note that the word-initial [23] pitch preserves the breathy phonation of its morphotoneme, but that all word-final syllables are modally voiced, even if they are related to low register, i.e. breathy, morphotonemes.
Table 3.
Disyllabic Jinshan words with high level word pitch. | 8,449 | 2024-05-15T00:00:00.000 | [
"Linguistics"
] |
Bayes Shrinkage Minimax Estimation in Inverse Gaussian Distribution
In the present paper, the properties of the Bayes Shrinkage estimator are studied for the measure of dispersion of an inverse Gaussian model under the Minimax estimation criterion.
Introduction
The Inverse Gaussian distribution plays an important role in Reliability theory and Life testing problems. It has useful applications in a wide variety of fields such as Biology, Economics, and Medicine. It is used as an important mathematical model for the analysis of positively skewed data. The review articles by Folks & Chhikara [1,2] and the work of Seshadri [3] present many interesting properties and applications of this distribution.
Let x1, x2, …, xn be a random sample of size n drawn from the inverse Gaussian distribution with density f(x; μ, θ) = (θ/(2πx³))^(1/2) exp{−θ(x − μ)²/(2μ²x)}, x > 0, μ > 0, θ > 0. Here, μ stands for the mean and θ for the inverse measure of dispersion. The maximum likelihood estimates of μ and θ are x̄ and n[Σ(1/xi − 1/x̄)]⁻¹ respectively. The unbiased estimates of μ and θ are respectively x̄ and (n − 3)[Σ(1/xi − 1/x̄)]⁻¹, the two being stochastically independent ([1,4,5]). Schuster [6] showed that when μ is known, U = (1/n)Σ(xi − μ)²/(μ²xi) is the uniformly minimum variance unbiased (UMVU) estimator for the measure of dispersion 1/θ, and nθU follows a chi-square distribution with n degrees of freedom.
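For readability, here are the standard inverse Gaussian facts invoked above in display form (our hedged reconstruction; the source equations were lost in extraction, and U and θ are as defined in the text):

```latex
% Density, UMVU estimator of 1/theta when mu is known, and its sampling law.
\[
  f(x;\mu,\theta)=\Big(\frac{\theta}{2\pi x^{3}}\Big)^{1/2}
  \exp\Big\{-\frac{\theta(x-\mu)^{2}}{2\mu^{2}x}\Big\},
  \qquad x>0,\ \mu>0,\ \theta>0,
\]
\[
  U=\frac{1}{n}\sum_{i=1}^{n}\frac{(x_i-\mu)^{2}}{\mu^{2}x_i},
  \qquad n\theta U\sim\chi^{2}_{(n)} .
\]
```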
The choice of the loss function may be crucial. It has long been recognized that the most commonly used loss function, the squared error loss function (SELF), is inappropriate in many situations. If the SELF is taken as a measure of inaccuracy, the resulting risk is often too sensitive to assumptions about the behavior of the tail of the probability distribution. In addition, in some estimation problems overestimation is more serious than underestimation, or vice versa [7]. To deal with such cases, a useful and flexible class of asymmetric loss functions, the LINEX loss function (LLF), was introduced by Varian [8]. The reparameterized version of the LLF ([9]) for an estimator θ̂ of any parameter θ is L(Δ) = e^(aΔ) − aΔ − 1, a ≠ 0, with Δ = θ̂/θ − 1. The sign and magnitude of 'a' represent the direction and degree of asymmetry respectively. A positive (negative) value of 'a' is used when overestimation is more (less) serious than underestimation. L is approximately squared error and almost symmetric if a is near zero.
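In display form, with a small-a expansion added to support the near-symmetry remark (our sketch; θ̂₁ denotes an estimator of the measure of dispersion θ₁ = 1/θ):

```latex
\[
  L(\Delta)=e^{a\Delta}-a\Delta-1,\qquad
  \Delta=\frac{\hat\theta_{1}}{\theta_{1}}-1,\quad a\neq 0,
\]
\[
  L(\Delta)=\frac{a^{2}\Delta^{2}}{2}+O\!\big(a^{3}\Delta^{3}\big)
  \quad\text{as } a\Delta\to 0,
\]
% so the LLF behaves like a scaled squared error when a is near zero.
```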
Thompson [10] suggested a procedure which makes use of prior information about the parameter, in the form of a guessed value, by shrinking the usual unbiased estimator towards the guess value with the help of a shrinkage factor k (0 ≤ k ≤ 1). The experimenter specifies the value of the shrinkage factor according to his belief in the guess value. The shrinkage estimator for the measure of dispersion 1/θ, when a guess value θ0 is available, is given by kU + (1 − k)θ0. Some shrinkage estimators for the measure of dispersion have been obtained by Pandey & Malik [11], who studied their properties under the SELF criterion. Prakash and Singh [12] have studied the properties of different shrinkage testimators for 1/θ under the LINEX loss function. Palmer [13] and Banerjee & Bhattacharya [14] have discussed Bayesian inference about the parameters of the inverse Gaussian distribution.
The present article proposes a Bayes Shrinkage estimator based on the Minimax criterion for the measure of dispersion. A Bayes estimator for the measure of dispersion under a vague prior is obtained in Section 2. The Bayes Minimax estimator is obtained under the Minimax criterion in Section 3. A Shrinkage estimator is constructed by utilizing the Bayes Minimax estimator in Section 4. Further, a numerical study is presented in Section 5, and conclusions about the Bayes Shrinkage Minimax estimator are drawn in Section 6.
Bayes Estimator for Measure of Dispersion
We are not going to debate or justify questions about the proper choice of the prior distribution. We consider a vague prior for the parameter θ which is an increasing function of the parameter. The posterior density of θ is then obtained by combining this prior with the likelihood; after simplification, the posterior density of θ is of gamma form. The Bayes estimator for the measure of dispersion 1/θ under the LLF is obtained by minimizing the posterior expected loss, the expectation being taken under the posterior density. After simplification this yields the Bayes estimator for 1/θ.
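Since the explicit prior and posterior were lost in extraction, the following is only a hedged sketch of the usual route, assuming a power-type vague prior g(θ) ∝ θ^c, where the exponent c stands in for the unspecified increasing function:

```latex
% Likelihood from n*theta*U ~ chi^2_n; prior g(theta) \propto theta^c (assumed):
\[
  \pi(\theta\mid U)\ \propto\ \theta^{\,n/2+c}\,e^{-nU\theta/2},
  \quad\text{i.e.}\quad
  \theta\mid U \sim \mathrm{Gamma}\Big(\alpha=\tfrac{n}{2}+c+1,\ \beta=\tfrac{nU}{2}\Big).
\]
% Minimizing the posterior expected LLF over \hat\theta_1, i.e. solving
% E[\theta\, e^{a\hat\theta_1\theta}] = e^{a}\,E[\theta], gives
\[
  \hat\theta_{1}^{\,B}=\frac{\beta}{a}\Big(1-e^{-a/(\alpha+1)}\Big).
\]
```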
The Minimax Bayes Estimator
The basic principle of this approach is to minimize the maximum of the risk; a Bayes estimator whose risk is constant in the parameter is minimax by a well-known theorem. Here, the risk of the Bayes estimator given in (6) for the parameter 1/θ with respect to the LLF is defined as its expected loss under the sampling distribution. Since nθU is distributed as chi-square with n degrees of freedom, the risk can be evaluated by making a transformation and using Equation (8) in the expression. Equation (9) represents the risk of the Bayes estimator of the measure of dispersion, which is independent of the parameter θ. Hence, having constant risk, the Bayes estimator is minimax.
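A hedged sketch of why the risk in (9) can be free of θ: any estimator of the form θ̂₁ = cU has constant LLF risk, because θU ∼ χ²(n)/n, and constant risk plus Bayesness is the standard route to minimaxity:

```latex
\[
  R(c)=E\big[e^{a(c\theta U-1)}\big]-a\,E\big[c\theta U-1\big]-1
      =e^{-a}\Big(1-\frac{2ac}{n}\Big)^{-n/2}-a(c-1)-1,
  \qquad \frac{2ac}{n}<1,
\]
% which depends only on a, c and n, never on mu or theta.
```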
The Shrinkage Bayes Minimax Estimator
Now, we construct a Shrinkage Bayes Minimax estimator by shrinking the Bayes Minimax estimator towards the guess value. The risk of this Shrinkage Bayes Minimax estimator under the LLF is obtained by using Equation (8).
The risk of the estimator T under the LINEX loss is likewise obtained, and the constant which minimizes this risk defines the improved estimator among the class T. This minimizing constant lies between zero and one for the parametric sets of values considered later for the numerical findings; it is therefore taken as the shrinkage factor.
A Numerical Study
The relative efficiency of the Shrinkage Bayes Minimax estimator with respect to the improved estimator T is defined as the ratio of their risks. It is observed that the Shrinkage Bayes Minimax estimator performs better than the improved estimator T for all the selected parametric sets of values with 0.25 ≤ δ ≤ 1.75. Further, as the sample size increases the relative efficiency decreases for all considered parametric sets of values, and it attains its maximum at the point δ = 1. It is also observed that the relative efficiency increases as 'a' increases when δ lies between 0.50 and 1.50. It is seen also that, as 'a' increases, the relative efficiency first increases for δ ≤ 0.75 and then decreases for the other values of δ.
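The flavour of such a numerical comparison can be reproduced with a small Monte Carlo sketch (our own illustration, not the authors' code; the shrinkage weight k and all parameter values are arbitrary assumptions):

```python
import numpy as np

def linex(est, true, a=1.0):
    d = est / true - 1.0
    return np.exp(a * d) - a * d - 1.0

rng = np.random.default_rng(0)
n, mu, theta, a = 10, 1.0, 2.0, 1.0
theta1 = 1.0 / theta                 # measure of dispersion (the estimand)
delta = 1.0                          # delta = true value / guess value
guess = theta1 / delta               # prior point guess
k = 0.6                              # assumed shrinkage weight
risk_plain, risk_shrink = [], []
for _ in range(20000):
    x = rng.wald(mu, theta, size=n)  # numpy's Wald = inverse Gaussian(mu, theta)
    U = np.mean((x - mu) ** 2 / (mu ** 2 * x))   # UMVU of 1/theta (mu known)
    risk_plain.append(linex(U, theta1, a))
    risk_shrink.append(linex(k * U + (1 - k) * guess, theta1, a))
print("LLF risk, plain U:   %.4f" % np.mean(risk_plain))
print("LLF risk, shrinkage: %.4f" % np.mean(risk_shrink))
```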
Conclusion

Over the range of δ, which is defined here as the ratio between the true value and the guess (prior point) value of the unknown parameter, the proposed estimator performs well under the LLF. Thus, we suggest using the Minimax estimator under the LLF for estimating the measure of dispersion under the Shrinkage setup.
Minimax estimation under the LLF criterion: this statistical problem is equivalent to a two-person zero-sum game between the Statistician (Player II) and Nature (Player I). Here the pure strategies of Nature are the different values of θ in the interval (0, ∞), and the mixed strategies of Nature are the prior densities of θ on (0, ∞). The pure strategies of the Statistician are all possible decision functions on (0, ∞). The expected value of the loss function is the risk function, and it is the gain of Player I. Further, the Bayes risk is the expectation of the risk taken under the prior density of the parameter θ. If the loss function is continuous, a Bayes estimator with constant risk is minimax.
Table 1. Relative efficiency for the estimator with respect to the estimator T*.
"Mathematics"
] |
I DREAM, THEREFORE I AM AN ARCHITECT
In this review of the exhibition of the students' research projects in the master's class Digital Design Studio of the Architecture Program at the International University of Sarajevo, mentor and curators Lamila Simisic Pasic and Meliha Teparic give an analysis of the settings, aims, and purpose of the show. The exhibition is about an attempt to follow the novelties that the 21st century is bringing into the creation process, such as the involvement of artificial intelligence (AI) within creative fields. Students started with discovering, analyzing, and classifying the results of visual impacts from their travel from home to school. The synthesis came out from a mixture of artificial and real. Then, they merged their physical experiences transformed into visual imagery and digital outputs discovered through the lens of AI into one coherent and intuitive experience. Finally, students used machine learning as a direct collaborator for expanding their imaginations, particularly the diffusion model, which visualizes images out of text, better known as text-to-image or, its extension, text-to-animation! Using these techniques, students reconstructed their voyages into more visionary landscapes, trying to emphasize, bold, and enlarge dilemmas and concerns of nowadays and refract a multisensory experience to tell the story. The exhibition was held in the Art Gallery of the International University of Sarajevo, Bosnia and Herzegovina, at the end of 2022.
In contemporary art, the artist refers to the theme of place from different perspectives, such as a place having a meaning for someone or something, a place having some value or significance, private and public places, the artist looking at/out of the places, and fictional places (Robertson & McDaniel, 2013, pp. 193-235). In this regard, "a place is a site of possibility, hypothesis, and fantasy-a somewhere where something might occur" (Robertson & McDaniel, 2010, p. 227). From the artistic point of view, the notion of a place is very broad and does not recognize specific physical boundaries. Not only does a place have an abstract meaning metaphorically or figuratively in the postmodern world, but it also goes beyond mere horizontal and vertical dimensions and 'unfolds' itself into a new dimension of the digital virtual world. For instance, whereas in the past an imagined place existed in the form of a physical representation, such as a picture, the two-dimensional flat plane that we were looking at, now the virtual place is becoming a 'place' for physical presence (e.g., video games, internet, etc.) (Robertson & McDaniel, 2010, p. 227). Furthermore, interactivity is one of the essential features of digital media, and it has a participatory nature in contemporary art (Lughi, 2012). Now, we are not only able to participate and interact with virtual places, but we are also able to be present in the virtual place and the physical world at the same time. We are in "a phase of transition to a hybrid culture, where digital space is increasingly just another space we live in" (Robertson & McDaniel, 2010, p. 259), an extended zone of our physical space. Thus, Ferrando (2013) argued that today we live in a 'transhuman' world, a world where our technological inventions blend with our bodies and lives, and that very soon we will be post-humans, a new species that will no longer be human, once we move out to another planet (pp. 26-32). In the exhibition "I Dream, Therefore I am Architect" young creative architects are 'looking out for place', in the "new realms of virtual reality" that "have spawned new conceptions of structure, such as liquid architecture, a term that refers to structures that mutate or expand into multiple, seemingly non-Euclidean dimensions" (Robertson & McDaniel, 2013, p. 220). If physical architectural places are a "cultural construct" (Robertson & McDaniel, 2013, p. 220), then we can consider virtual architectural places as "a profound cultural shift" (Robertson & McDaniel, 2013, p. 221).
Architecture and Artificial Intelligence
Architects nowadays are facing many challenges. The architecture profession, like everything else, is moving from an expert system towards a learning system, becoming more transparent towards other disciplines. One novelty, particularly for the 21st century, is the involvement of artificial intelligence (AI) not only in production processes but also in creative ones. As at the beginning of the use of computers, AI is becoming an inevitable collaborator in creative processes; it is becoming more affordable, with interfaces that architects visually better appreciate. Lamila Simisic Pasic, an Assistant Professor at the Architecture Program of the International University of Sarajevo (IUS), led the class Digital Design Studio, where students worked on grasping the benefits of AI in their creative processes. Here she tells their story through the exhibition "I Dream, Therefore I am Architect" with Professor Meliha Teparic, curator of the IUS Art Gallery.
To Be Curious Towards Something Different
In the study process named "I Dream, Therefore I am Architect" students shared their enthusiasm towards novelties that might help to re-engage in live shared experiences. As young architects, this group of students might impact what the 21st century will bring to architecture practice.
New tools that are based on machine learning, like text-to-image or text-to-animation, are making suitable fittings for the young generations and what they consider as their culture of living. Their communications are based on texting and sharing information via text prompts. The short and sharp text translations into visual representations that are used in novel tools such as Midjourney, Stable Diffusion, DreamStudio, and similar are following popular cultures nowadays. These tools give visual results on the text prompt of users' thoughts. The journeys of students' visual thoughts shown in this exhibition present students' intentions towards creating architectural forms and their authentic experience of the novelty in our practice, which is beginning to inform the practice of design itself. Regarding the methods used in the creation, students discovered, analyzed, and classified the results of visual impacts from their travel from home to school. They produced short video documents of the travels and synthesized creations from a mixture of artificial and real. The merged physical and virtual experiences were transformed into visual imagery and digital outputs, discovered through the lens of AI into one coherent and intuitive experience. Students used machine learning as a direct collaborator for expanding their imaginations, particularly the diffusion models that visualized images from text! Using these techniques, students reconstructed their voyages into more visionary landscapes, trying to emphasize, bold, and enlarge dilemmas and concerns of nowadays and refract a multisensory experience to tell the story. The results are an amazingly evocative eruption of different views of their intentions to search for the correct answer to today's questions.
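For reference, text-to-image generation of the kind described can be scripted with the open-source diffusers library; the following is a minimal sketch assuming a CUDA machine, with an illustrative checkpoint and prompt rather than the students' actual workflow:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model checkpoint and prompt (our assumptions, not the class setup).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("visionary landscape on the road from home to school in Sarajevo, "
          "architectural collage, fog, multisensory")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("voyage_frame_01.png")
```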
The Purpose of the Show

The purpose of displaying architectural objects in drawings, models, pavilions, or digital and virtual presentations is vital for communicating architecture to the public. Architecture design is a lengthy process; it takes a long time to make and even longer to build. Displaying architectural projects might supply a view behind the scenes of creating our built environment. The connection between architecture and the public can broaden the public's understanding of its culture, enable a better understanding of how architecture is created, and become an agency for possible discussion and debate toward wellbeing. Architectural exhibitions that display not just the final project's drawings but also thoughts, senses, cognitions, and memories of the creative process might become a mechanism for the audience to understand their built environment better. The setting of the exhibition "I Dream, Therefore I am Architect" is that the exhibition itself becomes architecture. This exhibition functioned as a hybrid performance of architectural thoughts displayed non-traditionally to provoke the audience to think not just about the concerns and topics of the projects but about architecture as playful, creative, and witty. The traditional way of presenting was rearranged into the embodiment of a parallel world displaying multiple ideas, juxtapositions, sounds, and visuals of students' ideas through the lenses of a cooking book, a restaurant's menu, futuristic voyages via Google Maps, videos, postcards, or a children's video book (image 1). All projects in the show envision a world where citizens will live in an integrated, imaginative, and even immersive environment.
Image 1. The image shows the interior of the exhibition "I Dream, Therefore I am Architect". Photo courtesy of Haris Heljo.
Students' Testimonies on "I Dream, Therefore I am Architect"

The project aim of "Picturesque Collage" (image 2) by Bakir Tanovic was to show through graphic illustration and morphosis the way from his home to the university. Through this picturesque collage, he expressed his feelings and the odd things occurring on the road and environment, which for him are everyday occurrences. The work is shown as video art.
Image 2. The image shows the envisioned world named "Picturesque Collage" by Bakir Tanovic. Photo courtesy of Haris Heljo.

The concept of Erna Preljevic's "The Path of Changes" (image 3) was based on the road from the core of Sarajevo's industrial zone through industrial, residential, and commercial areas, mixed in the whole region, until the area of educational institutions and hotels, which is still an area in development.
The representations of future development follow the style of Tadao Ando's architecture. The concept of Tadao Ando is mixed with glances of brutalism, but both concepts still keep glimpses of the existing greenery.
This transformation represents the slow development of this region into an industrial zone. The student detects an area that needs to be in balance regarding the construction and the way of future development. However, going further in the direction of the city, the student experiences excellent potential for a serious industrial zone that keeps the sense of a mixture. Further on the road, as we enter the area of the Roman bridge, the only thing she thinks of is the Roman bridge, which is imagined as being made of silk and stone. Later, as we enter the residential zone of Ilidza, we can sense the area of high socialization, as if the whole environment is dedicated to people. The final part is the path near the river Zeljeznica, which is open and with a couple of buildings, but it gives a sense of living with nature integrated with the constructions. The final purpose of the whole path is an imagination of the future Zen living, integrated living with nature.
Image 3. The image shows the postcards designed for the project named "The Path of Changes" by Erna Preljevic. Photo courtesy of Haris Heljo.
Zeynep Nihan Yılmaz in her children's audiobook named "Futuristic Mushroom Houses" (image 4) reimagined the part of Sarajevo Canton, Sokolovic Kolonija, a neighborhood characterized by dull and uniform houses, using pneumatic architecture to inflate the façades and give them a convex, mushroom-like appearance, drawing inspiration from the styles of Thomas Heatherwick and Iris Van Herpen. Her project explores artificial intelligence's potential and sparks ideas for future works through the forms and compositions it generates. As a child, she was fascinated by unusual shapes and forms in architecture and the fantastic, utopian worlds depicted in animated films and cartoons. The concept of futuristic mushroom houses reminded her of the mushroom houses in Smurf Village. It motivated her to create an audiobook for children, which will educate them about architecture and appeal to the inner child in older audiences.

"Edible Architecture" (image 5), a recipe book by Amina Likic, celebrates an architecture that produces resources, digests its waste, and self-decomposes. The edible architecture was made to honor the creation and unveiling of an entirely new type of design: artificial intelligence. The concept is based on interpretations of edible architecture, taking buildings on the way from home to school as a reference example for meal transformation. Inspired by the Disney movie from her childhood, Cloudy with Meatballs, this book's oeuvre includes edible buildings made of pasta, stretchy dough, and gluten mycelia, through egg-shaped objects in the Gaudí style, all the way to a Guggenheim pavlova cake and a chocolate brutalist cake made of zicht concrete. There is no doubt that this book, which has over 100 AI illustrations generated using Midjourney, will find something suitable for the palate of any reader, taster, architect, or foodie. These captivating parametric meals celebrate the discovery of AI creation, a celebration of architecture, technology, and geometry worthy of our admiration and appetite.
Image 5. "Edible Architecture" is a recipe book by Amina Likic. Photo courtesy of Haris Heljo.

The healing restaurant menu by Selinay Erdeniz (image 6) also finds its inspiration in the unique style of Antonio Gaudi, which is characterized by natural, organic design and a mixture of materials. The main element of the concept is the Romanesco broccoli, which contains fractals as patterns. Because of its appearance, this fractal vegetable is mainly presented as "small Christmas trees", especially to kids. The color of this vegetable can be green, orange, or violet, which is used in the design to create the pattern and emphasize elements within the theme. Through the concept and design, it aims to emphasize the importance of the healing environment and healing design, which is one of the critical and active topics. Healing building design and landscape have a major positive impact on society's psychological and physical health. Selinay has used the previously mentioned natural elements to achieve a green healing environment and a sustainable design.
Image 6. Setting for displaying the healing restaurant menu by Selinay Erdeniz. Photo courtesy of Haris Heljo.

Student Amina Habul, in her project for the future city, used AI to show a new, better, and futuristic city (image 7), which will serve nature and citizens at the same time. Her main idea was to design a new smart and sustainable city in an inclusive, collaborative, and equitable way. Amina wanted to create a city that would be beneficial firstly for the citizens of Sarajevo and, of course, for the world's well-being, regarding the constant climate change. While promoting sustainability across its environmental, economic, social, and cultural dimensions, she provides the necessary conditions and infrastructure to enhance the capabilities of the citizens to contribute to and enjoy the benefits of a more liveable, resilient, and sustainable urban development. The city enables the meaningful participation of citizens in fulfilling their right to the city; Amina focuses on making the city more prosperous, equitable, comfortable, and innovative; addressing social needs and making sure housing and urban services are high-quality; fulfilling the needs of the vulnerable and those with disabilities. The proposal is also gender-sensitive and responsive, acknowledging different and changing needs.

The overall goal of the project "Real Estate EH" by Ermin Halicevic was to demonstrate how AI can be used to create and generate ideas for a wide range of projects. In Ermin's case, it inspired him to develop futuristic Sarajevo designs and 3D models. The project's premise is the real estate agency EH (Ermin Halilcevic) (image 8), which sells apartments in Sarajevo.
Image 8. The display of the real estate agency "EH" promotion by Ermin Halicevic. Photo courtesy of Haris Heljo.
The project "People, Nature, Future" (image 9) by Esra Nur Erdogmus aims to prevent a possible future disaster by showing it. In a world where green spaces are destroyed and buildings are built in their place, people are preparing for their own demise. This work aims to show people the way by telling the truth and raising awareness as a result. Many disasters occur because of the occupation of green spaces. Increasing precipitation cannot be absorbed by the soil, and floods occur. Many people die in these floods. Also, although trees prevent landslides, thousands of trees are cut down, and, in the end, hundreds of people lose their lives in these natural events. One of the reasons for the increasingly polluted air is the destruction of green areas and the factories built in their place. The damage done to nature and trees is done by people themselves.

Image 9. The view of the results of the study "People, Nature, Future" by Esra Nur Erdogmus. Photo courtesy of Haris Heljo.

The focus of the concept for the project of Anela Sudzuka named "A Balloon Land" (image 10) was the city of Sarajevo and its image. Since it is known as a foggy place which is often presented as colorless, we clearly see that this affects the population, psychologically and physically. Depression and anxiety are common psychological issues seen in the population nowadays. This is a good example showing that the environment affects our mental health. It is in our power to turn these negativized elements into positive and beneficial ones by making some specific changes and reorganizing the surroundings. Therefore, the concept is made to give the whole city a bit of happiness and positive textures. The main elements used in the design are colorful balloons, which are used as decorative elements. By this approach, we can cure ourselves when we cure and change our environment. The design approach of the concept promotes mental well-being, as well as high-quality living.
Image 10. The view of "Balloon Land" by Anela Sudzuka. Photo courtesy of Haris Heljo.
"Art",
"Computer Science",
"Engineering"
] |
Measuring the effect of the North Korea-U.S. summit on the South Korean stock market
Abstract: We examine the effects of the North Korea-U.S. summit and related events on the South Korean stock market over the period March 2018 to June 2018. Employing the event study methodology, we estimate sectoral abnormal returns following the events surrounding the summit and conduct several robustness tests to control for market integration and firm-specific information. Furthermore, we assess how sectoral systematic risk changes following these events by using various ARCH-type models such as GARCH, TARCH, EGARCH and PARCH. The results show that the South Korean stock market was highly sensitive to these events. In particular, we find that the market was negatively affected by the news that could reduce the probability of holding the summit and vice versa. We also find that market scepticism about the summit leads to the rise of a diamond risk structure.
Introduction
On 7 March 2018, Kim Jong Un, the North Korean leader, expressed his willingness to discuss the fate of his nuclear arsenal with the U.S. This announcement eventually led to a historic meeting between the North Korean leader and the U.S. President, Donald Trump. A successful meeting leading to the improvement of relations between North Korea and the U.S. could be perceived as reducing tension in the Korean Peninsula. However, the general perception, based on events from recent history, is that even if a deal was struck, it would not endure the test of time. Nevertheless, the exchange of smiles and handshakes during the summit created at least a transient feel-good factor for a more optimistic outlook.
ABOUT THE AUTHOR
We are an international research group with members from Vietnam, the United Arab Emirates and Australia. Our research activities cover several topics, including political issues, environmental issues, banking regulation, financial regulation, cryptocurrency, blockchain and FinTech. This research paper is part of our research series examining the relationship between political issues around the world and stock markets.
PUBLIC INTEREST STATEMENT
The Trump-Kim summit is a historic political event in the modern world, and it has had various impacts on the relevant parties. The road leading to this summit was rocky, and South Korea played a major role in facilitating a summit that could lead to a more stable Korean Peninsula. Any events surrounding the summit may therefore have an impact on South Korea. This paper examines the impact of summit-related events on the South Korean stock market. Our study shows how the South Korean stock market was adversely affected by unfavourable news about the summit.
Civil, military and political conflicts
The finance literature has addressed the financial consequences of civil, military and political conflicts. Some research applies the event study methodology, using firm-level data to capture the reaction of stock prices to conflicts (Abadie & Gardeazabal, 2003; Guidolin & La Ferrara, 2007). In addition, some of the literature examines financial indicators and investigates both the ex-ante and ex-post effects associated with conflict. Rigobon and Sack (2005), for instance, investigated the response of US financial indicators to the risk of war with Iraq over the period from January 2003 to March 2003. They suggest that an increasing war risk is connected to lower stock, bond and commodity prices. This result is consistent with the findings of Leigh, Wolfers, and Zitzewitz (2003) and Wolfers and Zitzewitz (2009). Furthermore, Schneider and Troeger (2006) evaluate the reaction of stock market indices (including the Dow Jones, CAC and FTSE) during the military conflicts in Yugoslavia, Israel and Iraq from 1990 to 2000 and find that financial markets are affected adversely by the conflicts.
On the other hand, Amihud and Wohl (2004) find that a speedy end to war leads to an increase in stock prices and lower oil prices. Guidolin and La Ferrara (2010) use the event study methodology to examine the effect of 101 domestic and international military conflicts on capital markets, commodity prices and exchange rates between 1974 and 2004 and find mixed reactions to military conflicts in Asia and the Middle East.
Terrorist events
Another strand of the literature deals with terrorist events. Chen and Siems (2004), for example, examine the financial consequences of 14 terrorist attacks and conclude that stock markets (such as those of the U.S., U.K., France, Belgium, Sweden, Australia and Indonesia) tend to react negatively to these events. Likewise, Richman, Santos, and Barkoulas (2005) find that 28 stock markets experienced statistically significant negative reactions to the 11 September 2001 event. Moreover, Barros and Gil-Alana (2009) document the negative effects of violence in the Basque Country on financial and economic activity, pointing out that stock returns declined due to violence. The negative impact of terrorist activities on stock markets is also documented in Asia-Pacific countries such as Malaysia, Indonesia, Singapore, Japan and Australia (Graham & Ramiah, 2012; Ramiah, 2012; Ramiah, Cam, Calabro, Maher, & Ghafouri, 2010; Ramiah & Hui, 2015).
Emerging literature on the effect of terrorism risk on equity markets tends to fit interaction variables into asset pricing models such as the CAPM, or to implement ARCH-type models to identify changes in systematic risk (Apergis & Apergis, 2016; Aslam & Kang, 2015; Ramiah & Graham, 2013). While most studies indicate that risk intensifies following terrorist events, Graham and Ramiah (2012) suggest that following the 11 September attacks, financial markets did not react to terrorism since market participants had already incorporated the risk of terrorism into their expectations. A recent paper shows that terrorist activities affect commodity markets 120 business days later (Ramiah, Wallace, Veron, Reddy, & Elliott, 2018).

North Korea and political uncertainty

Hughes (1996) notes that North Korea represents a serious security matter, as it combines two extremely volatile issues: nuclear proliferation and conflict in the Korean Peninsula. In the last two decades, North Korea's nuclear program has been the main source of political instability and the most critical security matter in Northeast Asia (Haacke, 2013). Apart from being one of the most reclusive countries worldwide, North Korea has displayed a random, unpredictable and even impulsive reputation, which provides somewhat grave signals of potential international conflict and even a nuclear war (Dibooglu & Cevikb, 2016). Kihl and Kim (2005) argue that neither political repression nor economic poverty represent major concerns about North Korea, pointing out that the country represents a significant international security risk due to its nuclear capability and ballistic missile program.
The nuclear threat and military conflict in the Korean Peninsula may affect financial markets to varying degrees. However, all relevant parties, including North Korea, South Korea and the U.S., have substantial motives to avoid an unnecessary war. North Korea, for instance, is forced to restructure the country's economic priorities by reallocating resources, owing to corruption and economic mismanagement, as nuclear arms can no longer guarantee the regime's survival (Funabashi, 2007). According to Dibooglu and Cevikb (2016), a potential solution would be the integration of the North Korean economy into the global economy, which would potentially eliminate the nuclear threat, leading to regional financial stability. Although a military conflict appears to be unlikely in the short run, the threat remains serious in the medium and long run unless North Korea agrees to denuclearise and dismantle its nuclear weapon system. This threat poses a high level of political uncertainty that may have an adverse effect on the stock market.
The existing literature documents evidence that risk and return in financial markets are affected by political uncertainty. Pham et al. (2018b) examine the impact of the 2016 U.S. presidential election on the U.S. stock market and find that the election affected the market widely; they also suggest that the U.S. stock market was highly responsive when Trump secured the Republican nomination. Another study, by Bouoiyour and Selmi (2017), examines the responses of eight large firms in the Dow Jones, S&P 500 and Nasdaq indices and shows that Trump's victory had a negative impact on the event date and a positive effect during the post-election period. In addition, Ramiah, Pham, and Moosa (2017) examine the effect of Brexit on various sectors of the British economy in 2016, demonstrating that Brexit had mixed effects across sectors. Hira (2017) investigates the relationship between political instability and stock prices in Pakistan and reveals a negative relation between the two. Another study, by Savita and Ramesh (2015), analyses the behaviour of stock prices during the 2014 Indian general elections using the event study methodology; they find highly positive cumulative abnormal returns over different event windows and conclude that the market reacts positively to the possibility of a change in government.
South Korea and political threats
The research on the effects of the North Korean threat on South Korean financial markets is rather limited, and most studies use event- or scenario-based analysis. Noland (2007), for instance, implements scenario analysis to investigate the economic implications of North Korea's nuclear program on Northeast Asian countries and indicates that South Korea is the most economically vulnerable to this program due to its geographic proximity.
Most recently, Huh and Pyun (2018) investigate investors' reaction to the North Korean nuclear tests by evaluating the performance of the financial market. They use a time-varying structural vector autoregression model to point out that investors' attention to nuclear threats has heterogeneous impacts on the stock prices of South Korean firms. Earlier, Dibooglu and Cevikb (2016) study the impact of the North Korean threat on financial markets in Japan and South Korea and attempt to find out whether the threat affects stock prices, interest rates and exchange rates. Their results show a causal relationship between the North Korean threat and stock returns and exchange rate returns in both countries. These results are inconsistent with the findings of Kim and Roland (2014) who use the event study methodology to evaluate the impact of North Korea's nuclear threat on South Korea's financial markets, examining 26 events associated with the North Korean nuclear threat from 2000 to 2008. They do not find statistically significant effects of the nuclear threat on financial markets and conclude that South Korea's financial markets do not perceive the nuclear threat as credible.
In general, the threat from North Korea potentially leads to declining asset prices, investment reductions and capital outflows in the financial markets of the targeted countries (Dibooglu & Cevikb, 2016). The literature, however, fails to address how the South Korean stock market responded to the North Korea-U.S. summit. South Korea played an important role in facilitating the summit, which was expected to reduce political risk in the Korean Peninsula. These observations motivate us to investigate how the South Korean market reacted to summit-related events in terms of risk and return.
Abnormal return and cumulative abnormal return estimation
Following Ramiah et al. (2017) and Pham et al. (2018b), we use the event study methodology to examine the effects of the North Korea-U.S. summit on the stock market-these effects are expressed in terms of sectoral abnormal returns. We hypothesise that the sectors experience negative abnormal returns if they perceive bad news (i.e. a decrease in the likelihood of the summit) that creates uncertainty for the stock market. On the other hand, we expect the sectors to have positive abnormal returns if they perceive the news as favourable to those sectors. If the news does not affect the sectors, no abnormal returns should be generated.
We first calculate daily returns, $DR_{it}$, and expected daily returns, $E(DR_{it})$, for every firm using the following equations:

$$DR_{it} = \frac{PI_{it} - PI_{it-1}}{PI_{it-1}} \tag{1}$$

$$E(DR_{it}) = \beta_{0,it} + \beta_{1,it}\left(r_{mkt} - r_f\right) \tag{2}$$

where $PI_{it}$ is the stock price of firm $i$ at time $t$, $\beta_{0,it}$ and $\beta_{1,it}$ are the intercept and the slope of the CAPM model respectively, $r_{mkt}$ is the market return as proxied by the KOSPI index, and $r_f$ is the risk-free rate as proxied by the 10-year bond yield.
Abnormal returns are estimated as follows:

$$DAR_{it} = DR_{it} - E(DR_{it})$$

where $DAR_{it}$ is the daily abnormal return of firm $i$ at time $t$. The daily abnormal returns of all firms within a sector are averaged to estimate the daily abnormal return of sector $s$ at time $t$:

$$DAR_{st} = \frac{1}{N_s}\sum_{i=1}^{N_s} DAR_{it}$$

The t-statistic is used to check whether a reaction is statistically significant for each announcement.
The efficient market hypothesis (EMH) does not always hold in current market settings, making it necessary to perform additional estimations to check for the presence or otherwise of continuing market reactions, as represented by cumulative abnormal returns (CAR) 2, 5 and 10 days after the event date, and 2 and 5 days before the event date. These estimations take the general form

$$CAR_s(t_1, t_2) = \sum_{t=t_1}^{t_2} DAR_{st}$$

evaluated over each of these windows. This exercise allows us to capture delayed reactions, continuing reactions or market anticipation around summit-related events. We also use the t-statistic to check whether the results are statistically significant.
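As a concrete illustration of Equations (1)-(2) and the abnormal-return and CAR calculations above, the following Python sketch estimates the CAPM over an estimation window and cumulates sector abnormal returns over an event window. This is a minimal sketch and not the authors' code: the data layout, the window indices and the function names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def daily_returns(prices: pd.Series) -> pd.Series:
    # Eq. (1): simple daily return computed from the price index PI_it
    return prices.pct_change()

def capm_abnormal_returns(firm_ret, mkt_ret, rf, est_idx, event_idx):
    # Fit the CAPM (Eq. 2) on excess returns over the estimation window
    y = firm_ret.loc[est_idx] - rf.loc[est_idx]
    X = sm.add_constant(mkt_ret.loc[est_idx] - rf.loc[est_idx])
    fit = sm.OLS(y, X, missing="drop").fit()
    # Expected return in the event window, then DAR = DR - E(DR)
    X_ev = sm.add_constant(mkt_ret.loc[event_idx] - rf.loc[event_idx])
    expected = fit.predict(X_ev) + rf.loc[event_idx]
    return firm_ret.loc[event_idx] - expected

def sector_dar_and_car(firm_dars: pd.DataFrame, t1, t2):
    # Average firm-level DARs into the sector DAR, then cumulate into a CAR
    dar_s = firm_dars.mean(axis=1)
    return dar_s, dar_s.loc[t1:t2].sum()
```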
Robustness checks
It is often argued that the CAPM is obsolete and that a more advanced model is required to estimate expected returns (Fama & French, 1993). Therefore, we replace the CAPM, as represented by Equation (2), with the Fama-French five-factor model to re-estimate expected returns and then re-calculate abnormal returns to check whether the findings are consistent. The underlying objective is to control for additional risk factors, namely size (SMB), value (HML), profitability (RMW) and investment (CMA). 1 The model is specified as:

$$E(DR_{it}) = \beta_{0,it} + \beta_{1,it}\left(r_{mkt} - r_f\right) + \beta_{2,it}\,SMB_t + \beta_{3,it}\,HML_t + \beta_{4,it}\,RMW_t + \beta_{5,it}\,CMA_t$$

In addition, we conduct several other robustness tests to control for the shortcomings of the event study methodology: (1) the Corrado (1989) non-parametric ranking test to control for non-normality of the abnormal return distribution; (2) removing firms that release firm-specific information within a window of ±15 days from the event date to control for firm-specific effects; (3) the non-parametric conditional distribution approach proposed by Chesney et al. (2011) to estimate the probability of an event having an extreme effect on a sector; and (4)
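A hedged sketch of the five-factor robustness check follows; the factor column names mirror those used in Kenneth French's data library, but the exact merge with the Korean returns data is an assumption.

```python
import statsmodels.api as sm

FACTORS = ["Mkt-RF", "SMB", "HML", "RMW", "CMA"]  # assumed column names

def ff5_abnormal_returns(excess_ret, factors, est_idx, event_idx):
    # Re-estimate expected returns with the Fama-French five-factor model
    X = sm.add_constant(factors[FACTORS])
    fit = sm.OLS(excess_ret.loc[est_idx], X.loc[est_idx], missing="drop").fit()
    # Abnormal return = realised excess return minus the model expectation
    return excess_ret.loc[event_idx] - fit.predict(X.loc[event_idx])
```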
Systematic risk
Many studies have examined the effects of news on systematic risk (Engelberg, McLean, & Pontiff, 2018;Pham, Ramiah, Moosa, & Moyan, 2018a;Ramiah, Martin, & Moosa, 2013). Engelberg et al. (2018), for instance, show that betas are higher on earnings announcement days. Furthermore, find that regulatory announcements (such as environmental policy) could lead to various changes in systematic risk. A strand of this literature is about the effects of political events on systematic risk. Several studies have investigated this issue and found that systematic risk is heavily affected by major political events. Ramiah et al. (2017), for instance, reveals that the Brexit referendum results led to an increase in "immediate risk". A recent study by Pham et al. (2018b) shows that many sectors in the U.S. experienced a surge in short-term systematic risk during the 2016 U.S. Presidential election.
Since March 2018, South Korea has played a major role in coordinating the historic meeting between North Korea and the U.S., which eventually took place on 12 June 2018. Although South Korea has close ties with the U.S., the country has faced a war threat from North Korea for many years. Therefore, a successful meeting between North Korea and the U.S. was expected to bring many benefits to South Korea, such as a reduction in political risk in the Korean Peninsula, which in turn would reduce the systematic risk of the South Korean stock market. Conversely, a failed meeting would lead to a surge in political risk and subsequently in the systematic risk of the market. In this study, we capture both the aggregate and the individual effects of the events surrounding the summit on systematic risk at the sectoral level. First, we create an aggregate dummy variable ($D_{Summit}$), which takes a value of 1 on the event dates and 0 otherwise. We then incorporate this aggregate dummy variable into the CAPM, and the modified model is as follows:

$$r_{St} - r_{ft} = \beta_{0S} + \beta_{1S}\left(r_{mt} - r_{ft}\right) + \beta_{2S}\,D_{Summit,t}\left(r_{mt} - r_{ft}\right) + \beta_{3S}\,D_{Summit,t} + \varepsilon_{St} \tag{10}$$

where $r_{St}$ is the return of sector $S$ at time $t$, $r_{ft}$ is the risk-free rate at time $t$, $r_{mt}$ is the market return at time $t$, $D_{Summit,t}$ is the aggregate dummy variable, $\varepsilon_{St}$ is the error term, $\beta_{0S}$ is the intercept of the regression equation, where $E(\beta_{0S})$ is equal to zero, $\beta_{1S}$ is the average short-term systematic risk of sector $S$, $\beta_{2S}$ captures the change in the sectoral risk, and $\beta_{3S}$ measures the change in the intercept of Equation (10).
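A minimal sketch of Equation (10) using the statsmodels formula interface is given below; the variable names (`y` for sector excess returns, `mkt` for market excess returns) are illustrative, and the AR/MA terms and GARCH errors discussed next are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

def summit_risk_model(df: pd.DataFrame, event_dates):
    # df holds sector excess returns (y) and market excess returns (mkt),
    # indexed by date; D flags the event dates listed in Table 1
    df = df.copy()
    df["D"] = df.index.isin(pd.to_datetime(event_dates)).astype(int)
    # y = b0 + b1*mkt + b2*(D x mkt) + b3*D + e, as in Eq. (10)
    return smf.ols("y ~ mkt + D:mkt + D", data=df).fit()
```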
The Chow test is conducted to detect the presence of structural breaks following summit-related events, while the Wald test is used to check for redundant variables. In addition, we introduce appropriate AR and MA terms to control for autocorrelation. Lastly, we use various GARCH specifications (GARCH, TARCH, EGARCH and PARCH) to deal with ARCH effects.
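For the variance equation, the GARCH-family specifications named above can be fitted with the third-party `arch` package; the sketch below assumes that package's documented API, with simulated returns standing in for the sector data.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
ret = pd.Series(rng.standard_normal(750))  # placeholder for scaled sector returns

garch  = arch_model(ret, vol="GARCH",  p=1, q=1).fit(disp="off")
tarch  = arch_model(ret, vol="GARCH",  p=1, o=1, q=1).fit(disp="off")  # GJR/TARCH
egarch = arch_model(ret, vol="EGARCH", p=1, o=1, q=1).fit(disp="off")
parch  = arch_model(ret, vol="GARCH",  p=1, q=1, power=1.0).fit(disp="off")  # power ARCH
```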
The problem with Equation (10) is that each event might have an individual effect on systematic risk, which means that a different risk model is required to capture these individual effects. We create an individual dummy variable (ID) for each event and modify Equation (10) to estimate the individual short-term changes in systematic risk. The model takes the form

$$r_{St} - r_{ft} = \beta_{0S} + \beta_{1S}\left(r_{mt} - r_{ft}\right) + \sum_{j}\beta_{(j+1)S,j}\,ID_{j,t}\left(r_{mt} - r_{ft}\right) + \varepsilon_{St} \tag{11}$$

where $ID_j$ is the individual dummy variable that takes a value of 1 on event $j$ and zero otherwise, and $\beta_{(j+1)S,j}$ captures the change in the systematic risk of sector $S$ following event $j$. Since the summit might change the political situation in the Korean Peninsula in the long run, we expect the events to have a certain degree of impact on long-term systematic risk. The following model is used to capture the effects on long-term systematic risk:

$$r_{St} - r_{ft} = \beta_{0S} + \beta_{1S}\left(r_{mt} - r_{ft}\right) + \sum_{j}\beta_{(j+1)S,j}\,LD_{j,t}\left(r_{mt} - r_{ft}\right) + \varepsilon_{St} \tag{12}$$

where $LD_j$ takes the value of 1 from the event date onwards and zero before the event date.
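The short- and long-term dummies in Equations (11) and (12) differ only in their support, which the following hypothetical helper makes explicit.

```python
import pandas as pd

def event_dummies(index: pd.DatetimeIndex, event_date) -> pd.DataFrame:
    # ID = 1 on the event day only (Eq. 11); LD = 1 from the event day on (Eq. 12)
    event_date = pd.Timestamp(event_date)
    return pd.DataFrame(
        {"ID": (index == event_date).astype(int),
         "LD": (index >= event_date).astype(int)},
        index=index,
    )
```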
Data
The data, covering the period from June 2015 to August 2018, were downloaded from Thomson Reuters Eikon Datastream and include the individual stock prices of all listed firms on the South Korean stock market. We use the KOSPI as a proxy for the market return and the 10-year Korean bond yield as a proxy for the risk-free rate. Daily factors for the Fama-French five-factor model were downloaded from the Kenneth French data library at Dartmouth College. 2 We collected firm-specific announcements from the Korean stock exchange. The announcements around the North Korea-U.S. summit were collected from various sources (see Table 1). 3,4
Empirical results
In general, we observe that the summit-related events tended to affect the South Korean stock market negatively and that the effect was widespread across sectors. We find 17 sectors experiencing negative abnormal returns following summit-related events (Table 2). The mining sector, for instance, exhibited an abnormal return of −7.03% (with a t-statistic of −2.26) on 16 May 2018, when North Korea cancelled talks with South Korea and threatened to cancel the summit.
On the other hand, the results show that a few sectors reacted positively to the events around the summit (Table 3). The general retailers sector, for example, had a positive abnormal return of 1.92% (with a t-statistic of 2.69) on 9 March 2018, when President Trump accepted the North Korean leader's invitation to meet at the summit. Another sector with a positive reaction is the tobacco sector, which experienced an abnormal return of 4.47% (with a t-statistic of 3.27) on 10 May 2018, when President Trump announced that he would meet Kim Jong-Un on 12 June 2018 to discuss the denuclearisation of the Korean Peninsula.
Furthermore, we find 14 sectors exhibiting mixed reactions following the events (Table 4). Although these sectors reacted both positively and negatively, the magnitude of the negative reactions tended to outweigh that of the positive reactions in most sectors. For instance, the industrial metals and mining sector reacted positively to event 8 (2.59% with a t-statistic of 2.12), when President Trump announced that he would meet with the North Korean leader, and to event 9 (2.84% with a t-statistic of 2.33), when North Korea unveiled a plan to dismantle its nuclear test site. The sector treated the two events as good news, as they aimed to bring peace to the Korean Peninsula, and hence reacted positively to both. However, when the summit was threatened with cancellation (events 10 and 13), the sector experienced negative abnormal returns of −7.96% (with a t-statistic of −6.53) and −6.82% (with a t-statistic of −5.50), respectively. This is an example of a sector that exhibited mixed reactions to the events around the summit, with negative reactions outweighing positive ones. Overall, our findings are consistent with those of Huh and Pyun (2018) in that the effects of the Trump-Kim summit on the South Korean stock market vary across sectors. Furthermore, the effects depend on the perceived likelihood of the summit: the South Korean stock market tends to react positively (negatively) to events that increase (decrease) the probability of the summit taking place.
Negative reactions to the uncertainty of the summit
The long-awaited meeting between the U.S. and North Korean leaders took a massive hit on 16 May 2018 (event 10), when North Korea cancelled talks with South Korea and threatened to cancel the summit. We observe that the resulting uncertainty about the summit yielded negative abnormal returns, with 21 sectors reacting negatively to events 10 and 13 (Figure 1). Following event 10, the three sectors recording the largest negative abnormal returns were construction and materials, industrial metals and mining, and mining (Table 5). Construction and materials, for instance, had a negative abnormal return of −6.48% (with a t-statistic of −4.71), and four out of five robustness tests (the exception being the Chesney test) support this finding (Table 7). In addition, the industrial metals and mining sector experienced the largest negative abnormal return (−7.96% with a t-statistic of −6.53) following event 10, a result supported by all of the robustness tests.
On 25 May 2018 (event 13), a day after Trump announced the cancellation of the summit, North Korea declared its willingness to continue discussing the matter. However, investors were still overwhelmed by the cancellation announcement and did not treat the declaration as good news. Twenty-one sectors experienced negative reactions following the event (Table 6). The results show that the industrial metals and mining sector continued to take the biggest hit, with an abnormal return of −6.82% (with a t-statistic of −5.50), a result supported by all of the robustness tests (Table 8). We also find that the construction and materials sector had the second-largest negative abnormal return following this event.

In addition, we estimate the cumulative abnormal returns for two and five days before events 10 and 13 to find out whether the market anticipated the events. We find that the market did not anticipate North Korea's threat to cancel the summit (event 10), as most sectors did not experience negative cumulative abnormal returns over the two and five days before event 10 (Table 5). On the other hand, the results show that investors in certain sectors (including construction and materials, food producers, forestry and papers, general industrials, and industrial metals and mining) were particularly pessimistic about the prospects of the summit, as these sectors exhibited negative abnormal returns two days before event 13 (Table 6). Our results suggest that these reactions might be caused by a spillover effect from event 12, when Trump cancelled the summit via a letter to Kim. An interesting observation is that the South Korean market is more responsive to summit-related announcements originating from the U.S. than to those coming from its neighbour. Furthermore, we check whether the reaction persisted by calculating the cumulative abnormal returns for two, five and ten days after the event day and find that the negative reactions following events 10 and 13 did not continue into the following days (Tables 5 and 6).
Systematic risk
The results show that the systematic risk of most sectors did not change in aggregate (Table 9). Mobile telecommunication is the only sector that experienced an increase in systematic risk on an aggregate basis (up from 0.57 to 2.07). We also document two sectors that exhibited a decline in systematic risk in aggregate: construction and materials (down from 0.54 to −0.63) and industrial metals and mining (down from 0.52 to −0.34). When we use Equation (11) to estimate the individual effect of each announcement on systematic risk, we observe a diamond risk phenomenon over the period between the North Korean leader expressing his willingness to discuss the fate of his nuclear arsenal with the U.S. and the U.S. President's acceptance of the North Korean leader's invitation (Figure 2). We find that most sectors experienced a surge in systematic risk (with the exception of three sectors: oil and gas, industrial metals and mining, and real estate investment trusts) following event 1, when the news about the possibility of the summit broke on 7 March 2018. Systematic risk reverted to its normal level following event 2 on 9 March 2018. This phenomenon shows that the South Korean stock market was sceptical about the summit.
In addition, we replace the short-term individual dummy variables (ID) with the long-term individual dummy variables (LD) and estimate Equation (12) to examine the effects of the events surrounding the summit on long-term systematic risk. We find that long-term systematic risk fluctuated heavily from event 6 to event 8, when North Korea and the U.S. were preparing for the summit and when President Trump officially announced the date of the summit in Singapore (Figure 3). Furthermore, we find a lower degree of the diamond risk structure between events 12 and 13, when President Trump cancelled the summit.

Figure 3. Long-term changes in systematic risk following the events around the summit.
Extensions
It is always difficult to select an appropriate asset pricing model to estimate expected returns, which is why researchers tend to use as many models as they can. In this section, we discuss some empirical evidence on the use of different asset pricing models and their variations to estimate expected returns. The models include the CAPM (model 1), the modified CAPM controlling for different market spillover effects (model 2), and the Fama-French five-factor model (model 3).
The results show that a more advanced asset pricing model (for example, controlling for more risk factors) occasionally over-estimates expected returns (in comparison to the CAPM). We use the evidence of abnormal returns on 16 May 2018 and 25 May 2018 as these two events produced the highest number of reactions.
Since daily returns do not vary across the models, the difference in abnormal returns is, in fact, similar to the difference in expected returns as captured by the three models. Table 10 reports the percentage differences in abnormal returns among the three asset pricing models on 16 May 2018 whereby D12 shows the difference in abnormal returns between model 1 and model 2, D13 indicates the difference in abnormal returns between model 1 and model 3, and D23 displays the difference in abnormal returns between model 2 and model 3. We find that using model 2 (controlling for risk premiums from various markets) occasionally over-estimates expected returns in several sectors in comparison to the CAPM (for example, household equipment and services).
The results also show that the Fama-French five-factor model performs better than both the CAPM and the modified CAPM in this scenario, since using this model yields significantly lower expected returns for all 21 sectors. This finding is, however, not consistent across other events. We observe that the CAPM may not be obsolete in certain circumstances in comparison to more advanced models (Table 11): expected returns as estimated by model 2 and model 3 on 25 May 2018 are higher than those produced by the CAPM in 3 and 7 sectors, respectively.
Conclusion
The North Korea-U.S. summit marked a historic political event that directly affected South Korea in many respects, since the country has always striven for political stability in the Korean Peninsula, which would provide a better business environment for Korean firms. Our study examines how the South Korean stock market reacted to summit-related events using the event study methodology and various robustness tests. Our findings show that South Korean firms and investors were desperately looking forward to a successful meeting between North Korea and the U.S., since most negative reactions arose in response to events that created uncertainty about the summit. Likewise, most positive reactions occurred following events that were conducive to the materialisation of the summit. We also find a diamond risk phenomenon in the South Korean stock market, reflecting its scepticism about the summit. The contribution of our study to the literature is threefold. First, it shows how each sector in South Korea responds to summit-related events in terms of risk and return. Second, we provide empirical evidence on the responsiveness of the South Korean market to announcements originating from the U.S. Finally, our study provides a comprehensive comparison of the performance of various asset pricing models used in event studies.
One limitation of the event study methodology is that it is difficult to differentiate the real effect of an event from noise. Our study attempts to resolve this issue by removing firms that release firm-specific information in the window of 15 days before and after the event day. The drawback of this approach, however, is that it removes such firms even if the firm-specific information might not have any impact on them. Developing a methodology to resolve this issue is beyond the scope of this paper, and we leave this question to future studies.
"Economics",
"Political Science"
] |
A Wars2 mutant mouse shows a sex and diet specific change in fat distribution, reduced food intake and depot-specific upregulation of WAT browning
Background: Increased waist-to-hip ratio (WHR) is associated with increased mortality and risk of type 2 diabetes and cardiovascular disease. The TBX15-WARS2 locus has consistently been associated with increased WHR. A previous study of the hypomorphic Wars2 V117L/V117L mouse model found phenotypes including severely reduced fat mass and white adipose tissue (WAT) browning, suggesting that Wars2 could be a potential modulator of fat distribution and WAT browning. Methods: To test for differences in browning induction across different adipose depots of Wars2 V117L/V117L mice, we measured multiple browning markers of a 4-month old chow-fed cohort in subcutaneous and visceral WAT and brown adipose tissue (BAT). To explain previously observed fat mass loss, we also tested for upregulation of the plasma mitokines FGF21 and GDF15 and for differences in food intake in the same cohort. Finally, to test for diet-associated differences in fat distribution, we placed Wars2 V117L/V117L mice on low-fat or high-fat diet (LFD, HFD), assessed their body composition by Echo-MRI and compared terminal adipose depot weights at 6 months of age. Results: The chow-fed Wars2 V117L/V117L mice showed more changes in WAT browning marker gene expression in the subcutaneous inguinal WAT depot (iWAT) than in the visceral gonadal WAT depot (gWAT). These mice also demonstrated reduced food intake, elevated plasma FGF21 and GDF15, and elevated Fgf21 and Gdf15 mRNA in heart and BAT. When exposed to HFD, the Wars2 V117L/V117L mice showed resistance to diet-induced obesity and a male- and HFD-specific increase of the gWAT:iWAT ratio. Conclusion: Severe reduction of Wars2 gene function causes a systemic phenotype which leads to upregulation of FGF21 and GDF15, resulting in reduced food intake and depot-specific changes in browning and fat mass.
The TBX15-WARS2 locus, which spans ~1 Mb and includes the genes TBX15 and WARS2 and regions downstream of SPAG17, is consistently associated with WHR across multiple meta-analyses (Heid et al., 2010; Shungin et al., 2015; Pulit et al., 2018). Since the majority of SNPs in this region overlap the non-coding part of the genome, the effector genes remain to be identified (Maurano et al., 2012; Mušo et al., 2022). WARS2 encodes the mitochondrial tryptophanyl-tRNA synthetase, a protein essential for mitochondrial translation, recently associated with angiogenesis and brown adipose tissue metabolism (Wang et al., 2016; Pravenec et al., 2017). Expression of both TBX15 and WARS2 in subcutaneous adipose tissue has been associated with multiple metabolic traits, including BMI and the Matsuda insulin sensitivity index (Civelek et al., 2017). The GTEx database links the TBX15-WARS2 locus risk SNPs to the expression of WARS2 in multiple human tissues, but a few studies have also linked the locus to TBX15 expression in adipose tissue (Heid et al., 2010; GTEx-Consortium, 2013; Civelek et al., 2017).
Our group has previously established a Wars2 V117L/V117L mouse model in which an N-ethyl-N-nitrosourea (ENU)-induced hypomorphic mutation causes defective splicing and leaves only 0%-30% of the full-length protein across different tissues (Agnew et al., 2018). Homozygous Wars2 V117L/V117L mice showed mitochondrial electron transport chain (ETC) complex deficiency in multiple tissues, which directly or indirectly resulted in hypertrophic cardiomyopathy, sensorineural hearing loss and failure to gain fat mass. Importantly, white adipose tissue (WAT) showed upregulation of mitochondria and browning markers, such as uncoupling protein 1 (UCP1) protein and mRNA levels of the cell death-inducing DNA fragmentation factor subunit alpha (DFFA)-like effector a (Cidea) and iodothyronine deiodinase 2 (Dio2) genes. On the other hand, the brown adipose tissue (BAT) was dysfunctional and showed reduced browning marker expression. Elevated serum fibroblast growth factor-21 (FGF21), together with elevated Fgf21 mRNA in heart, muscle and white adipose tissue, suggested a mechanism by which at least part of the browning of adipose tissue may be mediated systemically (Fisher et al., 2012).
Another mitokine frequently co-induced with FGF21 in response to mitochondrial stress is growth/differentiation factor 15 (GDF15). GDF15 has previously been reported to induce taste aversion and suppress food intake by acting in the hindbrain, where its receptor, GDNF family receptor α-like (GFRAL), is expressed (Mullican et al., 2017; Patel et al., 2019). We hypothesised that a possible elevation of GDF15 levels could thus be affecting food intake and, in effect, fat mass in Wars2 V117L/V117L mice.
In this follow-up study, we set out to explore whether WARS2 could be a regulator of white adipose browning and fat distribution. We initially tested whether the previously observed WAT browning effects in Wars2 V117L/V117L mice differed between depots, and whether changes in FGF21, GDF15 and food intake are observed and could thus explain the failure to gain fat mass in the chow-fed mice. Given that human polymorphisms in the TBX15-WARS2 locus are associated with a less severe reduction in WARS2 expression (GTEx-Consortium, 2013), we included heterozygous Wars2 +/V117L mice in our study. We evaluated the effect of a high- and low-fat diet challenge (HFD, 60% kcal fat; LFD, 10% kcal fat) on adiposity and tested for any diet- and depot-specific differences in fat mass loss.
Animal models
All mice used in this study were housed in the Mary Lyon Centre at MRC Harwell. Mice were kept and studied in accordance with UK Home Office legislation and local ethical guidelines issued by the Medical Research Council (Responsibility in the Use of Animals for Medical Research, July 1993; Home Office license 30/3146 and 30/3070). Procedures were approved by the MRC Harwell Animal Welfare and Ethical Review Board (AWERB). Mice were kept under controlled light (light 7 a.m.-7 p.m., dark 7 p.m.-7 a.m.), temperature (21°C ± 2°C) and humidity (55% ± 10%) conditions. They had free access to water (9-13 ppm chlorine) and were fed ad libitum on a commercial chow diet (SDS Rat and Mouse No. 3 Breeding diet, RM3, 3.6 kcal/g) unless stated otherwise. Mice were group housed unless stated otherwise and were randomised into sex-matched cages on weaning. Researchers were blinded to the genotype of mice until analysis of the data.
Experiment 1-molecular and hormonal investigation of Wars2 V117L/V117L mice

Wars2 V117L/V117L mice were generated and genotyped as previously described (Potter et al., 2016; Agnew et al., 2018). Tissues and plasma were collected in experiments previously described (Agnew et al., 2018). Briefly, 4-month-old male and female Wars2 V117L/V117L and Wars2 +/+ mice (n = 5-7) were humanely killed by terminal anaesthesia, and retro-orbital blood was collected into lithium-heparin microvette tubes (CB300, Sarstedt, Numbrecht, Germany). Death was confirmed by cervical dislocation, and mice were then dissected and the kidney, liver, muscle, heart, iWAT, gWAT, and BAT collected. Tissues were placed directly in cryotubes and snap frozen in liquid nitrogen, and samples were stored at −70°C before subsequent analyses by qPCR and Western blot.
Experiment 2-food intake measurements in Wars2 V117L/V117L mice

Four-week-old male and female Wars2 V117L/V117L, Wars2 +/V117L, and Wars2 +/+ mice were pair-housed by genotype with ad libitum access to RM3 diet (n = 4-10 cages). Food was weighed twice a week until 16 weeks of age, and mice were weighed weekly. The mean weekly food intake per cage was calculated and cumulative food intake analysed.
Experiment 3-body fat distribution in Wars2 V117L/V117L mice on HFD

We investigated body composition and fat distribution in male and female Wars2 V117L/V117L, Wars2 +/V117L, and Wars2 +/+ mice challenged with a high-fat diet (HFD). Experimental cohort numbers were based on estimates made using GraphPad StatMate with gWAT:iWAT ratios from previous experiments. We generated three cohorts of males and females, which were weaned directly onto HFD (Research Diets, D12492) or matched low-fat diet (LFD, Research Diets, D12450J) (n = 9-22, 185 mice in total).
Total body mass was measured every 2 weeks from 4 weeks of age on a scale calibrated to 0.01 g. Body composition was measured every 2 weeks using an Echo-MRI (EMR-136-M, Echo-MRI, Texas, United States); the readings were total fat mass (g) and total lean mass (g). At 24 weeks old, mice were humanely killed by cervical dislocation and individual fat depots were dissected and weighed: interscapular BAT (iBAT), interscapular WAT (isWAT), perirenal BAT (prBAT), perirenal WAT (prWAT), inguinal WAT (iWAT), gonadal WAT (gWAT), mesenteric WAT (mWAT), and epicardial WAT (cWAT). The gWAT:iWAT ratio was calculated from these weights as an indicator of visceral:subcutaneous fat distribution, as described by Gray et al. (2006).
Quantitative PCR
Total RNA from adipose tissues (experiment 1) was extracted using the Direct-zol™ RNA MiniPrep Plus kit protocol (Zymo Research, #R2071). RNA was reverse-transcribed using the SuperScript™ III Reverse Transcriptase Kit (ThermoFisher) to generate 2 μg of cDNA. mRNA gene expression was assayed using the TaqMan system (ThermoFisher) with TaqMan FAM dye-labeled probes (Applied Biosystems, Invitrogen, United States) according to the manufacturer's protocols. Assays were carried out using an ABI PRISM 7500 Fast Real-Time PCR System (Applied Biosystems), with quantitation by the comparative CT (ΔΔCT) method. Data was normalised to the geometric mean of two house-keeping genes specific to each tissue.
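For illustration, the comparative CT quantitation described above reduces to the following arithmetic; this is a minimal sketch assuming ~100% amplification efficiency (fold change = 2^-ΔΔCT), with hypothetical function and argument names.

```python
import numpy as np

def ddct_fold_change(ct_target, ct_hk1, ct_hk2, ct_target_wt, ct_hk1_wt, ct_hk2_wt):
    # Normalise to the geometric mean of two housekeeping genes; in CT
    # (log2) space the geometric mean becomes an arithmetic mean
    dct = np.asarray(ct_target) - (np.asarray(ct_hk1) + np.asarray(ct_hk2)) / 2.0
    dct_wt = np.asarray(ct_target_wt) - (np.asarray(ct_hk1_wt) + np.asarray(ct_hk2_wt)) / 2.0
    ddct = dct - dct_wt.mean()   # calibrate to the wild-type group mean
    return 2.0 ** (-ddct)        # fold change relative to wild-type
```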
A mouse GeNORM analysis (PrimerDesign) for 6-8 genes was used to determine the most stable house-keeping genes. Taqman probes used in this study:
FIGURE 1
Increased browning in inguinal WAT (iWAT) and gonadal WAT (gWAT) of 4-month old male Wars2 V117L/V117L mice. (A,B) Relative expression of browning, mitochondrial biogenesis and adipose differentiation markers in iWAT and gWAT, respectively, normalised to the geometric mean of Canx and Ywhaz. Data was log-transformed and assessed by unpaired t-test or Mann-Whitney test (iWAT for Dio2 and Fgf21) based on their distribution; n = 6 and 5 wildtype and homozygotes, respectively, in iWAT and gWAT. (C,D) Western blot and quantification of UCP1 protein levels in male iWAT relative to α-tubulin and the WT average, n = 5; tested by unpaired t-test with Welch's correction. (E) qPCR analysis of the mt-Nd1:Gapdh ratio, signifying the mitochondrial:genomic DNA (mtDNA:gDNA) ratio; 2-way ANOVA with post-hoc comparison of genotypes, n = 5. All data shown as mean ± SD.
Western blot

Instruments). Tissue homogenates were centrifuged at 13,000 rpm for 30 min at 4°C. For adipose samples, the floating lipid layer was carefully pierced with a P200 pipette tip, the supernatant transferred to a new Eppendorf tube, and samples spun again 1-2 more times until all lipid was removed. The final supernatants were isolated and protein concentration quantified by the DC Protein Assay (BioRad). The samples were diluted and supplemented with NuPAGE LDS Sample Buffer (4X) and NuPAGE Reducing Agent (10X) and were denatured by heating to 70°C for 10 min. Gel electrophoresis was performed using linear gradient NuPAGE 4%-12% Bis-Tris Protein Gels, 1.0 mm, 12-well (ThermoFisher) with 1X NuPAGE MOPS SDS Running Buffer in the Mini Gel Tank (ThermoFisher). 20 μg protein was loaded per well and samples run at 200 V for 50 min. The experimental and control groups (WT, HOM) were arranged in an alternating order to avoid local transfer biases.
Proteins were then wet-transferred to a PVDF membrane (Hybond-P, GE Healthcare Amersham, MU60103A) using the Mini Blot Module Set (ThermoFisher) in 1X NuPAGE Transfer Buffer (ThermoFisher) containing 10% methanol. Protein membranes were blocked in 5% skimmed milk TBST or 5% BSA TBST for 1 h at room temperature before incubation with primary antibodies overnight at 4°C. After 3-5 5-min washes with TBST, the membranes were incubated with species-specific secondary antibody conjugated to horseradish peroxidase (HRP) in TBST-milk for 1 h at room temperature. Then, membranes were washed 5 × 5 min in TBST. The chemiluminescent reaction was carried out using the Pierce ECL Plus Western Blotting Substrate (ThermoFisher) and membranes imaged by exposure to CL-Xposure Film (ThermoFisher). Protein bands were quantified using ImageJ and normalised to the respective house-keeping protein. Primary antibodies used in this study: UCP1 at 1:1,000 dilution (Abcam, ab23841), α-tubulin at 1:10,000 (Cell Signalling, 2144).

FIGURE 2
GDF15 and FGF21 levels are elevated in 4-month old Wars2 V117L/V117L mouse plasma. ELISA analysis of FGF21 (A) and GDF15 (B) levels in males (n = 5-6) and females (n = 6-7). Analysis by 2-way ANOVA followed by post-hoc Sidak multiple comparison. qPCR analysis of Fgf21 (C) and Gdf15 (D) levels in multiple tissues from the female mice used in (A) and (B) (n = 5-7). Data was log-transformed and assessed by unpaired t-test or Mann-Whitney test (Fgf21 in heart). Mean raw CT values are shown for WT and HOM tissues for comparison of expression between tissues. All data shown as mean ± SD.
Mitochondrial DNA copy number assay
Mitochondrial content in adipose tissue was assessed by the ratio of mitochondrial DNA (mtDNA) to genomic DNA (gDNA), as measured by qRT-PCR. Total DNA, which contains both gDNA and mtDNA, was extracted from adipose tissue (experiment 1) using the DNeasy Blood and Tissue Kit (Qiagen, #69504). We amplified the mouse genomic gene glyceraldehyde 3-phosphate dehydrogenase (Gapdh) and the mouse mitochondrial gene mitochondrially encoded NADH:ubiquinone oxidoreductase core subunit 1 (mt-Nd1) as proxies for genomic and mitochondrial DNA, respectively. Quantitative PCR was performed with 10 ng DNA per reaction and 5 μM of each primer, using the Fast SYBR Green System on an ABI PRISM 7500 Fast Real-Time PCR Machine (Applied Biosystems). All samples were run in technical triplicates. Primers: mt-Nd1-Fw (CCCATTCGCGTTATTCTT), mt-Nd1-Rv (AAGTTGATCGTAACGGAAGC), Gapdh-Fw (CAAGGAGTAAGAAACCCTGGACC), Gapdh-Rv (CGAGTTGGGATAGGGCCTCT).
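Under the same ~100% efficiency assumption, the mtDNA:gDNA ratio from the mt-Nd1 and Gapdh CT values reduces to a single line; this sketch is illustrative and not the authors' analysis code.

```python
def mtdna_to_gdna_ratio(ct_mtnd1: float, ct_gapdh: float) -> float:
    # Each CT unit corresponds to one doubling of template, so the
    # relative copy number is 2^(CT_gDNA - CT_mtDNA)
    return 2.0 ** (ct_gapdh - ct_mtnd1)
```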
FIGURE 3
Food intake and bodyweight are reduced in Wars2 V117L/V117L mice. Cumulative food intake in (A) males (n = 4-10) and (B) females (n = 8-9), where n represents one cage of two mice of the same genotype. Bodyweight in the same cohort of (C) males (n = 8-20) and (D) females (n = 12-18), where n represents each mouse. Significance at specific time points was calculated with 1-way ANOVA with multiple comparisons. Significance symbols for WT × HET: *, HET × HOM: +.
FIGURE 4
Wars2 V117L/V117L mice fail to gain fat and lean mass during growth and due to high-fat diet feeding. Three cohorts of 6-month old male (n = 9-18) mice on low-fat (LFD) or high-fat diet (HFD) were pooled and assessed for body weight (A,B), fat mass (C,D), and lean mass (E,F), respectively. Genotypes: Wars2 +/+ (WT), Wars2 +/V117L (HET), and Wars2 V117L/V117L (HOM). For male mice one homozygote on a LFD and one wildtype on a HFD were excluded as outliers (identified using ROUT in GraphPad PRISM 9). Significance at specific time points was calculated with 2-way ANOVA with Tukey's multiple comparison analysis for all groups. Significant difference between Wars2 +/+ (WT) and Wars2 V117L/V117L (HOM) is shown as *p < 0.05, **p < 0.01, ***p < 0.001. Comparisons between other groups are depicted in the same way using the symbols (+, &, ×, $, #) annotated in the top right corner.
Statistical analysis
All statistical analyses were performed in GraphPad Prism 9. Data outliers were identified using ROUT and omitted as indicated in each figure legend. Normality of distribution was evaluated using the D'Agostino & Pearson normality test. Data were transformed where necessary in order to normalise their distribution prior to statistical analysis, and details of the statistical tests used are described in each figure legend. Area under the curve for bodyweight, fat and lean mass was calculated in Prism with Y = 0 as a baseline. qPCR data was log-transformed and is shown as mean ± SD for visualisation and statistical analysis.
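The area-under-the-curve summary with Y = 0 as baseline corresponds to simple trapezoidal integration of each trajectory; a minimal sketch with hypothetical timepoints and masses follows.

```python
import numpy as np

weeks = np.array([4, 6, 8, 10, 12, 14, 16])                   # hypothetical timepoints
mass = np.array([15.2, 18.1, 20.4, 21.9, 23.0, 23.8, 24.5])   # e.g. bodyweight (g)
auc = np.trapz(mass, weeks)  # trapezoidal rule with Y = 0 as baseline
```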
Browning is increased in both subcutaneous and visceral WAT depots of Wars2 V117L/V117L mice on chow diet
We set out to test whether the browning effects previously observed in subcutaneous iWAT could also be observed in visceral gWAT, as assessed by the mRNA expression of a panel of browning and mitochondrial biogenesis marker genes in these mice at 4 months of age. In male iWAT of Wars2 V117L/V117L mice, as expected, we found increased expression of browning genes: Cidea increased by 0.61 ± 0.25 logFC (p = 0.0343), cytochrome c oxidase polypeptide 7A (Cox7a) by 0.60 ± 0.25 logFC (p = 0.0414) and Dio2 by 0.93 ± 0.30 logFC (p = 0.0133) (Figure 1A). Male gWAT showed increases of 0.56 ± 0.13 logFC (p = 0.0025) and 0.53 ± 0.10 logFC (p = 0.0006) in the mRNA levels of Cidea and of the master regulator of mitochondrial biogenesis, peroxisome proliferator-activated receptor gamma coactivator 1-α (Pgc1α), respectively (Figure 1B). In female iWAT of Wars2 V117L/V117L mice, Cidea, Cox7a, Pgc1α, and Pparα were increased by 0.49 ± 0.15, 0.47 ± 0.15, 0.50 ± 0.12 and 0.33 ± 0.11 logFC, respectively (p = 0.0122, 0.0130, 0.0026 and 0.0179, respectively) (Supplementary Figure S1A). The expression of browning genes in female gWAT was highly variable, and Pgc1α was the only significantly upregulated gene, at 0.51 ± 0.17 logFC (p = 0.0176) (Supplementary Figure S1B).

We then assessed browning by immunoblotting for UCP1 protein in iWAT. In males, we found a significant increase (p = 0.0437), with a similar non-significant trend in females (Figures 1C,D; Supplementary Figures S1C,D). We next assessed mitochondrial mass as another marker of browning. Using a qPCR assay targeting both mtDNA and gDNA genes, we observed significant increases of 0.43 ± 0.08 logFC and 0.23 ± 0.08 logFC in the mtDNA:gDNA ratio in male Wars2 V117L/V117L iWAT (p = 0.0002) and gWAT (p = 0.0264), respectively (Figure 1E). No genotype-driven difference was seen in female mice (Supplementary Figure S1E). In agreement with previous findings, BAT showed the reverse effect, with reduced mitochondrial DNA content in both sexes (Supplementary Figure S2E), reduced browning marker gene expression in both sexes (Supplementary Figures S2A,B) and lower UCP1 expression in males with a trend for reduction in females (Supplementary Figures S2C,D). Together this is evidence of increased WAT browning, observed on multiple levels (mRNA, protein, mtDNA) in both the iWAT and gWAT depots of Wars2 V117L/V117L mice, with the specific effects differing between sexes and iWAT generally showing larger fold changes.
Mitokines FGF21 and GDF15 are elevated in Wars2 V117L/V117L mice on chow diet

12-month old Wars2 V117L/V117L mice were previously shown to have mitochondrial ETC complex deficiencies in multiple tissues and elevated plasma levels of the mitokine FGF21, which may at least partially explain the WAT browning. We thus decided to measure circulating FGF21 and the appetite-suppressing mitokine GDF15 in free-fed 4-month old mice (Patel et al., 2019). We observed an overall genotype effect (p = 0.0485) on FGF21 levels, with an 86% increase (p = 0.0364) in female Wars2 V117L/V117L mice and a non-significant trend for an increase in males (Figure 2A). GDF15 was significantly increased in Wars2 V117L/V117L mice of both sexes, with a 112% increase (p = 0.0014) in males and a 158% increase (p = 0.0001) in females (Figure 2B). We followed with a qPCR study of multiple tissues, which showed that Fgf21 expression was elevated by 1.80 ± 0.13 (mean difference in log10 fold change (logFC) ± SE; p = 0.0012), 0.38 ± 0.11 logFC (p = 0.0055), 0.47 ± 0.16 logFC (p = 0.0157) and 0.41 ± 0.18 logFC (p = 0.0447) in the heart, BAT, muscle and kidney of Wars2 V117L/V117L mice, respectively (Figure 2C). Gdf15 was elevated by 0.66 ± 0.09 logFC (p < 0.0001) and 0.84 ± 0.12 logFC (p < 0.0001) in the heart and BAT, respectively (Figure 2D). We also tested for changes in Atf4 levels, one of the upstream regulators of Gdf15 and Fgf21, but found no difference in any of the tissues (Supplementary Figure S3).
Food intake is reduced in Wars2 V117L/V117L mice on a chow diet
We hypothesised that the elevated GDF15 levels may contribute to reduced food intake in Wars2 V117L/V117L mice. To test for an effect on food intake, we set up an independent cohort of pair-housed mice on regular RM3 chow diet. Male Wars2 V117L/V117L mice showed reduced cumulative food intake compared to wild-type mice from the first timepoint at 7 weeks of age (p = 0.045), while female Wars2 V117L/V117L mice showed significantly lower food intake from 10 weeks onwards (p = 0.0148) (Figures 3A,B). At 14 weeks of age, male and female cumulative food intake was 17% (p = 0.0016) and 8.4% (p = 0.0020) lower than wild-type, respectively (Figures 3A,B). This reduced intake is thus likely to have contributed to the lower bodyweight seen in these mice (Figures 3C,D).
Homozygous Wars2 V117L/V117L mice fail to gain fat mass during growth and high-fat diet feeding

In data from a small cohort of 6-month old Wars2 V117L/V117L males, we previously showed a trend towards an increased ratio of gWAT:iWAT mass (Agnew et al., 2018).
FIGURE 6
Gonadal to inguinal WAT (gWAT:iWAT) ratio is elevated in Wars2 V117L/V117L males on a HFD. gWAT:iWAT ratio was calculated for 6-month old male (n = 9-18) and female (n = 11-22) mice on low-fat (LFD) or high-fat (HFD) diets (A,B). The individual gWAT (C,D) and iWAT (E,F) weights are shown below. To fit a normal distribution, male and female gWAT:iWAT ratio data and male iWAT data were transformed by Y = log2(Y). The gWAT male and female data were normally distributed (D'Agostino & Pearson normality test) and the iWAT female data showed some deviation from normality (p = 0.0476). Significance was tested using 2-way ANOVA with Tukey's multiple comparison test between genotypes for each diet. Significant differences in multiple comparisons of WT, HET and HOM on each diet are depicted as *p < 0.05, **p < 0.01, ***p < 0.001.
To investigate whether an altered diet could reveal a fat distribution phenotype, or whether it would alleviate the failure to gain fat mass found in these mice, Wars2 V117L/V117L, Wars2 +/V117L and Wars2 +/+ mice were placed on HFD and matched LFD. As expected, HFD increased body weight and fat mass in wild-type Wars2 +/+ (week 24, males: p < 0.0001, p < 0.0001; females: p < 0.0001, p < 0.0001, respectively) and heterozygous Wars2 +/V117L mice (week 24, males: p = 0.0363, p < 0.0001; females: p < 0.0001, p < 0.0001, respectively). However, no significant effect of HFD on body weight was observed in Wars2 V117L/V117L mice of either sex (Figures 4A,B, 5A,B). On LFD, significant bodyweight differences between wild-type and Wars2 V117L/V117L mice were observed and persisted from 14 (p = 0.0126) and 16 weeks of age (p = 0.0176) for males and females, respectively. On HFD, significance was reached earlier, at 6 (p = 0.0016) and 12 (p < 0.0001) weeks of age, respectively. Similar effects were observed between wild-type and homozygous mice for fat mass, which became significant from 6 (HFD) versus 12 (LFD) weeks in males and from 10 (HFD) versus 16 (LFD) weeks in females, i.e. 6 weeks earlier on HFD than on LFD (Figures 4C,D, 5C,D). Significant differences were also observed in the lean mass of Wars2 V117L/V117L mice, but these were of a smaller magnitude (Figures 4E,F, 5E,F). When analysed over the time course using area under the curve, these differences were maintained (Supplementary Table S1). In summary, most of the weight differences in Wars2 V117L/V117L mice were due to the reduction in fat mass, and administering a high-fat diet exacerbated these differences.
For all three measures, heterozygous Wars2 +/V117L mice also showed significant differences from Wars2 V117L/V117L mice at an earlier age than wild-type mice did (Figures 4A-F, 5A-F). A significant increase in the bodyweight (p = 0.0476, p = 0.0416) and fat mass (p = 0.0418, p = 0.0430) of Wars2 +/V117L females on HFD was observed compared to wild-type mice at 6 and 8 weeks of age, respectively, but this change did not persist at later timepoints. In line with this, 12-month-old heterozygous female knockout Wars2 +/− mice did not show any differences in body weight or composition on either diet (Supplementary Figure S6). In summary, we did not observe any reproducible differences between the heterozygous Wars2 +/V117L or Wars2 +/− mice and the wild-type mice.
Wars2 V117L/V117L mice show reduced weights of multiple fat depots and a HFD- and male-specific elevation in gWAT:iWAT ratio

Since the majority of the weight differences in the Wars2 V117L/V117L mice could be explained by fat mass, we next evaluated differences in fat distribution by weighing fat depots from 24-week old mice, and we considered the ratio of gWAT:iWAT mass (Figure 6; Supplementary Figures S4, S5). Indeed, almost all fat depots weighed less in Wars2 V117L/V117L than in wild-type or heterozygous mice. The only exceptions were male HFD gWAT, female HFD perirenal BAT and female LFD perirenal WAT, which did not differ from Wars2 +/+ or Wars2 +/V117L. The lack of weight change in male Wars2 V117L/V117L gWAT on HFD, together with the 1.372 ± 0.1755 g lower iWAT weight (p < 0.0001), resulted in an increased gWAT:iWAT ratio (p < 0.0001), indicating a higher visceral to subcutaneous fat ratio (Figures 6A,C,E). Interestingly, no such trend was replicated in females, where both iWAT and gWAT depot weights were reduced, by 1.508 ± 0.2398 g (p < 0.001) and 1.684 ± 0.2521 g (p < 0.001), respectively (Figures 6B,D,F). No significant differences were observed between the heterozygous and wild-type mice for any of the fat depots, apart from male iWAT on a LFD (p < 0.05). Similarly, fat depots of 12-month-old female heterozygous Wars2 +/− mice in a separate cohort did not show any significant differences (Supplementary Figure S7). This demonstrates that Wars2 V117L/V117L mice have much lower fat mass, which is unequally shared between fat depots and results in a male- and HFD-specific increase in gWAT:iWAT ratio.
Discussion
We assessed fat depot differences in browning and showed that the magnitude of browning effects is greater in iWAT than in gWAT of chow-fed 4-month-old Wars2 V117L/V117L mice. This agrees with previous research which showed that gWAT has low browning marker expression and a very low browning capacity compared to iWAT (de Jong et al., 2015; Zuriaga et al., 2017). Our findings suggest that the adipose phenotypes in Wars2 V117L/V117L mice are driven systemically, secondary to a severe mitochondrial dysfunction in the heart, BAT and muscle. Firstly, we confirmed the upregulation of FGF21, an established inducer of WAT browning (Fisher et al., 2012; Agnew et al., 2018). It is possible that other inducers of browning such as catecholamines could also be involved, but these were not measured in this study (Frontini et al., 2013). Secondly, we showed higher plasma GDF15 in these mice, which may contribute to the observed lower food intake and thereby to the reduced bodyweight and fat mass, as shown in other models of mitochondrial disease (Chung et al., 2017).
We have shown that Wars2 V117L/V117L mice fail to gain fat mass even when challenged with a HFD, accompanied by a male- and HFD-specific upregulation of the gWAT:iWAT ratio. This was likely driven by the lower mass of iWAT and the relatively unchanged visceral gWAT on HFD. In general, all other visceral depots showed a reduction of fat mass in male Wars2 V117L/V117L mice. It would be interesting to extend these observations using other methods, such as a small-animal X-ray computed tomography (CT) system, that could accurately verify the effect on overall fat distribution over time (Sasser et al., 2012). This male-specific effect is in line with sexual dimorphism, which is an established feature of fat distribution (Pulit et al., 2017). In fact, the TBX15-WARS2 locus also contains an independent male-specific WHRadjBMI-association signal (Shungin et al., 2015). Further study will be required to explain the diet specificity. However, HFD was previously shown to induce browning, and it could thus potentiate the depot-specific differences observed in chow-fed animals and contribute to the HFD-specific fat mass loss seen in iWAT but not gWAT (García-Ruiz et al., 2015). It is important to note that all mice in this study were housed at "room temperature" (21°C ± 2°C) and not at the thermoneutral temperature for small rodents (29°C-30°C). Housing mice at room temperature leads to a mild cold stress and permits browning of WAT (e.g., McKie et al., 2019; Raun et al., 2020), while not directly influencing fat mass (Small et al., 2018). Since we did not house mice at thermoneutrality, it is not possible to discern whether the browning gene expression effect, or indeed the fat depot differences we detected in Wars2 V117L/V117L mice, would be maintained under thermoneutral conditions; this would be an interesting experiment for further investigations outside the scope of this study. Nevertheless, mice harbouring the Wars2 V117L/V117L mutation have greater mRNA expression of browning markers in WAT, greater UCP1 protein and greater mitochondrial mass than their wild-type littermates, demonstrating browning of the WAT in these mice. Whether this is a direct effect of the Wars2 V117L/V117L mutation or an interaction between the genotype and ambient temperature cannot be discerned by our current study.
Is it possible that a similar mechanism relating mitochondrial failure in the heart and other tissues to WAT browning might drive the WHR signal in humans? Indeed, rare variants in genes of the mitochondrial genome and in another member of the ARS2 family, DARS2, have been associated with WHR (Justice et al., 2019). Furthermore, in the Common Metabolic Diseases Knowledge Portal, the TBX15-WARS2 locus is associated with cardiovascular traits such as stroke severity and peripheral vascular disease in people with type 2 diabetes (cmdgenkp.org, 2021a), whilst variants in the WARS2 gene are linked to diastolic blood pressure (cmdgenkp.org, 2021b). This suggests that a systemic mechanism could explain the WHR GWAS association in humans.
In conclusion, we have shown that a hypomorphic mutation in the Wars2 gene causes a severe failure to gain body mass and results in changes to fat distribution in male mice on a HFD. We also reveal differences in browning propensity of different WAT depots and elevation of circulating FGF21 and GDF15 which likely partly explain some of these phenotypes. These data support a potential functional role for WARS2 in the WHRadjBMI TBX15-WARS2 locus, which could be further investigated in human studies where WARS2 expression varies by genotype.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Ethics statement
Mice were kept and studied in accordance with UK Home Office legislation and local ethical guidelines issued by the Medical Research Council (Responsibility in the Use of Animals for Medical Research, July 1993; Home Office license 30/3146 and 30/3070). Procedures were reviewed and approved by the MRC Harwell Animal Welfare and Ethical Review Board (AWERB).
Author contributions
MM, RC, and RD designed and supervised the experiments, analysed data, prepared figures, and wrote the manuscript with input from all authors. MM carried out the molecular studies and body composition measurements. Food intake analysis was carried out by LB, MM, and LV. Cohorts were managed by LB and LV. Fat depot weight measurements were performed by MY, RD, and MM. LZ assisted with molecular studies. KB and PB carried out the GDF15 ELISAs.
Funding
This work was funded by the Medical Research Council (MC_U142661184). MM was funded by an MRC Doctoral Training studentship. GDF15 assays were conducted at the MRC MDU Mouse Biochemistry Laboratory (MC_UU_00014/5).
Multi-Authority Attribute-Based Encryption with Dynamic Membership from Lattices
Attribute-based encryption is useful for one-to-many encrypted message sending. However, most attribute-based encryption schemes authorize and issue attributes to users by a single authority. Such a scenario conflicts with practical requirements and may cause the key-escrow problem. Hence, decentralization of authority is a critical issue in attribute-based encryption. Besides, dynamic membership management is another important issue in attribute-based encryption. With dynamic membership management, a system may update a user's attributes without affecting other users, making the system more flexible and practical. On the other hand, with the rapid development of quantum computers nowadays, encryption schemes based on traditional mathematical problems are at risk from quantum attacks. Among the existing quantum-resistant mathematical architectures, lattice-based cryptography is the most widely studied. Thus, we propose a multi-authority attribute-based encryption scheme with dynamic membership from lattices to solve the above problems. Moreover, we also formally prove the security of the proposed scheme under the decisional learning with errors assumption.
I. INTRODUCTION
To protect the personal privacy of messages sent over the network, public key encryption (PKE) schemes have been widely researched and used to protect the confidentiality of data. In a PKE scheme, each user owns a key pair consisting of a public key and a private key. The sender encrypts the plaintext with the public key, and the receiver decrypts the ciphertext with the private key. Moreover, there are many different aspects of research on PKE, such as identity-based encryption (IBE) [1], attribute-based encryption (ABE) [2], dynamic encryption [3], re-encryption [4], searchable encryption [5], and so on. However, to prevent man-in-the-middle attacks, a certificate authority (CA) is needed to issue and manage a certificate for each user's public key.
ABE is a kind of PKE suitable for one-to-many message sending. To reduce the requirement for certificates in PKE schemes, IBE was proposed, where the user's public key is related to his/her identity. However, IBE is only appropriate for one-to-one message-sending scenarios. When a sender wants to send the same message to a group of receivers, he/she must first know the identity of each receiver and then individually encrypt the message with each identity. To improve the efficiency of one-to-many encrypted message sending, in 2005, Sahai and Waters [6] proposed the first fuzzy identity-based encryption (Fuzzy IBE) scheme. In their scheme, each user is recognized by a specific set of attributes. A sender designates a policy for encrypting the plaintext, and receivers can decrypt the ciphertext only when their attribute sets match the policy. Thus, Fuzzy IBE is considered a pioneer of ABE.
ABE can be split into two categories according to the different strategies of access structures, namely key-policy ABE (KP-ABE) [7] and ciphertext-policy ABE (CP-ABE) [8]. In KP-ABE, the key generation center (KGC) is responsible for embedding an access structure into the user's private key, and the sender can determine the attribute set of the ciphertext during the encryption process. By contrast, in CP-ABE, the KGC is responsible for embedding an attribute set into the user's private key, and the sender can determine the access structure of the ciphertext during the encryption process. For example, in Figure 1, assume that Hospital-A provides its COVID-19 disease reports only to doctors or to those who suffer from COVID-19. In KP-ABE, each user is able to recover the plaintext only when the attribute set of the ciphertext matches his/her access structure. On the contrary, in CP-ABE, a user is able to recover the plaintext only if his/her attribute set matches the access structure of the ciphertext.
CP-ABE is more suitable for one-to-many encryption than KP-ABE. Since a sender can choose a distinct access structure for every ciphertext in CP-ABE, it is more elastic and feasible than KP-ABE. Furthermore, in CP-ABE, the KGC may update a user's private key independently when the user's attributes change. However, in KP-ABE, revoking or updating a user's private key may also affect other users' private keys.
Decentralization of authority is another important issue in ABE. In general ABE, a single KGC distributes and manages all users' keys, which may cause the key escrow problem where the system may face severe impacts if KGC is under the attacker's control. To solve the above problem, Chase [9] proposed the first multi-authority ABE (MA-ABE) scheme in 2007 which permits a set of attribute authorities to manage and issue users' keys independently. After that, Lin et al. [10] proposed an MA-ABE scheme that eliminates the demand for a central authority in 2008, and Lewko and Waters [11] proposed an MA-ABE scheme with a novel key-binding technique in 2011. The concept of MA-ABE removes the risk of malicious KGC and makes ABE more practical. In addition, the multi-authority structure can also disperse the computation and make the key distribution process more efficient.
Apart from the multi-authority architecture, dynamic membership management is also important in ABE. Fan et al. [12] proposed the first ABE supporting dynamic membership management (ABE-DM). According to their definition, there are four features to reach DM, which are Expandability, Revocability, Renewability, and Independence. With those features, any user can join the system whenever needed, and the system can update a user's attributes without affecting other users. However, there is a flaw in the above scheme. Therefore, Fan et al. [13] proposed a new scheme to fix the flaw in 2015, but the security of their scheme is proved in the random oracle model. Finally, Fan et al. [14] proposed an ABE-DM scheme in 2021, and its security is proved in the standard model. Besides, Chang [15] applied the concept of DM to a multi-authority ABE (MA-ABE-DM) scheme in 2019.
Lattice-based encryption is one of the most popular quantum-resistant encryption approaches. In 1994, Shor [16] proposed a quantum algorithm that can solve traditional hard problems such as the discrete logarithm, elliptic curve discrete logarithm, and large integer factorization problems. Therefore, all the above schemes may be insecure, and various quantum-resistant encryption methods have been proposed in recent years. Among them, lattice-based encryption has received attention for its ability to incorporate functional encryption. The security of lattice-based encryption is based on the shortest vector problem (SVP) and its derived problems. Moreover, no known algorithm can solve SVP in polynomial time, and only approximate-SVP can be solved in polynomial time [17].
Lattice-based ABE has been studied in recent years. In 2012, Agrawal et al. [18] proposed a lattice-based Fuzzy IBE scheme, and Zhang et al. [19] improved it to a lattice-based CP-ABE scheme in the same year. Then, Boyen [20] proposed the first lattice-based KP-ABE scheme in 2013. After that, the first lattice-based MA-ABE scheme was proposed by Zhang et al. [21] in 2015, and a variant was proposed by Liu et al. [22] in 2018. In 2021, Datta et al. [23] proposed the first lattice-based MA-ABE scheme that is CP-ABE. Although some works on MA-ABE over lattices have been proposed, to the best of our knowledge, there has been no lattice-based MA-ABE scheme that supports dynamic membership management.
A. PROBLEM STATEMENTS
In summary, to meet the needs of the practical environment, a one-to-many encryption scheme should satisfy the following requirements:
• The sender may choose the access structure of the ciphertext, so the sender can easily decide who can decrypt it.
• Private keys of users may be updated independently: other users' private keys will not be influenced during the key-update phase.
• Decentralization of authority should be provided to reduce the damage of the key escrow problem of a single authority.
• The encryption scheme should resist quantum attacks.
Therefore, we propose an MA-ABE-DM scheme on lattices to satisfy the above requirements.
B. CONTRIBUTIONS
To highlight the advantages of the proposed MA-ABE-DM scheme on lattices, it provides the following features:
• The proposed scheme is a CP-ABE scheme. Therefore, the sender can choose the access structure of the ciphertext.
• Since the proposed scheme allows dynamic membership management, all private keys can be updated independently.
• It supports multi-authority scenarios, which resolves the key escrow problem of a single authority.
• The proposed structure is based on lattices, which can resist quantum attacks.
C. ORGANIZATION
In section 2, we introduce some cryptographic primitives on lattices and the hard problem applied in the proposed scheme.
In section 3, we show some MA-ABE schemes on lattices and a traditional MA-ABE scheme that supports dynamic membership management. Then, we present the details of the proposed scheme and its security proof in section 4 and section 5, respectively. Finally, we show some comparisons between the proposed scheme and the related works in section 6 and make a conclusion for this work in section 7.
II. PRELIMINARIES
In this section, we first introduce some background knowledge of lattices. We adopt the trapdoor generation and sample techniques proposed by Micciancio and Peikert [24] in 2012 and prove the security of the proposed scheme under the assumption of the D-LWE problem introduced by Regev [25] in 2005. Furthermore, we present the concept of dynamic membership (DM) and show the composition of MA-ABE-DM schemes.
A. NOTATIONS
In the rest of this work, lowercase bold letters are used to represent column vectors, and uppercase bold letters are used to represent matrices, such as vector v and matrix V. Also, the transposes of v and V will be v ⊤ and V ⊤ , respectively. Moreover, a horizontal concatenation of two matrices A and B is denoted as [A | B]. Besides, we denote scalars with lowercase regular letters and sets with uppercase regular letters, such as scalar r and set U . The size of the set U will be represented as |U |.
B. LATTICE
Given a set of m linearly independent m-dimensional vectors V = {v_1 , . . . , v_m }, a lattice Λ generated by V can be defined as follows.

Definition II.1. The m-dimensional lattice generated by V is
Λ = L(V) = { Σ_{i=1}^{m} z_i v_i : z_i ∈ Z }.

For integer lattices, there are three common mathematical architectures as follows.

Definition II.2. Given a prime q, a vector t ∈ Z_q^n , and a matrix A ∈ Z_q^{n×m} , define:
Λ_q(A) = { y ∈ Z^m : y = A^⊤ s (mod q) for some s ∈ Z_q^n },
Λ_q^⊥(A) = { y ∈ Z^m : Ay = 0 (mod q) },
Λ_q^t(A) = { y ∈ Z^m : Ay = t (mod q) }.
C. SHORT INTEGER SOLUTION (SIS)
According to the research done by Ajtai [26] in 1998, the following problems have been proved to be NP-Hard in the worst case.
Definition II.3. The Short Integer Solution (SIS) problem: Given an arbitrary matrix A ∈ Z_q^{n×m} , a prime q, and a factor γ, find a nonzero vector x ∈ Z_q^m such that Ax = 0 (mod q) and ∥x∥ ≤ γ.
Definition II.4. The Inhomogeneous Short Integer Solution (ISIS) problem: Given an arbitrary matrix A ∈ Z_q^{n×m} , a prime q, a target vector t ∈ Z_q^n , and a factor γ, find a nonzero vector x ∈ Z_q^m such that Ax = t (mod q) and ∥x∥ ≤ γ.
D. LEARNING WITH ERRORS (LWE)
In 2005, Regev [25] first defined the learning with errors (LWE) problem and demonstrated that, for a specific noise distribution χ, the LWE problem is as difficult as the worst case of the gap shortest vector problem or the shortest independent vectors problem under the quantum reduction shown by Peikert in [27]. The security of the proposed scheme is based on the hardness assumption of the D-LWE problem as shown in Figure 2 and Definition II.7.
Definition II.5. Oracles O χ and O Ψ : Given a positive integer n, a prime q, and a specific discrete Gaussian distribution χ over Z q , let O χ be a pseudo-random noise sampler with some random secret vector s ∈ Z_q^n , and let O Ψ be a truly random sampler. The outputs of O χ and O Ψ are defined as follows:
• O χ : Output samples (u_i , v_i = u_i^⊤ s + x_i) ∈ Z_q^n × Z_q , where u_i ∈ Z_q^n is a uniformly random vector, s ∈ Z_q^n is a fixed secret vector, and x_i ∈ Z_q is an ephemeral noise value sampled from χ.
• O Ψ : Output a set of truly random samples (u_i , v_i) ∈ Z_q^n × Z_q , which are independently and uniformly sampled from the whole domain Z_q^n × Z_q .
Definition II.6. The Search Learning With Errors (S-LWE) problem: Given an arbitrary polynomial number of (Z_q^n × Z_q) samples from O χ , find the fixed secret vector s.
Definition II.7. The Decisional Learning With Errors (D-LWE) problem: Given an arbitrary polynomial number of (Z_q^n × Z_q) samples drawn from either O χ or O Ψ , decide which oracle generated them. An algorithm A solves the D-LWE problem if
| Pr[A^{O_χ} = 1] − Pr[A^{O_Ψ} = 1] | ≥ ϵ,
where ϵ is non-negligible, A^{O_χ} = 1 denotes the case that A guesses correctly when the D-LWE problem generates samples from oracle O χ , and A^{O_Ψ} = 1 denotes the case that A guesses correctly when the D-LWE problem generates samples from oracle O Ψ .
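For intuition, the two oracles are easy to mock; the sketch below uses toy parameters far below secure sizes (assumed for readability only), and a rounded normal variate stands in for the discrete Gaussian χ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma = 16, 7681, 2.0          # toy parameters, far below secure sizes

s = rng.integers(0, q, size=n)       # fixed secret vector used by O_chi

def O_chi():
    """Pseudo-random LWE sample (u, v = <u, s> + x mod q)."""
    u = rng.integers(0, q, size=n)
    x = int(np.rint(rng.normal(0, sigma))) % q   # stand-in for chi
    return u, (int(u @ s) + x) % q

def O_psi():
    """Truly random sample, uniform over Z_q^n x Z_q."""
    return rng.integers(0, q, size=n), int(rng.integers(0, q))

# D-LWE asks a distinguisher to decide which oracle produced its samples.
samples = [O_chi() for _ in range(8)]
```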
E. LATTICE TRAPDOOR
The proposed scheme applies the trapdoor generation function proposed by Micciancio and Peikert [24] since it is more efficient. The algorithms of the trapdoor functions are introduced as follows. Let a gadget vector g = [1 2 4 · · · 2^{k−1}] ∈ Z_q^{1×k} , where k = log_2 q and q = 2^k . There exists a short matrix T_g such that g · T_g = 0 (mod q). Then, a gadget matrix G can be defined as G = I_n ⊗ g ∈ Z_q^{n×nk} , and there also exists a matrix T_G = I_n ⊗ T_g corresponding to G such that G · T_G = 0 (mod q). With the matrix T_G , any SIS problem constructed with G can be solved easily.
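To make the gadget structure concrete, the sketch below builds G = I_n ⊗ g for toy sizes (n = 4, k = 8, q = 2^k; illustrative values, not the scheme's parameters) and solves Gz = v by bit decomposition, which is exactly why SIS/ISIS instances built on G are easy.

```python
import numpy as np

n, k = 4, 8
q = 2 ** k                                   # q = 2^k as in the text
g = np.array([2 ** i for i in range(k)])     # gadget vector (1, 2, ..., 2^(k-1))
G = np.kron(np.eye(n, dtype=int), g)         # gadget matrix G = I_n (x) g

def solve_G(v):
    """Short z with G z = v (mod q): bit-decompose each entry of v."""
    return np.array([(int(vi) >> j) & 1 for vi in v for j in range(k)])

v = np.array([201, 7, 130, 255])
z = solve_G(v)
assert np.all(G @ z % q == v % q)            # z has {0,1} entries, so it is short
```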
With the special matrix G, anyone may generate a trapdoor R and compute the corresponding matrix A, and may then solve the ISIS problem for matrix A. The trapdoor generation function GenTrap and the function SampleD used to solve the ISIS problem are described as follows.
GenTrap: Given a matrix Ā ∈ Z_q^{n×m̄} , an invertible matrix H ∈ Z_q^{n×n} , and a distribution χ over Z^{m̄×w} ; if Ā and H are not given, then uniformly choose Ā ∈ Z_q^{n×m̄} at random and set H as an identity matrix I ∈ Z^{n×n} . Sample a trapdoor R ∈ Z^{m̄×w} from the distribution and output R together with the matrix A = [Ā | HG − ĀR] ∈ Z_q^{n×m} , where m = m̄ + w. The quality of the trapdoor is measured by s_1(R), where the s_1(·) function extracts the largest singular value of the input matrix or vector.
SampleD: Given a matrix Ā ∈ Z_q^{n×m̄} , a trapdoor matrix R ∈ Z^{m̄×w} , an invertible matrix H ∈ Z_q^{n×n} , and a target vector t ∈ Z_q^n , output a vector x ∈ Z_q^m with Gaussian parameter σ such that Ax = t ∈ Z_q^n , where A = [Ā | HG − ĀR]. The details of generating the vector x are as follows. First, the algorithm randomly chooses a perturbation vector p ∈ Z_q^m with parameter σ; the vector p can be divided into p = [p_1 | p_2] with p_1 ∈ Z^{m̄} and p_2 ∈ Z^w . Further, the algorithm constructs an ISIS problem with G by computing v = H^{−1}(t − Ap) (mod q) and sampling a short vector z ∈ Z^w with Gz = v (mod q), which is easy given T_G . Finally, the algorithm outputs x = p + [R; I] z, where [R; I] stacks R on top of the identity matrix I ∈ Z^{w×w} .
F. DISCRETE GAUSSIANS
Definition II.8. For any real number σ > 0, the Gaussian function with σ as the parameter and z as the center on R^n can be defined as follows:
ρ_{σ,z}(y) = exp(−π ∥y − z∥² / σ²).
The corresponding Gaussian sum over an n-dimensional lattice L is
ρ_{σ,z}(L) = Σ_{y∈L} ρ_{σ,z}(y),
and the discrete Gaussian distribution over L can be defined as
D_{L,σ,z}(y) = ρ_{σ,z}(y) / ρ_{σ,z}(L) for every y ∈ L.
Note that we usually omit the subscript z if z is the origin or σ = 1, and we denote sampling from this distribution by χ = D_{σ,z}(y).
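One standard way to realize D_{Z,σ,z} in practice is rejection sampling against ρ_{σ,z} on a truncated support; in the sketch below the tail cut τ = 12 is an illustrative choice, not a parameter taken from the paper.

```python
import math, random

def rho(y, sigma, z=0.0):
    """Gaussian function rho_{sigma,z}(y) = exp(-pi (y - z)^2 / sigma^2)."""
    return math.exp(-math.pi * (y - z) ** 2 / sigma ** 2)

def sample_DZ(sigma, z=0.0, tau=12):
    """Rejection-sample D_{Z,sigma,z} on [z - tau*sigma, z + tau*sigma]."""
    lo, hi = int(z - tau * sigma), int(z + tau * sigma)
    while True:
        y = random.randint(lo, hi)               # uniform proposal
        if random.random() < rho(y, sigma, z):   # accept w.p. rho <= 1
            return y

print([sample_DZ(3.0) for _ in range(10)])
```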
G. ACCESS STRUCTURE
In ABE, a certain rule over attributes designed by a sender is called an access structure. The receivers can decrypt the ciphertext if and only if their attributes satisfy the access structure of the ciphertext. Let U be the set of all attributes in the system, and U ID ⊆ U be the attribute set of the user with identity ID. Then, a sender chooses an access structure τ that defines a rule over the attribute set U C ⊆ U to encrypt the message. If we illustrate τ with a tree structure, the leaf nodes of τ will be the attributes in U C , and each parent node may be one of two types of operation gates, namely AND and OR. The user with identity ID can get the message only if U ID satisfies the access structure τ .
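Checking whether an attribute set satisfies such a tree is a small recursion; the sketch below encodes the Hospital-A example from the introduction with invented attribute names.

```python
def satisfies(node, attrs):
    """Evaluate an access tree: a leaf is an attribute name, an internal
    node is ('AND' | 'OR', [children])."""
    if isinstance(node, str):                # leaf = single attribute
        return node in attrs
    gate, children = node
    results = [satisfies(c, attrs) for c in children]
    return all(results) if gate == 'AND' else any(results)

# tau = "doctor OR (covid_patient AND hospital_A)"
tau = ('OR', ['doctor', ('AND', ['covid_patient', 'hospital_A'])])
print(satisfies(tau, {'covid_patient', 'hospital_A'}))   # True
print(satisfies(tau, {'nurse'}))                         # False
```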
H. DYNAMIC MEMBERSHIP (DM)
The aims of dynamic membership management (DM) were proposed by Fan et al. [12] in 2013, and contain the following features:
• Expandability: Any user may enroll into the system whenever needed.
• Renewability: The system may update users' attributes and generate the corresponding private keys.
• Revocability: The system may remove any attribute of a user and ensure that a ciphertext encrypted under the new public parameters cannot be decrypted with a revoked private key.
• Independence: Any user's revocation and updating process may be performed without affecting other users.
There are two ways to achieve revocability, namely indirect revocation and direct revocation. In an indirect revocation mechanism, the system needs to maintain a revocation list for each attribute. After each revocation, all unrevoked users have to update their private keys to ensure their effectiveness, which is complicated. On the other hand, a direct revocation mechanism simplifies the revocation process: none of the unrevoked users is affected after any revocation, a feature also called Independence. However, to carry out a direct revocation process, such a mechanism usually asks the sender to embed the revocation list into the ciphertext during encryption, and the disclosure of the revocation list may cause privacy issues.
I. MA-ABE-DM SCHEME
The proposed MA-ABE-DM scheme is composed of the following seven algorithms: Setup, AuthoritySetup, Enroll, Revoke, Extend, Encrypt, and Decrypt.
• Enroll(GP, P K d , M K d , ID, U ID ): This algorithm takes the user's identity ID, the user's attribute set U ID , the public key P K d , and the master private key M K d as input and outputs the partial private key SK ID d for the user ID.
• Revoke(GP, P K d , ID, t): This algorithm takes a public key P K d , a user's identity ID, and a target attribute t as input and outputs the updated public key P K ′ d .
• Extend(GP, P K d , M K d , ID, t, SK ID d ): This algorithm takes a public key P K d , a master private key M K d , a user's identity ID, a target attribute t, and the user's partial private key SK ID d as input and outputs the updated partial private key SK ID d ′ .
• Encrypt(b, τ, {P K d } d∈D C ): This algorithm takes a single-bit message b, an access tree τ , and the public keys {P K d } d∈D C as input and outputs the ciphertext C, where D C is the set of attribute authorities monitoring attributes in τ .
The security of the scheme is defined via the following IND-CPA game between an adversary A and a simulator S.
• Init: A declares a target access tree τ and the target user's identity ID * and sends them to S, where τ is formed with some operation gates and a set of attributes U C . Besides, A also provides a list of corrupted attribute authorities D corrupt . Note that D C , the set of attribute authorities monitoring attributes in U C , is restricted not to be a subset of D corrupt .
• Setup: S runs Setup to generate the global public parameters GP and sends them to A.
• AuthoritySetup: S runs AuthoritySetup to generate the public key P K d and master private key M K d of each attribute authority AA d . Then, S sends the public keys {P K d } d∈D to A. For the corrupted attribute authorities, S also provides the corresponding master private keys {M K d } d∈Dcorrupt to A.
• Phase 1: A may issue the following queries.
-- Enroll: A submits (ID, U ID ) to S, where ID is a user's identity and U ID is an attribute set for ID. If ID = ID * , for each attribute in U ID * ∩ U C , S provides only the public parameter B ID * d,i to A. Otherwise, S runs Enroll to create the user's private key SK ID , adds the public parameters {B ID d,i } i∈U ID into the corresponding public key P K d , and then returns SK ID to A.
-- Revoke: A submits (ID, t) to S, where t is an attribute monitored by AA d in U ID . If ID = ID * and t is an element in U C , abort the query. Otherwise, S runs Revoke to revoke the attribute t for ID and update the public key P K d .
-- Extend: A submits (ID, t) to S, where t is an attribute monitored by AA d but not an element in U ID . If ID = ID * and t is an element in U C , S provides only the public parameter B ID * d,t to A. Otherwise, S runs Extend to create the private key e ID d,t , adds the public parameter B ID d,t into the corresponding public key P K d , and then returns e ID d,t to A.
• Challenge: A delivers two distinct single-bit messages (m 0 , m 1 ) to S. Then, S randomly selects b ∈ {0, 1} and generates the ciphertext C b corresponding to b and the access tree τ . Then, S sends C b to A as the challenge.
• Phase 2: A may issue more queries as defined in Phase 1.
III. RELATED WORKS
In this section, we introduce some traditional MA-ABE schemes and some ABE schemes based on lattices. Moreover, we show the properties comparison between the proposed scheme and related works.
A. PROPERTIES COMPARISON
The proposed scheme achieves the dynamic membership management property with security guaranteed, which makes it more functional than the others. The details of the properties comparison are shown in TABLE 1.
In 2019, Chang [15] proposed an MA-ABE-DM scheme. In Chang's scheme, any authority can be the "Initializer" who sets up and aggregates public parameters from all attribute authorities without a central authority. This scheme combines the advantages of both direct and indirect revocation mechanisms in a new approach that provides privacy and independence simultaneously. The trick of Chang's scheme is to generate a unique key for each attribute value of every user, which implies that even if two users have the same attribute value, their attribute keys are distinct. After a user has enrolled, there is a correlated public parameter for each attribute key, so anyone can encrypt a message by adding the corresponding public parameters of the chosen attributes. This scheme was proved secure under the hardness of the decisional bilinear Diffie-Hellman (DBDH) problem. However, the DBDH problem is unable to resist the quantum attacks proposed by Shor [16]. Ming et al. [28] proposed a pairing-free MA-ABE scheme with revocability. Without pairing operations, their scheme has a lower computational cost. However, unrevoked users need to update their private keys in the revocation mechanism. Moreover, this scheme was proved to be IND-CPA secure under the decisional Diffie-Hellman (DDH) assumption, which is also unable to resist quantum attacks.
C. LATTICE-BASED REVOCABLE CP-ABE SCHEMES
We introduce some CP-ABE schemes on lattices with revocation or updating mechanisms here. Wang et al. [29] added the attribute modification mechanism into the CP-ABE and proved their scheme was secure under the random oracle model. After that, Meng [30] proposed a directly revocable CP-ABE scheme in 2020, which is more intuitive than the previous works. However, both schemes were constructed under a single authority, which is not suitable for reality.
1) Wang et al.'s Revocable and Grantable CP-ABE Scheme on Lattices
In Wang et al.'s [29] scheme, they presented an efficient revocable and grantable CP-ABE scheme. They built a binary tree for each attribute and constructed the key-update process with the KUNodes algorithm. Also, they proved that their scheme was secure under the selective and random oracle model. However, the revocation mechanism in Wang et al.'s scheme is indirect, which means that the other users will be affected after the revocation process. Moreover, the scheme is based on a single authority scenario, which does not match the practical condition.
2) Meng's Revocable CP-ABE Scheme on Lattices
In Meng's [30] work, there are two directly revocable CP-ABE schemes on lattices. One can achieve user-level revocation, and another can achieve attribute-level revocation. To directly revoke a user or an attribute, the sender embeds the revocation list into the ciphertext during encryption. As mentioned above, the benefit of a direct revocation mechanism is that there is no other user will be affected. Besides, the attribute authority is able to perform revocation without issuing any updated key, which reduces lots of work. However, the revocation list in Meng's scheme reveals the identities of revoked users, which invades the privacy of the revoked users. Furthermore, their scheme is based on a single authority scenario, which conflicts with the practical condition.
D. LATTICE-BASED MA-ABE SCHEMES
We show some MA-ABE schemes on lattices here.
1) Zhang et al.'s MA-ABE Scheme on Lattices
Zhang et al. [21] first proposed a lattice-based MA-ABE scheme in 2015. Although the scheme requires a central authority (CA) to authenticate identities and handle attributes of users, they avoid the key escrow problem by depriving the central authority of the key-generating ability. However, there are some shortcomings of Zhang et al.'s scheme. First, there are neither revocation nor updating mechanisms in the scheme, which is inconvenient for member management. Second, their scheme is KP-ABE. Third, the probability of two distinct users having the same attribute private key is non-negligible in their scheme, which may enable a collusion attack.
2) Liu et al.'s MA-ABE Scheme on Lattices
Liu et al. [22] proposed an efficient MA-ABE scheme on lattices in 2018. Taking advantage of the theory in Micciancio and Peikert's [24] work, they constructed an optimized sampling function to create stronger trapdoors in the key generation process. Besides, they also proved their scheme to be IND-CPA secure, but it still lacks a modification mechanism for attributes. Specifically, there are neither revocation nor updating mechanisms in the scheme, which is inconvenient for member management. Moreover, their scheme is KP-ABE, too.
3) Datta et al.'s MA-ABE Scheme on Lattices
In 2021, Datta et al. [23] proposed an MA-ABE scheme on lattices, which is CP-ABE. Moreover, their scheme supports access policies described in disjunctive normal form. However, neither revocation nor updating mechanisms are provided in their scheme.
IV. CONSTRUCTION
In the proposed lattice-based MA-ABE-DM scheme, as shown in Figure 3, an initializer first runs Setup to generate the global public parameters, and each attribute authority runs AuthoritySetup to create its own public key and private key. After that, each attribute authority runs Enroll to generate the partial private key of each user. Once a user's attributes change, the attribute authority runs the Revoke or Extend algorithm to update the public key and the user's partial private key. When sending a message, the sender runs Encrypt to generate the ciphertext and sends it to the receivers. Then, receivers run Decrypt to recover the message. The notations are shown in TABLE 2.
A. THE PROPOSED SCHEME
We show the details of the proposed scheme, which consists of seven algorithms: Setup, AuthoritySetup, Enroll, Revoke, Extend, Encrypt and Decrypt.
C. AUTHORITYSETUP
After the Initializer publishes GP , each AA d runs the following steps to create its own public key and private key.
E. REVOKE
A revoking request for AA d includes the target attribute t and the user's identity ID. After receiving the revoking request, AA d revokes the target attribute for ID through the following steps.
F. EXTEND
An extending request for AA d includes the target attribute t and the user's identity ID. After receiving the extending request, AA d generates the private key of the target attribute for ID through the following steps. Finally, ID updates his/her private key SK ID .
G. ENCRYPT
A sender encrypts a single-bit message b with a chosen access tree τ by the following steps: 1) Randomly choose a noise value x ∈ χ.
2) Choose an access tree τ which is formed with some operation gates and a set of attributes U C = {U C d } d∈D C , where D C is the set of attribute authorities monitoring attributes in U C . 3) For each attribute j ∈ U C d , randomly choose a noise vector x d,j ∈ χ m . 4) For each ID ∈ L, choose a random value r ID ∈ Z q and a vector s ID ∈ Z n q . 5) Set the root node value of τ as r ID . Then, starting from the root node, assign a value to each leaf node according to the operation gates in τ (a sketch of this assignment follows the steps below): • For an AND gate, let M be the set of all its child nodes. Divide the value of the parent node into |M | parts such that r ID = Σ j∈M r ID j (mod q), and set each child node value to one of the parts r ID j . • For an OR gate, let M be the set of all its child nodes. Set each child node value equal to the value of its parent node, such that ∀j ∈ M, r ID j = r ID . 6) For each attribute j ∈ {U C d } d∈D C , generate the partial ciphertext for each ID.
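The value assignment in step 5 is ordinary additive secret sharing pushed down the tree; a minimal sketch (toy modulus and hypothetical attribute names) is:

```python
import random

q = 7681   # toy modulus

def assign(node, value, shares):
    """AND gates split the parent value into random shares summing to it
    mod q; OR gates copy the parent value to every child."""
    if isinstance(node, str):                       # leaf attribute
        shares[node] = value
        return
    gate, children = node
    if gate == 'AND':
        parts = [random.randrange(q) for _ in children[:-1]]
        parts.append((value - sum(parts)) % q)      # shares sum to value
    else:                                           # OR gate
        parts = [value] * len(children)
    for child, part in zip(children, parts):
        assign(child, part, shares)

shares = {}
assign(('AND', ['a', ('OR', ['b', 'c'])]), 1234, shares)
print((shares['a'] + shares['b']) % q)              # recovers 1234
```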
H. DECRYPT
After receiving the ciphertext C, any user whose attribute set satisfies τ inputs his/her private key SK ID = {e ID d,j } j∈U ID ,d∈D and runs the following steps: 1) Compute b ′ according to the operation gates in τ . For the convenience of explanation, we consider the case where all operation gates are AND and the case where all operation gates are OR.
I. CORRECTNESS
We show the correctness of the formulas in decryption as follows.
For the convenience of explanation, assume that τ is composed of only AND gates. By properly choosing the noise value and noise vectors, b ′ recovers b⌊q/2⌋ up to a small error. Thus, compute |b ′ − ⌊q/2⌋|. If b = 1, the result will be less than ⌊q/4⌋; otherwise, if b = 0, the result will be greater than or equal to ⌊q/4⌋.
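The threshold test in this argument is the usual LWE bit decoding; a toy sketch (illustrative modulus and noise values, not the scheme's parameters) is:

```python
q = 7681
half, quarter = q // 2, q // 4

def encode(b, noise):
    """Toy encoding of one bit: b * floor(q/2) plus small noise, mod q."""
    return (b * half + noise) % q

def decode(c):
    """Bit is 1 iff c lies within floor(q/4) of floor(q/2)."""
    return 1 if abs(c - half) < quarter else 0

for b in (0, 1):
    for noise in (-25, 0, 25):
        assert decode(encode(b, noise)) == b
```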
V. SECURITY PROOF
In this section, we prove the proposed scheme to be secure under the D-LWE assumption: if an adversary A wins the IND-CPA M A−ABE−DM game with non-negligible advantage ϵ, then a simulator S can solve the D-LWE problem. The game proceeds as follows.
• Init:
1) A determines a set of corrupted attribute authorities Dcorrupt. Without loss of generality, we consider the most extreme condition that only one attribute authority AA d * is uncorrupted, so that Dcorrupt = D − {AA d * }.
2) A determines the target access tree τ , where τ is formed with some operation gates and a set of attributes UC = {U C d } d∈D C . Besides, A also determines the target user's identity ID * . Note that UC must contain at least one attribute monitored by AA d * ; in other words, DC cannot be a subset of Dcorrupt.
• Phase 1: A may issue the following queries.
-- Enroll: Take (ID, U ID ) as input. If ID = ID * , S simulates the public parameter B ID * d,i for each attribute i ∈ U ID * without generating the corresponding private key. Then, S updates the public key P K d with B ID * d,i . Otherwise, S follows the Enroll algorithm defined in Section IV-D to generate the matrix B ID d,i and the vector e ID d,i for each attribute in U ID . Then, S returns the private key SK ID = {e ID d,i } i∈U ID to A and updates the public key P K d with {B ID d,i } i∈U ID .
-- Revoke: Take (ID, t) as input, where t is an attribute monitored by AA d in U ID . If ID = ID * and t is an element in U C , abort the query. Otherwise, S follows the Revoke algorithm defined in Section IV-E to remove t from U ID by updating the public key P K d .
-- Extend: Take (ID, t) as input. If ID = ID * and t is an element in U C , S sets the matrix B ID * d,t accordingly. Then, S updates the public key P K d with B ID * d,t . Otherwise, S follows the Extend algorithm defined in Section IV-F to generate the matrix B ID d,t and the vector e ID d,t . Then, S returns the private key e ID d,t to A and updates the public key P K d with B ID d,t .
• Challenge: A delivers two distinct single-bit messages (m 0 , m 1 ) to S. Then, S randomly chooses a message b ∈ {m 0 , m 1 } and performs the following steps to encrypt b with τ .
4) Then, S obtains the ciphertext C b = {{C d } d∈D C , U C }. 5) S sends the ciphertext C b to A as the challenge.
• Phase 2: A may issue more queries as defined in Phase 1.
• Guess: A outputs its guess for b. If the samples provided by the D-LWE oracle are chosen from O χ , the ciphertext C b is distributed as a valid encryption of b. Since A is able to win the IND-CPA M A−ABE−DM game with advantage ϵ, the probability that A wins the IND-CPA M A−ABE−DM game is 1/2 + ϵ. Therefore, in this case, the probability that S correctly guesses the D-LWE oracle is 1/2 + ϵ. Otherwise, if the samples from the D-LWE oracle are uniformly chosen from O Ψ , the ciphertext C b is uniformly random. In this case, the probability that S correctly guesses the D-LWE oracle is 1/2. Thus, the advantage for S to solve the D-LWE problem will be
|(1/2 + ϵ) − 1/2| = ϵ.
Therefore, if A can win the IND-CPA M A−ABE−DM game with non-negligible advantage ϵ, it implies that S can solve the D-LWE problem in polynomial time with non-negligible advantage ϵ.
VI. COMPARISON
In this section, we present the comparison between the proposed scheme and the schemes of Zhang et al. [21], Liu et al. [22], and Datta et al. [23].
A. COMPUTATION COST
As shown in TABLE 5, the computation cost of the encryption in the proposed scheme is g times that of the related works, which is the cost of achieving the independence property. Nevertheless, the encryption cost of the proposed scheme with g = 100 is 3.2093 (s), which is still within an acceptable range. Moreover, we provide more flexible access structures than the related works with a much lower decryption cost, which makes the proposed scheme more efficient and functional.
B. TRANSMISSION COST
The estimation is performed under the set of practical parameters given in TABLE 3. Let U C with size h be the set of attributes associated with the ciphertext, and U ′ C with size h ′ be the smallest set of attributes that satisfies U C . Assume that g is the total number of the users and k is the total number of the attribute authorities.
As shown in TABLE 6, the ciphertext length of the proposed scheme is multiplied by the total number of users g, which is the cost of achieving the independence property. Besides, because the form of the ciphertext in Zhang et al.'s work and Liu et al.'s work is nearly the same, their transmission cost and computation cost are almost the same as well.
VII. CONCLUSION
In this research, a lattice-based multi-authority ABE with dynamic membership management has been proposed. First, the sender can design the access structure, since the proposed scheme is a CP-ABE scheme. Second, with dynamic membership management, all private keys may be updated independently. Third, it supports multi-authority to resolve the key escrow problem. Fourth, based on lattice hard problems, the proposed scheme resists quantum attacks. Fifth, it can effectively prevent collusion attacks by users. Finally, the proposed scheme is provably IND-CPA secure under the D-LWE assumption, which ensures the confidentiality of the ciphertext.
Although the transmission cost and the computation cost of the encryption process in the proposed scheme are higher than in the previous works, these costs are worth sacrificing for the additional practical features. Besides, the computation cost of the decryption in the proposed scheme is much lower than in the previous works, which is more beneficial to the receivers. In the future, we will try to enhance the performance of the proposed scheme.
The timing notation used in the cost estimates is:
T mul2 : the computation time of a multiplication Z_q^{m×n} × Z_q^{n×1} — 3.080 (ms)
T mul3 : the computation time of a multiplication Z × Z_q^{m×1} — 0.084 (ms)
T mul4 : the computation time of a multiplication Z × Z_q^{n×1} — 0.002 (ms)
T add : the computation time of an addition Z_q^{m×1} + Z_q^{m×1} — 0.045 (ms)
The decryption cost of the proposed scheme expands as
h ′ (T mul4 + T mul5 ) + h ′ T mul4 + k T mul4 + h ′ T mul6 ≈ 2.681h ′ + 0.002h ′ + 0.002k + 0.013h ′ (ms) ≈ 2.696h ′ + 0.002k (ms) ≈ 13.49 (ms),
where T mul5 and T mul6 denote the times of multiplications of the forms Z_q^{n×m} × Z_q^{m×1} and Z_q^{1×m} × Z_q^{m×1} , respectively.
h ′ represents the size of the smallest attribute set that can satisfy the ciphertext.
g represents the total number of users.
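The decryption-cost expression can be evaluated directly; in the sketch below, T mul5 and T mul6 are back-solved from the quoted per-term totals, and h′ = 5, k = 2 are assumed values chosen only because they reproduce the quoted ≈ 13.49 ms.

```python
def decrypt_cost_ms(h_prime, k, T_mul4=0.002, T_mul5=2.679, T_mul6=0.013):
    """h'*(T_mul4 + T_mul5) + h'*T_mul4 + k*T_mul4 + h'*T_mul6, in ms;
    T_mul5 and T_mul6 are inferred from the quoted expansion."""
    return (h_prime * (T_mul4 + T_mul5) + h_prime * T_mul4
            + k * T_mul4 + h_prime * T_mul6)

print(decrypt_cost_ms(h_prime=5, k=2))   # ~13.48 ms, matching the quoted figure
```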
Hydrodynamical Backflow in X-shaped Radio Galaxy PKS 2014-55
We present MeerKAT 1.28 GHz total-intensity, polarization, and spectral-index images covering the giant (projected length $l \approx 1.57$~Mpc) X-shaped radio source PKS~2014$-$55 with an unprecedented combination of brightness sensitivity and angular resolution. They show the clear "double boomerang" morphology of hydrodynamical backflows from the straight main jets deflected by the large and oblique hot-gas halo of the host galaxy PGC~064440. The magnetic field orientation in PKS~2014$-$55 follows the flow lines from the jets through the secondary wings. The radio source is embedded in faint ($T_\mathrm{b} \approx 0.5 \mathrm{\,K}$) cocoons having the uniform brightness temperature and sharp outer edges characteristic of subsonic expansion into the ambient intra-group medium. The position angle of the much smaller ($l \sim 25$~kpc) restarted central source is within $5^\circ$ of the main jets, ruling out models that invoke jet re-orientation or two independent jets. Compression and turbulence in the backflows probably produce the irregular and low polarization bright region behind the apex of each boomerang as well as several features in the flow with bright heads and dark tails.
INTRODUCTION
Most luminous and extended radio sources have a pair of collinear jets thought to be aligned with the spin axis of the supermassive black hole (SMBH) in the nucleus of the host galaxy (Blandford & Znajek 1977). However, 3-10% are X-shaped radio galaxies (XRGs) defined by having a second set of jets or "wings" misaligned with the first (Leahy & Williams 1984; Joshi et al. 2019). The three main models for XRGs invoke (1) a sudden or continuous reorientation of the nuclear SMBH spin axis (e.g., Ekers et al. 1978; Klein et al. 1995; Dennett-Thorpe et al. 2002), (2) the superposition of two independent linear jets produced by two SMBHs residing in the same host galaxy (Lal & Rao 2005), or (3) hydrodynamical backflows from the over-pressured main jets deflected by the ellipsoidal hot interstellar medium (ISM) of the host galaxy (e.g., Leahy & Williams 1984; Worrall et al. 1995; Capetti et al. 2002; Saripalli et al. 2008). Figure 1, adapted from Leahy & Williams (1984), shows the main features of the hydrodynamical backflow model for XRGs: the nuclear SMBH (×) emits two collinear radio jets (large horizontal arrows) ending in hotspots (black dots). Hydrodynamical backflows from the ends of the jets (small horizontal arrows) initially preserve the axial symmetry of the jets. The axial symmetry is broken by the oblique hot ISM of the host galaxy (thin elliptical contours) deflecting the backflows in opposite directions (bent arrows) and producing a radio source (heavy bent contours) with only inversion symmetry about the nucleus. Although the radio source may resemble a true XRG in low-resolution images, its actual shape is more like a double boomerang.
In this paper we present and analyse new 1.28 GHz images of the giant XRG PKS 2014−55 (Saripalli et al. 2008; Saripalli & Subrahmanyan 2009) based on data from the recently completed 64-element MeerKAT array of the South African Radio Astronomy Observatory (SARAO) in the Northern Cape of South Africa. We show that the morphology, spectrum, and magnetic field structure of this source are very consistent with the hydrodynamical model and are inconsistent with other proposed models for X-shaped sources. The central component of PKS 2014−55 = PKS 2014−558 = PKS J2018−556 (Wright & Otrupcek 1990) is identified by position coincidence with the m v ≈ 15.5 Seyfert II elliptical galaxy PGC 064440 (Paturel et al. 1989). PGC 064440 has heliocentric redshift z h = 0.060629 and velocity v h = 18176 ± 45 km s −1 (Jones et al. 2009); corrected to the cosmic microwave background (CMB) frame (Fixsen et al. 1996), z = 0.060252 and v = 18063 ± 46 km s −1 . All absolute quantities in this paper were calculated for a ΛCDM universe with H 0 = 70 km s −1 Mpc −1 and Ω m = 0.3 using equations in Condon & Matthews (2018). Thus PGC 064440 is at comoving distance D C ≈ 254 Mpc, (bolometric) luminosity distance D L ≈ 270 Mpc, and angular diameter distance D A ≈ 240 Mpc, so 1′ ≈ 70 kpc.
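These distances follow directly from the stated flat ΛCDM parameters, e.g. with astropy (values rounded as in the text):

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)    # the LambdaCDM parameters above
z = 0.060252                             # CMB-frame redshift of PGC 064440

print(cosmo.comoving_distance(z))           # ~254 Mpc
print(cosmo.luminosity_distance(z))         # ~270 Mpc
print(cosmo.angular_diameter_distance(z))   # ~240 Mpc
print(cosmo.kpc_proper_per_arcmin(z))       # ~70 kpc per arcmin
```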
The radio observations and data reduction are described in Section 2, and the resulting images are presented in Section 3. Section 4 discusses the morphology of the radio source and its host galaxy. The data are interpreted with the asymmetric hydrodynamical model in Section 5. Our results are summarized in Section 6.
OBSERVATIONS AND DATA REDUCTION
The XRG PKS 2014−55 and its unpolarized gain/bandpass/flux-density calibrator PKS B1934−638 were observed by MeerKAT for 9.7 h on 2019 October 11 using 55 of the 64 13.5 m diameter antennas in the array. One 10 min scan on the polarization calibrator 3C 138 was also included. Additional information about MeerKAT and its specifications can be found in Jonas et al. (2016), Camilo et al. (2018), and Mauch et al. (2020). The maximum baseline length used was nearly 8 km, giving θ ≈ 7.4″ FWHM resolution at ν = 1.28 GHz. All four correlations XX, YY, XY, and YX of the orthogonal linearly polarized feeds were divided into 4096 spectral channels of width 0.208984 MHz. The 856 MHz total bandpass centred on 1.284 GHz includes the 1.420 GHz H i line near the z = 0.06 host galaxy PGC 064440 redshifted to 1.34 GHz, where the channel width is ∆v ≈ 47 km s −1 . The averaging time was 8 s.
Continuum Flagging and Calibration
The (u, v) data were converted from the archive format to AIPS format using MeerKAT's KATDAL package 1 . The initial radio-frequency interference (RFI) flagging followed that by Mauch et al. (2020). We trimmed 144 channels from each end of the bandpass and merged the 3808 remaining spectral channels into eight spectral windows. Subsequent editing and calibration used the OBIT package (Cotton 2008). In each calibration step, deviant solutions were detected and flagged along with the corresponding (u, v) data. The gain/bandpass/flux-density calibrator is essentially unpolarized, so the approximation XX=YY is valid. Standard structural and spectral models for PKS B1934−638 and 3C 138 were used as appropriate. Our flux-density scale is based on the Reynolds (1994) polynomial fit for the spectrum of PKS B1934−638:
log 10 (S) = −30.7667 + 26.4908 log 10 (ν) − 7.0977 [log 10 (ν)]² + 0.605334 [log 10 (ν)]³,
where S is the flux density in Jy and ν is the frequency in MHz (a short sketch evaluating this model appears at the end of this subsection). The main flagging and calibration steps were: (i) Fixed flagging: Frequency ranges known to contain strong, persistent RFI were flagged. Projected baselines shorter than the 13.5 m dish diameter were flagged to eliminate shadowing.
(iii) Initial flagging: Running medians in time and frequency were compared with the data to reveal variable and/or narrowband RFI for flagging.
(iv) Initial X-Y phase calibration: Cross-hand phase corrections were determined from the noise diode calibration signals injected into each data stream at the beginning of the observing session.
(vii) Amplitude and phase calibration: Complex gain solutions for PKS B1934−638 were determined and applied to the target field.
(viii) Flagging of calibrated data: Flagging operations for which calibrated data are needed were done.
(ix) Repeat: Flags from the steps i-viii were kept and the calibration steps iv-viii were repeated.
(x) Polarization calibration: After a further averaging of two spectral channels, instrumental polarization was determined from the unpolarized calibrator PKS B1934−638; solutions were obtained in 14.2 MHz blocks. The cross-hand delay and phase were determined from the polarized calibrator 3C 138. All polarization calibration parameters, including feed orientation, were determined jointly in a nonlinear least-squares solution using all calibrators.
Finally, the calibrated (u, v) data were averaged in time to reduce their volume. The averaging times were subject to the baseline-dependent constraint that averaging reduce the amplitudes by ≤ 1% inside the circle of radius ρ = 1.2° centred on the target, and they never exceeded 30 s.
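The Reynolds (1994) flux-density model quoted above is a cubic polynomial in log frequency; the sketch below evaluates the commonly used coefficient set for PKS B1934−638 (ν in MHz), which should be checked against the memo itself.

```python
import numpy as np

def s_1934(nu_mhz):
    """Reynolds (1994) model for PKS B1934-638: S in Jy, nu in MHz."""
    x = np.log10(nu_mhz)
    return 10 ** (-30.7667 + 26.4908 * x - 7.0977 * x**2 + 0.605334 * x**3)

print(s_1934(1284.0))   # ~15 Jy at the 1.284 GHz band centre
```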
Continuum Imaging
The continuum imaging used the Obit task MFImage, which was described in more detail by Cotton et al. (2018). The non-coplanar array in our extended synthesis was corrected by covering the imaged sky with small tangential facets out to radius ρ = 1.2° and placing additional facets on outlying sources stronger than S ≈ 6 mJy from the 843 MHz SUMSS catalog (Mauch et al. 2003). Variations of sky brightness and antenna gain across our wide bandpass were accommodated by dividing the observed spectrum into 34 frequency bins having ∼ 2% fractional bandwidths. The frequency bins were imaged independently and CLEANed jointly. Three iterations of phase self-calibration were applied. The CLEAN window was mostly generated automatically, but with some manual assistance to cover all of the extended emission. The multi-resolution Stokes I CLEAN used 1,032,004 components, a 3% loop gain, and included 3.394 Jy of total flux density after CLEANing to a depth of 15 µJy beam −1 . Spectra were least-squares fitted in each pixel using frequency-bin weights inversely proportional to the square of the image-plane rms noise.
Stokes Q and U were imaged out to a radius ρ = 1.0° and CLEANed for 50,000 components to a depth of 28 µJy beam −1 in Stokes Q and 15 µJy beam −1 in Stokes U. Rotation measures (RMs) and electric-vector polarization angles (EVPAs) corrected to zero wavelength for each pixel were derived by a search in RM space, essentially taking the peak of the rotation measure synthesis function (Brentjens & de Bruyn 2005). The RM that gave the highest average RM-corrected polarized intensity P = √(Q² + U²) was taken as the RM, the peak average P as the polarized intensity, and the polarization angle of the RM-corrected average Q + iU as the EVPA at zero wavelength. The EVPA at zero wavelength is orthogonal to the source magnetic field vector B integrated through the source. There is little evidence for effects beyond a simple well-resolved, external Faraday screen. The fractional polarization image was derived by first correcting the frequency-averaged polarized intensity for the Ricean bias and then dividing by the frequency-averaged total intensity.
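The peak search described above amounts to brute-force RM synthesis; a self-contained toy sketch (synthetic 10% polarized spectrum with RM = 40 rad m⁻² and an invented band/channelization) is:

```python
import numpy as np

def rm_synthesis(Q, U, lam2, phis):
    """F(phi) = <P(lam^2) exp(-2i phi lam^2)>: the peak of |F| gives the RM
    and half its phase gives the zero-wavelength EVPA."""
    P = Q + 1j * U
    F = np.array([np.mean(P * np.exp(-2j * phi * lam2)) for phi in phis])
    i = np.argmax(np.abs(F))
    return phis[i], np.abs(F[i]), 0.5 * np.angle(F[i])

lam2 = (3e8 / np.linspace(0.9e9, 1.67e9, 32)) ** 2   # lambda^2 per channel
chi = 0.3 + 40.0 * lam2                              # EVPA rotated by RM*lam^2
Q, U = 0.1 * np.cos(2 * chi), 0.1 * np.sin(2 * chi)  # 10% polarized source
print(rm_synthesis(Q, U, lam2, np.linspace(-200, 200, 4001)))
```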
Continuum Spectral Index
The imaging described in Section 2.2 used tapering in the outer portion of the (u, v) plane to minimize the variation of resolution with frequency but did not address a similar problem in the inner portion of the (u, v) plane that distorts the spectral-index image. Further imaging similar to that described in Section 2.2 was done using a Gaussian taper in the inner (u, v) plane with rms length σ = 500λ to reduce the frequency dependence of the short-baseline coverage. In order to better explore the faintest emission, we convolved the images of the individual frequency planes to θ 1/2 = 15″ before making frequency-dependent primary beam corrections and fitting for the spectral index.
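Per-pixel spectral fitting with inverse-variance weights reduces to a weighted straight-line fit in log-log space; the sketch below uses a synthetic α = −0.8 spectrum and an assumed flat noise level.

```python
import numpy as np

def fit_alpha(nu, s, sigma):
    """Weighted least-squares fit of ln S = ln S0 + alpha ln nu; polyfit's
    weights multiply the residuals, so 1/sigma gives 1/sigma^2 weighting."""
    coef = np.polyfit(np.log(nu), np.log(s), deg=1, w=1.0 / sigma)
    return coef[0]                        # slope = spectral index alpha

nu = np.linspace(0.9e9, 1.67e9, 34)       # 34 ~2% frequency bins, as above
s = 2.0 * (nu / 1.28e9) ** -0.8           # synthetic alpha = -0.8 spectrum
print(fit_alpha(nu, s, sigma=np.full(34, 1e-3)))   # ~ -0.8
```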
Spectral-line Calibration and Imaging
The 1420 MHz H i line frequency at the redshift z = 0.06 of PGC 064440 is 1340 MHz. We reduced a 50 MHz subset of the visibility data including this frequency using the IDIA Pipeline 3 to produce a spectroscopic data cube containing the H i line around PGC 064440. The IDIA pipeline is based entirely on CASA (McMullin et al. 2007) tasks. The basic workflow follows the calibration process described above. After the continuum subtraction using CASA's UVCONTSUB task to fit a polynomial to the visibility spectra, we used CASA's TCLEAN task with the widefield gridder, performing 1000 CLEAN iterations per channel. Briggs weighting with robust = 0.5 was used to optimise the shape of the dirty beam while minimizing sensitivity loss. We measured the image noise to be σ ∼ 125 µJy beam −1 . The FWHM resolution in the H i cube is θ = 19.2″ × 17.6″.
Total Intensity Continuum
UnCLEANed flux from the very extended radio source PKS 2014−55 and background sources in the primary beam, combined with the lack of projected baselines shorter than the 13.5 m antenna diameter, left a wide but very shallow negative "bowl" in the total-intensity image. We used the AIPS task IMEAN to measure the mode of the intensity distribution of pixels in source-free areas near PKS 2014−55; it is −4.6 ± 1.0 µJy beam −1 . To fill in the bowl, we added 4.6 µJy beam −1 to the image zero level. We divided this image by the circularised 67′ FWHM primary beam attenuation pattern specified by Mauch et al. (2020) equations 3 and 4 to yield the final "sky" image shown in Fig. 2. The actual primary beam is slightly elliptical with axial ratio a/b ≈ 1.04 and rotates with parallactic angle on the sky. However, the maximum attenuation error introduced by our circular approximation is a negligible < 0.3% even at the ends of PKS 2014−55, ρ = 12′ from the pointing centre.
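The two-step correction (zero-level offset, then primary-beam division) can be sketched as follows; the Gaussian beam here is only a stand-in for the published cosine-tapered MeerKAT pattern of Mauch et al. (2020).

```python
import numpy as np

def pb_correct(image_jyb, rho_arcmin, bowl_jyb=4.6e-6, fwhm_arcmin=67.0):
    """Add back the 'bowl' zero level, then divide by a circular Gaussian
    approximation to the primary-beam attenuation."""
    attn = np.exp(-4 * np.log(2) * (rho_arcmin / fwhm_arcmin) ** 2)
    return (image_jyb + bowl_jyb) / attn

# At rho = 12 arcmin (the ends of PKS 2014-55) the Gaussian correction
# factor is only ~1.09:
print(1.0 / np.exp(-4 * np.log(2) * (12.0 / 67.0) ** 2))
```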
The image was restored with a 7.43″ × 7.35″ Gaussian beam, so the brightness temperature corresponding to a spectral intensity S p = 1 µJy beam −1 is T b ≈ 13.7 mK at ν = 1.28 GHz. The rms noise plus confusion is σ = 3.5 ± 0.2 µJy beam −1 . We used the AIPS task TVSTAT to measure the total 1.28 GHz flux density in the Ω = 1.02 × 10 −5 sr region bounded by the faint "cocoon" that makes Fig. 2 resemble the photograph of a jellyfish; it is S = 2.36 ± 0.08 Jy, with the quoted error being dominated by an estimated 3% uncertainty in the absolute flux-density scale. Inside the region subtended by PKS 2014−55, the 4.6 × 10 −6 Jy beam −1 bowl correction contributed only S ≈ 0.03 Jy to the total flux density. The smooth bowl correction is only ∼ 12% of the average cocoon brightness, so our detection of the sharp-edged cocoons is quite reliable. Most published flux densities (e.g. Wright & Otrupcek 1990; Hindson et al. 2014) at nearby frequencies appear to be about 25% lower than ours, perhaps because they didn't capture the full extent of this very large source. The Murchison Widefield Array (MWA) flux densities (Hindson et al. 2014) S = 15.2 ± 0.8, 13.3 ± 0.7, 11.3 ± 0.6, and 8.8 ± 0.5 Jy at 120, 149, 180, and 226 MHz, respectively, are consistent with PKS 2014−55 having an overall spectral index α ≡ +d ln S/d ln ν in the range −1.0 < α < −0.8. The central region in our 1.28 GHz total-intensity image of PKS 2014−55 (Fig. 3) is well fit by the sum of three Gaussian components: two with completely free size and position parameters plus a (forced) point source with free position and flux density representing a possible radio core between them. The results are listed in Table 1, which gives the positions, 1.28 GHz flux densities, deconvolved major and minor diameters between half-maximum points, and major-axis position angles of the Gaussian fits to the central region of PKS 2014−55. Although it is not clearly visible in Fig. 3, the 32 ± 6 mJy core component is required for the best fit and its fitted position is < 1″ from the nucleus of the host galaxy PGC 064440. The line connecting the two outer components has position angle PA = 154° measured from north to east. Saripalli et al. (2008) imaged the central region of PKS 2014−55 with sub-arcsec resolution and found five nearly collinear radio components: a central core, an inner double source, and an outer double source. The position angles of their inner and outer doubles are +150° and +156°, respectively. Thus each long, narrow Gaussian component in Table 1 is a blend of two relatively compact sources. The Saripalli et al. (2008) radio core has a fairly flat (α > −0.5) spectrum above ν = 1.28 GHz, so it is probably synchrotron self-absorbed and completely unresolved.
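The quoted 13.7 mK per µJy beam⁻¹ follows from the Rayleigh-Jeans conversion for the restoring beam:

```python
import numpy as np

k_B, c = 1.380649e-23, 2.99792458e8   # SI units

def tb_per_beam(s_jy_beam, nu_hz, bmaj_as, bmin_as):
    """T_b = S lambda^2 / (2 k Omega) for a Gaussian beam with solid angle
    Omega = pi * bmaj * bmin / (4 ln 2)."""
    to_rad = np.pi / (180 * 3600)
    omega = np.pi * (bmaj_as * to_rad) * (bmin_as * to_rad) / (4 * np.log(2))
    return s_jy_beam * 1e-26 * (c / nu_hz) ** 2 / (2 * k_B * omega)

print(tb_per_beam(1e-6, 1.28e9, 7.43, 7.35))   # ~0.0137 K per uJy/beam
```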
Polarization
The rotation measure in front of PKS 2014−55 is shown in Figure 4. It is RM ∼ 40 rad m −2 and varies by only a few rad m −2 across the source, so it may originate primarily in our Galaxy.
Polarization "B" vectors at zero wavelength parallel to the magnetic field orientation in the emitting region are plotted as short line segments in Fig. 5. The lengths of the vectors are proportional to the fractional polarization. The magnetic vectors are nearly parallel to the main jets, as in most FR II (Fanaroff & Riley 1974) sources (Bridle et al. 1994). In contrast, the magnetic vectors are usually perpendicular to the jets in FR I sources (Bridle & Perley 1984). Fig. 5 shows that the magnetic field B closely follows the apparent flow around the bends in PKS 2014−55.
The fractional polarization is high (30-50%) over most of the main lobes indicating very organized magnetic fields. In the secondary lobes, the fractional polarization approaches 80% indicating very little scatter in the magnetic field orientations. On the other hand, the brighter Stokes I regions at the apexes of the "boomerangs" typically have around 15% fractional polarization indicating a more tangled magnetic field structure.
Continuum Spectral Index
The spectral-index image is given in Figure 6. The spectrum in the lobes is steep everywhere, flattening somewhat near the bright regions inside the apexes of the "boomerangs" and becoming very steep in the cocoons and near the ends of the secondary lobes. We hesitate to provide quantitative estimates of the spectral-index errors because they are largely systematic, caused by limited sampling near the centre of the (u, v) plane. Further analysis of the spectral index is deferred to a subsequent paper. The S = 2.36 Jy radio source with spectral index α ≈ −0.8 has spectral luminosity L_ν ≈ 2.0 × 10²⁵ W Hz⁻¹ at ν = 1.28 GHz in the source frame. Such luminous sources (Ledlow & Owen 1996) usually have FR II morphologies (Fanaroff & Riley 1974) characterised by narrow jets leading to hotspots in edge-brightened lobes. PKS 2014−55 does not (Fig. 2); its long, filamentary, and diffuse main "jets" are only decaying relics of activity that ceased millions of years ago. Saripalli & Subrahmanyan (2009) found several examples of XRGs lacking hotspots at the ends of their relic jets and noted that the current lack of hotspots cannot be used to rule out backflows from earlier jets that have since decayed.
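The quoted spectral luminosity can be reproduced from the flux density and the distance; a minimal sketch, assuming D_L = D_A(1 + z)² with the D_A ≈ 240 Mpc adopted below and neglecting the small K-correction:

```python
import math

Mpc = 3.086e22                       # m
z = 0.06
D_L = 240 * Mpc * (1 + z)**2         # luminosity distance from D_A ~ 240 Mpc

S = 2.36e-26                         # 2.36 Jy in W m^-2 Hz^-1
print(4 * math.pi * D_L**2 * S)      # ~2e25 W/Hz, matching the text
```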
The current spectral brightnesses of the relic jets are only T_b ∼ 14 K, so their minimum-energy magnetic field strengths B_min (e.g., Pacholczyk 1970; Worrall & Birkinshaw 2006) are low. For electrons emitting at their critical frequencies from ν₁ = 10⁷ Hz to ν₂ = 10¹⁰ Hz, proton/electron energy ratio κ, and source line-of-sight depth d, Equation 2 gives B_min in convenient astronomical units. Even for a line-of-sight depth d = 100 kpc and a high proton/electron energy ratio κ = 2000, the magnetic field strength is only B_min ∼ 3 µG and the corresponding synchrotron lifetime τ ∼ c_12 B^{-3/2} ∼ 1.3 × 10¹⁶ s ∼ 4 × 10⁸ yr (Pacholczyk 1970) is very long. The energy density of the CMB at z = 0.06 is the same as that of a B ∼ 3.7 µG magnetic field, so inverse-Compton (IC) scattering off the CMB reduces the radiative lifetimes in the relic jets to τ ∼ 2 × 10⁸ yr. The spectral steepening at the ends of the wings visible in Figure 6 indicates ages τ ≳ 10⁸ yr. This result is typical of giant radio galaxies (Ishwara-Chandra & Saikia 1999). The minimum relativistic pressure in most of PKS 2014−55 is P_min ∼ 10⁻¹⁴ (1 + κ)^{4/7} dyne cm⁻². At D_A ≈ 240 Mpc, 1″ ≈ 1.164 kpc, so the largest angular extent φ ≈ 22′.5 of PKS 2014−55 implies a projected overall length l ≈ 1.57 Mpc. This is more than twice the traditional minimum size defining a giant radio source, l ≈ 1 Mpc for H₀ = 50 km s⁻¹ Mpc⁻¹ (Willis et al. 1974), or l ≈ 0.7 Mpc for H₀ = 70 km s⁻¹ Mpc⁻¹. Even the backflow wings of PKS 2014−55 easily satisfy this criterion: their total projected extent is φ ≈ 14′.0 or l ≈ 0.98 Mpc. The two long arms have nearly equal projected lengths (11′.7 and 10′.8 for the NW and SE arms, respectively) and flux densities (1.033 and 1.025 Jy), so they show no evidence for relativistic time delays or flux boosting.
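The interplay between the synchrotron and inverse-Compton lifetimes quoted above can be checked with textbook relations: the CMB energy density corresponds to an equivalent field B_CMB = 3.24(1 + z)² µG, and IC losses shorten the radiative lifetime by the factor U_B/(U_B + U_CMB). These relations are standard bookkeeping, not taken from this paper:

```python
import math

yr = 3.156e7                          # s
z = 0.06
B = 3e-6                              # G, minimum-energy field of the relic jets
U_B = B**2 / (8 * math.pi)            # magnetic energy density, erg/cm^3

B_cmb = 3.24e-6 * (1 + z)**2          # ~3.6 uG, cf. the ~3.7 uG in the text
U_cmb = B_cmb**2 / (8 * math.pi)

tau_sync = 4e8 * yr                   # synchrotron-only lifetime quoted above
tau_rad = tau_sync * U_B / (U_B + U_cmb)
print(tau_rad / yr)                   # ~2e8 yr, as quoted
```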
PKS 2014−55 extends far beyond the virial radius of its host galaxy and directly probes the ambient intergalactic medium (IGM). PGC 064440 is not in a cluster environment rich enough to have a significant intracluster medium. Malarecki et al. (2015) made a spectroscopic study of the Mpc-scale environments for a sample of low-redshift giant radio galaxies including PKS 2014−55. The number density of galaxies more luminous than M = −19.49 mag in a cylinder of 1 Mpc radius and 24 Mpc length along the line-of-sight centred on PKS 2014−55 is only n ∼ 0.066 Mpc⁻³, a typical density in galaxy groups and poor clusters, but a factor of 10 lower than in galaxy clusters.
The faint radio cocoons in Fig. 7 are defined by their fairly constant brightness temperatures T_b ∼ 0.5 K between sharp inner and outer boundaries. Figure 6 shows that they have the steep spectra produced by radiative losses, so, like the relic jets, they too may be relics of even earlier activity. Inserting T_b = 0.5 K, ν = 1.28 GHz, and line-of-sight depth d = 100 kpc into Equation 2 yields B_min ≈ 0.15 (1 + κ)^{2/7} µG in the cocoon of PKS 2014−55. The corresponding magnetic energy density is U_B ∼ 1.0 × 10⁻¹⁵ (1 + κ)^{4/7} erg cm⁻³ and the minimum relativistic pressure in the cocoons is P_min ∼ 1.3 × 10⁻¹⁵ (1 + κ)^{4/7} dyne cm⁻². These low-pressure cocoons are exceptionally sensitive barometers for measuring the pressure of the intra-group medium (IGrM) or the IGM (Malarecki et al. 2013).
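The cocoon energy density and minimum pressure follow directly from B_min. The pressure bookkeeping below assumes the usual near-equipartition convention U_e ≈ (4/3)U_B with relativistic-particle pressure U_e/3; this convention is an assumption consistent with the quoted numbers, not a statement from the paper:

```python
import math

B_min = 0.15e-6                      # G, cocoon field for kappa = 0
U_B = B_min**2 / (8 * math.pi)       # ~1e-15 erg/cm^3, as quoted
P_min = U_B + (4 / 3) * U_B / 3      # ~1.44 U_B ~ 1.3e-15 dyne/cm^2
print(U_B, P_min)
```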
If the pressure P_e in the external medium is less than the cocoon pressure P_c, the cocoon should expand laterally with the speed v_⊥ at which the ram pressure balances the cocoon overpressure: ρ_e v_⊥² = P_c − P_e, i.e. v_⊥ = [(P_c − P_e)/ρ_e]^{1/2} (Equation 4), where ρ_e is the external mass density. The IGM may contain half of all baryons and Ω_b ≈ 0.046, so a lower limit to the external mass density at redshift z = 0.06 is the mean baryonic IGM density ρ_e ≈ (Ω_b/2) ρ_c (1 + z)³ ≈ 2.5 × 10⁻³¹ g cm⁻³. For a primordial abundance of fully ionized H and He, the mean mass per particle is µ_e ≈ 0.6 m_p ≈ 1.0 × 10⁻²⁴ g and the particle density is n_e = ρ_e/µ_e ≈ 2.5 × 10⁻⁷ cm⁻³. PGC 064440 is in a poor group of galaxies, where the IGrM particle density is ∼ 10× as high and the temperature range is 10⁶ K < T < 10⁷ K (Stocke et al. 2019), so the external particle pressure P_e = nkT ≈ 10⁻¹⁵ dyne cm⁻². Even if κ = 0, the minimum cocoon pressure is comparable with the external pressure. Higher κ or non-equipartition magnetic fields would only increase the cocoon pressure. In the limit P_e ≪ P_c, inserting P_c = P_min into Equation 4 predicts that the cocoon boundary should be expanding into the surrounding medium with speeds between v_⊥ ∼ 200 (1 + κ)^{2/7} km s⁻¹ (IGrM) and v_⊥ ∼ 700 (1 + κ)^{2/7} km s⁻¹ (IGM). These expansion speeds are subsonic in the radio cocoons, allowing the cocoons enough time to reach pressure equilibrium (Begelman et al. 1984) and attain their constant brightness temperatures. The PKS 2014−55 cocoons are l_⊥ ≈ 50 kpc wide, so the expansion time scales τ ≡ l_⊥/v_⊥ of the cocoons are 70 (1 + κ)^{−2/7} ≲ τ(Myr) ≲ 250 (1 + κ)^{−2/7}. The energy density of the CMB surrounding PKS 2014−55 at z = 0.06 is U_CMB = 5.3 × 10⁻¹³ erg cm⁻³. It is larger than the magnetic energy density in the cocoon even in the unlikely event (Beck & Krause 2005) that κ > m_p/m_e ∼ 2000. The ratio of IC losses to synchrotron losses is U_CMB/U_B, so the radiative lifetimes ∼ 100 Myr of relativistic electrons in the cocoons are strongly limited by IC scattering off the CMB, not by synchrotron radiation.
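A quick numerical check of the expansion speeds and time scales for κ = 0, using Equation 4 in the limit P_e ≪ P_c and the ambient densities derived above (a sketch, not the paper's own computation):

```python
import math

kpc, km, Myr = 3.086e21, 1e5, 3.156e13      # cgs conversions

P_c = 1.3e-15                               # cocoon minimum pressure, kappa = 0
rho_igm = 2.5e-31                           # g/cm^3, mean baryonic IGM density
for name, rho in [("IGM", rho_igm), ("IGrM", 10 * rho_igm)]:
    v = math.sqrt(P_c / rho)                # ram-pressure balance, P_e << P_c
    tau = 50 * kpc / v                      # l_perp / v_perp
    print(name, v / km, tau / Myr)          # ~700 km/s, ~70 Myr (IGM);
                                            # ~230 km/s, ~215 Myr (IGrM)
```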
The 2MASX apparent magnitude of PGC 064440 is m_K = 11.64, the distance modulus is 37.16, and the K correction is K ≈ −6.0 log₁₀(1 + z) ≈ −0.15 mag (Kochanek et al. 2001), so M_K ≈ −25.37 and the total mass of stars inside the photometric radius r ∼ 34 kpc is log₁₀(M*/M⊙) ≈ 11.6. The mid-infrared source WISEA J201801.29−553931.5 (Wright et al. 2010) coincides with PGC 064440. Its mid-infrared colours (W2 − W3) = +3.066 and (W1 − W2) = +1.256 are typical of Seyfert galaxies and far from the colours of elliptical galaxies whose mid-infrared emission is dominated by stellar photospheres (Jarrett et al. 2011). This mid-infrared evidence for circumnuclear dust heated by an active galactic nucleus (AGN) is supported by the presence of a heavily obscured (column density log[N_H (cm⁻²)] = 23.51 ± 0.14) hard X-ray source at the centre of PKS 2014−55 (Panessa et al. 2016) and the absence of broad optical emission lines. Star formation may also contribute to the mid-infrared emission from PGC 064440.
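The absolute magnitude follows from M = m − DM − K; a one-line check using the luminosity distance implied by D_A ≈ 240 Mpc:

```python
import math

z = 0.06
D_L = 240 * (1 + z)**2                      # Mpc
DM = 5 * math.log10(D_L * 1e6 / 10)         # distance modulus
K = -6.0 * math.log10(1 + z)                # K-band K correction, mag
print(DM, 11.64 - DM - K)                   # reproduces DM ~ 37.16, M_K ~ -25.37
```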
PGC 064440 is a Seyfert II galaxy with very strong high-ionization ([O iii]/Hβ ∼ 13) emission lines (Simpson et al. 1996). Many powerful radio galaxies have extended emission-line regions (EELRs) with radii ≳ 100 kpc. Tadhunter et al. (1989) observed the [O iii]λ5007 line of PGC 064440 with a long slit in PA = 192° nearly parallel to the continuum major axis and found emission extending ∼ 11 kpc on both sides of the nucleus with the linear velocity field of a constant-density enclosed mass and maximum rotation velocity |∆V_max| ≈ 280 km s⁻¹ relative to the nucleus, indicating a total mass M ∼ 2 × 10¹¹ M⊙ within ∼ 11 kpc of the nucleus.
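The enclosed mass quoted from the rotation curve is just the Keplerian estimate M ≈ V²r/G:

```python
G, kpc, M_sun = 6.674e-8, 3.086e21, 1.989e33   # cgs

v = 280e5                                      # cm/s
r = 11 * kpc
print(v**2 * r / G / M_sun)                    # ~2e11 Msun, as quoted
```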
THE HYDRODYNAMICAL BACKFLOW MODEL
The extended radio jets of most high-luminosity sources are linear and inversion symmetric about their host galaxies. Leahy & Williams (1984) noted that opposing axisymmetric backflows could form a fat disc expanding laterally where they encountered the hot ISM of their host galaxy, but a misaligned ellipsoidal hot gas halo could break the axial symmetry and bend the backflows in opposite directions away from the ellipsoid major axis to produce the secondary arms or "wings" of XRGs (Fig. 1). Saripalli & Subrahmanyan (2009) found that XRGs lacking FR II hotspots often contain inner doubles, indicating restarted jets along the same axis, and they proposed that the wings are asymmetric backflows. Backflows from extended radio jets can be treated as fluid flows because their magnetic fields are strong enough that even ultrarelativistic protons and electrons have Larmor radii much smaller than the jet radius. Magnetic lines of force are frozen into the jet fluid, so velocity shear across the jet tends to align the magnetic field along the jet, and jet growth can increase the frozen-in magnetic field strength to near equipartition. Many astrophysical jets are stable enough to survive bending, as demonstrated by the bent tails of radio galaxies moving through dense gas in massive galaxy clusters.
Backflow Geometry
A faint dust lane extending ∼ 10″ (∼ 12 kpc) is just visible in the DES DR1 (Abbott et al. 2018) r-band image of PGC 064440 (Fig. 8). Fig. 9 is the corresponding r-band brightness contour plot. The narrow inner ellipse represents the tilted circular ring that overlaps the dust lane, of which only the near half is visible in absorption. The two larger ellipses are fits to the first and third brightness contours in Fig. 9. The parameters of these ellipses and the 2MASS K_s "total" isophotal ellipse are listed in Table 2. The isophotal ellipticities ε ≡ 1 − (φ_m/φ_M) are in the range 0.28 < ε < 0.36. They indicate that PGC 064440 is an oblate ellipsoid whose equatorial position angle is PA = 15° ± 5° and whose projected polar axis is at PA = 105° ± 5°. If the dust lane is an equatorial ring, the polar axis is inclined ≈ 8° from the plane of the sky, with the PA = 105° side closer to us and the PA = −75° side farther away. Even if PGC 064440 were a thin disc, the isophotal ellipticities independently imply that its polar axis must be ≲ 45° from the plane of the sky. The ellipse in Fig. 10 has the same position, ellipticity ε ≡ (φ_M − φ_m)/φ_M = 0.3, and major-axis position angle PA = 15° as the starlight of PGC 064440 traced by its outer K_s isophote (Table 2). The shape and orientation are justified by Chandra X-ray Observatory images (Hodges-Kluck et al. 2010) of hot (0.3 keV < kT < 2 keV) halo gas surrounding the host galaxies of XRGs, which show that the ellipticities and position angles of the hot gas follow those of the stellar light distributions. The 1.57 Mpc long straight line in Fig. 10 is centred on PGC 064440 and has the same position angle PA = 154° as the inner triple source. The fact that it closely overlaps the long arms of PKS 2014−55 proves that the radio jet orientation today is within ∼ 5° of its orientation when the jets were launched tens of millions of years ago and when the faint cocoons were formed even earlier. This is expected if the radio "wings" in PA ≈ 90° are produced by hydrodynamical backflows from stable jets but improbable for models in which the wings are produced by changing the jet orientation.
The hydrodynamical model predicts that the ends of the wings should contain the "oldest" synchrotron electrons. Their very steep continuum spectra (Figure 6) are consistent with synchrotron and inverse-Compton energy losses. The cocoons are probably even older. The biggest challenge to the hydrodynamical backflow model presented by PKS 2014−55 is the need to deflect both of its very wide (observed width ∼ 150 kpc, which we treat as a cylinder of radius r ∼ 75 kpc) backflows cleanly in opposite directions without splitting them. This requires that the hot ISM have both a high ellipticity ε and a large semimajor axis l. Capetti et al. (2002) found that XRG host galaxies in a small sample all have ε > 0.17, a criterion easily satisfied by PGC 064440, while only three in their reference sample of 15 3C FR II galaxies have ε ≥ 0.3, the ellipticity of PGC 064440.
When they encounter the hot halo ISM of the host galaxy, backflows initially parallel to the main jets are deflected toward the most negative pressure gradient. The ellipse in Fig. 10 represents an isobaric contour, so backflows bend away from the directions in which the angle ∆ between the backflow and the constant-pressure contour of the ellipse is < 90°. For any ∆, backflow radius r, and ellipticity ε, this implies that a minimum galaxy semimajor axis l_min is needed to deflect the entire backflow to one side. In the case of PGC 064440, ∆ = 41° and ε = 0.3, so x_i ≈ 0.854 l and y_i ≈ 0.364 l. Then Equation A5 yields the minimum semimajor axis l_min that can cleanly deflect the backflow; this is the size of the ellipse drawn in Fig. 10. This ellipse is just the projection onto the sky of the galaxy, which is an ellipsoid of revolution. However, the dust lane indicates that the polar axis of PGC 064440 lies only 8° from the plane of the sky, so the observed ellipse is a good representation of the ellipsoid. PGC 064440 includes a stellar mass M* ≈ 10^{11.6} M⊙ (Section 4.2), so the galaxy stellar-to-halo mass relation (SHMR) predicts that its halo virial mass should be M_vir ≳ 10^{13.3} M⊙ (Wechsler & Tinker 2018). Such massive galaxies are typically assembled at redshift z_a ∼ 0.9 and have virial radii R_vir ≈ 206 kpc (M_vir/10¹² M⊙)^{1/3} (1 + z_a)⁻¹ (Shull 2014).
Thus PGC 064440 has R vir ∼ 290 kpc, and its halo is (just) big enough to completely deflect the wide backflows of PKS 2014−55.
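The R_vir ∼ 290 kpc figure is reproduced by the virial-radius scaling attributed to Shull (2014); the 206 kpc coefficient below is inferred from the quoted numbers and should be treated as approximate:

```python
M_vir = 10**13.3                     # Msun, SHMR lower limit quoted above
z_a = 0.9                            # typical assembly redshift quoted above
R_vir = 206 * (M_vir / 1e12)**(1 / 3) / (1 + z_a)   # kpc
print(R_vir)                         # ~290 kpc
```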
Head-tail brightness features in the backflows
Radio continuum features with bright bent "heads" and long dark "tails" pointing downstream in the backflows (Fig. 2) of both secondary lobes at first suggest obstacles blocking the backflows. The clearest example is in the western backflow (Fig. 11). Its linear diameter ∼ 10 kpc is comparable with the size of cold H i discs in spiral galaxies. However, we detected no H i line emission and found no visible galaxies on the DES images downstream of the heads. Furthermore, 10 kpc is much smaller than the 2r ∼ 150 kpc diameter of the backflows. The line-of-sight depth of the backflows from such wide, axially symmetric jets should be ≫ 10 kpc, so a 10 kpc obstacle could not reduce the backflow brightness by the observed amount shown in Fig. 11, which is more than a factor of two.
The dark tail in Fig. 11 cannot be attributed to free-free absorption in an intervening ionized cloud with kinetic temperature T ≳ 10⁴ K because an absorber with optical depth τ emits with brightness temperature T_b = T[1 − exp(−τ)], and the 1.28 GHz brightness temperature T_b ∼ 2 K of the tail implies τ < 10⁻³. Fig. 10 shows that the dark tails appear on the downstream sides of both backflows near the deflecting halo boundary indicated by the ellipse. It also shows two matching bright regions, one just inside the apex of each "boomerang". These bright regions are probably not traditional FR II hotspots because they are not in the radio jets and they do not have the usual edge-brightened morphology. We suggest that the bright regions indicate compression and turbulence where backflow material is piling up inside the apex. Turbulence on scales smaller than our θ_{1/2} = 7″.4 beam could explain the low observed polarized intensity in the brighter northwestern region (Fig. 5). We suspect that the matched sets of dark tails are simply hydrodynamical features downstream of the bright regions, and don't actually indicate the presence of external obstructions.
Figure 11. The bright bent "head" and long dark "tail" in this continuum image suggest blockage of the backflow by an obstacle of size ∼ 10 kpc. The contour levels are 200 µJy beam⁻¹ × 1, 2, 3, . . . , 8.
SUMMARY
The hydrodynamical backflow model for PKS 2014−55 is supported by the following evidence: (i) The observed "double boomerang" radio morphology is expected for backflows from a pair of collinear jets redirected by the oblique hot ISM of the host galaxy. Both the magnetic fields and the total-intensity ridges follow the continuously bending flow lines. Two matching bright regions inside the boomerang apexes suggest compression and turbulence where backflow material is piling up. Bright heads and dark tails appear between both bright regions and their backflow wings. They appear to be features in the flow, not signs of obstruction by the ISM of nearby galaxies.
(ii) AGN activity in PKS 2014−55 has recently restarted (Fig. 3), with the reborn jets in the same direction (Fig. 10) as the main lobes. Thus, the secondary wings are very unlikely to be the result of a change in the orientation of the spin axis of the supermassive black hole (SMBH).
(iii) The virial halo of the host galaxy PGC 064440 is large enough and has the correct position angle to cleanly deflect backflows from the wide main jets in the direction observed.
The unique combination of high surface-brightness sensitivity (σ ≈ 48 mK), high angular resolution (θ_{1/2} ≈ 7″.4), and dense (u, v)-plane coverage of our new MeerKAT continuum image makes the very extended radio source PKS 2014−55 the best example of an XRG produced by hydrodynamical backflows from a jet with fixed orientation. The prototypical XRG NGC 326 has been cited as evidence for jet reorientation following an SMBH-SMBH merger (Merritt & Ekers 2002). However, Hardcastle et al. (2019) reobserved the dumbbell galaxy NGC 326 with the Low-Frequency Array (LOFAR) at 144 MHz and found faint, extended radio morphological evidence for hydrodynamical effects related to an ongoing group or cluster merger. Although their result does not rule out the spin-flip model for NGC 326, we endorse their caution not to infer jet reorientation in XRGs lacking deep and detailed radio images.
The new MeerKAT continuum image also revealed faint (T_b ∼ 0.5 K), low-pressure (P_min ∼ 10⁻¹⁴ dyne cm⁻²) cocoons with sharp edges and the nearly constant brightness characteristic of subsonic (in the cocoons) expansion into the surrounding intra-group medium probed by the giant source PKS 2014−55. This pressure assumes κ = 40. The pressure in the cocoons could range from P_min ∼ 10⁻¹⁵ dyne cm⁻² if κ = 0 to P_min ∼ 10⁻¹³ dyne cm⁻² if κ = 2000.
"Physics"
] |
PSPICE Hybrid Modeling and Simulation of Capacitive Micro-Gyroscopes
With the aim of reducing the cost of prototype development, this paper establishes a PSPICE hybrid model for the simulation of capacitive microelectromechanical systems (MEMS) gyroscopes. This is achieved by modeling gyroscopes in different modules, then connecting them in accordance with the corresponding principle diagram. Systematic simulations of this model are implemented along with a consideration of details of MEMS gyroscopes, including a capacitance model without approximation, mechanical thermal noise, and the effect of ambient temperature. The temperature compensation scheme and optimization of interface circuits are achieved based on the hybrid closed-loop simulation of MEMS gyroscopes. The simulation results show that the final output voltage is proportional to the angular rate input, which verifies the validity of this model.
Introduction
In recent years, due to the rapid development of microelectromechanical systems (MEMS) technology, MEMS gyroscopes have become an indispensable part of the inertial navigation, military, and consumer electronics markets owing to their small size, light weight, low cost, and high reliability [1]. The MEMS gyroscope is a device for angular rate detection, and its basic principle is based on the Coriolis Effect [1]. The proof mass of a gyroscope oscillates along the drive axis with a stable frequency and stationary amplitude; meanwhile, the gyroscope is subjected to an angular rate Ω about the input axis. It then generates a vibration of the same frequency along the sense axis due to the Coriolis force. The angular rate information is provided by the sense axis; specifically, the angular rate is proportional to the direct current (DC) output of the sense axis. It is necessary to note that the three axes mentioned above are orthogonal to each other.
The main contributions of this paper are as follows. (1) A detailed model of MEMS gyroscopes is developed. This model considers the mechanical thermal noise equivalent disturbance as well as a capacitance model without approximation. The model also considers the effect of temperature on the displacements of both modes. (2) All output ports of the model are capacitive interfaces, which can be directly connected to conditioning circuits. Therefore, a closed-loop simulation of the model and interface circuits is achieved. A calibration scheme for temperature is developed based on the different Zero Rate Outputs (ZRO) at different temperatures. (3) Based on the simulation results, optimization designs of the interface circuits are achieved, including the value of the demodulation signal phase φ_1 and the circuit gain k in the closed-loop detection circuit.
The remainder of this paper is organized as follows. In Section 2, PSPICE models of different modules in gyroscopes are firstly proposed, including a sensitive structure model, a Coriolis force and elastic coupling force model, a mechanical thermal noise equivalent disturbance model, a mechanical model, the temperature models of gyroscope parameters, and a differential capacitance model. Then, the PSPICE device model of capacitive MEMS gyroscopes is established based on these modules together with its principle diagram. Lastly, the interface circuits of the closed-loop simulation are also discussed. In Section 3, a systematic simulation of this model and a closed-loop simulation of MEMS gyroscopes are conducted with an analysis of the effect of ambient temperature and optimization of interface circuits, and simulation results verify the validity of this model. In Section 4, the corresponding discussion is given.
PSPICE Models of Different Modules in MEMS Gyroscopes
A complete MEMS gyroscope is a system of coupled mechanical and electrical components. Its capacitance interface is used to achieve the electromechanical coupling process. Figure 1 shows the capacitance interface of the modeled gyroscope. In Figure 1, the x-axis is the direction of drive mode, the y-axis is the direction of sense mode, and the z-axis is the input direction of the angular rate. The blue movable electrode is the structure (STRC) electrode, and it is biased with a DC voltage U_0. In terms of drive mode, the four orange plates are exactly the same and fixed. The upper two plates are actuation electrodes D_1+ and D_1−. Asinω_d t and −Asinω_d t are differential alternating current (AC) excitation voltages applied to the two electrodes for generating an electrostatic driving force F_x on the movable plate. The lower two plates D_2+ and D_2− are detection electrodes of drive mode, which detect the drive-axis displacement of the STRC electrode. The same structure exists in sense mode. The plates S_1− and S_1+ are detection electrodes of the sense mode, which detect the sense-axis displacement of the STRC electrode. The plates S_2− and S_2+ are force feedback electrodes of sense mode and are used to compensate for the Coriolis force induced by the input angular rate.
There is some software that can be used in the simulation of MEMS gyroscopes, such as COMSOL, SIMULINK, and PSPICE. COMSOL mainly emulates the mechanical characteristics of MEMS gyroscopes by finite element analysis, so it is incapable of simulating a complete gyroscope system including the interface circuits. Large quantities of simulations on MEMS gyroscopes are conducted in SIMULINK. In [9], a behavioral model of MEMS gyroscopes constructed in SIMULINK is presented. This model simply describes the mechanical model of gyroscopes and behavioral models of interface circuits. However, SIMULINK software is inadequate to simulate actual circuit electronics and changes of environmental parameters, such as ambient temperature. Because of its good convergence, PSPICE software is suitable for system-level and circuit-level simulation. PSPICE has rapid and accurate simulation ability, so it has been successfully used in a wide variety of linear and nonlinear electrical circuit simulations [10]. PSPICE can simulate both the electronic and non-electronic components, which is consistent with the purpose of this paper, namely the modeling of MEMS gyroscopes and the simulation of interface circuits. Other important reasons to choose PSPICE as the simulation tool are that PSPICE uses more accurate and realistic analog electronic component models and the parameters of models are variable in the simulation process. Figure 2 shows the principle diagram of capacitive MEMS gyroscopes, which is based on the diagram of MEMS sensors in [11]. Different modules of MEMS gyroscopes in this principle diagram are established by a PSPICE Simulation Model, and these modules are discussed separately in the following sections. This diagram illustrates that the inputs of gyroscopes include the AC excitation voltage for drive mode, the input angular rate for sense mode, and the mechanical thermal noise existing in both modes. The outputs of gyroscopes are the changes of differential capacitance, which are caused by the vibration displacements of both modes.
The Sensitive Structure
For drive mode, the electrostatic driving force is induced by the differential AC excitation voltages acting on the sensitive structure. Specifically, Asinω_d t and −Asinω_d t are applied to the actuation electrodes D_1+ and D_1−, and the STRC electrode is biased with a DC voltage U_0. The electrostatic force F_x is normal to the plates and is expressed as the gradient of the electrical potential energy in the capacitor.
The energy of a parallel-plate capacitor is E = CV²/2, where C is the capacitance value and V is the voltage applied to the capacitor. The electrostatic driving force F_x+ between the movable plate and the right actuation plate can be calculated as F_x+ = ∂E/∂x = C_0·d_x·(U_0 − Asinω_d t)²/[2(d_x − x)²], where d_x is the initial spacing of the two plates, x is the drive-axis displacement of the STRC electrode, C_0 is the initial capacitance between the movable plate and the actuation plates of drive mode and its value is εa_x/d_x, ε is the dielectric constant, and a_x is the plate area. The same derivation also applies to the left actuation plate and the movable plate. The total electrostatic driving force F_x for the movable plate is then F_x = C_0·d_x·(U_0 − Asinω_d t)²/[2(d_x − x)²] − C_0·d_x·(U_0 + Asinω_d t)²/[2(d_x + x)²]. The above formula shows that F_x is related to Asinω_d t and x. In the case x << d_x, F_x can be simplified to F_x ≈ (2C_0·U_0/d_x)·Asinω_d t. Formula (4) indicates that F_x is proportional to the excitation voltage Asinω_d t when x << d_x, and the gain factor k_vf is 2C_0·U_0/d_x. With the sensitive structure, the AC excitation voltage can be converted into the electrostatic driving force of drive mode. The scale factor k_vf will be used in the closed-loop simulation of MEMS gyroscopes.
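A minimal numerical sketch of the drive-force model above; all device parameters here are illustrative placeholders, not the paper's Table 1 values, and the overall sign depends on which actuation electrode carries +Asinω_d t:

```python
eps, a_x, d_x = 8.854e-12, 1e-6, 2e-6   # F/m, m^2, m (illustrative)
U0, A = 5.0, 0.1                        # DC bias and AC amplitude, V
C0 = eps * a_x / d_x

def drive_force(x, v_ac):
    """Full differential electrostatic force, no small-x approximation."""
    f_plus = C0 * d_x * (U0 - v_ac)**2 / (2 * (d_x - x)**2)
    f_minus = C0 * d_x * (U0 + v_ac)**2 / (2 * (d_x + x)**2)
    return f_plus - f_minus

k_vf = 2 * C0 * U0 / d_x                # linearized gain, N/V
print(drive_force(0.0, A), -k_vf * A)   # identical at x = 0
```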
The Coriolis Force and Elastic Coupling Force Model
This model describes the mechanism for the generation of the Coriolis force and the elastic coupling force. The Coriolis force is induced by the Coriolis Effect. It is related to the angular rate input and the displacement velocity of drive mode. The Coriolis force can be calculated as F_c = −2m_x·(Ω × V_x), where Ω is the angular rate of rotation and is along the z-axis, V_x is the displacement velocity of the STRC electrode in drive mode, and m_x is the effective mass of drive mode, which also happens to be the proof mass in our gyroscope model. The direction of the Coriolis force is along the y-axis. It should be noted that when the input angular rate is relatively large, the Coriolis force generated by the movement of sense mode will also affect the drive mode. The corresponding Coriolis force is −2m_y·(Ω × V_y), where m_y is the effective sense mass and V_y is the displacement velocity of the STRC electrode in sense mode. This force will act upon the drive mode of MEMS gyroscopes. MEMS gyroscopes may be affected by fabrication imperfections, which can lead to a coupling error between the drive mode and the sense mode. The most common error is a quadrature error due to spring imbalance, which causes an additional elastic coupling force. This force is orthogonal to the Coriolis force in phase [12], and it is proportional to the coupling stiffness and the drive-axis displacement. The size of the coupling force can be calculated as F_q = k_yx·x, where k_yx is the coupling stiffness, which is used to characterize the effect of spring imbalance. The direction of the elastic coupling force is also along the y-axis.
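The relative sizes of the two sense-axis forces can be illustrated numerically; the mass, frequency and amplitude below are placeholders, while k_yx = 5 N/m and Ω = 0.01°/s are the values discussed later in the text:

```python
import numpy as np

m_x, k_yx = 5e-7, 5.0                        # kg (illustrative), N/m (Table 1)
omega_d, x_amp = 2 * np.pi * 3.5e3, 5e-6     # rad/s, m (illustrative)
Omega = np.deg2rad(0.01)                     # typical angular rate, 0.01 deg/s

t = np.linspace(0, 1e-3, 1000)
x = x_amp * np.sin(omega_d * t)              # drive displacement
vx = x_amp * omega_d * np.cos(omega_d * t)   # drive velocity

F_cor = -2 * m_x * Omega * vx                # Coriolis, in phase with velocity
F_quad = k_yx * x                            # quadrature, 90 deg out of phase
print(abs(F_cor).max(), abs(F_quad).max())   # quadrature dominates, as stated
```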
The Mechanical Thermal Noise Equivalent Disturbance
For the mechanical thermal noise existing in a MEMS gyroscope's structure, [13] indicates that the effect of the noise can be treated as an equivalent constant disturbance force upon both modes. The size of the equivalent force is F_n = √(4·K_B·T_0·c·B), where K_B is Boltzmann's constant, whose value is 1.38 × 10⁻²³ J/K, T_0 is the temperature in degrees Kelvin, and B is the bandwidth of the gyroscope. For a mode-split gyroscope, its bandwidth is approximately equal to 0.54 times the frequency difference when operating in open-loop detection mode [14]. Additionally, this bandwidth is valid for the simulation of a PSPICE device model of capacitive MEMS gyroscopes. c is the damping coefficient of the gyroscope structure, and it is different for the two modes.
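A numerical sense of scale for the equivalent noise force, with illustrative device parameters (the damping coefficient anticipates the expression in the next subsection):

```python
import math

K_B, T0 = 1.38e-23, 300.0               # J/K, K
m_x, Q_x, f_x = 5e-7, 1000.0, 3.5e3     # illustrative drive-mode parameters
c_x = m_x * 2 * math.pi * f_x / Q_x     # damping coefficient

B = 0.54 * 100.0                        # bandwidth for a 100 Hz mode split
print(math.sqrt(4 * K_B * T0 * c_x * B))   # equivalent disturbance force, N
```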
The corresponding values of the damping coefficient are c_x = m_x·ω_x/Q_x and c_y = m_y·ω_y/Q_y, where Q_x and ω_x are the quality factor and resonant angular frequency of drive mode, respectively, Q_y and ω_y are the quality factor and resonant angular frequency of sense mode, respectively, and m_x and m_y are the effective drive mass and effective sense mass, respectively. It is noteworthy that the quality factor and resonant frequency of gyroscopes are greatly affected by the ambient temperature. The effect of temperature on these parameters will be discussed in the following temperature model.
The Mechanical Model of MEMS Gyroscopes
A mechanical model of MEMS gyroscopes can convert the resultant forces into displacements for both modes. Additionally, a gyroscope can be considered as a second-order spring-mass-damping system. Its characteristic equations are m_x·ẍ + c_x·ẋ + k_xx·x = F_x + F_nx and m_y·ÿ + c_y·ẏ + k_yy·y = −2m_x·Ω·ẋ + k_yx·x + F_ny, where k_xx and k_yy are the stiffness coefficients of the drive mode and sense mode, respectively, F_nx and F_ny are the mechanical thermal noise equivalent disturbance forces of the drive mode and sense mode, respectively, and F_x and −2m_x·Ω·ẋ are the driving forces of the two modes, respectively. The transfer functions of the characteristic equations are H_x(s) = X(s)/F(s) = 1/(m_x·s² + c_x·s + k_xx) and H_y(s) = Y(s)/F(s) = 1/(m_y·s² + c_y·s + k_yy), where s is the variable symbol jω_d of the transfer function. The transfer functions convert the corresponding resultant forces into displacements for both modes when the mechanical model parameters of the gyroscopes are known. Additionally, the mechanical model is also related to temperature.
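The second-order mechanics map directly onto a rational transfer function; a sketch with illustrative parameters, where scipy's signal module stands in for the PSPICE implementation:

```python
import numpy as np
from scipy import signal

m_x, Q_x, f_x = 5e-7, 1000.0, 3.5e3     # illustrative drive-mode parameters
w_x = 2 * np.pi * f_x
c_x = m_x * w_x / Q_x                   # damping, as defined above
k_xx = m_x * w_x**2                     # stiffness consistent with w_x

# X(s)/F(s) = 1 / (m s^2 + c s + k)
H_x = signal.TransferFunction([1.0], [m_x, c_x, k_xx])
w, H = signal.freqresp(H_x, w=np.array([w_x]))
print(abs(H[0]), Q_x / k_xx)            # at resonance |H| = Q_x / k_xx
```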
The Temperature Models of Gyroscope Parameters
As discussed earlier, the quality factor and resonant frequency of gyroscopes are greatly impacted by the ambient temperature. Therefore, the temperature will affect the displacements of both modes. The authors in [15] indicate that temperature and resonant frequency approximately follow a linear relationship. A corresponding experiment was carried out to obtain the linear factor based on the gyroscopes in the authors' research group. The varying resonant frequency measurement is based on the dynamic signal analyzer. At different temperatures, the phase-locked loop controls the drive mode to operate at different working frequencies. Because the Q value of drive mode is very large, the working frequency is approximately equal to the resonant frequency. Therefore, the resonant frequencies of drive mode are obtained by measuring the working frequencies through the "channel" port of the dynamic signal analyzer at different ambient temperatures. For sense mode, the measurement of resonant frequency at different temperatures is achieved through the "source" and "channel" ports of the dynamic signal analyzer. The "source" port outputs a chirp sweep signal to the force feedback electrode, and corresponding response signals with different amplitudes appear on the detection electrode. These response signals are displayed on the dynamic signal analyzer through the "channel" port. The point with the largest response amplitude corresponds to the resonant frequency of sense mode. The resonant frequencies f_x and f_y of drive mode and sense mode were then fitted as linear functions of the ambient temperature T in degrees centigrade. In order to obtain a higher quality factor, the microstructure of MEMS gyroscopes is vacuum-packaged. In [16], the quality factor is proportional to the resonant frequency and inversely proportional to the square root of the temperature in degrees Kelvin, and the proportionality factors of the two modes are different. The resonant frequency and quality factor at 27 degrees centigrade are used to calculate the proportionality factor of both modes. Therefore, the Q values of the two modes at 27 degrees Celsius need to be measured. The Q value measurement needs the "source" port and the "channel" port of the dynamic signal analyzer. For the drive mode and sense mode, the "source" port outputs a chirp sweep signal to the actuation electrode and the force feedback electrode, respectively. The corresponding response signals are generated on the detection electrodes of both modes and they are displayed on the dynamic signal analyzer through the "channel" port. The Q value is obtained by dividing the resonant frequency by the corresponding −3 dB bandwidth. A corresponding experiment carried out on the gyroscopes in the authors' research group gave, for the sense mode, Q_y = 16.891·f_y/√(T + 273.15); an analogous expression with a different proportionality factor holds for Q_x.
2.1.6. The Differential Capacitance of MEMS Gyroscopes
In Figure 1, the lower two orange plates and the movable blue electrode form the differential detection capacitance of drive mode; the two capacitances are, respectively, C_d+ = εa_x/(d_x − x) and C_d− = εa_x/(d_x + x). For sense mode, the differential detection capacitance is formed between the two fixed red plates and the movable blue electrode; the two capacitances are, respectively, C_s+ = εa_y/(d_y − y) and C_s− = εa_y/(d_y + y), where d_y is the initial spacing of the capacitance, a_y is the plate area, and y is the sense-axis displacement of the STRC electrode. In our capacitance model, the actual parallel-plate detection mechanism is used. The capacitance change is not simply approximated as directly proportional to the displacement; this approximation is sometimes used in the simulation of MEMS sensors. The distance between the MEMS capacitor plates is much less than their length and width, so the edge effect of the capacitance can be ignored. The parallel-plate sensing mechanism contributes a nonlinear behavior between the sense capacitance and the sense-axis displacement. This nonlinearity is eliminated in normal operation because the displacement produced by the Coriolis force is suppressed by the feedback force. In actual gyroscopes, the capacitive nonlinearity arises from multiple effects, including microfabrication process errors, parallel-plate nonlinearity due to deformation, C-V conversion circuit behavior, and the quadrature error existing in gyroscopes. In our model, these factors are assumed to be ideal because the entire simulation is based on PSPICE's own model components and electronics.
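The exact differential capacitance and its linearized approximation diverge as the displacement grows, which is the nonlinearity discussed above; a sketch with illustrative geometry:

```python
import numpy as np

eps, a_y, d_y = 8.854e-12, 1e-6, 2e-6   # F/m, m^2, m (illustrative)

def diff_cap(y):
    """Exact parallel-plate pair: C+ minus C-, no approximation."""
    return eps * a_y / (d_y - y) - eps * a_y / (d_y + y)

y = np.linspace(-0.2 * d_y, 0.2 * d_y, 5)
linear = 2 * eps * a_y * y / d_y**2     # first-order model
print(np.column_stack([diff_cap(y), linear]))   # gap widens with |y|
```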
Establishment of Capacitive MEMS Gyroscopes PSPICE Device Model
The parameters of the PSPICE device model of parallel-plate capacitive micro-gyroscopes are shown in Table 1 at a constant temperature of 27 degrees centigrade. All parameters except k_yx are derived from the gyroscopes in the authors' research group. The value of k_yx in Table 1 is not experimental data from our devices but a simulation value. The authors in [4,17] indicate that the typical quadrature force is much larger than the Coriolis force used for sensing the external angular rate; with the other parameters known, k_yx = 5 N/m is in accordance with this precondition (assuming a typical angular rate Ω of 0.01°/s). Additionally, k_yx = 5 N/m agrees with the experimental data reported in [4]. Therefore, we take k_yx = 5 N/m as our simulation value. The PSPICE device model is established according to Figure 2, as shown in Figure 3. The formulas in Figure 3 represent the modules in Figure 2 and have been discussed in detail in Section 2.1. This model consists of drive mode and sense mode. In terms of drive mode, the resultant force and the mechanical model form a loop to generate a displacement. All of the output ports are differential capacitance interfaces that can be directly connected to a C-V conversion circuit. For the sense mode, the driving force is induced by the Coriolis force and elastic coupling force model. It is converted into a displacement by the mechanical model, and the displacement changes the differential capacitance. The final differential capacitance output is related to the angular rate input. The specific simulation results of the parallel-plate capacitive micro-gyroscope PSPICE model at a constant temperature of 27 degrees centigrade are shown in Section 3.
The temperature will affect the resonant frequency and quality factor, which changes the gyroscope's mechanical model. So, the gyroscope will have different motion states at different temperatures. The simulation results at different temperatures with the same excitation voltage are also described in Section 3.
The PSPICE Closed-Loop Simulation of MEMS Gyroscopes
The temperature will change the motion state of a gyroscope, so it is necessary to implement closed-loop control of MEMS gyroscopes. For drive mode, the Phase-Locked Loop (PLL) method [18] is usually used to stabilize the drive frequency, and the Automatic Gain Control (AGC) method [19] is used to keep the amplitude of the drive-axis displacement constant. For sense mode, the Closed-Loop Detection method [20] is used to ensure that the proof mass always vibrates near the equilibrium position. The corresponding block diagram of the closed-loop simulation based on the above three methods and the micro-gyroscope PSPICE model is shown in Figure 4. For drive mode, the control circuits consist of a C-V conversion circuit, a PLL frequency control circuit, and an AGC amplitude control circuit. The gain factors k_vf and k_cv represent the sensitive structure and the C-V conversion circuit, respectively. When the drive mode is stable, the output of the loop filter settles to a stabilized value at a fixed temperature, which means that the vibration frequency of the displacement has reached stability. It is noteworthy that the stabilized value changes slightly with the ambient temperature due to the change of resonant frequency. Additionally, the output of the amplitude detection module is equal to the reference value when the drive mode is stable, which means that the vibration amplitude of the displacement has reached a constant value. The corresponding simulation results of closed-loop control are shown in Section 3.
For sense mode, the purpose of closed-loop detection is to suppress the Coriolis force completely. This is accomplished by adding differential AC signals to the feedback electrodes of sense mode, namely S_2+ and S_2− in Figure 1. It is noteworthy that the phase of the differential AC signal needs to be opposite to the phase of the displacement velocity of drive mode. A corresponding feedback force is induced to suppress the Coriolis force by adjusting the amplitude of the differential AC signals. The feedback force can be calculated analogously to formula (4), and its value is F_fb ≈ (2C_1·U_0/d_y)·Bsin(ω_d t + φ_b), where C_1 is the initial capacitance between the movable STRC and each of the four fixed plates of the sense mode, Bsin(ω_d t + φ_b) is the AC signal, which has the opposite phase to the displacement velocity of drive mode, and k_ins is the simplified interface gain of the force feedback structure. In our model, the output of the k_ins gain module is the output voltage of MEMS gyroscopes corresponding to different angular rates.
The Simulation Results of the Capacitive MEMS Gyroscopes PSPICE Device Model
The simulation results of the capacitive micro-gyroscope PSPICE model are described in Figure 5. Figure 5a shows that the drive-axis displacement and sense-axis displacement are stable sine waves with the same frequency as the AC excitation voltage. This means that the Coriolis Effect has been reproduced by the established PSPICE model. Figure 5b indicates that the amplitude of the sense-axis displacement changes with the angular rate. It is noteworthy that the sense-axis displacements corresponding to different angular rates intersect at specific points. At these points, which recur periodically in phase, the Coriolis component of the sense-axis displacement is zero and the quadrature displacement is at its maximum. Therefore, these points represent the quadrature displacement, which is induced by the elastic coupling force; its amplitude is 4.804 nm. The sense-axis displacement is not proportional to the angular rate input due to the existence of the quadrature displacement. Figure 5c shows that the displacements of gyroscopes can be converted into differential capacitance changes. Thus, all output ports of this PSPICE model are capacitance interfaces that can be connected directly to a C-V conversion circuit.
Changes in ambient temperature will cause changes in the resonant frequencies and quality factors of MEMS gyroscopes, which can affect the dynamic characteristics of the gyroscope. Therefore, the same excitation voltage will induce different drive-axis displacement and sense-axis displacement at different ambient temperatures. Figure 6 shows the corresponding simulation results.
The simulation results show that the drive-axis displacement and sense-axis displacement have different phases and amplitudes at different temperatures, which indicates that the gyroscope is greatly affected by the ambient temperature. Therefore, subsequent conditioning circuits are necessary to achieve closed-loop control of MEMS gyroscopes at different ambient temperatures.
In the closed-loop simulation of drive mode, the simulation results show that the outputs of amplitude detection at different temperatures are all 1 V, which is equal to the reference value. This means that the amplitude of the drive-axis displacement is held at a fixed value as the excitation voltage is adjusted at different ambient temperatures. Additionally, at different temperatures, the output of the loop filter reaches a stabilized value at about 0.5 s, which means that the vibration frequency of the drive-axis displacement has stabilized. Therefore, the drive mode realizes closed-loop control of the vibration amplitude and frequency at different temperatures. It is noteworthy that the output voltages of the loop filter corresponding to different temperatures are different, which indicates that the stable vibration frequency of the drive-axis displacement differs when the ambient temperature changes. This is because the resonant frequency of drive mode is affected by the ambient temperature.
The Closed-Loop Simulation Results of Sense Mode
For sense mode, the closed-loop detection is based on complete compensation for the Coriolis force. This means that the proof mass is only affected by the quadrature force, which causes the proof mass to always vibrate near the equilibrium position. The closed-loop detection method uses the Proportion-Integration (PI) controller algorithm, which will be specifically explained in Section 3.2.3. Figure 8 shows the simulation result for the resultant of the Coriolis force and the feedback force. The simulation result shows that the resultant force is gradually driven to zero at about 1 s (Ω = −300°/s), which means that the Coriolis force is completely compensated for by the feedback force. Specifically, the closed-loop detection is achieved by setting up feedback electrodes. Electric potential energy is generated between the feedback electrodes and the proof mass by applying external feedback voltages, so a tangential electrostatic feedback force is generated to compensate for the Coriolis force completely. Therefore, the closed-loop detection is implemented based on this method. Figure 9 shows the output voltage corresponding to different input angular rates. Different input angular rates cause different Coriolis force values, so the feedback force induced by the output voltage also changes. Therefore, there is a proportional relationship between the input angular rate and the output voltage. Figure 9 shows that the scale factor is −0.02459 mV/(°/s) with a linear correlation coefficient of 1. The nonlinearity induced by the parallel-plate sensing mechanism is completely eliminated in closed-loop detection because the proof mass always vibrates near the equilibrium position.
Changes in temperature cause changes in the motion state of a gyroscope, so the phase φ_y of the sense-axis displacement will also change. However, the phase φ_1 of the demodulated signal used in the closed-loop detection circuit cannot be changed in real time. Therefore, different temperatures will induce different output voltages when the input angular rate is zero. Figure 10 shows the different ZRO voltages of gyroscopes at different temperatures. Additionally, a corresponding cubic polynomial fit ZRO(T) based on the self-compensation method [21] is obtained, where ZRO(T) is the curve-fitting result at temperature T, and its unit is µV. The final compensated curve is the difference between the raw simulation result and the fitting result. The final temperature result shows that the compensated ZRO voltages are around zero in the temperature range from −20 to 40 °C.
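The self-compensation step amounts to a cubic polynomial fit and subtraction; the (T, ZRO) samples below are hypothetical stand-ins for the simulated values of Figure 10:

```python
import numpy as np

T = np.array([-20., -10., 0., 10., 20., 30., 40.])    # deg C
zro = np.array([41., 22., 9., 0., -6., -8., -5.])     # uV, hypothetical

coeffs = np.polyfit(T, zro, deg=3)        # cubic fit ZRO(T)
print(zro - np.polyval(coeffs, T))        # compensated ZRO, near zero
```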
The Optimization Designs of the Closed-Loop Detection Circuit
It is noteworthy that the choice of the demodulated signal phase φ_1 is important for the aforementioned closed-loop detection scheme. The purpose of closed-loop detection is to control A_yc = 0 by applying the feedback voltage. This is achieved indirectly by controlling A_c = 0, which also requires a suitable phase φ_1. A_c is the controlled variable of the PI controller. The Coriolis displacement and quadrature displacement can be written as y_c = A_yc·sin(ω_d t + φ_y) and y_q = A_yq·cos(ω_d t + φ_y), where A_yc is the amplitude of the Coriolis displacement, this displacement being the result of the interaction between the Coriolis force and the feedback force, A_yq is the amplitude of the quadrature displacement, and φ_y is the phase of the sense-axis displacement. The orthogonal demodulation process in the closed-loop detection circuit is shown in Figure 11. The controlled variable of the PI controller is therefore A_c = (A_yc/2)·cos(φ_1 − φ_y) + (A_yq/2)·sin(φ_1 − φ_y) (Equation 27), where φ_1 is the phase of the demodulated signal. When φ_y = φ_1, the second term of Equation (27) is zero. This means that the quadrature displacement has no effect on the output voltage. Therefore, A_c and A_yc are in a fixed ratio, and the loop can force A_yc to zero by controlling A_c to zero. When φ_y ≠ φ_1, the error term due to the quadrature displacement is not equal to zero. This means that the output voltage is affected by both the Coriolis displacement and the quadrature displacement. Therefore, A_yc cannot be controlled to zero by controlling A_c to zero, and the Coriolis force cannot be completely compensated for in this case. In consequence, the choice of the demodulated signal phase is important for the optimization of Coriolis force cancellation. Figure 12 shows the compensation for the Coriolis force at different demodulation phases. The simulation results show that when φ_1 ≠ φ_y (φ_1 − φ_y ≈ 130°), the resultant of the Coriolis force and the feedback force is not equal to zero. Therefore, the Coriolis force has not been completely compensated for in this case. The above analysis and simulation show that the optimization of Coriolis force compensation is achieved by choosing a suitable demodulation signal phase φ_1.
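The phase sensitivity of the demodulation can be reproduced in a few lines; the amplitudes and phases are illustrative, with the quadrature term deliberately dominant:

```python
import numpy as np

w_d = 2 * np.pi * 3.5e3
t = np.linspace(0, 0.01, 200000)          # exactly 35 drive cycles
A_yc, A_yq, phi_y = 1.0, 50.0, 0.3        # illustrative

y = A_yc * np.sin(w_d * t + phi_y) + A_yq * np.cos(w_d * t + phi_y)

def demod(phi_1):
    # multiply by the reference and low-pass (average over whole cycles)
    return np.mean(y * np.sin(w_d * t + phi_1))

print(demod(phi_y))                  # ~A_yc/2: quadrature fully rejected
print(demod(phi_y + 0.1))            # quadrature leaks in when phi_1 != phi_y
```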
In addition to the phase of the demodulation signal, the circuit gain of the closed-loop detection also affects the performance of the circuit. For simplicity, we take the product of k_1 and k_ins as k for the analysis. Figure 13 shows the different stabilization times for different values of k. The simulation results show that a suitable circuit gain yields the shortest stabilization time. The feedback force is proportional to the output voltage, so the feedback force is constantly changing during the stabilization process. A large k causes the overshoot of the output voltage to increase, thus increasing the stabilization time of the feedback force. A small k slows down the response of the feedback force, so the corresponding stabilization time is also increased.
Discussion
In this paper, a PSPICE device model of parallel-plate capacitive micro-gyroscopes was introduced for reducing the cost of prototype development, which consists of a principle diagram and different modules. Systematic simulations of this model and closed-loop control circuits were conducted, and it is demonstrated that this PSPICE device model has two clear advantages: (1) Some details of MEMS gyroscopes are considered in this model, including the capacitance model without approximation, mechanical thermal noise, and the effect of temperature. (2) The closed-loop simulation of MEMS gyroscopes is achieved based on the differential capacitance interfaces of this model and subsequent interface circuits.
Some optimization designs of the circuit have been implemented based on this model, including complete compensation for the Coriolis force and the corresponding stabilization time. The test printed circuit board (PCB) for closed-loop detection is being designed, and the corresponding experimental results will be summarized in future work.
"Engineering",
"Physics"
] |
Obtaining thin metal films and their compounds using magnetron sputtering and arc evaporation in a single technological cycle
The authors consider the possibility of obtaining thin films using magnetron sputtering and arc evaporation, and show the prospects of combining these methods in a single processing unit to obtain films with high performance characteristics.
Thin films of various materials have found wide application in many fields of science and technology. First of all, these are conductive, semiconductor, dielectric, protective and other layers in microelectronics, wear-resistant coatings on tools, and protective and decorative coatings on machine parts and on industrial and household products. The properties of a thin film depend not only on the material but also, to some extent, on its crystallographic structure.
The most advanced layers are grown using chemical vapor deposition (CVD) from the gaseous (vapor) phase. In this case, film growth occurs through the successive deposition of layers, i.e. the tangential movement of growth steps. This method has many limitations, first among them the need for an orienting (alignment) substrate and the high temperature of the growth process.
Physical vapor deposition methods for obtaining thin films do not have those limitations. These methods are characterized by non-equilibrium crystallization conditions. When non-orienting (amorphous and polycrystalline) substrates are used, film growth occurs according to the standard process. The predominant film growth direction depends on the atomic structure of the material being grown, while the alignment of the films relative to the substrate depends on the direction of the flow of film-forming particles [1]. From the macroscopic point of view, new particles join atomically rough (diffuse) surfaces at any spot, so that the growing surface shifts along the normal to itself at each point [2]. Films grown using these methods have a fibrous (columnar) structure (figure 1). Texturing of the fibers for materials with a cubic lattice (TiN, ZrN, etc.) is possible along the <111> directions, less often the <100> and <110> directions, and in binary diamond-like compounds with a wurtzite structure (AlN, ZnO, etc.) along the <0001> direction, less often the <11-20> one. Figure 1 is a photograph of a cleaved AlN film showing the structure of the film. The space between the fibers is filled with an amorphous phase. Impurity metal atoms can be dispersed in the form of point defects, both in the crystalline phase and in the amorphous one. When the concentration of impurity atoms is higher than 5% and a droplet phase is present on the substrate, amorphous metal clusters are formed at the interfaces [3]. Thus, to obtain thin films with the required properties, a relationship between the growth conditions and the crystalline structure should be established. In this case, finding the crystallographic grain orientation in the film is far from sufficient; it is also necessary to evaluate the degree of crystallinity, i.e. the ratio of the crystalline and amorphous phases.
Among the physical vapor deposition methods, magnetron sputtering and arc evaporation are the most widely used ones.
The electric discharge arising in the working chamber, evacuated to a pressure of 10⁻¹–10⁻⁴ Pa and below, belongs to the class of so-called vacuum arcs, or high-current discharges burning in vapors of the cathode material; that is, the cathode form of a vacuum arc with integrally cold electrodes and an eroding cathode is realized. The discharge at the anode with a developed surface (the body of the working chamber) is diffuse, while the discharge at the cathode exists in the form of cathode spots.
The local energy density in the cathode microspots, which move chaotically along the integrally cold cathode surface, is rather high (10⁶–10⁷ W/cm²). This causes a high erosion (evaporation) rate of the cathode material in the area of the cathode microspots. Erosion products of the cathode material are emitted from the area of the cathode microspots in the form of high-speed plasma jets with a high degree of ionization (up to 80%).
A flow of the deposited substance is formed as a plasma flow with a high ionization degree and high particle energy. This flow condenses on the surface of the substrate, forming a film. Ion bombardment of the growing film strongly affects the microstructure and the structure of the crystalline phase.
Ion bombardment of the substrate is especially important for growing films using reactive sputtering. These are, first of all, films of metal nitrides, oxides and carbides with high hardness, wear resistance and thermal conductivity. Films are obtained by adding a reactive gas into the working chamber, which results in fast direct chemical synthesis on the substrate surface when the active plasma of the cathode material mixes with the gas (oxygen or nitrogen) added to the chamber.
The downside of arc evaporation is the presence of a droplet phase in the film, resulting in deterioration of the continuity and corrosion characteristics of the coating. A highly ionized, high-density flow of deposited particles onto a substrate under bias voltage induces high internal stresses in the film. Therefore, to obtain good adhesion of the coating on some substrates, high-quality cleaning of the substrate (including ionic cleaning) is required. It is also necessary to keep the substrate temperature high during the deposition process. When low-temperature materials are used as a substrate, for example tools made of high-speed steel, a high temperature in the condensation zone results in dulling of the surface, and the edge of the tool is tempered. This is especially pronounced during heating and ionic cleaning of the tool surface in the arc discharge plasma, when a high (> 1 kV) voltage is applied to the substrate. Magnetron sputtering (one of the methods of ion sputtering, whose primary feature is the presence of a magnetic field at the sputtered surface of the target, which helps localize the plasma and thereby increase the sputtering rate) using a standard magnetron (SM) does not have these disadvantages. However, the content of the amorphous phase in the film is rather high because this method provides a low degree of ionization of the deposited particles.
The development of magnetron sputtering and the emergence of unbalanced magnetrons (UM) have significantly expanded the range of magnetron sputtering applications. Owing to the special configuration of the UM magnetic field, ionization of the working gas and of the sputtered particles occurs not only at the target surface but along the entire path from target to substrate, i.e. the film grows under ion bombardment. The degree of ionization of the sputtered particles is 1–10% [4].
At the same time, implementing this method requires high pumping rates and more precisely controlled process conditions for growing metal nitride, oxide and carbide films. In addition, it is easier to coat parts of complex geometric shape at a large target-to-substrate distance using arc evaporation. Table 1 compares the characteristics of arc evaporation (AE) and magnetron sputtering (standard magnetron, SM; unbalanced magnetron, UM). Combining both methods in a single processing unit makes it possible to obtain thin films of various materials with the required characteristics. With this purpose in mind, the arc evaporation unit NNV6.6 I4 was redesigned: four unbalanced magnetrons with a target size of 85 × 500 mm were additionally installed (figure 2). A standard technological process, for example for protective and decorative coatings, can consist of the following steps:
1. Loading items into the working chamber.
2. Obtaining vacuum and heating the items with a resistive heater.
3. Cleaning the items in a glow discharge.
4. Cleaning the items in a magnetron discharge with a high voltage applied to the substrate.
5. Deposition of a transitional, temperature-protective and corrosion-resistant titanium layer by magnetron sputtering.
6. Deposition of a wear-resistant, decorative layer of titanium nitride or zirconium nitride by arc evaporation.
With a slight change in the technological process, the redesigned unit makes it possible to apply various types of thin films with the desired crystal structure to substrates. Thus, the combination of arc evaporation and magnetron sputtering in one technological unit significantly expands the range of usable materials and improves the quality of the obtained coatings.
Transparency Is A Key Indicator Of The Activity Of Sovereign Wealth Funds
This article examines and analyzes the criteria for maximum transparency, the main criterion for evaluating the effectiveness of sovereign wealth funds. Two globally recognized methods for assessing the transparency of sovereign wealth funds are analyzed in detail: the Santiago Principles, approved by 26 member countries of the International Monetary Fund and based on generally accepted rules, and the Linaburg-Maduell Transparency Index. Foreign experience in ensuring maximum transparency of sovereign wealth funds is reviewed. A number of problems in ensuring the transparency of the Fund for Reconstruction and Development in Uzbekistan, transparency being the main criterion for assessing the Fund's effectiveness, are analyzed, and scientific and practical suggestions and recommendations are made to ensure maximum transparency of the Fund's activities.
INTRODUCTION
In order to create favorable conditions for attracting foreign investment into the national economy, to ensure a consistent, case-by-case approach to assessing the financial and economic efficiency of investment projects, to put an end to bureaucracy and red tape, and to increase the responsibility of officials for project expertise, many resolutions and decrees have been adopted. In particular, they address the timely identification and elimination of a number of shortcomings that hinder the implementation of reforms in the country and determine the priority of investment projects aimed at gaining a foothold in the international arena [1].
The Fund for Reconstruction and Development of the Republic of Uzbekistan plays an important role in the implementation of projects for the modernization and technical re-equipment of leading sectors of the economy, as well as in effective structural reforms and investment policy. One of the main tasks of the Fund is to finance strategic investment projects within the priorities related to the development of leading sectors of the economy and the formation of production infrastructure.
The existence of a number of problems in ensuring maximum transparency of the Fund, which is the main criterion for assessing the effectiveness of the Fund for Reconstruction and Development in our country, has a negative impact on the intensification of the Fund's investment processes, development of social infrastructure and investment efficiency.
These circumstances require a radical overhaul of the procedure for forming investment programs and their financing mechanisms, as well as an increase in the transparency and efficiency of the selection of proposed projects.
The President of the Republic of Uzbekistan, Sh. M. Mirziyoev, emphasizing the importance of investment in achieving high economic growth and ensuring strong social protection, criticized the fact that work in this area has so far been carried out in a fragmentary manner: "Many enterprises went bankrupt as a result of the lack of investors' own funds. Commercial banks, which were tasked with rehabilitating them, have also suffered. Therefore, starting from this year, the practice of transferring bankrupt enterprises to the balance of banks has been stopped. As a result of superficial economic analysis, projects did not justify themselves even after commissioning: production was not mastered due to a lack of raw materials, shortcomings in energy and gas supply, and economic inefficiency" [2].
Based on the above, the removal of restrictions on the official publication of certain information in our country today is a very important issue.
ANALYSIS OF THE RELEVANT LITERATURE
One of the criteria for determining the effectiveness of sovereign wealth funds is ensuring maximum transparency of their activities, as many economists have noted in their research.
Among them, the Russian economist K. Pupynin prioritizes the issue of transparency of sovereign wealth funds in his research, recognizing that non-transparency of fund activities is a problem that hinders fund development [3]. Another Russian economist, E. Vasin, divided the criteria for assessing the effectiveness of sovereign wealth funds into seven groups, noting that the main criterion is ensuring maximum transparency [4]. The Santiago Principles were developed by a special working group set up by the International Monetary Fund with a particular focus on ensuring maximum transparency in the activities of sovereign wealth funds; these principles are the result of the joint efforts of sovereign wealth fund managers in developed and emerging economies to create a comprehensive framework that provides a clearer understanding of fund activities [5]. The Linaburg-Maduell Transparency Index was developed by Carl Linaburg and Michael Maduell of the Sovereign Wealth Fund Institute to assess the transparency of sovereign wealth funds [6].
RESEARCH METHODOLOGY
The main purpose of the study is to develop scientific and practical proposals and recommendations for ensuring maximum transparency of the Fund for Reconstruction and Development, based on a study and analysis of the transparency criteria that serve as the main criterion for assessing the effectiveness of sovereign wealth funds. Comparison, grouping and economic-statistical methods were widely used in the research process. As a result, conclusions were drawn on ensuring maximum transparency of the Fund for Reconstruction and Development, and scientific and practical recommendations were developed to ensure the transparency of the Fund's activities.
ANALYSIS AND RESULTS
The need to ensure the transparency of sovereign wealth funds is explained, on the one hand, by the way these funds are formed and, on the other, by the need for efficient use of their resources.
We can see from world practice that in most countries the income of sovereign wealth funds is formed from the proceeds of exports of strategic products and from gold and foreign exchange reserves. The fact that not only the current generation but also future generations have a stake in these funding sources demonstrates the need for transparent governance.
Transparency of a fund's activities is an important factor in the effective use of sovereign wealth funds: it helps ensure that funds are used purposefully and as targeted, and it prevents their misappropriation.
In world practice, maximum transparency is one of the main criteria for assessing the effectiveness of sovereign wealth funds, and two different methods of determining it are in common use. The Linaburg-Maduell Transparency Index includes 10 indicators and is scored by adding 1 point for each fulfilled indicator. These indicators are shown in Figure 1 below.
Figure 1. Indicators of transparency of sovereign wealth funds [7]
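The scoring rule itself is simple enough to state in code. The sketch below assumes placeholder indicator names (the official wording of the ten indicators is not reproduced here) and simply adds one point per fulfilled indicator, as described above.

```python
# Minimal sketch of the Linaburg-Maduell scoring principle: ten yes/no
# indicators, one point per indicator a fund fulfills. The indicator names
# below are illustrative placeholders, not the official wording.

INDICATORS = [
    "history_and_ownership_published",
    "independently_audited_annual_reports",
    "ownership_stakes_in_companies_disclosed",
    "portfolio_market_value_and_returns_disclosed",
    "geographic_investment_breakdown_disclosed",
    "external_managers_identified",
    "self_managed_website",
    "main_office_contact_information_provided",
    "strategies_and_objectives_published",
    "ethical_standards_and_policies_published",
]

def linaburg_maduell_score(fulfilled: set) -> int:
    """Score 0-10: one point per fulfilled indicator."""
    return sum(1 for name in INDICATORS if name in fulfilled)

# Example: a fund meeting five of the ten indicators scores 5 points.
print(linaburg_maduell_score({
    "history_and_ownership_published",
    "independently_audited_annual_reports",
    "portfolio_market_value_and_returns_disclosed",
    "geographic_investment_breakdown_disclosed",
    "self_managed_website",
}))  # -> 5
```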
Sovereign wealth funds with the maximum transparency index of 10 points operate in Norway, New Zealand, Chile, Singapore and other countries. The transparency index of countries with sovereign wealth funds can be seen in Figure 2 below.
Figure 2. Linaburg-Maduell transparency index of some countries with sovereign wealth funds [7]
Analysis of the above figure allows the following conclusions to be drawn.
First, openness correlates with the sources of revenue generation: sovereign wealth funds that do not depend on resource sales are generally more open, while funds formed from the export of natural resources tend to be more closed. For about 30 sovereign wealth funds in the world today, the transparency index is not defined at all, because these funds do not publish data. In particular, no transparency index is available for the Central Asian sovereign wealth funds of Uzbekistan and Turkmenistan.
If we look at the transparency index of sovereign wealth funds in countries where the fund's wealth depends on the export of raw materials, we can see that in Kazakhstan, Saudi Arabia, Qatar, Iran, Kuwait and Russia this figure is 5 points.
The second way to determine the transparency of sovereign wealth funds is the Santiago Principles, which are based on generally accepted rules approved by 26 member countries of the International Monetary Fund [7]. These principles are the result of the joint efforts of sovereign wealth fund managers in developed and emerging economies to create a comprehensive framework that provides a clearer understanding of fund activities. The Santiago Principles represent generally accepted rules and practices that adequately reflect the goals and investment activities of sovereign wealth funds.
These principles are based on the following fundamental positions:
- support for a transparent and robust organizational structure of fund management, including risk management, accountability and appropriate operational control;
- compliance with all legal norms on information disclosure in the countries where sovereign wealth funds invest;
- a guarantee that sovereign wealth funds invest solely on the basis of economic and financial risk and return considerations;
- promotion of the stability of the global financial system and support for the free flow of investment and capital.
The Santiago Principles include 24 principles, which can conditionally be divided into three major groups:
- compliance of the legal framework, objectives and financial resources of the funds with the macroeconomic policies of the country;
- institutional foundations of the funds and principles related to the management structure;
- principles applicable to investment and risk-management systems.
Turning to the shortcomings of the Fund for Reconstruction and Development: third, the low level of openness and transparency in the development and implementation of state and regional programs and investment projects in this area permits the misuse and inefficient spending of the Fund's resources, as well as various cases of abuse; fourth, the Fund's resources are used inefficiently, the effectiveness of investment projects is not taken into account, and there are risks of misappropriation.
CONCLUSIONS AND SUGGESTIONS
Based on the above, in order to ensure the openness and transparency of the Fund for Reconstruction and Development of the Republic of Uzbekistan, special attention should be paid to publishing the following information on the Fund's activities:
- the history of the Fund, the reasons for its formation, a description of its funding sources, and the composition of state property;
- quarterly and annual reports of the Fund;
- the results of independent audits in the Fund's annual reports;
- the Fund's participation in the charter capital of corporate structures (business associations) and an evaluation of their effectiveness;
- information on the Fund's investment portfolio and its market value, return on capital, risk, degree of diversification, and the Fund's management costs;
- the scope and geography of the Fund's investment activities (a single country, a region, or foreign countries);
- information on projects implemented jointly with other institutions;
- the level of implementation of the investment policy, with an obligation for the Fund to comply with ethical investment standards;
- the external managers engaged to manage the Fund's assets (if any);
- compliance of the Fund's website with the basic requirements for official websites of state and economic administration bodies and local authorities.
The removal of restrictions on the official publication of information on the Fund's activities, with priority given to the items above, will ensure compliance with the international principles described.
This will allow the Republic of Uzbekistan to obtain a sovereign credit rating, to fully demonstrate the country's economic potential and investment attractiveness to international rating agencies and investors, and to increase public confidence in the state as well as strengthen the confidence of domestic and foreign investors.
PLAAT1 Exhibits Phosphatidylcholine:Monolysocardiolipin Transacylase Activity
Tissue-specific cardiolipin fatty acyl profiles are achieved by remodeling of de novo synthesized cardiolipin, and four remodeling enzymes have thus far been identified. We studied the enzyme phospholipase A and acyltransferase 1 (PLAAT1), and we report the discovery that it has phosphatidylcholine (PC):monolysocardiolipin (MLCL) transacylase activity. Subcellular localization was analyzed by differential centrifugation and immunoblotting. Total levels of major phospholipids, and the fatty acyl profile of cardiolipin, were analyzed in HEK293 cells expressing murine PLAAT1 using gas chromatography. Apparent enzyme kinetics of affinity-purified PLAAT1 were calculated using radiochemical enzyme assays. The enzyme was found to localize predominantly to the endoplasmic reticulum (ER) but was detected at low levels in the mitochondria-associated ER membrane. Cells expressing PLAAT1 had higher levels of total cardiolipin, but not of other phospholipids, and it was primarily enriched in the saturated fatty acids myristate, palmitate, and stearate, with quantitatively smaller increases in the n-3 polyunsaturated fatty acids linolenate, eicosatrienoate, and eicosapentaenoate and the monounsaturated fatty acid erucate. Affinity-purified PLAAT1 did not catalyze the transacylation of MLCL using 1-palmitoyl-2-[14C]-linoleoyl-PC as an acyl donor. However, PLAAT1 had an apparent Vmax of 1.61 μmol/min/mg protein and Km of 126 μM using [9,10-3H]-distearoyl-PC as an acyl donor, and a Vmax of 0.61 μmol/min/mg protein and Km of 16 μM using [9,10-3H]-dioleoyl-PC. PLAAT1 is therefore a novel PC:MLCL transacylase.
Introduction
Cardiolipin is a unique dimeric phospholipid required for proper cristae architecture within the mitochondria [1]. Although 18-carbon fatty acids predominate in cardiolipin in most tissues, the fatty acyl composition of this lipid varies between organs, reflecting the diversity of functional considerations of each tissue, including the need to balance energetic requirements with the inherent toxicity of oxidative metabolism [2]. For example, the cardiolipin profile of the brain tends to be more highly enriched in saturated and monounsaturated (i.e., stearoyl- and oleoyl-rich) fatty acids that are less susceptible to oxidative damage, compared to cardiac tissue that has a higher energetic demand, but also a greater capacity for repair, and is predominantly (i.e., >80%) tetra-linoleoyl [2].
Cardiolipin is produced de novo in the Kennedy pathway. At initial synthesis, it is considered "immature" or "nascent", and it must subsequently be remodeled to contain a fatty acyl chain profile that is functionally appropriate for the particular tissue [3]. This chemical specificity is achieved through a process that utilizes phospholipase, acyltransferase, and transacylase enzymes [4][5][6]. Remodeling of "immature" cardiolipin occurs in Lands' pathway and begins with phospholipases cleaving fatty acyl chains from either one or both glycerol backbones, resulting in the production of monolysocardiolipin (MLCL) or dilysocardiolipin (DLCL) [7]. Mature, fully acylated cardiolipin is then reconstituted through the action of either an acyl-CoA-dependent acyltransferase, which utilizes fatty acyl-CoAs as donor substrates, or transacylases, which catalyze the transfer of a fatty acyl chain from a phospholipid such as phosphatidylcholine (PC) to the lysocardiolipin molecule, generating a new lysophospholipid from the acyl donor [8]. Differences in the enzymes involved in remodeling, and the preferential substrates utilized, result in tissue-specific cardiolipin fatty acyl profiles [3]. Maintenance of these profiles is critical to health, since alterations are associated with a variety of diseases, including diabetes [9][10][11], inflammatory and autoimmune disorders [12,13], non-alcoholic fatty liver disease [14], cardiomyopathies [15] and cardiovascular disease [16], several types of cancer [17], and neurodegenerative diseases such as Alzheimer's [18] and Parkinson's diseases [19].
Four cardiolipin remodeling enzymes have been identified. The first was a mitochondrial linoleoyl transacylase called tafazzin that was discovered in 1996 [20]. This enzyme is expressed at the highest levels in cardiac muscle, where it predominantly uses linoleate residues on phosphatidylcholine to remodel MLCL [21]. A series of different mutations in tafazzin have been identified in humans, and these adversely affect cardiolipin remodeling, resulting in Barth syndrome which is characterized by skeletal and cardiac myopathies and neutropenia [22]. The Hatch Laboratory cloned and characterized MLCL acyltransferase (AT)-1 in 1999 and reported that it is highly expressed in rodent cardiac tissue [23]. This enzyme preferentially uses linoleoyl-CoA and oleoyl-CoA as acyl donors, and MLCL as the acyl acceptor, and is lacking in activity with other lysophospholipid substrates including DLCL [23]. In 2009, this same laboratory identified the human homolog of MLCL AT-1 as the α-subunit of human trifunctional protein (αTFP) [5]. ALCAT1 was identified in 2004 as an acyl-CoA:lysocardiolipin acyltransferase that can use both MLCL and DLCL as acyl acceptors [4].
Here, we report the discovery that a fifth enzyme, phospholipase A and acyltransferase 1 (PLAAT1), also catalyzes the transacylation of monolysocardiolipin (MLCL) using phosphatidylcholine (PC) as an acyl donor. This enzyme, also known as A-C1, Harvey-Ras-like tumor suppressor (HRASLS), or HRASLS1, is a member of a homologous group of proteins. All known PLAAT enzymes possess lipid enzymatic activities, including phospholipase A1/2 (PLA) and O-and N-transacylase activities [24]. However, none have yet been described as having functions in cardiolipin synthesis or remodeling. In this work, we present evidence that PLAAT1 has PC:MLCL transacylase activity, and therefore a direct role in cardiolipin metabolism. Potential implications of this discovery for understanding health and disease are discussed, including possible implications for diabetes.
PLAAT1 Expression and Localization
Mouse tissue distribution of Plaat1 mRNA (transcript variant 1) was analyzed by semi-quantitative RT-PCR. In agreement with Hussein et al. [25], it was found to be expressed in a variety of tissues but was detected in the greatest abundance in the mouse brain, heart, and skeletal muscle (Figures 1A,B and S1). An additional analysis from whole blood did not result in amplification of the expected target transcript, suggesting very low abundance, in agreement with the work of others (data not shown) [25,26]. Whole-brain homogenates were fractionated by differential centrifugation and immunoblotted to detect endogenous PLAAT1. Although this protein is predicted to be ~18.4 kDa in mass, immunodetectable PLAAT1 was visualized between 20 kDa and 25 kDa on immunoblots (Figures 1C,D, S2 and S3), in agreement with the findings of others [25]. This enzyme was strongly detected in the microsomal fraction, indicating localization to the endoplasmic reticulum (ER). In addition, a faint band was also detected in the mitochondrial fraction (Figure 1C). Fractional purity was analyzed by immunoblotting for the mitochondrial protein cytochrome C, the nuclear protein histone H3, and the endoplasmic reticular protein stearoyl-CoA desaturase 1 (SCD1) (Figure 1C). Although fractional purity was high, detection of PLAAT1 in the mitochondrial fraction could have been due to minor cross-contamination from other organelles. Alternately, PLAAT1 may have been present in a fraction that would potentially co-fractionate with the mitochondria and the ER, such as the mitochondria-associated endoplasmic reticulum membrane (MAM). Mitochondrial isolates were therefore further processed to separate the purified mitochondrial fraction from a purified microsomal fraction and a MAM fraction. Fractional purity was assessed by immunoblot analysis of the MAM-specific protein acyl-CoA synthase long-chain family member 4 (ACSL4), as well as mitochondrial cytochrome C and microsomal SCD1. Immunoblot analysis of isolated subfractions showed that endogenous PLAAT1 was detected again at the highest levels in the microsomal fraction, although a slight band also appeared in the MAM fraction (Figure 1D). Endogenous PLAAT1 was not visible in the mitochondrial fraction once the MAM was separated out (Figure 1D).
Figure 1. PLAAT1 expression and subcellular localization. Plaat1 gene expression was detected at the highest levels in brain, heart, white adipose tissue (WAT), and skeletal muscle, but was also visualized (A) and found by density analysis of bands (B) to be present in other tissues at lower levels. Data are means ± S.E.M., n = 3-6. Subcellular localization of PLAAT1 was investigated by immunoblotting microsomal, mitochondrial, and nuclear fractions produced by differential centrifugation of whole mouse brains for PLAAT1, or for markers of fractional purity, including SCD1 as a marker of the endoplasmic reticulum (ER), AIF as a mitochondria-specific marker, and histone H3 as a nuclear marker (C). Alternately, mouse brains were separated to derive ER (microsomal), mitochondria-associated ER membrane (MAM), and mitochondrial fractions for detection of PLAAT1 or markers of fractional purity (i.e., SCD1 (ER), ACSL4 (MAM), and cytochrome c (mitochondria)) (n = 3) (D).
Plaat1 Expression Increases Cellular Cardiolipin Content
Plaat1 encoded by murine transcript variant 1 was expressed in HEK-293 cells (Figure 2A), and major phospholipid categories were isolated for analysis by gas chromatography (Figure 2B,C). Relative to control cells, cells expressing Plaat1 had 62% more total cardiolipin (9.40 ± 0.66 versus 5.77 ± 1.24 nmol cardiolipin/mg protein, p = 0.018) (Figure 2C), while the total contents of PC, phosphatidylethanolamine (PE), phosphatidylglycerol (PG), and phosphatidylinositol (PI) were not significantly elevated (Figure 2B). This elevation in total cardiolipin content was associated primarily with an enrichment in the saturated fatty acids myristate, palmitate, and stearate, with quantitatively smaller increases in n-3 polyunsaturated and monounsaturated species.
PLAAT1 Is a PC:MLCL Transacylase
The significant increase in cardiolipin content observed in cells expressing PLAAT1 suggested a direct catalytic role for this enzyme in the synthesis of this lipid. Like other members of the PLAAT family, PLAAT1 shares homology with lecithin:retinol acyltransferase (LRAT), which uses PC as an acyl donor [24]. We therefore tested whether affinity-purified PLAAT1 had PC:MLCL transacylase activity, using 100 µM MLCL as an acyl acceptor and increasing concentrations (0-200 µM) of radiolabeled PC as the acyl donor.
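Apparent kinetic parameters of the kind reported here (Vmax and Km) are conventionally obtained by fitting initial rates to the Michaelis-Menten form v = Vmax·S/(Km + S). The sketch below shows such a fit in Python; the concentration-rate data points are invented for illustration and are not measurements from this study.

```python
# Hedged sketch: apparent Vmax and Km from initial-rate data via a
# Michaelis-Menten fit, v = Vmax * S / (Km + S). The data points below are
# invented for illustration; they are not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

donor_uM = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 200.0])   # acyl donor (uM)
rate = np.array([0.12, 0.27, 0.46, 0.71, 0.87, 0.99])          # umol/min/mg

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, donor_uM, rate, p0=(1.0, 100.0))
print(f"apparent Vmax = {vmax_fit:.2f} umol/min/mg, Km = {km_fit:.0f} uM")
```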
Discussion
Cardiolipin synthase (CLS) catalyzes the de novo formation of cardiolipin from phosphatidylglycerol (PG) and CDP-diacylglycerol (CDP-DAG) [27]. CLS, as well as many of the enzymes involved in the synthesis of PG and CDP-DAG, have limited substrate specificity [2]. The fatty acyl profile of nascent cardiolipin therefore typically reflects the general fatty acyl profile of the cell [2]. However, cardiolipin profiles are highly tissue-specific and often exhibit a high degree of compositional similarity between related species, highlighting the importance of cardiolipin form in cell, tissue, or organ function [2]. Achievement of this specificity requires extensive remodeling [2,7,[28][29][30]. The present work demonstrates that in addition to the four known enzymes, PLAAT1 also catalyzes the re-esterification of MLCL to form cardiolipin. This constitutes the first identification of a novel cardiolipin remodeling enzyme in over a decade.
PLAAT1 is a member of the PLAAT family of proteins. Five PLAAT/HRASLS enzymes have been identified in humans (PLAAT1-5), while only three are found in mice (PLAAT1, PLAAT3, and PLAAT5) [24]. These enzymes are also known as lecithin:retinol acyltransferase (LRAT)-like proteins, due to their similar sequence homology to this enzyme [8], including a conserved NCEHFV motif in the C-terminal region that is critical for acylation and de-acylation reactions [31]. While all PLAAT family members have previously been reported to have phospholipase A1/2 activity with PC as a substrate, as well as N-and O-transacylase activity using PC as an acyl donor and either PE or lyso-PC, respectively, as acyl acceptors [25,[32][33][34][35], activity with MLCL has not previously been reported for this enzyme family.
In comparison to enzymes previously identified in cardiolipin remodeling, this analysis indicates that PLAAT1 is ostensibly most like tafazzin, which also exhibits PC:MLCL transacylase activity, rather than ALCAT1, MLCL AT-1, and αTFP, which all exhibit acyl-CoA-dependent acyltransferase activity [2]. However, our molecular analysis of cardiolipin species in cells expressing PLAAT1 suggested differences in substrate specificity. While tafazzin preferentially esterifies MLCL with linoleate residues from PC [21], the cardiolipin profile of HEK293 cells expressing PLAAT1 was significantly enriched in total and 14-, 16-, and 18-carbon saturated fatty acids and in the n-3 PUFAs linolenate, eicosatrienoate, and eicosapentaenoate. Although ALCAT1 expression also increases the content of n-3 PUFAs in cellular cardiolipin [36], it does this at the expense of saturated and monounsaturated fatty acids, as well as linoleate, which was not observed with PLAAT1 expression.
In vitro analysis of PLAAT1 catalytic activity using affinity-purified enzyme also indicates a novel function that is unique from other known enzymes in cardiolipin remodeling. Unlike MLCL AT-1 or αTFP, which prefer to use linoleoyl-CoA, or tafazzin, which preferentially utilizes linoleate residues from PC, PLAAT1 did not display transacylase activity using linoleate in the sn-2 position of PC. Conversely, we were able to calculate kinetic parameters for PLAAT1 catalysis when the reacylation of MLCL was analyzed using dioleoyl-PC or distearoyl-PC. While these analyses strongly suggest a substrate preference for 18:0 or 18:1n-9 over 18:2n-6, it is notable that the experimental approach that was used cannot distinguish between substrate selectivity based on positional preference at the sn-1 or sn-2 position, or chemical preference for one fatty acyl species type versus another (e.g., stearate or oleate, versus linoleate). PLAAT family enzymes have been shown to have some preference for using fatty acids at the sn-1 position on the glycerol backbone of PC [24,37]. Thus, it is possible that the lack of transacylase activity observed when 1-palmitoyl-2-[14C]-linoleoyl phosphatidylcholine was utilized as an acyl donor resulted from the stereochemical position of the radiolabeled linoleoyl residue in the sn-2 position, rather than the chemical nature of this species. However, the complete absence of transacylase activity exhibited by PLAAT1 with this substrate makes this notion unlikely, since PLAAT1 does exhibit activity with other PC substrates that indicate an ability to utilize fatty acyl moieties esterified at the sn-2 position of PC. The apparent enzyme kinetics resulting from this initial evaluation therefore more likely suggest a preference by PLAAT1 for utilization of PC species containing saturated and monounsaturated rather than n-6 polyunsaturated 18-carbon fatty acyl chains, but may also indicate some additional preference for fatty acyl moieties esterified at the sn-1 position of PC, which together may result in the null activity observed.
Another difference between PLAAT1 and previously identified lipid remodeling enzymes is the subcellular localization. Tafazzin, MLCL AT-1, and αTFP localize to the mitochondria [2]. Although ALCAT1 has been visualized in the endoplasmic reticulum [4], the cardiolipin remodeling activity of this enzyme has been recorded primarily in the MAM and mitochondria [36]. In the current study, PLAAT1 was predominantly detected in the microsomal fraction of whole mouse brains. Although it was also visualized to a much smaller extent in the mitochondrial fraction, further fractionation resulted in a loss of PLAAT1 detection in the mitochondrial fraction, but appearance in the MAM. Detection of PLAAT1 in the endoplasmic reticulum and MAM is not surprising, given the other known roles for this multi-functional enzyme in phospholipid metabolism. It also supports the notion that PLAAT1 may localize to more than one subcellular domain, and in that regard, a recent study has indicated that PLAAT1 can translocate to mitochondria, suggesting PLAAT1 may primarily reside in the endoplasmic reticulum but may be recruited to mitochondria and mitochondria-associated regions when required [38].
The physiological significance of PLAAT1-mediated cardiolipin synthesis remains to be determined and will be the subject of future studies. We, and others [25], have found Plaat1 to be abundant in mouse brain, heart, and skeletal muscle, although the cardiolipin content of these tissues differs significantly in rodents, with brain enriched primarily in stearate followed by oleate, heart predominantly enriched in linoleate followed by oleate, and skeletal muscle enriched in stearate followed by palmitate [2]. Although dietary fatty acid content can influence mitochondrial cardiolipin composition, the predominance of specific fatty acids in individual tissues tends to be conserved, even in the face of significant changes in diet [2]. This highlights both the importance of enzyme-regulated processes [2] and the importance of tissue-specific profiles for normal tissue function and health. For example, the high abundance of tetra-linoleoyl cardiolipin in the heart is thought to be important in meeting the energy demands of this tissue, while the lower abundance of polyunsaturated fatty acids in brain mitochondria is thought to reflect a balance between generating sufficient energy and limiting the production of reactive oxygen species that could damage irreplaceable cells [2]. The generation of knockout mice will allow for the determination of the relative contribution of PLAAT1 to cardiolipin content and composition in various tissues.
The generation of Plaat1-deficiency models will also allow for the investigation of the role of this enzyme in health and disease. Cardiolipin alterations have been reported in many pathological conditions, and it will be of particular interest to investigate the role of beta-cell PLAAT1 in type 2 diabetes development and prevention, given that we, and others [25], have detected this enzyme in the pancreas. Although reactive oxygen species are generated in multiple organelles within dysfunctional beta-cells, the close proximity of cardiolipin to oxidative processes and the enrichment of this lipid with unsaturated fatty acids make it highly susceptible to oxidative damage [9,10,39]. Oxidized cardiolipin can impair the function of the mitochondria and increase proton leak, reducing the quantity of ATP generated in response to incoming glucose, which in turn restricts the resulting insulin-secretory response [40]. Oxidized cardiolipin also has a higher binding affinity for cytochrome c, and once externalized to the outer mitochondrial membrane, it is an important pro-apoptotic signaling regulator that can contribute to beta-cell dysfunction and death [14,41]. In this regard, it will be interesting to investigate whether the substrate preference of PLAAT1, which appears to favor the incorporation of saturated fatty acids both in isolated assays and when expressed in cells, will provide some degree of protection from metabolic disorders related to cardiolipin oxidation.
In summary, we have identified PLAAT1 as a fifth enzyme in cardiolipin remodeling, marking the first new discovery in this field in over a decade. PLAAT1 has transacylase activity using MLCL as an acyl acceptor and PC as an acyl donor, with an apparent preference for saturated and monounsaturated fatty acids in the sn-1 position. Future studies will examine the physiological and pathophysiological roles of this enzyme in vivo.
Animals
All animal procedures were approved by the University of Waterloo Research Ethics Board and Animal Care Committee and were conducted under animal use protocols AUPP#13-13 (approved 28 May 2013) and AUPP#17-18 (approved 27 June 2017). All studies were performed in accordance with the Canadian Council on Animal Care Guidelines. Animals were housed in a temperature-and humidity-controlled environment, on a 12:12 h reversed light/dark cycle, and had ad libitum access to food and water. Male C57BL/6J mice aged 12 weeks were used for studies on endogenous tissue Plaat1 content and subcellular localization in whole-brain homogenates.
Subcellular Fractionation
Subcellular fractions were separated using homogenates from whole mouse brains as previously described by Dimauro et al. [42]. Subfractionation of the MAM was performed according to the protocol that is described in detail by Wieckowski et al. [43].
Cloning of Ad-Plaat1
The complete coding region of murine Plaat1/Hrasls1 corresponding to transcript variant 1 (NM_013751.6) and encoding for the murine PLAAT1 protein (NP_038779) was amplified by PCR from mouse whole-brain cDNA, and the resulting amplicon was subcloned into pGEM-T-Easy resulting in the production of pGEM-Plaat1 that was verified by direct sequencing. pGEM-Plaat1 was used as a template to produce an amplicon containing a C-terminal 6 x histidine tag with NotI/SalI restriction sites that was subcloned into pShuttle-IRES-hrGFP2 NotI/XhoI restriction sites, which was linearized by digestion with PmeI, and transformed into chemically competent BJ-5183 cells pre-transformed with the Ad-1 vector (Agilent Technologies, Santa Clara, CA, USA). Clones containing recombinant adenoviral DNA were verified by PacI digest followed by direct sequencing and were amplified and linearized by PacI digest prior to transfection into HEK-293 cells for formation and amplification of active adenoviral PLAAT1. Control adenovirus was produced by the same method using pShuttle-IRES-hrGFP2 without gene insertion into the multiple cloning site. The amplified virus was titred by serial dilution in cultures of HEK-293 cells grown in 12-well plates, and infectious units (IFU) were quantified by fluorescence microscopy-based counting of infected cells in multiple frames per well.
Gas Chromatography (GC)
Total lipids were extracted for gas chromatography analysis from HEK-293 cells infected for 48 h at 20 IFU/cell using the method of Bligh and Dyer [44]. The organic phase was recovered and dried under a stream of N2. Samples were reconstituted in 50 µL of chloroform, applied to a silica gel G plate, and resolved by TLC using a hexane:diethyl ether:glacial acetic acid solvent system (80:20:2, v/v/v) to resolve neutral lipid classes from total phospholipids, which were scraped. To isolate individual phospholipid species, the scraped phospholipid band underwent a double Folch extraction, was dried under N2 gas, and was then reconstituted in 50 µL of chloroform. Phospholipids were then resolved by TLC on a silica gel H plate using a chloroform:methanol:2-propanol:0.25% KCl:trimethylamine solvent system (30:9:25:6:18, v/v/v/v/v). Bands corresponding to individual phospholipid species were identified using known standards. Identified phospholipids were overlaid with 10 µg of 22:3n-3 ethyl ester internal standard (Nu-Check Prep, Elysian, MN, USA) and scraped for determination of fatty acid composition and content by gas chromatography (GC) with flame ionization detection, as previously described [45].
Production of Recombinant Affinity-Purified PLAAT1
HEK-293 cells grown in 150 mm plates were infected at a multiplicity of infection (MOI) of 20 IFU per cell with Ad-Plaat1 or control adenovirus. Samples were harvested 48 h later; washed with PBS; and lysed in 100 mM Tris-HCl, pH 7.0, 10 mM NaCl, by sonication in an ice-water slurry at 65% output for 3 × 6 s bursts. Unbroken cells, organelles, and cellular debris were cleared by centrifugation at 10,000× g for 10 min. One hundred microliters of monoclonal anti-HA-antibody bead slurry (Sigma Aldrich, Oakville, ON, Canada) was added to each control and HA-tagged PLAAT1 sample and incubated at 4 °C for 120 min with constant mixing. Samples were then washed in 200 µL PBS. This was repeated 3 times, then the beads were reconstituted in 100 µL of PBS. Samples were then ready for the transacylase activity assay. Affinity-purified PLAAT1 was prepared fresh prior to each assay and used in enzymatic reactions immediately after the final wash, without freezing.
Kepler Multitransiting System Physical Properties and Impact Parameter Variations
We fit a dynamical model to Kepler systems that contain four or more transiting planets using the analytic method AnalyticLC and obtain physical and orbital parameters for 101 planets in 23 systems, of which 95 have mass significance better than 3σ, and 46 are without previously reported mass constraints or upper limits. In addition, we compile a list of 71 Kepler objects of interest that display significant transit impact parameter variations (TbVs), complementing our previously published work on two- and three-transiting-planet systems. Together, these works include the detection of significant TbV signals for 130 planets, which is, to our knowledge, the largest catalog of this type to date. The results indicate that the typical detectable TbV rate in the Kepler population is of order 10⁻² yr⁻¹ and that rapid TbV rates (≳0.05 yr⁻¹) are observed only in systems that contain a transiting planet with an orbital period less than ∼20 days. The observed TbV rates are only weakly correlated with orbital period within Kepler's ≲100-day-period planets. If this extends to longer periods, it implies a limit on the utility of the transit technique for long-period planets. The TbVs we find may not be detectable in direct impact parameter measurements, but rather are inferred from the full dynamics of the system, encoded in all types of transit variations. Finally, we find evidence that the mutual inclination distribution is qualitatively consistent with the previously suggested angular momentum deficit model using an independent approach.
Introduction
The Kepler space telescope (Borucki et al. 2010), utilizing the transit method, has brought a major increase in the number of known exoplanets. Following the path of Kepler, the number of transit detections is rapidly increasing with the current Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2010) and the future PLAnetary Transits and Oscillations of stars (PLATO; Rauer et al. 2014) missions.
The vast amount of photometric data calls for interpretation methods that enable us to extract the most out of the data accurately and efficiently. One of the most direct ways to extract planetary physical and dynamical information from photometric (and other types of) observational data is to run multiple N-body integrations. Such integrators were implemented at high accuracy (e.g., Wisdom & Holman 1991; Chambers 1999; Deck et al. 2014). The drawback of using N-body integrations is the relatively high computational cost, which increases with the time span of the data, a consideration that becomes more and more significant as the available data increase and now frequently span multiple decades. In order to enable a more efficient interpretation of photometric data, different authors approached the interpretation process in an analytic manner, usually using transit timing variations (TTVs; Agol et al. 2005). When TTVs are detectable, the orbital configuration and the planetary masses may be estimated. This made TTVs a tool of great utility and thus a topic of thorough investigation, a few examples being Nesvorný & Morbidelli (2008), Nesvorný (2009), and Nesvorný & Beaugé (2010), a series of papers that analyzed the TTV problem and developed the method ttvim; Lithwick et al. (2012), who analyzed the fundamental TTV mode to first order in eccentricity; Deck & Agol (2015), who highlighted the synodic "chopping" TTV; Hadden & Lithwick (2016), who extended the method of Lithwick et al. (2012) to second order in eccentricity; and Agol & Deck (2016), who published the publicly available code TTVFaster that analytically calculates the TTV to first order in eccentricity. All of the above, however, analyze the TTVs, which are only a by-product of the main measured quantity: the stellar flux. There are two main drawbacks to performing a TTV analysis rather than a full light-curve fitting. First, for small planets with shallow transits, an individual transit (unlike the full "folded" light curve) may not be significant in the data, making timing severely limited or impossible, thus preventing TTV analysis. In such cases, a full, global light-curve model can extract information that cannot be extracted by attempting to fit transit times. Second, restricting the analysis to TTVs ignores other types of transit variations, such as transit duration variations (TDVs). These are of high interest in probing out-of-plane forces but are unfortunately even more difficult to measure than TTVs in long-cadence data such as Kepler's; the utility of such information, in the rare cases where it is observable, can be seen in the detection of the mutual inclination between planets b and c in the Kepler-108 system (Mills & Fabrycky 2017). This mutual inclination was later explained as having potentially arisen from a resonant interaction with a binary companion of the host star, called evection resonance (Xu & Fabrycky 2019); thus, in this case, the detection of TDVs eventually led to the understanding of a formerly unrecognized dynamical phenomenon.
In a previous study (Judkovsky et al. 2022a, hereafter Paper I) we introduced AnalyticLC, a method and code implementation for modeling light curves. This analytic method balances accuracy and efficiency and can also model radial velocity (RV) and astrometry data to enable joint fitting of these data types. We used this method to analyze a set of two- and three-transiting-planet systems (Judkovsky et al. 2022b, hereafter Paper II). This work led to 140 new planetary masses and orbital properties, along with a list of Kepler objects of interest (KOIs) that display transit impact parameter variations (TbVs). Statistics of such transit variations are important for characterizing the distribution of planetary system properties, offering opportunities to derive inferences on population properties (Fabrycky et al. 2014; Xie et al. 2016; Millholland et al. 2021).
In this work, we extend the study to systems that contain four or more transiting planets. Such systems are interesting owing to the dynamical richness embedded in many planetary interactions involving multiple resonances. Multiplanet interactions can also lead to phenomena that are more complex than direct pairwise interactions. An example of such a scenario was presented in the KOI-500 system by Ford et al. (2011), who mentioned that the similarity between the distances from resonance might result in a dynamical interplay that could be manifested in the observations. In Paper I, we formulated this kind of interaction and denoted it super-mean-motion resonance (SMMR). This dynamical phenomenon arises from the simultaneous interaction of three planets in a near-resonant chain. In multitransiting-planet systems, such effects are more likely to be significant. Here we report three such cases, in KOI-707, KOI-1589, and KOI-2038 (see Section 4).
This paper is organized as follows. In Section 2 we review the methods used in this work. In Section 3 we review the main findings of this work in two respects: planetary masses and statistics of measured TbVs. In Section 4 we discuss each of the analyzed systems, including comparisons to former literature results and the main dynamical features of each system. In Section 5 we conclude and discuss future prospects. Numerical results are provided in tables in the Appendix.
Methods
The methods used in this work are the same ones used in Paper II, with a slight adjustment: the number of walkers in the DE-MCzs process (Differential Evolution Monte Carlo with a sampling of past states using snooker updates; ter Braak & Vrugt 2008) is larger because the number of planets, and hence the corresponding number of model parameters, is larger. We repeat the main stages below.
Data Reduction
The detrended light curve is obtained by applying a cosine filter to the Presearch Data Conditioning (PDC) maximum a posteriori (MAP) Kepler data (Stumpe et al. 2014). The cosine filter is based on the one used by Mazeh & Faigler (2010), modified such that its shortest timescale is four times the transit duration, as provided by the NASA Exoplanet Science Institute (NExScI). The aim of this choice is to prevent the filter from affecting the transit shape itself. Outliers beyond 5σ were iteratively rejected (outliers were removed separately for Kepler long- and short-cadence data).
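The rejection loop can be summarized by the following minimal Python sketch. The cosine-filter detrending itself is abstracted behind a `detrend` callable, which is an assumption of this illustration; only the clipping threshold and the iteration cap are shown as tunables.

```python
# Minimal sketch of the iterative 5-sigma outlier rejection described above.
# `detrend(time, flux)` is assumed to return the trend evaluated at the
# supplied points; it stands in for the cosine filter.
import numpy as np

def reject_outliers(time, flux, detrend, n_sigma=5.0, max_iter=20):
    """Iteratively drop points whose detrended residual exceeds n_sigma."""
    keep = np.ones(len(flux), dtype=bool)
    for _ in range(max_iter):
        resid = flux[keep] - detrend(time[keep], flux[keep])
        bad = np.abs(resid - resid.mean()) > n_sigma * resid.std()
        if not bad.any():
            break  # converged: no further outliers
        kept_idx = np.flatnonzero(keep)
        keep[kept_idx[bad]] = False
    return keep
```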
The raw data detrending and the model fitting affect each other, so we performed an iterative process: having detrended the light curve, we fitted a model, normalized the raw data by the best-fitting model, and then detrended again. This iterative process was regarded as converged when the χ² score did not improve from the previous iteration by more than the equivalent of 1σ, which is defined as the 68th percentile of the χ²(N) distribution, where N is the number of model degrees of freedom. This process has already been proven robust in Paper II, and its robustness was reaffirmed here.
We used all available Kepler data (long and short cadence). Binning was applied to the long-cadence data points, where the number of binning points was set by requiring that the error caused by the binning be an order of magnitude smaller than the typical data uncertainty, using the formulae given by Kipping (2010).
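A common way to satisfy this requirement, in the spirit of Kipping (2010), is to supersample the model within each long-cadence exposure and average. The sketch below assumes a vectorized `model_flux` callable and a user-chosen number of subsamples `n_bin`; the specific formulae used to choose `n_bin` are in Kipping (2010) and are not reproduced here.

```python
# Sketch of long-cadence supersampling: evaluate the model at n_bin points
# within each exposure and average. `model_flux` is assumed to accept an
# array of times; n_bin is chosen so that the residual binning error is
# well below the data uncertainty.
import numpy as np

def binned_model(model_flux, cadence_midtimes, exposure_days, n_bin):
    """Average model flux over n_bin subsamples per long-cadence exposure."""
    offsets = ((np.arange(n_bin) + 0.5) / n_bin - 0.5) * exposure_days
    sub_times = np.asarray(cadence_midtimes)[:, None] + offsets[None, :]
    return model_flux(sub_times).mean(axis=1)

# Usage (Kepler long cadence is ~29.4 minutes):
# flux = binned_model(analytic_lc, times, 29.4 / (60 * 24), n_bin=7)
```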
The model function used to evaluate the instantaneous flux was AnalyticLC, which was described in Paper I and was already used to study Kepler's two- and three-transiting-planet systems in Paper II. The model uses an analytic 3D dynamical model coupled with the flux calculation formulae of Mandel & Agol (2002). For the flux modeling, we took the quadratic limb-darkening parameters from NExScI.
Model Parameters
The model includes eight parameters per transiting planet. Two relate to the physical properties of the planet: μ (planet-to-star mass ratio) and Rp/R* (planet-to-star radius ratio). The other six are (equivalent to) the orbital elements: P (orbital period); Tmid0 (reference time of midtransit); Δex, Δey (eccentricity vector component differences from the neighboring interior planet; for the innermost planet these are just the eccentricity vector components); and Ix = I cos Ω, Iy = I sin Ω (inclination vector components, where I is the inclination with respect to the z-axis and Ω is the longitude of the ascending node). The axis system is defined such that the x-axis points from the center of the star to the observer, y lies on the sky plane such that the innermost planet crosses the sky plane on it, and the z-axis lies on the sky plane, perpendicular to the x- and y-axes. Using this axis system, ex = e cos ϖ is the eccentricity component along the line of sight and is positive if the periapse is between the star and the observer, where e is the eccentricity magnitude and ϖ is the longitude of periapse. A diagram illustrating the orbital elements and the axis system is shown in Figure 1 of Paper II.
For the innermost planet, we fix Ω = π/2, i.e., we set Ix = 0. An additional model parameter is the ratio of the innermost planet's semimajor axis to the stellar radius, a(1)/R* (as in Ofir & Dreizler 2013), leaving it still with eight parameters. Using Kepler's law, the semimajor axes of the other planets in the system are then scaled from the innermost one.
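Concretely, at fixed stellar mass Kepler's third law gives a ∝ P^(2/3), so a_i/R* = (a_1/R*)·(P_i/P_1)^(2/3). A minimal sketch:

```python
# Direct transcription of the scaling stated above: with a(1)/R* fitted for
# the innermost planet, Kepler's third law at fixed stellar mass gives
# a_i/R* = (a_1/R*) * (P_i / P_1) ** (2/3) for every other planet.

def scaled_a_over_rstar(a1_over_rstar: float, periods: list) -> list:
    """Scale a/R* from the innermost planet to all planets; periods[0] is innermost."""
    p1 = periods[0]
    return [a1_over_rstar * (p / p1) ** (2.0 / 3.0) for p in periods]

# Example: innermost a/R* = 12 at P = 3 d, with outer planets at 6 d and 12 d.
print(scaled_a_over_rstar(12.0, [3.0, 6.0, 12.0]))  # [12.0, ~19.05, ~30.24]
```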
Nonlinear Fitting Method
We use our code implementation of DE-MCzs (ter Braak & Vrugt 2008) to generate a posterior distribution and obtain a best-fitting model. This method suits multidimensional problems with correlated parameters, as in our case of fitting a dynamical model to a light curve.
We limited the eccentricity component magnitude to smaller than 0.6 and the inclination component magnitude to larger than 50°, since the model is usually invalid beyond these relatively extreme configurations; in practice, the solutions usually converged to much smaller values of a few percent / a few degrees.
The choice of the number of walkers was guided by the value of N log N, where N is the number of fitted parameters. There is no general prescription for the optimal number of walkers; we based this choice on experimenting with different types of nonlinear problems and on its satisfactory performance in our previous work on two- and three-transiting-planet systems (Paper II). Convergence was declared upon reaching a Gelman-Rubin criterion (Gelman & Rubin 1992; Brooks & Gelman 1998) of less than 1.2 for all parameters over 10,000 generations. After the removal of the burn-in, we were left with a well-mixed sample, from which statistical inference of the parameters was made. The parameter values described in the tables in the Appendix are the sample median and the differences between the median and the 15.865th and 84.135th percentiles, corresponding to 1σ in a normal distribution.
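For reference, a minimal single-parameter implementation of the Gelman-Rubin statistic is sketched below; the exact variant used in the cited works may differ in details such as rank-normalization.

```python
# Sketch of the Gelman-Rubin convergence statistic for one parameter,
# computed across walkers (chains). Convergence here means R_hat < 1.2
# for every fitted parameter.
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """chains: array of shape (n_chains, n_samples) for one parameter."""
    m, n = chains.shape
    b = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * w + b / n         # pooled variance estimate
    return float(np.sqrt(var_hat / w))

# Example with toy chains: close to 1.0 for well-mixed samples.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(4, 1000))))
```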
The walkers' initial states in orbital period, reference time of midtransit, planet-to-star radius ratio, and planet-to-star mass ratio are distributed based on values and errors from NExScI. If no planetary mass is provided, we translate the archival planetary radius into an initial guess on the mass by using an empirical formula given by Weiss et al. (2017, Figure 8).
We ran the nonlinear fitting procedure five times per system in order to check whether it converges to the same minimum consistently, using a different realization of the initial walkers' distribution at each run. In our former study of two- and three-transiting-planet systems, we empirically found that this number suffices in most cases for obtaining repeatable results. We also note that each run contains 112 walkers (for four-planet systems), 144 walkers (for five-planet systems), and 184 for six-planet systems. Multiplied by tens of thousands of generations at each run, the generated samples covered a large fraction of the parameter space. In Section 4 we discuss the results system by system and note cases with ambiguous solutions that require further investigation.
The initial guess of a(1)/R* was obtained from the literature stellar mass and planetary orbital period; whenever possible (for the decisive majority of planets), we used stellar data from Berger et al. (2020). For KOI-593 and KOI-1422, where this source had no data, we used NExScI.
The fit parameters of AnalyticLC are dimensionless; we translate them to absolute masses and radii using stellar literature data from Berger et al. (2020).
Verification versus N-body Integration
In order to verify the validity of the best-fitting analytic model generated by AnalyticLC, which is calculated using a truncated series expansion of the full 3D gravitational interaction, we compare it to a model generated by an N-body integration using Mercury6 (Chambers 1999). Following Ofir et al. (2018), the flux differences between the analytic model and the N-body model are normalized by the data uncertainties, and the sum of the squares of these normalized differences provides a χ²-like estimate of the mismatch between the analytic and the N-body model at the best-fitting parameter set:

χ²_Nbody = Σ_t [(F_ALC(t) − F_Nbody(t)) / dF(t)]²,

where F_ALC(t) and F_Nbody(t) are the flux values obtained from AnalyticLC and the N-body integration at each data point, respectively, and dF(t) is the data uncertainty of each data point. χ²_Nbody measures the ability to statistically discern between the two models given the available data.
We used the empirical cumulative distribution function (CDF) obtained from the posterior distribution to translate the χ²_Nbody value to a CDF value by interpolation, and then to the equivalent number of standard deviations assuming a normal distribution. We refer to this as σ_Nbody, which quantifies the systematic error of the model relative to the statistical error of the data.
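In code, the two steps read roughly as follows. The two-sided mapping from CDF to an equivalent sigma (i.e., σ such that P(|Z| ≤ σ) equals the CDF value) is an assumption of this sketch; the text does not spell out the exact convention.

```python
# Sketch of the chi^2_Nbody mismatch statistic and its translation into an
# equivalent normal sigma via the empirical CDF of posterior chi^2 values.
import numpy as np
from scipy.stats import norm

def chi2_nbody(f_alc, f_nbody, df):
    """Sum of squared, uncertainty-normalized flux differences."""
    diff = (np.asarray(f_alc) - np.asarray(f_nbody)) / np.asarray(df)
    return float(np.sum(diff ** 2))

def sigma_nbody(chi2_value, chi2_posterior_samples):
    """Empirical CDF of chi2_value within the posterior -> equivalent sigma."""
    cdf = float(np.mean(np.asarray(chi2_posterior_samples) <= chi2_value))
    cdf = min(cdf, 1.0 - 1e-12)  # guard against an infinite quantile
    # two-sided convention: sigma such that P(|Z| <= sigma) = cdf (assumed)
    return float(norm.ppf(0.5 * (1.0 + cdf)))
```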
Figure 1 shows an example of the comparison between AnalyticLC and the results of a full N-body integration for the four-transiting-planet Kepler-79 system (KOI-152). The best-fit solution of AnalyticLC for this system agrees well with an N-body integration ($\sigma_{\rm Nbody} = 0.0321$), implying that the model error with respect to a full N-body integration is about two orders of magnitude smaller than the statistical error arising from the data uncertainty. By performing the computationally heavy multidimensional search using AnalyticLC, while validating only the best-fit point using an N-body integrator, we ensure both the efficiency of our search and the accuracy of our result.
In Paper II we showed a map of the validity of the models generated by AnalyticLC. Here we extend this map to include both the two- and three-transiting-planet systems from that work and the systems from the current work. This map is shown in Figure 2. Systems with eccentricities and inclinations of up to 0.1-0.2 and orbital separations of 10 mutual Hill radii can be adequately modeled by AnalyticLC, in the vast majority of cases, to the precision of the Kepler data. There are also systems with smaller separations and/or larger eccentricities for which the model remains adequate. Systems that include close approaches closer than ∼7-8 mutual Hill radii are usually not consistent with the N-body-integrator-based result. This is an expected outcome of our analytic approach, which inherently assumes slow variations of all orbital elements except the mean longitudes, an assumption that breaks down when close encounters occur.
It is noteworthy that our analysis is a type of model fitting and therefore has some inherent limitations. For example, it will not model any real part of the system that was not included in the model, e.g., nontransiting planets. It is clear from Figure 2 that AnalyticLC found that some systems have relatively high eccentricities, much higher than is typical for multiplanet systems. We believe that at least some of these systems actually contain additional, nontransiting planets that affect the dynamics of the system, and the observed eccentricities are simply a reflection of AnalyticLC's attempt to use the degrees of freedom open to it to match the observed data.
Regarding the current sample of systems, of the 23 systems that passed all our tests (see below) and attain $\sigma_{\rm Nbody} < 1.5$, 16 systems have $\sigma^2_{\rm Nbody} < 0.1$, implying a very good match (such that when summing the systematic and statistical errors in quadrature, the systematic contribution is an order of magnitude smaller than the statistical one).
Comparison to Strictly Periodic Circular Model
Our model has eight parameters per planet, while a minimal model of a circular, strictly periodic orbit would involve only five for the innermost planet ($P$, $T_{\rm mid0}$, $R_p/R_*$, $a^{(1)}/R_*$, and $b$, where $b$ is the impact parameter) and only four for each of the other planets ($a/R_*$ for the other planets can be deduced from the period ratios). In order to justify the larger number of parameters in the full model, we calculated the Akaike information criterion (AIC; Akaike 1974) for both the dynamical and strictly periodic models, and we validate only solutions where the AIC of the dynamical model is lower (better). A more detailed discussion of the selection of the AIC as our information criterion is given in Paper II.
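As an illustration, the model comparison reduces to the following (assuming a Gaussian likelihood, so that AIC = χ² + 2k up to a model-independent constant; the helper names are ours):

```python
def aic(chi2, k):
    """Akaike information criterion for a Gaussian likelihood,
    up to a model-independent additive constant."""
    return chi2 + 2 * k

def n_free_params(n_planets):
    """(dynamical, strictly periodic) parameter counts: 8 per planet
    vs. 5 for the innermost planet plus 4 for each additional one."""
    return 8 * n_planets, 5 + 4 * (n_planets - 1)

# Keep the dynamical solution only if its AIC is lower:
# aic(chi2_dyn, 8 * n_pl) < aic(chi2_circ, 5 + 4 * (n_pl - 1))
```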
Solution Consistency with Stellar Parameters
One of our model parameters is $a^{(1)}/R_*$, the ratio between the semimajor axis of the innermost planet and the stellar radius. This parameter can be obtained alternatively from the literature values for the stellar mass and radius (Fulton & Petigura 2018; Berger et al. 2020) and the measured orbital period of the innermost planet. We checked the consistency of our converged solution with the literature-derived value, with respect to the literature error and our posterior distribution error on $a^{(1)}/R_*$. In the systems for which we provide a dynamical solution, there are no cases in which this difference is larger than 2.7σ, where σ is the root sum square of the error estimates on $a^{(1)}/R_*$ from our posterior distribution and from the literature. In fact, the normalized differences between our estimated $a^{(1)}/R_*$ and the literature values distribute around zero with a standard deviation close to unity, resembling a normal distribution. We therefore consider our results to be in agreement with the literature's stellar parameters.
Consistency among Solutions
As described above, we ran DE-MCzs five times for each planetary system, each time with a different random seed. We discarded solutions for which our model is not compatible with a full N-body integration (see Section 2.4) and solutions for which the number of model parameters is statistically unjustified (see Section 2.5). In addition, we discarded solutions that imply an unreasonably high planetary density: more than 2σ above 12 g cm⁻³, the approximate density at the base of Earth's outer core (Sorokhtin et al. 2011, chap. 2). This process left us with a subset of runs for each KOI. If all of them converged to the same maximum likelihood region in parameter space, the solutions are consistent with each other, and we report the obtained solution posterior distribution statistics. If they converged to different regions in parameter space, we report all of them and regard one of them as the "adopted" solution, recognizing that this solution is not unique. Selection of the adopted solution is based on differences in fit quality (χ²). If there are a few solutions of similar fit quality, we publish all of them and select one of them as the adopted solution based on physical reasoning, e.g., favoring solutions with plausible planetary densities, with small eccentricities for short-orbit planets, and with smaller mutual inclinations, as we treat here systems with four or more transiting planets, which are statistically likely to possess small inclinations (Ragozzine & Holman 2010). For systems with more than one solution, we refer to the various solutions in Section 4.
For 11 out of the 23 systems with valid dynamical solutions, we report one solution, either because it was more likely than the other solutions by more than ∼3σ, or because only one solution passed all our validity criteria (e.g., KOI-720). In some cases, a few runs converged to the same local minimum, yielding consistent solutions; in such cases, we report the solution once. For 12 systems, we report more than one solution (five with two solutions, seven with more).
In order to test the solutions' consistency, for each pair of runs we check the overlapping area between the probability density functions (pdf's) of the posterior marginal distributions of each parameter. This overlapping area, denoted as η, is a measure of the similarity of two distributions (Pastore & Calcagnì 2019), defined for two pdf's of the parameter $x$, $f_1(x)$ and $f_2(x)$, as

$$\eta = \int \min\left[f_1(x), f_2(x)\right] dx,$$

where the integration is restricted to the range of validity of the variable $x$. A value of η = 1 means that $f_1$ and $f_2$ are the same distribution. To calibrate the values of this quantity, two Gaussians with the same standard deviation shifted from one another by one standard deviation yield η ≈ 0.62; two Gaussians shifted by two standard deviations yield η ≈ 0.31. Any value much smaller than that would mean that the distributions are significantly different; in cases where at least one of the parameters has η ≲ 0.31, we publish both solutions involved (as long as they have a similar fit quality and pass all the criteria detailed above). This is the case, for example, for KOI-1336.
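A minimal sketch of computing η from two posterior samples (the kernel-density estimate is our assumption; any density estimator restricted to the valid range would do):

```python
import numpy as np
from scipy.stats import gaussian_kde

def overlap_eta(x1, x2, n_grid=512):
    """Overlapping area between the marginal posterior pdf's of one
    parameter from two runs: the integral of min(f1, f2)."""
    grid = np.linspace(min(x1.min(), x2.min()),
                       max(x1.max(), x2.max()), n_grid)
    f1, f2 = gaussian_kde(x1)(grid), gaussian_kde(x2)(grid)
    return np.trapz(np.minimum(f1, f2), grid)

# Calibration: two unit-width Gaussians shifted by 1 sigma give
# eta ~ 0.62; shifted by 2 sigma, eta ~ 0.31.
```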
Sample of Fitted Planetary Systems
Our former work (Paper II) analyzed the data of two- and three-transiting-planet systems. Here we select Kepler systems with four or more transiting planets.
Our initial sample consisted of 56 four-planet systems, 17 five-planet systems, and 1 six-planet system (KOI-157). This sums up to 74 systems for which we began the computation process. For 67 of these systems, at least one run converged to a final solution. The validity criteria detailed above were applied to these solutions, namely, (i) N-body matching (illustrated in Figure 1), (ii) AIC improvement over a strictly periodic model (Section 2.5), and (iii) plausible planetary density (Section 2.7). This process filtered out some of these systems. In the end, we were left with a sample of 23 systems (15 of four planets, 7 of five planets, and 1 of six planets), consisting of 101 planets with a dynamical solution, for which we report masses and orbital elements.
We combine the error obtained from our posterior distribution of the planet-to-star mass ratio and the literature error on the stellar mass in quadrature to obtain the total error on the absolute planetary mass. Similarly, we combine the error obtained from our posterior distribution of the planet-to-star radius ratio and the literature error on the stellar radius in quadrature to obtain the total error on the absolute planetary radius. In most cases, the majority of the error in mass stems from the fit and not the stellar parameters. For the radius, it is the other way around: the radius ratio is well constrained by the fit, and most of the uncertainty arises from the uncertainty in the stellar absolute radius.
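The quadrature combination amounts to the following (an illustrative helper; the same recipe applies to the radius with $R_p/R_*$ and the stellar radius):

```python
import numpy as np

def absolute_value_and_error(ratio, sig_ratio, stellar, sig_stellar):
    """Absolute planetary quantity (mass or radius) and its error,
    combining the fitted planet-to-star ratio with the literature
    stellar value; fractional errors add in quadrature."""
    absolute = ratio * stellar
    frac_err = np.hypot(sig_ratio / ratio, sig_stellar / stellar)
    return absolute, absolute * frac_err
```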
General
Of the 23 systems for which we found a valid solution, 15 contain four transiting planets, 7 contain five transiting planets, and 1 contains six transiting planets (KOI-157, Kepler-11), summing to 101 planets in total. In Figure 3 we show a map of the orbital periods, planetary radii, and planetary densities obtained in this work, as well as resonance locations. This map gives a brief overview of the planetary systems for which physical properties were obtained in this work. Many systems are near at least one resonance location, as the proximity to resonances generates the high-amplitude TTVs that enable mass estimation. The uniform spacing and the similar planetary radii are visually apparent in many systems (the so-called "peas in a pod"; Weiss et al. 2018). In some of these systems, this similarity also seems to occur in the planetary masses and densities; this similarity has recently been studied quantitatively and has been shown to exist, though to a somewhat smaller extent than the radius similarity (Otegi et al. 2022).
New Constraints on Planetary Masses
In the left panel of Figure 4 we show the mass-radius spread of the 101 planets in these 23 systems, with curves of constant density and the one-dimensional marginal distributions of this planet population. The radius gap at $R_p \sim 1.8\,R_\oplus$ (Fulton et al. 2017) is visually apparent; the typical planetary mass is 5-10 m⊕. The sample is dominated by planets with densities of 1-3 g cm⁻³. While these densities are definitely not consistent with a mostly rocky composition, they are consistent with the solar system's ice giants only at the lower end of this range. This suggests that the typical composition of these exoplanets is unlike anything found in the solar system, with the total mass more evenly divided between the rocky core and the volatile envelope. Some planets have densities lower than 1 g cm⁻³; these are planets massive enough to keep their large gas atmosphere, and they are not extremely hot (none of them has an orbital period shorter than 10 days). The uppermost point in this panel belongs to KOI-191.01; the derived mass and radius imply a density of ∼0.05 g cm⁻³; as we discuss in Section 4, the inference on this planet is questionable and deserves further study. The next low-density planet is KOI-152.01, with a planetary mass of ∼9.4 m⊕ and a radius of ∼7.2 R⊕; this value is more reliable and is consistent with past literature to ∼1.5σ (Jontof-Hutter et al. 2014; Hadden & Lithwick 2017; Jontof-Hutter et al. 2021). The curve of 5.5 g cm⁻³ was chosen because it is approximately Earth's bulk density; the curve of 12 g cm⁻³ was chosen because it is the estimated approximate density at the base of Earth's outer core (Sorokhtin et al. 2011, chap. 2), and as such it is used to delineate the upper limit of accepted solutions (Section 2.7).
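For reference, the bulk densities quoted above follow directly from the masses and radii in Earth units (a small illustrative helper; Earth's bulk density of ≈5.5 g cm⁻³ is the anchor):

```python
def density_cgs(mass_earth, radius_earth, rho_earth=5.51):
    """Bulk density in g cm^-3 from mass and radius in Earth units:
    rho = rho_earth * (m / r^3)."""
    return rho_earth * mass_earth / radius_earth**3

# e.g., KOI-152.01 with ~9.4 m_Earth and ~7.2 R_Earth:
# density_cgs(9.4, 7.2) -> ~0.14 g cm^-3, a very low density.
```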
In the right panel of the same figure, we show a comparison of the masses obtained in this work with the values of HL17 and JH21, which are studies that have a large number of KOIs in common with our work. Our results are in good agreement with the masses obtained by JH21 (some of which were given as upper limits; these are indicated by arrows pointing left).
HL17 applied a default mass prior and a high-mass prior and obtained two results for each planet; the masses we obtained are typically larger than their default-prior masses and are close to their high-mass-prior masses. This is a reasonable outcome, as our prior is uniform in mass, the same as their high-mass prior.
For KOI-834.03, HL17 report a mass of 239.1 m⊕, which is outside the bounds of this plot; the value is not plausible, as this is a 1.95 R⊕ planet candidate. For six planets, the results we obtained differ from the results of HL17 by more than 2σ. All of these cases are indicated in Figure 4 with their KOI numbers: KOI-232.04, KOI-232.05, KOI-834.01, KOI-157.03, KOI-707.04, and KOI-1589.03. For KOI-707.04 our estimate is only 1 m⊕ below the upper limit given by HL17; for KOI-232.05 we might overestimate the planetary density, and the value given by HL17 seems more physically plausible; however, the discrepancy is significant at less than 3σ. In the other cases we believe that the solutions we provide are more physically plausible than the ones given by HL17. We refer in more detail to all of these specific cases in Section 4.
Overall, there is good agreement between the masses obtained in this work and previously reported planetary masses, thus giving confidence in the fitting process and the reliability of the newly reported masses. The masses, radii, and orbital elements of our adopted solution are tabulated in the Appendix, along with machine-readable files.
In the left panel of Figure 5 we show the distribution of fractional error in planetary mass, $\sigma_m/m$, for all the planets for which we obtained a valid dynamical solution in this work (including the 140 planetary masses obtained in Paper II). The typical fractional error in planetary mass derived from this study is better than the typical fractional error from past studies by roughly a factor of two. We attribute this improvement to the global fitting approach, which integrates all of the transits together and takes into account all types of transit variations, rather than using a TTV fit only (the technique used in most of the referenced studies). In addition, in the right panel of the same figure, we show that most of the new planetary masses we obtained are of planets with small TTV magnitudes, quantified by the TTV standard deviation $\sigma_{\rm TTV}$, calculated over the TTV values of all transit events in the best-fitting model. The current study evidently includes more TTVs of lower amplitude, and we believe the explanation for this is that systems with weak dynamical interaction (and consequently small-amplitude TTVs) usually consist of small planets, whose individual transit times are in any case poorly constrained. The combination of shallow transits with weak interactions makes it difficult, and sometimes impossible, to extract individual transit times and detect a TTV pattern; in such systems, the global flux fitting can exploit the data better to obtain an estimate of planetary mass.
Out-of-plane Forces
Forces out of the plane are manifested in transit light curves as TDVs or, alternatively, TbVs. In Paper II we showed how TbVs can be interpreted as arising from the interaction among the transiting planets, or from an interaction with a nontransiting companion.
Only in a handful of cases has the mutual inclination been constrained from transit variations; however, such cases are of high interest and exhibit dynamical richness. A good example is the Kepler-108 system, in which the detection of TDVs led to a photodynamical analysis that showed that the planets in this system are highly inclined (Mills & Fabrycky 2017). Later on, the appearance of such inclination was explained as arising from a binary member via an interaction denoted as an ivection resonance (Xu & Fabrycky 2019). This term refers to a situation in which a resonance between the binary orbital period and the planetary nodal motion acts to pump the planetary orbital inclination. The example of Kepler-108 shows how the detection of a mutually inclined system led to the discovery of a new dynamical phenomenon.
Another avenue for researching mutual inclination in planetary systems involves population analysis of the detected transit variations, such as the usage of the TDV catalog of Shahaf et al. (2021) by Millholland et al. (2021). The latter authors addressed the distribution of mutual inclinations in planetary systems by analyzing the number of expected TDVs under a dichotomic model that assumes a bimodal distribution of mutual inclinations and under a model in which the mutual inclinations follow a distribution arising from the angular momentum deficit (AMD; Laskar & Petit 2017; He et al. 2020). The results of Millholland et al. (2021) have shown that, in terms of the number of detected TDVs, the AMD-based model shows better consistency with the Kepler population than the dichotomic model; this is a significant conclusion, as it offers a nondichotomic solution to the so-called "Kepler dichotomy." This exemplifies the importance of population analysis of transit variations. It is therefore a natural motivation for us to try to expand the knowledge we have on transit variations arising from forces out of the plane.
In Paper II, we produced a catalog of TbVs of planets in two- and three-transiting-planet systems; here we produce a similar catalog for systems having four or more transiting planets (Table 4). The current catalog contains 71 planets undergoing TbVs to better than 2σ, of which 52 are better than 3σ; together with the former catalog from Paper II, we present 130 planets undergoing TbVs to better than 2σ (77 to better than 3σ), an extension to the catalog of TDVs published by Shahaf et al. (2021), which contains 31 KOIs that display statistically significant TDVs (the only KOI of multiplicity larger than 3 included in their catalog is KOI-841.02, which is also included in our catalog as displaying TbVs).
We note that for most analyzed systems the signs of the impact parameters are not definitive, despite the typically small errors on $b$. This is because $|b|$ may be well constrained by the shape of the transit, even if the dynamics do not distinguish between the same-hemisphere configuration (positive $b$) and the opposite-hemisphere one (in the context of impact parameters and orbital alignment; see also Fabrycky et al. 2014). We tested this using the following procedure. We examined all of our valid solutions and switched the signs of the impact parameters in all possible configurations. In the language of our fitted parameters, we switched the signs of either $I_x$ or $I_y$ or both for each planet, excluding the innermost planet (for which $I_x$ is set to zero and $I_y$ is limited to positive values only, without loss of generality). This resulted in $4^{N_{\rm pl}-1}$ configurations for each system, where $N_{\rm pl}$ is the number of planets in the system. For each system, we checked the χ² value difference with respect to the original best-fitting solution. Only in two systems do the best-fitting solutions stand out among the others at the level of 2σ: the solutions for KOI-834 and KOI-707. This shows that the signs of $I_x$ and $I_y$ (and with them the sign of $b$) may be inverted in most cases. The two systems mentioned above merit further study, as our solutions constrain their full 3D orbital geometry.
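Enumerating these configurations is straightforward; a sketch (the function name is ours):

```python
from itertools import product

def sign_configurations(n_pl):
    """All sign assignments of (I_x, I_y) per planet, excluding the
    innermost planet (I_x fixed to zero, I_y positive); this yields
    4**(n_pl - 1) configurations."""
    signs = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
    return list(product(signs, repeat=n_pl - 1))

assert len(sign_configurations(4)) == 64  # four-planet system: 4^3
```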
We note an essential difference in the estimation method between our catalog and the catalog of Shahaf et al. (2021). Shahaf et al. (2021) used individual transit duration measurements and sought significant trends in the duration values for specific KOIs. In other words, they searched for TDVs that could be observed within the time span of the Kepler mission. In this work, we calculate the linear trend in the impact parameter per KOI using the orbital elements' evolution in time, elements that affect all transit variations combined. For many KOIs, we predict a finite TbV trend, although this trend may not be directly seen in the impact parameter measurements. Therefore, our TbV catalog is not a direct empirical observation of TbVs, but a projection of the orbital dynamics onto the TbVs. This approach enables predicting the future dynamics of the systems in general: not only discovering faint, slowly varying, and periodic TbVs in existing data, but also predicting the detectability of TbVs that are not detectable within the time span of the Kepler data.
Having said all that, the results here represent the largest compilation of planets with such variations to date. It is expected that future population analyses, similar to Millholland et al. (2021), would benefit from this expanded catalog of TbVs.
Incorporating forces out of the plane in the dynamical model makes an additional contribution to the mass determination. The traditional method of using the TTVs only can yield mass estimates, but these are sometimes degenerate with eccentricity (Lithwick et al. 2012). Breaking this degeneracy requires either significant eccentricities that would generate a phase shift between the planets' TTVs or additional effects, such as the synodic chopping effect (Deck & Agol 2015) or higher-order TTVs (Hadden & Lithwick 2016). In Figure 6, we show an example of a nonlinear TbV pattern in addition to the linear, secularly driven TbV drift. We define $b_{\rm resid}$ to be the residual impact parameter variations after the removal of the best-fitting linear trend. The residual TbVs are approximately periodic, with a period that is the TTV-derived super-period, indicating that these are TbV manifestations of the same planet-planet interaction seen in the TTVs. Since AnalyticLC takes into account the various transit variations simultaneously, no additional mechanism is needed to capture this variability. The TbV information adds to the constraining power of the global light-curve approach. In panel (c) of the same figure, we show the distribution of $\sigma_{b_{\rm resid}}$, defined as the standard deviation of $b_{\rm resid}$ over time at the best-fitting parameters, for the 241 planets included in our adopted dynamical solutions in both this work and Paper II. It shows that the typical value of $b_{\rm resid}$ (which is related to near-MMR interactions) is 10⁻⁴ to 10⁻³, roughly an order of magnitude smaller than the typical linear change in $b$ (which is related to secular interactions) accumulated in a year. This shows that the light curve contains information regarding both secular and near-resonant interactions, where the former is more amenable to detection for the typical observation time span of a few years. An important precedent for MMR-originating transit variations other than TTVs is the TDVs used to extract the properties of the nontransiting planet in the KOI-142 system (Nesvorný et al. 2013).
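Extracting $b_{\rm resid}$ amounts to removing a linear fit from the modeled impact parameter history; a minimal sketch:

```python
import numpy as np

def b_resid_and_sigma(times, b_values):
    """Residual impact parameter variations after removing the
    best-fitting linear (secular) trend, and their standard deviation."""
    slope, intercept = np.polyfit(times, b_values, 1)
    b_resid = b_values - (slope * times + intercept)
    return b_resid, b_resid.std()
```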
We now turn to discussing long-term TbV effects. In Figure 7, we show an overview of the 130 planets displaying a long-term linear TbV with a significance better than 2σ. In the left panel, we show a scatter of these planets' TbV rates against $P_{\rm min}$, the orbital period of the innermost planet in each system, which is a probe of the amount of inward migration the system has experienced. We see that rapid TbV rates occur in systems where the innermost planet's orbital period is less than 20 days. Specifically, there are eight planets displaying TbV rates faster than 0.05 yr⁻¹, seven of which reside in systems that contain planetary companions with periods of less than 5 days (the exception is KOI-988.02).
We note that Figure 7 shows the dependence of $\dot{b}$ on the minimal observed orbital period, not on the period of the planet undergoing the TbV. Some positive correlation between the duration variation rate $\dot{T}$ and the orbital period was proposed by Shahaf et al. (2021, their Figure 5), at least for the significantly constrained $\dot{T}$ values. These are related to our measurements of $\dot{b}$ but are not directly comparable, because the transit duration depends on additional factors such as the orbital velocity, while the impact parameter depends on geometry alone. Note also that $\dot{b}$ and $\dot{T}$ arise from different dynamical phenomena: $\dot{b}$ arises mostly as a result of out-of-plane secular interactions, while $\dot{T}$ can also arise as a result of in-plane apsidal motion that changes the sky-projected velocity of the planet.
Are systems with short-period planets expected to display stronger TbVs than other systems? If TbVs are probes of mutual inclination, then there is some supporting evidence for that. Millholland & Spalding (2020) have shown that the formation of ultra-short-period (USP) planets, which results from inward migration, would require some initial mutual inclination in order to enable the conservation of angular momentum along the migration process. Though we do not focus on USP planets here, the same argument implies that systems with short-period planets, which likely migrated to this configuration, would require some initial mutual inclination within the system. A more quantitative dynamical analysis of the relation between $\dot{b}$ and $P_{\rm min}$ is left for future work.
To further investigate the population of planets displaying TbVs, we define

$$n_b \equiv \frac{1}{|\dot{b}|\,P},$$

with $P$ expressed in the same time units as $\dot{b}^{-1}$. The dimensionless quantity $n_b$ is the characteristic number of orbits required for the planet to change its impact parameter by unity (given a constant change rate); i.e., a currently transiting planet will be torqued out of transit after typically $n_b$ transits. In Figure 8 we plot this number against the orbital period of the TbV-displaying planets. We find that $n_b$ varies roughly inversely with $P$, that is, that the measured $\dot{b}$ values are only weakly correlated with the orbital period (panel (b)). If this finding holds also for larger periods (which is not known), then planets with orbital periods of decades would typically be torqued out of transit after just a few orbital periods. This would imply not only that long-period planets have a lower geometric transit probability (Ragozzine & Holman 2010) but also that they tend to move out of transit within a small number of orbits, limiting the utility of the transit method. Further study is required in order to shed light on this point. We refrain from fitting the $n_b(P)$ trend because we find the fit parameters sensitive to the removal of a small number of points from the sample.
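In code, with $\dot{b}$ in yr⁻¹ and $P$ in days (the unit choices are ours):

```python
def n_b(bdot_per_year, period_days):
    """Characteristic number of orbits for a transiting planet to be
    torqued out of transit: |delta b| reaches unity after 1/|bdot|
    years, i.e., after 365.25 / (|bdot| * P) orbits."""
    return 365.25 / (abs(bdot_per_year) * period_days)

# e.g., a typical rate of 0.01 / yr and a 10-day orbit give n_b ~ 3650.
```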
In addition to the TbV analysis, we also examine the direct output of the dynamical fitting: the mutual inclinations among the planets. In Figure 9 we show the empirical distribution of mutual inclinations obtained from all our runs that passed all our validity tests, including a summation over all configurations obtained after changing the signs of $I_x$ and $I_y$ in systems whose fits permitted this inversion (as explained above). This figure shows that the mutual inclination among the transiting planets, $i_m$, follows a lognormal-like distribution, and indicates that systems with more transiting planets tend to have somewhat smaller mutual inclinations. Both the shape of the distributions and the typical values are qualitatively similar to the results of He et al. (2020, their Figure 6), who proposed that the statistics of planetary system properties can be explained by an AMD-based model (Laskar & Petit 2017). Their proposed model was later supported by Millholland et al. (2021), based on the statistics of transit durations (Shahaf et al. 2021). We note that in many systems, both in this work and in Paper II, we propose the existence of a nontransiting external perturber. This means that the inclination values obtained in those systems may be suspect. Nonetheless, the qualitative similarity to the results presented by He et al. (2020, their Figure 6) is compelling, because their method relied on detection statistics of the Kepler population, while ours relies on dynamical modeling of individual systems. The fact that both methods yield similar findings supports the proposed AMD-based model.
Description of Individual Systems
In this section, we provide details for each of the 23 systems with a dynamical solution. We review previous literature on the planetary masses and highlight the dynamical features of each system. The systems are ordered by their KOI index. The comparison to former literature masses is given in Figure 10 for all the KOIs. We include both past estimates of the true planetary mass and estimates of the planetary nominal mass (Hadden & Lithwick 2014; Xie 2014, hereafter HL14 and X14, respectively), which is, in many cases, an upper limit on the true mass. For the values of Hadden & Lithwick (2017, hereafter HL17), we use their high-mass prior (which is appropriate for comparison with our results, which were obtained using the same prior as theirs).
From our sample of 101 planets, for 46 we did not find any previous literature mass value. For 41 of those, our mass detection is significant to more than 3σ. For 10 planets, we obtained a median mass estimate lower than 2 m⊕, of which 6 are lower than 1 m⊕: KOI numbers 248.04, 520.04, 571.02, 841.04, 841.05, and 2038.03. For three of those (248.04, 520.04, 2038.03) we found the results to be consistent with the upper limits given by HL17 and JH21, and for the other three (841.04, 841.05, 571.02) we did not find any previous mass constraints. Out of the 101 planets in systems with a valid solution, for 95 we provide mass constraints better than 3σ.
Kepler-79 (KOI-152).- All our runs converged to solutions of similar masses and nearly circular orbits. One of the runs yielded a solution significantly better than all others regarding fit quality; it indicates small mutual inclinations. The derived masses are consistent with the results of Jontof-Hutter et al. (2014), within ∼1.5σ of the results of HL17 in their high-mass prior case, and consistent with the results of JH21. The solution suggests a clear distinction between the densities of the two inner planets (∼1-1.5 g cm⁻³) and the two outer planets (∼0.15, 0.25 g cm⁻³).

(Figure 10 caption: In each subpanel, the planets in the system are sorted, left to right, by their orbital period. The numbers above are the suffixes of the planets' KOI designations. Dotted vertical lines separate different planets within a system. Bold underlined KOI numbers are used for systems with at least one planet without a previously reported mass constraint, as in Figure 3; a bold underlined KOI suffix marks a specific planet without a previously reported mass. The legend shows the colors and markers representing the different literature sources. Arrows pointing down represent upper limits on mass.)

Kepler-11 (KOI-157).- This system, which contains six transiting planets, has been examined in many studies. Lissauer et al. (2011) analyzed the TTVs known at that time to estimate the planetary masses and studied the system's dynamical stability. These masses were then estimated again (Lissauer et al. 2013) using more (14 quarters) of the Kepler data, and later again by Borsato et al. (2014). HL14 estimated the nominal masses and, later (HL17), the true masses of the planets in this system. Our results agree well with these studies, apart from the fourth known planet from the star (Kepler-11 e, KOI-157.03), where our adopted solution suggests a mass of $12.5^{+0.8}_{-0.9}$ m⊕ while Lissauer et al. (2013) suggest ∼8 m⊕ and HL17 suggest $7.2^{+1.1}_{-1.0}$ m⊕. The period ratios in this system suggest that each planet's TTV is affected by more than one specific frequency; this, along with the relatively massive planets and the wealth of information encoded in six transit signals, makes the solution well determined, with multiple studies in agreement.
Kepler-487 (KOI-191).- We found no literature mass for the four KOIs in this system. Until recently, the inner two were designated as candidates and the outer two as confirmed. Following the analysis of Valizadegan et al. (2022), NExScI now designates one of the inner two as confirmed as well.
This system is of dynamical interest for several reasons. First, of the four transiting planets it contains, the third one (planet b, with an orbital period of 15.35 days) displays a transit depth that suggests a Jupiter-size object, which is not a frequent scenario in compact Kepler systems. Second, this system harbors a USP planet (Winn et al. 2018). This type of planet has probably gone through a runaway migration process that requires strong dissipation mechanisms (e.g., Millholland & Spalding 2020, and references therein). Third, unlike many compact Kepler systems, the planets here are not near any resonance.
We provide one solution; the planetary masses are tabulated in the Appendix. We note that this solution involves a large eccentricity for the innermost, 0.7-day-orbit planet. Such a situation would be surprising, as short-orbit planets should have their orbits circularized on short timescales.
A possible explanation we suggest is that, although our pipeline converged to a four-planet model that fits the data, the system contains more than four planets. A nontransiting fifth planet could possibly explain the high-amplitude TTV (∼13.4 minutes) of the outermost transiting planet (planet c, KOI-191.04) without the need for high-eccentricity orbits.
We therefore provide the four-planet solution of the system but note that it may require further investigation in order to understand its dynamical structure.

Kepler-122 (KOI-232).- For KOI-232.04 (Kepler-122 e), HL17 give an upper mass limit of 0.6 m⊕ based on their high-mass prior. Given the radius of order 2.5 R⊕, our solution is more physically plausible. For KOI-232.05 (Kepler-122 f), HL17 give $5.3^{+3.6}_{-2.4}$ m⊕, a mass significantly lower than ours. With a radius of ∼2.3 R⊕, our solution implies an abnormally high density. An alternative explanation for the possible overestimate of the density of Kepler-122 f is that this system harbors another, external nontransiting companion. Such a scenario could explain the TTVs of Kepler-122 e without the need for a high mass for Kepler-122 f and, at the same time, explain the significant TbVs displayed by the planets in this system (see Table 4) without the need for the few-degree mutual inclinations suggested by the dynamical solution (see Table 1).
Kepler-49 (KOI-248).- HL17 analyzed the interaction between KOI-248.01 and KOI-248.02 (Kepler-49 b and Kepler-49 c, respectively). Taking the joint errors, our solution agrees with the HL17 values and differs from the values provided by Jontof-Hutter et al. (2016) by ∼2σ. A surprising result of our best-fitting solution is the implied mutual inclination among the planets. The solution suggests a mutual inclination of the innermost planet with respect to the other planets of ∼25°. This tilt is unexpectedly high but may have a real dynamical origin. This innermost planet is at a period ratio of 2.8 with the second planet, not close to any first-order mean motion resonance (MMR), while the second and third planets are close to the 3:2 MMR. The short orbital period of the innermost planet (2.58 days) approaches the 1-day boundary of the orbits of USP planets (Winn et al. 2018). Such planets are thought to have undergone rapid inward migration, during which angular momentum conservation requires that they possess an initial orbital inclination (Millholland & Spalding 2020). We highlight this system, as it might include a planet in the process of becoming a USP planet. Future observations of the orbital period of the innermost planet could reveal whether this is the case.
Kepler-26 (KOI-250).- The planets in this system were confirmed based on Steffen et al. (2012), who analyzed their TTVs and showed that the TTVs are anticorrelated and hence that the planets are dynamically interacting. Our solution suggests masses of ∼3 m⊕ for KOI-250.02 (Kepler-26 c) and $5^{+0.8}_{-0.9}$ m⊕ for KOI-250.04 (Kepler-26 e). The masses of Kepler-26 b and Kepler-26 c agree with those given by Hadden & Lithwick (2016), Jontof-Hutter et al. (2016), and HL17. The masses of Kepler-26 d and Kepler-26 e agree with the upper limits given by JH21. Our solution involves an eccentricity of order 0.1 for the innermost, 3.54-day planet and inclinations of about 10°; these are relatively high values, and we suggest that further observations would be needed in order to confirm them.
Kepler-169 (KOI-505).- We found no literature values for the masses in this five-transiting-planet system. We provide two solutions; although one of them is of better fit quality by ∼2σ, we select the other one as the adopted solution owing to the more plausible density it suggests for the innermost planet. Both solutions suggest small eccentricities of a few percent and inclinations of a few degrees, which are also expressed as 2σ TbV signals of the four innermost planets. The adopted solution suggests a monotonic mass ordering, from ∼1 m⊕ for the innermost planet to ∼7 m⊕ for the outermost planet.
Kepler-176 (KOI-520).- All adjacent pairs in this four-transiting-planet system are near the 2:1 MMR. HL17 estimated the masses of KOI-520.01 (Kepler-176 c) and KOI-520.03 (Kepler-176 d) at roughly 3.8 and 5.9 m⊕, respectively. Our solution constrains a very low mass for KOI-520.04 (Kepler-176 e), consistent with the results of HL17. The outer planet's mass and density are exceptional; its derived mass is one of the lowest known for any exoplanet, and with a radius of about 1.36 R⊕, it possesses a density of $0.44^{+0.11}_{-0.11}$ g cm⁻³, a surprisingly low value for a small planet below the radius gap. The eccentricities are a few percent. The innermost and outermost planets display significant TbVs, explained in this solution by mutual inclinations of 8°-9°. Based on these unusual characteristics, we highlight this system for further observations, and we suspect that there are additional planet(s) in the system that affect the observed dynamics. Such a planet might explain the low mass and eccentric orbit of the outermost planet, KOI-520.04, suggested by our solution.
Kepler-186 (KOI-571).- We found no literature values for the masses in this five-transiting-planet system. We provide one solution, with significant mass constraints (>3σ) for four out of the five planets. One of the derived masses is surprisingly low for a 1.52 R⊕ planet, as it is difficult to retain a substantial atmosphere with such a low mass. In addition, the solution includes a high eccentricity value of 0.2 for the innermost, 3.88-day-orbit planet. We hence consider this solution questionable and conclude that further observations would be required to constrain the parameters of this system.
Kepler-616 (KOI-593).- We found no literature mass for this four-planet system. We provide two solutions, which are consistent in most parameters and differ only in some of the eccentricity components.

Kepler-33 (KOI-707).- Our adopted solution constrains the planetary masses, including that of KOI-707.02 (Kepler-33 f); those masses differ from the results of HL17 by 1.5σ-2σ. This difference might partially arise from the TbVs of the outermost planet, which are not taken into account in a TTV-only analysis. Our solution suggests small eccentricities of a few percent but significant inclinations of up to 10°; an alternative explanation for the observed TbVs in this system could be an external, nontransiting companion. This system contains a fingerprint of the SMMR effect we mentioned in Paper I: planets Kepler-33 d and Kepler-33 e are slightly inside the 3:2 MMR, with a super-period of roughly 393.76 days, and planets Kepler-33 e and Kepler-33 f are slightly inside the 4:3 MMR, with a super-period of approximately 321.7 days. The relative proximity of these super-periods yields a TTV pattern arising from the SMMR effect, for which we estimate an amplitude of a few minutes on a timescale of roughly 4.81 yr. In addition, this system is one of two (along with KOI-834) in which inverting the impact parameter signs of the best-fitting solution yields a fit quality worse by ∼2σ (see Section 3.3). This gives us confidence that the impact parameter signs are meaningful, and therefore these two systems deserve further investigation regarding their 3D structure.
Kepler-221 (KOI-720).- For this system of four transiting planets, we found in the literature a nominal mass for planet c (HL14).

Kepler-235 (KOI-812).- Our adopted solution constrains the planetary masses, including that of KOI-812.03 (Kepler-235 e). The solution suggests eccentricities of up to 0.1 and inclinations as high as 10° (though with large error bars). In addition, the solution suggests that the innermost transiting planet, KOI-812.01, which is above the radius gap, has a mass similar to that of the second planet (KOI-812.04); this is a surprising result because, with similar masses, it would be more likely for KOI-812.01 to lose its atmosphere than for KOI-812.04. Given all of the characteristics of the solutions above, we conclude that more data would be required to constrain the planetary masses in this system. It is entirely possible that there is a nontransiting companion in this system, and that by not taking it into account, we overestimated the mass of one of the planets.

KOI-834.- HL17 analyzed the interaction among the four outer planets of the five transiting planets. These four planets (c, d, e, f) form a near-resonant 2:1 chain. We provide different solutions, suggesting masses of 5-10 m⊕ for the three innermost planets, a giant-planet mass of order 70 m⊕ for the fourth planet, and a mass of 10-15 m⊕ for the outermost planet. All solutions suggest small eccentricities of a few percent. Most of them, including the adopted one, indicate inclinations of a few degrees. The high-mass-prior values of HL17 for planet c are insignificant; our results for planets d and f are consistent with HL17 up to the uncertainty. Our solution for planet e suggests a higher mass than the solution of HL17; we find our solution physically plausible, as the planetary radius is about 9 R⊕. This system is one of two (along with KOI-707) in which inverting the impact parameter signs of the best-fitting solution yields a fit quality worse by ∼2σ (see Section 3.3). This gives us confidence that the impact parameter signs are meaningful, and therefore these two systems deserve further investigation regarding their 3D structure.
Kepler-27 (KOI-841).- Two planets are confirmed in this five-transiting-planet system (KOI-841.01 is Kepler-27 b, KOI-841.02 is Kepler-27 c), while the others are considered candidates. HL17 estimated the masses of planets b and c and the mass of the innermost planet, KOI-841.03. We provide a solution in which the masses of the three inner planets are constrained; the two outermost planets, which have long periods, are not significantly constrained. The suggested masses of planets b and c are slightly higher than the values given by HL17. The indicated eccentricity values of the innermost planet, of order 0.1, are high for a 6.54-day-orbit planet; hence, this dynamical solution is suspect. An alternative explanation could be the existence of nontransiting companions in the system, which could also explain the TbV signals of KOI-841.01 (Kepler-27 b), KOI-841.02 (Kepler-27 c), and KOI-841.05 without resorting to the significant inclination of KOI-841.05 suggested by the solution.
Kepler-245 (KOI-869).- We found in the literature nominal masses for the two outermost planets, given by HL14. We provide three dynamical solutions with similar fit quality. The adopted one is selected based on its coplanar structure, which is likely in a four-transiting-planet system (Ragozzine & Holman 2010).

Kepler-31 (KOI-935).- Our adopted solution constrains the planetary masses, including that of KOI-935.03. All of these masses are within 1.5σ of the HL17 masses. All planets display significant impact parameter variations; this is probably the source of the relatively high mutual inclinations suggested by the solutions. A possible alternative explanation for the impact parameter variations would be an external nontransiting companion that applies torques on the transiting planets.
Kepler-58 (KOI-1336).- Steffen et al. (2013) showed that the TTVs of KOI-1336.01 (Kepler-58 b) and KOI-1336.02 (Kepler-58 c) are anticorrelated and hence that the planets reside in the same system, based on the data available at the time. Later, HL14 estimated the nominal mass of KOI-1336.01 (Kepler-58 b) based on the TTV amplitude of KOI-1336.02 (Kepler-58 c). HL17 provided masses for those planets with large error bars. We provide three solutions, all with similar fit quality. The adopted solution is selected based on the best matching with the N-body integration; it constrains the planetary masses as tabulated in the Appendix.

Kepler-84 (KOI-1589).- This five-planet system has been analyzed by HL14 for nominal masses, and later by HL17. We provide two solutions; the adopted one is slightly better regarding fit quality. The solutions agree on the planetary masses, apart from slightly different values for KOI-1589.03 (Kepler-84 e). The adopted solution suggests a mass of 5.8 ± 1.7 m⊕ for KOI-1589.05 (Kepler-84 f). Our solution agrees with the upper limits given by HL17 on KOI-1589.04 and KOI-1589.05 and with the values given for planet c. Our values are smaller than those given by HL17 for KOI-1589.01 and KOI-1589.03. KOI-1589.02 displays a significant TbV signal (∼4σ). This system displays a small-amplitude TTV pattern owing to the SMMR effect we mentioned in Paper I: Kepler-84 b and Kepler-84 c are slightly inside the 3:2 MMR, with a super-period of roughly 273.12 days, and Kepler-84 c and Kepler-84 e are outside the 2:1 MMR, with a super-period of approximately 211.84 days. The relative proximity between these super-periods yields a TTV pattern on a timescale of roughly 2.587 yr, for which we estimate the amplitude to be of order 1 minute.
Kepler-416 (KOI-1860).- In the literature, we found only a nominal mass estimate of KOI-1860.01 (Kepler-416 b), provided by HL14. The four transiting planets in this system form an 8:4:2:1 near-MMR chain. We provide two solutions of similar fit quality, which are consistent in most parameters and differ mainly in the mass of the third planet and its orbital inclination with respect to the other planets. We select as the adopted solution the one that suggests a more coplanar structure and has a slightly better fit quality (by an amount equivalent to ∼0.5σ).

Kepler-338 (KOI-1930).- For this four-planet system, we found in the literature only nominal masses. Our solution constrains the planetary masses, including that of KOI-1930.03, and suggests that the density of the innermost planet is significantly larger than the densities of the other planets in the system.

Kepler-341 (KOI-1952).- We found no literature mass estimates for this four-planet system. We provide two solutions that agree on most of the fitted parameters; the adopted one (chosen for its much better agreement with an N-body integration) constrains the planetary masses as tabulated in the Appendix.

Kepler-85 (KOI-2038).- Our solution constrains the planetary masses, including that of KOI-2038.04 (Kepler-85 e), consistent with the masses given by HL17. Our solution suggests a roughly coplanar, circular structure with no significant observed TbVs. In this case, owing to the large number of different solutions, our confidence in the designation of a specific solution as adopted is not large; additional data would be required in order to constrain the system parameters. The planets in this system form a chain of three 3:2 MMRs. Kepler-85 c and Kepler-85 d have a super-period of roughly 130.8 days, and Kepler-85 d and Kepler-85 e have a super-period of approximately 136.4 days. This proximity of the super-periods gives rise to a nonpairwise TTV pattern, which we called the SMMR in Paper I. Such an effect also occurs as a result of the proximity of other super-periods in this system; for example, planets b and d (innermost and third from inside) are affected by their proximity to the 2:1 MMR, with a super-period of roughly 114.2 days. This super-period is close to the 136.4-day super-period of planets d and e. This proximity of super-periods generates an SMMR with a timescale of approximately 700 days that affects the b-d-e triplet. Our best model estimates a total TTV amplitude of 7-8 minutes owing to these SMMR effects.
Summary and Future Prospects
This work completes our previous one (Paper II), in which we analyzed light curves of two- and three-transiting-planet Kepler systems by fitting a model based on the analytic approach AnalyticLC, described in Paper I. In the current work, we fit a model to light curves of Kepler four-, five-, and six-transiting-planet systems. The model takes into account the 3D interactions and is therefore sensitive to different types of transit variations in addition to the typically modeled TTVs (Hadden & Lithwick 2014, 2017; Jontof-Hutter et al. 2021, and others). The usage of the entire light curve simultaneously yields, in many cases, tighter constraints on the planetary mass, as shown in Figure 5, which allowed a very high yield of determined masses: of the 101 planets in systems with good solutions, a significant mass was determined for 95 of them. This figure also shows that the contribution of new planetary masses in our two works together is most significant for small-TTV-amplitude planets. This is probably due to the difficulty of obtaining a good fit for systems with weak dynamical interactions that yield small TTVs. The global fit approach, which integrates the dynamical information from the entire light curve (TTVs, MMR-related TbVs, secular effects, the SMMR, etc.), succeeds in extracting additional information from the data.
In addition to planetary masses, these works investigated impact parameter variations. Because long-term TbVs are attributed to orbital nodal precession, these signals are probes of mutual inclination within the system. We note here that, in addition to these long-term variations, other TbV patterns arise from near-resonant terms and are automatically taken into account in our model; however, in the catalog we compiled here, we specifically address linear-trend TbVs. In Paper II and this work, we compiled a list of TbV-displaying KOIs. We show the general spread of the TbV rates in Figure 7 and find that the typical rate (or, at least, the typical rate we can detect with a significance better than 2σ) is of order 0.01 yr⁻¹. This limited sample of KOIs shows that rapid TbVs appear only in systems that host a planet with an orbital period of less than ∼20 days; this may be explained by the exchange of angular momentum in the process of planetary migration. This preliminary conclusion requires a larger sample to gain significance and a more detailed quantitative dynamical explanation.
We compute the number of orbits required for a planet to cease transiting and find that this quantity typically decreases with increasing orbital period; the trend is shown in Figure 8. This dimensionless timescale is essential for planning long-term observation campaigns that search for Earth-like planets.
We foresee two types of future continuations of this work. One is extending our knowledge of the characteristics of individual planetary systems by using an analytic model based on AnalyticLC. This work and Paper II concentrated on Kepler photometric data; however, large data sets of long-time-span RV observations are natural candidates for such an analytic approach, which is particularly efficient for long time spans. Another related direction would be to combine Kepler data with data from TESS (Ricker et al. 2010) and the upcoming PLATO observations (Rauer et al. 2014), or with RV data and astrometric data from Gaia (Gaia Collaboration et al. 2018). A second research avenue is a statistical analysis of the number of detected TbVs and their magnitudes to better understand the nature of planetary systems as a population. Previous studies showed that an AMD-based model is more consistent with the number of detected TDVs than a model based on a bimodal distribution of planetary systems (Millholland et al. 2021). Using our significantly expanded catalog of TbVs may allow further progress.
In Paper I, we published the method and code implementation AnalyticLC, used in this study as the fitting model. We report here a few improvements to the code, mainly that it now enables fitting a joint model of photometry and RV more robustly than before. We also added the ability to incorporate the stellar quadrupole moment in the dynamical calculations. The code is available at https://github.com/yair111/AnalyticLC.

Note. The numbers are printed in a short version to allow the table to fit the page; the precise values are given in machine-readable format. All parameters are given in MKS units, apart from the epoch for which they are specified, which is given in days with respect to the Barycentric Kepler Julian Date (BKJD). The value of the gravitational constant used in our integrator is G = 6.674279896547770 × 10⁻¹¹. The axis system is such that y-z forms the plane of the sky and x points toward the observer (transit occurs when x > 0).
(This table is available in its entirety in machine-readable form.)
Figure 1. An example of the N-body matching test performed on the best-fitting solution of the planetary system Kepler-79 (KOI-152), demonstrating the ability of AnalyticLC to generate a model consistent with a full N-body integration of a four-planet system. The mismatch is quantified as $\chi^2_{\rm Nbody} \sim 10$ ($\sigma_{\rm Nbody} = 0.0321$), a systematic error much smaller than the statistical error arising from the data uncertainty. (a) TTV pattern of the N-body model (o) and the AnalyticLC model (x) for the four planets in this system (b in blue, c in red, d in yellow, e in purple). The symbols are on top of each other at this scale. (b) "Residuals": the mismatch in times of midtransit between the N-body-generated model and AnalyticLC, which is of order a few seconds for the innermost planet and of order a minute for the three outer planets. (c) Manifestation of the mismatch in terms of relative flux. The mismatch standard deviation within the in-transit points is 0.7, 10, 25.5, and 9 ppm for the four planets, respectively, innermost to outermost. For comparison, the typical Kepler short-cadence data uncertainty (short cadence constitutes almost all of the data) for this star is ∼1000-1200 ppm.
Figure 2. An overview of the N-body match with AnalyticLC for all our runs, including both this work and the former work on the two- and three-transiting-planet systems, and including runs that did and did not pass our acceptance criteria. This map demonstrates the accuracy limits of AnalyticLC. The horizontal axis is the largest magnitude of all free eccentricities and inclinations in the system at the best-fitting solution derived by AnalyticLC. The vertical axis is the minimal closest approach in the system, in units of mutual Hill radii. The color of the points represents the value of $\sigma_{\rm Nbody}$; points at which $\sigma_{\rm Nbody} > 1.5$ (which do not pass our acceptance criterion) are also marked with black edges.
Figure 3. Orbital periods, radii, and densities of the planets with mass estimates from this work. KOI numbers for which at least one planet does not have a previously reported mass value or upper limit are shown in underlined bold text. The size of the circles represents the absolute planetary size, and the color scale indicates our adopted density. Only planets with densities estimated at a significance of more than 4σ are color filled; the others are open. For some of the planets (e.g., in the systems KOI-593 and KOI-1589) the relative masses are well constrained, but the absolute mass (and hence density) is not, owing to a high uncertainty on the stellar mass and radius; hence these planets are shown open. For reference, the legend shows the size and density of Earth, Jupiter, and Neptune. The short vertical black lines indicate the locations of the closest first-order MMRs to the observed period ratios of adjacent planets. The black plus signs similarly indicate the locations of second-order MMRs; note that many planets are close to these MMRs. Both first- and second-order MMRs are indicated only if they are close to the observed period ratios. We do not show second-order MMRs that are a multiple of a first-order MMR (e.g., 4:2 and 2:1).
Figure 4. Planetary masses and radii obtained from this work. This figure shows the overall spread of masses and radii and the good agreement of planet masses with former literature values. (a) Mass-radius diagram. Each blue error bar corresponds to a single planet. The red lines are normalized histograms of the masses and radii obtained by summing up the pdf's of all points. As the sample is small and radii are well determined, the radius-weighted histogram appeared as a collection of discrete values, so it was smoothed using a Gaussian kernel of width 0.3 R⊕ to better show population-wide trends. The gray contours are constant-density curves. (b) Comparison of masses obtained in this work with literature values: JH21 values (blue) and upper limits (red); HL17 default mass prior, three of which show upper limits only (yellow); and high-mass prior (purple). We omit planets for which literature values are within less than 2σ of zero. For six objects, labeled with their KOI numbers, the results obtained in this work disagree with the results of HL17 by more than 2σ; these are discussed in more detail in the text. The black line shows the identity function.
Figure 5. Overall statistics of mass estimates in terms of fractional mass error and TTV magnitude for the planets in this study and our former study (Paper II). (a) Distribution of the relative error in planetary mass for the combined results of this study and our former study (Paper II; blue) and past literature (orange). The literature sources taken into account here are detailed in Figure 10 and in Judkovsky et al. (2022b, Figure 10). (b) Distribution of σ_TTV, the standard deviation of the TTV signal, in the population of planets for which we obtained a dynamical solution and provided masses in this work and in Paper II.
Figure 6. An example of a nonlinear TbV pattern superimposed on a secularly driven linear TbV pattern. (a) The evolution of the impact parameters of the planets in the system KOI-935 (Kepler-31) for the sample median parameter values, shifted by the initial b value, b_initial, to make the trends visible at the same scale. (b) b_resid, the residuals of this evolution after the removal of the linear trend (which is attributed to secular interactions). The thick lines show the residuals for the sample median parameter values, and the narrow lines represent the dynamics for 10 randomly selected parameter sets from our sample to highlight the nonlinear TbV pattern in this system, seen in all solutions. The outermost pair, with orbital periods of roughly 42.63 and 87.65 days, resides near but just outside the 2:1 MMR, causing a super-period of roughly 1570 days, which is clearly seen in the nonlinear TbV signal. The largest nonlinear part of the TbV has a semiamplitude of 4 × 10^-3 in the best-fitting solution. Such a small signal by itself might not have been detected by direct methods that aim to measure the impact parameter for each individual event; it was found here by integrating the information of all transit variations together (timing, duration, depth) in a single dynamical model. (c) The distribution of b_resid at the best-fitting parameters in the adopted dynamical solutions for the 241 planets included in this work and in Paper II, and the linearly accumulated TbV over 1 yr for the same sample (using the median value of b for each planet). The dashed lines indicate the median values of the two distributions; these differ by a factor of 10.
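As a quick check of the super-period quoted in the caption, the standard first-order MMR relation for a j:(j-1) near-resonant pair, P_super = 1/|j/P_out - (j-1)/P_in|, reproduces the stated value; the following minimal Python sketch is illustrative and is not code from AnalyticLC.

# Super-period of a pair near the 2:1 MMR (first-order, j = 2); periods in
# days are taken from the caption above.
P_in, P_out, j = 42.63, 87.65, 2
nu_super = abs(j / P_out - (j - 1) / P_in)   # super-frequency [cycles/day]
print(1.0 / nu_super)                        # ~1563 days, consistent with the quoted ~1570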
Figure 7. Distribution of TbV magnitude in our sample of 130 planets for which the TbV rate is predicted at a significance level larger than 2σ, listed in the Appendix and in Paper II. (a) The absolute value of the TbV rate against the shortest-period companion for the 130 planets in our sample that display TbV at better than 2σ. (b) The empirical cumulative distribution function of the TbV magnitude in this sample, showing that the typical value is of order 10^-2 yr^-1, with red circles indicating the median and the percentiles equivalent to 1σ and 2σ in a normal distribution.
Figure 8. Measured impact parameter variation rates. (a) Timescale for transits to disappear (namely, the number of orbital periods for b to change by 1) against orbital period for the 130 TbV-displaying planets. The radii of the circles represent the absolute planetary radii. (b) Absolute values of the TbV rate ḃ vs. orbital period. The values of ḃ are spread around ≈10^-2 across the range of observed orbital periods. The correlation of |ḃ| with P is weak.
Figure 9. Distribution of mutual inclinations among all pairs in all our runs that passed the dynamical validity criterion σ_Nbody < 1.5, after shuffling over all configurations of the signs of I_x and I_y. Each solid curve is obtained by averaging all empirical pdf's from all the runs passing the criterion above, calculated over uniform bins in i_m and shown on a logarithmic scale. The dashed lines indicate the best-fitting Gaussian for each curve: blue for systems with two or three planets, and red for systems with more than three planets. The vertical lines indicate the 99% confidence intervals of the expectation values of the fits (also listed in the legend), which are 2°.49 ± 0°.02 for systems with two or three planets and 2° ± 0°.04 for systems with more than three planets. This gives a qualitatively similar picture to that of He et al. (2020) in both the distribution shape and its dependence on multiplicity.
degrees. The outermost transiting planet (KOI-152.04, Kepler-79 e) is almost grazing, with an (absolute value of the) impact parameter of roughly 0.96 and a planet-to-star radius ratio of about 0.026. The two intermediate planets (KOI-152.02 and KOI-152.01, namely Kepler-79 c and Kepler-79 d) display significant TbVs of order 0.01 yr^-1, and the innermost planet (b) displays significant, somewhat slower TbVs of order 0.003 yr^-1. The best-model TTVs nicely match the TTV data of Holczer et al. (2016). Our adopted solution suggests masses of m for KOI-152.04 (Kepler-79 e). These are within ∼1.5σ of the results of Jontof-Hutter et al. (
Figure 10. Comparison of the masses obtained from this work with past literature, by KOI. In each subpanel, the planets in the system are sorted (left to right) by their orbital period. The numbers above are the suffixes of the planets' KOI designations; for instance, the first object in the plot is . Dotted vertical lines separate different planets within a system. Bold underlined KOI numbers are used for systems with at least one planet without a previously reported mass constraint, as in Figure 3. A bold underlined KOI suffix represents a specific planet without a previously reported mass. The legend shows the colors and markers representing different literature sources. Arrows pointing down represent upper limits on mass.
for KOI-593.04; the three innermost transiting planets have a significant mass detection of more than 3σ. The solution suggests eccentricities of a few percent and inclinations of a few degrees. Kepler-33 (KOI-707).-This five-planet system has been analyzed by HL14, Hadden & Lithwick (2016), and HL17. Our solution suggests masses of m 4.03, where the innermost planet has a density slightly higher than Earth's and the other planets have densities lower than Earth's. The two outermost planets display significant TbVs; these may relate to the 5°-10° of mutual inclinations suggested by the solution; an alternative explanation could be an external nontransiting companion. Kepler-235 (KOI-812).-We found no literature mass for this four-planet system. Our solution suggests masses of m 5. No significant TbVs are observed. Kepler-31 (KOI-935).-This four-transiting-planet system has been analyzed by Fabrycky et al. (2012), who used TTVs from the first eight Kepler quarters to conclude that the transit signals of KOI-935.01 (Kepler-31 b) and KOI-935.02 (Kepler-31 c) arise from the same host star. HL17 estimated the masses of KOI-935.01 (Kepler-31 b), KOI-935.02 (Kepler-31 c), and KOI-935.03 (Kepler-31 d); KOI-935.04 is currently designated as a candidate. We provide three solutions and select the adopted one on the basis of fit quality; it is better than the others by Δχ² ∼ 230, equivalent to ∼7σ. Our adopted solutions suggest masses of m 01 (Kepler-338 b) and KOI-1930.04 (Kepler-338 e) in HL14. Hence, our masses for KOI-1930.02 (Kepler-338 c) and KOI-1930.03 (Kepler-338 d) are the first estimates we are aware of. Our solution suggests masses of m 5.04 (Kepler-341 e), as well as a coplanar structure with eccentricities of order a few percent. Kepler-85 (KOI-2038).-In the literature, we found a nominal mass estimate for planet Kepler-85 c by HL14 and mass estimates for all planets by HL17. We provide four different solutions, with the adopted one suggesting masses of m 4
Table 6
Instantaneous Coordinates, Velocities, and Orbital Elements of the Best-fitting Parameters for the Adopted Solutions in This Work and in Paper II
"Physics"
] |
Basic reproductive biology of daggertooth pike conger, Muraenesox cinereus: A possible model for oogenesis in Anguilliformes
Introduction Eels are animals commonly used in zoological research, as these species have a unique catadromous life history and belong to a phylogenetically ancient group of Teleostei. However, eel reproduction is difficult to investigate, since mature samples are not easily obtainable in the wild. In this study, we tested the daggertooth pike conger (Muraenesox cinereus), an Anguilliformes species, as a potential model for the investigation of the reproductive biology of eels. Seventy individuals were caught between June and October, which is supposed to be their spawning season, from inshore waters of the Seto Inland Sea. Results The lengths and ages of the samples ranged from 510 to 1239 mm and three to nine years, respectively, and the sex ratio was skewed toward females (96% of the total sample). The gonado-somatic index of the females peaked in July. Histological observation revealed that these ovaries were similar to those of other eel species and contained mature oocytes (migratory-nucleus stage), suggesting that pike conger spawn inshore in July. The plasma concentrations of sex steroid hormones (estradiol-17β and 11-keto-testosterone) in females gradually increased during maturation and decreased after spawning, indicating the involvement of these hormones in the oogenesis of pike conger. Conclusions The present study is the first to report on the characteristics of natural oogenesis in pike conger. Because naturally maturing samples can easily be captured, the daggertooth pike conger may represent an excellent model for the study of reproduction in Anguilliformes.
Introduction
The unique catadromous nature of eels (Anguilla japonica, A. anguilla, and A. rostrata) makes them representative species of the phylogenetically ancient group of Teleostei [1], and because of this feature they are commonly used as experimental animals for zoological studies involving migration, environmental adaptation, and reproduction. These species grow in freshwater or in coastal areas of East Asia, but spawn offshore of the western North Pacific after extremely long-distance migrations [1][2][3][4]. Thus, eels captured in freshwater or inshore possess only immature gonads, and naturally maturing fish are not easily available. Furthermore, the gonads of eels remain dormant under culture conditions [5,6]. Artificial gonadal development/maturation in cultured eels is partially possible by administration of gonadotropic hormone reagents, such as salmon pituitary extracts [6][7][8]. However, reports show that there are physiological and morphological differences between wild and artificially matured eels [4]. Therefore, it is imperative to study the reproductive biology, especially the process of natural gonadal maturation, of eels.
The daggertooth pike conger (Muraenesox cinereus; order Anguilliformes, family Muraenesocidae) is widely distributed in the Indo-West Pacific Ocean [9,10]. Since the pike conger is evolutionarily primitive among Anguilliformes [10] and can be continuously captured by bottom trawling in the Seto Inland Sea, Japan [11], this fish may represent a useful model for studying Anguilliformes reproduction. However, reports on the reproduction of the daggertooth pike conger are completely lacking. Therefore, we undertook this study with the aim of obtaining basic information on the reproduction of pike conger.
As a first step, we examined the sex ratio, body size, age, and gonadal maturity during the possible spawning season (July-October). Additionally, we analyzed the plasma concentrations of sex steroid hormones (estradiol-17β and 11-ketotestosterone, which are known to be involved in eel gametogenesis [12]) during oogenesis by enzyme-linked immunosorbent assay (ELISA). To our knowledge, this is the first report on the oogenesis of Anguilliformes in the wild.
Sampling procedures
After anesthesia, body weight and length were measured. Blood samples were collected from the caudal vein with heparinized syringes. After centrifugation, plasma was collected and stored at −30°C until analysis. Eels were sacrificed by decapitation, and gonads and otoliths were removed. Gonads were weighed to calculate the gonadosomatic index (GSI = gonad weight/body weight × 100). Pieces of the gonad were fixed with Bouin's solution for histological analysis. Otoliths were cleaned and stored dry until further analysis. All procedures were performed in accordance with the Guidelines for Animal Experimentation established by Okayama University.
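The gonadosomatic index defined above is a simple ratio; as a trivial Python illustration (not code from the study):

def gonadosomatic_index(gonad_weight, body_weight):
    """GSI = gonad weight / body weight * 100 (weights in the same unit)."""
    return gonad_weight / body_weight * 100.0

# e.g., a 50 g ovary in a 1000 g fish gives a GSI of 5.0
print(gonadosomatic_index(50.0, 1000.0))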
Age determination
To determine the ages of the individual fish, all otoliths were analyzed according to a previously published method [13]. In brief, otoliths were embedded in polyester resin and then cut into 0.3 mm transverse sections with a saw microtome (SP1600, Leica Microsystems GmbH, Wetzlar, Germany). The sections were mounted on glass slides and their surfaces were ground with sequentially finer grades of carborundum paper. Finally, the sections were etched with 0.2 N HCl for 30 s. The number of rings (opaque zones) in each section, representing the age of the individual fish, was counted under a microscope.
Gonadal histology
To identify the sex and gonadal maturity, the fixed gonads were dehydrated and embedded in paraffin. Paraffin blocks were serially sectioned at 7 μm and stained with hematoxylin and eosin. The determination of oocyte maturity was based on a previous report on the common Japanese conger [14].
Measurement of steroid hormones
The concentrations of estradiol-17β (E2) and 11-ketotestosterone (11KT), the major fish estrogen and androgen, respectively, in the plasma were determined by enzyme-linked immunosorbent assay (ELISA). The analyses were performed according to a previous report [15], and absorbance was measured in a microplate reader (MTP-300; CORONA Electric Co. Ltd., Japan).
Statistical analysis
Data on the steroid hormone levels and GSI are presented as means ± SEM. Significant differences were evaluated by one-way ANOVA, followed by the Tukey-Kramer multiple comparison test, using PRISM 5.0b software (GraphPad, San Diego, CA). P < 0.05 was taken as the threshold for statistical significance.
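For readers without PRISM, an equivalent analysis can be sketched in Python; the arrays below are illustrative placeholders, not the study's measurements, and pairwise_tukeyhsd implements the Tukey(-Kramer) test for possibly unequal group sizes.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical hormone levels grouped by maturity stage (placeholders).
stage_I = np.array([1.1, 0.9, 1.3, 1.0])
stage_IV = np.array([2.4, 2.9, 2.6])
stage_V = np.array([1.0, 1.2, 0.8])

F, p = f_oneway(stage_I, stage_IV, stage_V)          # global one-way ANOVA
values = np.concatenate([stage_I, stage_IV, stage_V])
groups = ["I"] * len(stage_I) + ["IV"] * len(stage_IV) + ["V"] * len(stage_V)
print(pairwise_tukeyhsd(values, groups, alpha=0.05)) # post hoc pairwise test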
Sex ratio, age and growth
A total of 70 pike congers captured inshore were used in this study. The sex of the samples was determined by histological observation. In our samples, females (n = 67) predominated over males (n = 3) (Table 1). The relationship between total length and age is shown in Fig. 1. The total lengths and ages ranged from 510 to 1239 mm and from 3 to 9 years old, respectively. Males appeared to be smaller in length than females.
Changes in gonadosomatic index in females
Changes in GSI in females during the sampling period are shown in Fig. 2. GSI was 2.26 ± 0.77 at the first sampling (10 June) but rapidly increased to a peak by 9 July (6.86 ± 2.58). Thereafter, the GSI decreased gradually, reaching a minimum by 18 October (0.98 ± 0.31).
Histological observation of the ovary
Gonadal histological observation revealed that the ovary of pike conger contained oocytes at various developmental stages (Fig. 3). In addition, adipose cells were observed in the ovaries of all samples (Fig. 3). Based on the most advanced oocyte maturity in the ovary, we classified the individual females into five stages (Table 2 and Fig. 3).
Stage I
Immature. Fish at this stage had ovaries containing many pre-vitellogenic oocytes. No mature oocytes were observed in the ovary (Fig. 3a).
Stage II
Vitellogenesis onset. The ovary at this stage harbored oil droplet stage and primary yolk stage oocytes (Fig. 3b).
Stage III
Vitellogenesis progression. Many oocytes at the secondary vitellogenic stage were observed in the ovary (Fig. 3c).
Stage IV
Vitellogenesis completion. Many tertiary yolk stage oocytes (Fig. 3d) and a few migratory nucleus stage oocytes (Fig. 3e) were observed in the ovary. This stage was observed even in the youngest sample (age 3).
Stage V
Post-spawning. The ovaries at this stage contained atretic oocytes (Fig. 3f).
Changes in female maturation
Changes in female maturation during the sampling period are shown in Fig. 4. Ovaries of fish captured in June showed active vitellogenesis (stages II and III). All females sampled on 9 July were at stage IV. Fish at stage V appeared in August. All fish sampled in October were at stage I.
Plasma steroids in female
The plasma levels of sex steroid hormones during female maturation are shown in Fig. 5. The levels of E2 were maintained throughout the different maturity stages, with a peak at stage IV (Fig. 5). The level of 11KT was significantly higher at stage IV than at the other stages (Fig. 5).
Histological observation of testis
Few males (n = 3) were captured in this study. One male captured on 25 June had a mature testis showing various stages of spermatogenesis, including sperm (Fig. 6a). In
Discussion
In this study, we investigated the basic reproductive biology of the daggertooth pike conger. First, we examined the sex ratio, age, and body size. Second, the oogenesis of pike conger was characterized by histological observation and the spawning period was determined.
The changes in GSI and maturity indicate that the pike conger's spawning season is around July. Gonadal histological observation revealed that pike conger has an asynchronous type of ovary, containing oocytes at various developmental stages, and that this fish is mature by at least 3 years of age, the age of the youngest sample in this study. Considering the general rule that fish with an asynchronous type of ovary spawn several times during one spawning season [16], pike conger of at least 3 years of age could spawn several times around July.
We found that the sex ratio was skewed towards females. Although the reason for this fact could not be established in this study, similar sex ratio imbalances have also been reported in other Anguilliformes [17][18][19]. In the case of the wild European conger eel (Conger conger), females were captured exclusively in the shallow coastal waters, whereas males were observed in deeper waters [17]. In a recent study on the wild Japanese eel, large variations in sex ratio were observed among the sampling locations [18]. Considering these previous reports and the present study, it appears that the habitats of Anguilliformes females and males may differ in the wild, although this needs to be confirmed by capturing pike congers from different sites or by bio-logging of pike conger females and males.
As described in the Introduction, A. anguilla, A. rostrata, and A. japonica captured inshore always have immature gonads, and their spawning areas have been reported to be located offshore in the open ocean [20,1], making it difficult to use these fish for studies of eel reproduction. In contrast, pike congers captured inshore have mature testes and ovaries with adipose cells, which are typical features of Anguilliformes [17,[21][22][23][24]]. Therefore, the pike conger may represent a better model for studying the reproduction of Anguilliformes.
Histological observation also revealed that migratory nucleus stage oocytes were present in the ovary of pike conger (stage IV). It is well established that oocytes at this stage immediately precede hydration and ovulation [25,26]. The spawning area of pike congers thus appears to be close to our sampling area in the Seto Inland Sea, although the precise location remains unknown. Unlike in catadromous Anguilliform species [20], changes in the external morphological features for spawning migration, such as skin color, eye size, and

In this study, we measured the concentrations of sex steroid hormones (E2 and 11KT) in plasma during oogenesis (Fig. 5). However, the concentrations of both hormones were lower than those of other eel species (the average concentrations of E2 and 11KT in pike conger are 28.4 and 17.5 ng/ml, respectively) [23,28]. These results raise the possibility that the major steroid hormones of pike conger differ from those of other eel species.
Although the plasma level of E2 is low in pike conger, the changes in this hormone during oogenesis were largely consistent with those in Japanese eel artificially matured by administration of salmon pituitary extracts [29]. Thus, E2 likely plays important roles in oocyte growth (vitellogenesis) in pike conger, as it does in Japanese eel.
The level of 11KT in pike conger increased gradually during vitellogenesis, similar to other eel species [29]. In Japanese eel, 11KT is thought to be involved in the control of lipid droplet accumulation in oocytes [30]. In addition, the ability of androgens to induce spawning migration and silvering has been reported in some eels [31][32][33]. Therefore, 11KT may play an important role in vitellogenesis and spawning migration in pike conger.
"Biology",
"Environmental Science"
] |
Progress toward SHAPE Constrained Computational Prediction of Tertiary Interactions in RNA Structure
As more sequencing data accumulate and novel puzzling genetic regulations are discovered, the need for accurate automated modeling of RNA structure increases. RNA structure modeling from chemical probing experiments has made tremendous progress; however, accurately predicting large RNA structures remains challenging for several reasons: RNAs are inherently flexible and often adopt many energetically similar structures, which are not reliably distinguished by the available, incomplete thermodynamic model. Moreover, the problem is computationally aggravated by the relevance of pseudoknots and non-canonical base pairs, which are hardly predicted efficiently. To identify nucleotides involved in pseudoknots and non-canonical interactions, we scrutinized the SHAPE reactivity of each nucleotide of the 188 nt long lariat-capping ribozyme under multiple conditions. Reactivities analyzed in the light of the X-ray structure were shown to report the nucleotide status accurately. Those that seemed paradoxical were rationalized by the nucleotide's behavior along molecular dynamics simulations. We show that valuable information on intricate interactions can be deduced from probing with different reagents, and in the presence or absence of Mg2+. Furthermore, probing at increasing temperature was remarkably efficient at pointing to non-canonical interactions and pseudoknot pairings. The possibilities of following such strategies to inform structure modeling software are discussed.
Introduction
RNA molecules are ubiquitous within the cell but are also present outside of it, as in plant phloem [1]. Some of them have even been shown to be glycosylated and exposed on the cell surface [2]. They fulfil very diverse functions, including coding and non-coding roles, are involved in most steps of genetic expression regulation, and can specifically interact with small molecules, proteins, or other nucleic acids (DNA or RNA). Their sequence is indisputably critical for the coding role or the specific recognition of other DNA and RNA sequences. Beyond the importance of their sequence, they also adopt specific three-dimensional structures that are responsible, among others, for the catalytic activity of ribozymes (RNA enzymes), the strong and specific binding to metabolites, proteins, and the ribosome, the assembly of phase-separated membrane-less organelles [3,4], the availability of sequence signals (e.g., [5]), and the modulation of ribosome progression. Structure can be considered as another layer of encoded information, not only in compactly folded RNA, such as rRNA, tRNA, ribozymes, and riboswitches, but also within mRNA [6]. This is particularly obvious within viral genomes, in which information is very streamlined [7,8].
Given the central role of RNA and its structure, it is of great interest to be able to model RNA structure as accurately as possible. Physical methods that were very successful for determining protein structures suffer from limitations for RNA: for instance, NMR can only be applied to short RNAs, and flexible RNA molecules are most of the time hardly amenable to crystallization for X-ray diffraction. In addition, those methods are very time consuming, while new RNA sequences are uncovered every day. The RNA structure modeling process most of the time consists of first predicting the secondary structure before inferring the three-dimensional architecture. However, identifying the correct secondary structure is complicated, since the native structure is the result of indirect folding trajectories forming secondary and tertiary interactions. This is well exemplified by the formation of the P4P6 domain of the Tetrahymena thermophila group I intron. The secondary structure of the folding intermediate presents a P5abc domain with additional Watson-Crick base pairs as compared to the native form. The formation of tertiary interactions between P5abc and the P4 domain leads to the release of the additional base pairs that were yet required to direct the formation of the tertiary structure [9].
The accuracy of computational secondary structure modeling has been significantly improved due to the integration of experimental information from chemical structure probing in the form of soft constraints [10,11]. Two main types of chemical probes are used, those such as DMS (dimethyl Sulfate) or CMCT (cyclohexyl-3-(2-morpholinoethyl) carbodiimide metho-p-toluene sulfonate), that probe the availability of H-bond donor or acceptor sites located on the Watson-Crick face of the base [12], and SHAPE probes that report on the flexibility of the ribose [13][14][15][16]. Tremendous efforts and progress have been made over the last years to develop high throughput experimental [17][18][19][20][21] and computing pipelines [22,23].
However, the accurate prediction of the secondary structure remains a challenge for long RNAs because of intrinsic RNA properties and technical problems. Firstly, most RNAs are likely to adopt several conformations, whether in vitro or in a cellular context, which may in some instances reflect experimental artefacts, but also reflect their potential to regulate their function through structural rearrangement. RNA structure probing experiments then yield averaged signals representing all the conformations and therefore do not properly inform the prediction software. Secondly, RNA structure includes a handful of non-canonical base pairings accounting for a significant part (≈25%) of the total number of base pairs [24] that significantly contribute to the folding stability, often by forming specific structural motifs. The relevance of such observation is striking in the case of the P4P6 domain of the Didymium iridis lariat-capping ribozyme (DiLCrz), the ribozyme used in this study. The DiLCrz crystal structure [25] shows a P4P5 secondary structure quite different from the originally predicted one [26]. In the original secondary structure, a four base pair core is followed by two more base pairs after a bulging U, whereas the crystal structure reveals that only one non-canonical base pair takes place after the bulge. The loss of the two WC base pairs is energetically compensated by the formation of a trans WC base pair, which fosters A residues from J5/4 to interact with the shallow groove of the P4 stem to weave A-minor interactions. In other words, the loss of secondary structure interactions is compensated by the formation of tertiary ones. Such examples showcase that modeling of non-canonical and (canonical and non-canonical) tertiary interactions is an essential part of meaningful RNA structure modeling, even if such complex interactions add fundamental challenges in comparison to computational or computer-assisted modeling of canonical RNA secondary structure alone. Note that thermodynamic models and calculations without such interactions (i.e., restricted to canonical, secondary interactions) are long established and highly efficient for computer-assisted secondary structure prediction [27][28][29]. We can distinguish several lines of computational structure modeling including non-canonical and tertiary interactions. First, there are approaches that model RNA 3D structure, e.g., by full-atom fragment-based sampling methods [30,31] or coarse-grained simulation [32], which then allow reading off secondary and tertiary interactions from the 3D representation. Typically, such methods are computationally limited; e.g., it is reported that "problems involving de novo building more than 80 nucleotides will be challenging for FARFAR2 conformational sampling" [30], whereas successful predictions are reported for up to 500 nt by RNAvista [33,34], which integrates fragment-based modeling with base pair annotation.
Other approaches like MC-Fold/MC-Sym [35], CycleFold [36], and RNAwolf [37] include non-canonical secondary interactions in their prediction of extended secondary structure. Such methods have the potential to predict larger structures, but (by principle) they do not predict tertiary interactions. The exact prediction of tertiary interactions has been well studied in the form of pseudoknot prediction. Pseudoknot motifs, which frequently occur in functional RNAs, are characterized by a short Watson-Crick helix, the pseudoknot, the strands of which are topologically separated: one of its two strands is intercalated between the strands of at least one other helix of the RNA. Because their prediction is computationally intractable in realistic energy models (NP-hard, inapproximable) [38,39], efficient algorithms were presented only for restricted classes of pseudoknots (e.g., [40,41]) and the most popular structure modeling methods ignore them completely. Nevertheless, pseudoknot prediction has been implemented in several faster algorithms [42,43] that first calculate pseudoknot free structures and then search, in a second round, for pairings between the still single-stranded bases, or which stochastically simulate the folding of the RNA chain in the course of virtual transcription [44]. Although quite successful, this strategy faces two pitfalls, first it generates several models that the user cannot discriminate, secondly, it relies on the accurate prediction of the secondary structure in the first round which is not certain for the reasons mentioned above. Although experimental probing data significantly improve structure prediction, it is not of much help for non-canonical interactions or pseudoknot pairings. Indeed, as they are involved in genuine Watson-Crick base pairs, pseudoknot positions show probing patterns similar to regular helices, while tertiary interactions are often mildly reactive as they are usually more dynamic than canonical pairings. Despite the above-mentioned difficulties, teams of experts gathering many competencies are able to generate impressively accurate tertiary structure models for compactly folded RNA [45][46][47].
Here we investigate several approaches to precisely identify nucleotides involved in tertiary structure elements, notably pseudoknots and kissing loops. To this end we considered two physicochemical properties of these structures: first, their formation is stabilized by the presence of magnesium, and second, they have lower thermal stability than the core secondary structure. Our mid-term goal is to develop an experimental and computational workflow that allows any molecular biology laboratory to automatically model RNA secondary structure, including kissing loops and pseudoknots, and to indicate the presence of other tertiary interactions. We take advantage of the well-established crystal structure of DiLCrz, which serves as a benchmark for our study. This compactly folded RNA gathers in only 188 nucleotides many of the recurring structural motifs found in RNA, including a pseudoknot, a kissing loop pairing, and a tetraloop/receptor interaction.
RNA Preparation
RNA was in vitro transcribed by run-off transcription using T7 polymerase in 40 mM Tris-HCl pH 8.0, 25 mM MgCl2, 5 mM DTT, 5 mM rNTPs, 1 mM spermidine, and 20 U RNasin (Promega, Madison, WI, USA). Template DNA was then digested at 37 °C for 20 min with RQ1 DNase (Promega), and RNA was precipitated by addition of 2.5 M lithium chloride and centrifugation at 16,100× g for 30 min at 4 °C. The RNA pellet was washed with 70% ethanol and then resuspended in nuclease-free water. RNA was purified through G25 size exclusion chromatography and quantified spectrometrically. Its integrity, as well as the absence of aberrant products, was confirmed by gel electrophoresis.
Melting Curves
Six pmoles of RNA were diluted in 14 µL of H2O and denatured for 2 min at 80 °C. 4 µL of pre-warmed 5X folding buffer (40 mM HEPES pH 7.5, 500 mM KCl, MgCl2 ranging from 0 to 5 mM final concentration) were added, and the solution was allowed to cool to room temperature over 5 min. After the addition of 2 µL of RiboGreen (ThermoFisher, Waltham, MA, USA; final concentration: 300 nM), the samples were incubated for 10 min at 37 °C. The melting curves were obtained with a PikoReal RT-PCR system (ThermoFisher) by heating the sample from 37 to 95 °C at 0.04 °C·s−1 and measuring the fluorescence emitted by the RiboGreen. Raw fluorescence curves were normalized and derived. Derivatives from three replicates were averaged and smoothed. Data analysis was performed using GraphPad Prism v5.02.
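A minimal Python sketch of the curve processing just described (normalization, derivative, replicate averaging, smoothing); the analysis in the paper was done in Prism, so the array names and smoothing parameters here are illustrative assumptions.

import numpy as np
from scipy.signal import savgol_filter

def melt_derivative(temps, rfu):
    """Return -d(RFU)/dT for one melting curve, normalized to [0, 1]."""
    rfu = (rfu - rfu.min()) / (rfu.max() - rfu.min())
    return -np.gradient(rfu, temps)

# replicates: list of (temps, rfu) array pairs from three runs (placeholders)
# deriv = np.mean([melt_derivative(t, f) for t, f in replicates], axis=0)
# smooth = savgol_filter(deriv, window_length=11, polyorder=3)
# Maxima of `smooth` mark the melting transitions (peaks of -dRFU/dT).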
SHAPE Probing
SHAPE chemical probing was mostly performed as previously described in [48,49]. Briefly, 36 pmoles of DiLCrz RNA were diluted in 160 µL of water and denatured for 2 min at 80 °C. Then, 40 µL of pre-warmed folding buffer (40 mM HEPES pH 7.5, 500 mM KCl, 5 mM MgCl2) were added, and the solution was allowed to cool down to room temperature for 5 min. The sample was then incubated at 37 °C. After 10 min, a 20 µL aliquot (corresponding to 6 pmoles) was removed, added to 2 µL of 40 mM 1M7, and allowed to react for 2 min, while a second 20 µL aliquot was mixed with neat DMSO. The temperature was then gradually (about 10 min between each step) increased to 53 °C, 65 °C, 74 °C, and 85 °C, and the same probing procedure was applied. Probing with NMIA was performed identically; BzCN probing was performed at a final concentration of 40 mM, and HEPES was adjusted to 80 mM. Probed RNA were precipitated at −20 °C with 10% ammonium acetate 5 M, 2. Fluorescently labeled cDNA were then precipitated at −20 °C with 10% sodium acetate pH 5.4, 20 µg of glycogen as a carrier, and 2.5 vol. of ethanol. Pellets were washed with 70% ethanol, vacuum-dried for 10 min, resuspended in 40 µL of sample loading solution, and run on a CEQ 8000 capillary electrophoresis sequencer (Sciex, Concord, ON, Canada). The resulting traces were analyzed using QuSHAPE [50].
Reactivities were averaged over three independent replicates. Within each triplicate, values more than 0.4 reactivity units apart from the other measurements were considered outliers and discarded.
Comparison of Two Probing Profiles
Probing profiles were compared as described in [49]. In brief, reactivities averaged over three replicates in two conditions, R1 and R2, were used to calculate the absolute difference ΔR = |R1 − R2| and the relative change ΔR/(R1 + R2). The reactivity difference between two conditions is considered significant when both variables exceed a threshold of 0.2 and the p-value from a two-sided t-test is below 0.05. Nucleotides with undetermined reactivity are excluded from the analysis. Such an approach excludes small reactivity changes between weakly reactive nucleotides as well as large yet meaningless reactivity changes between highly reactive nucleotides.
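This criterion is straightforward to express in code; a minimal Python sketch, assuming per-nucleotide replicate arrays and not taken from the authors' pipeline:

import numpy as np
from scipy.stats import ttest_ind

def significant_change(rep1, rep2, threshold=0.2, alpha=0.05):
    """rep1, rep2: replicate reactivities of one nucleotide in each condition."""
    r1, r2 = np.nanmean(rep1), np.nanmean(rep2)
    if np.isnan(r1) or np.isnan(r2):       # undetermined reactivity: excluded
        return False
    delta = abs(r1 - r2)
    if delta <= threshold or delta / (r1 + r2) <= threshold:
        return False                       # both |dR| and dR/(R1+R2) must pass
    _, p = ttest_ind(rep1, rep2)           # two-sided t-test on the replicates
    return p < alpha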
Nucleotide Clustering
Nucleotides were clustered with the k-means algorithm as vectors of reactivities obtained at 37, 53, 65, 74, and 85 °C. Clustering was done using scikit-learn and a custom Python script available at https://github.com/gdebissc/DiLCrz, accessed on 3 November 2021. Data were log2-transformed and standardized (nucleotide-wise, then temperature-wise). Undetermined reactivities were removed prior to clustering. Dimensionality reduction by principal component analysis revealed that nucleotides can be grouped in at least three or four clusters. The optimal number of clusters was further determined using the elbow method (see the sketch after this paragraph). Briefly, the within-cluster sum of squared errors, or distortion, was calculated for each cluster number between 1 and 10 (Figure S5B). The resulting curve indicates that increasing the cluster number above four does not significantly reduce the distortion.
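A condensed sketch of this clustering step; the authors' actual script is at the GitHub URL above, so the data below are random placeholders and the preprocessing simply follows the order described.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# X: (n_nucleotides, 5) reactivities at 37, 53, 65, 74, 85 degrees C, with
# rows carrying undetermined values already removed (placeholder data here).
X = np.random.rand(150, 5) + 0.01
X = np.log2(X)
X = StandardScaler().fit_transform(X.T).T   # standardize nucleotide-wise...
X = StandardScaler().fit_transform(X)       # ...then temperature-wise

# Elbow method: within-cluster sum of squares (inertia) for k = 1..10.
distortions = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
               for k in range(1, 11)]
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)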
All-Atom Molecular Dynamics Simulations
MD simulations were carried out using the GROMACS 5 package [51][52][53][54] with the Amber ff99 + parmbsc0 force field [55,56]. The molecular systems were placed in a cubic box and solvated with TIP4P-EW water molecules [57]. The distance between the solute and the box was set to at least 14 Å. The solute was neutralized with potassium cations, and K+Cl− ion pairs were then added to reach a salt concentration of 0.15 M. We used the ion corrections of Joung et al. [58], as this force field has been shown to produce stable RNA structures [59]. The parameters for Mg2+ are taken from [60]. In our protocol, Mg2+ ions were first placed at known binding sites while the remaining ions randomly replaced solvent molecules to reach a concentration of 0.02 M [61]. Long-range electrostatic interactions were treated using the particle mesh Ewald method [62,63] with a real-space cut-off of 10 Å. Bonds involving hydrogen were constrained using P-LINCS [53,64], allowing a time step of 2 fs. The translational movement of the solute was removed every 1000 steps to avoid any kinetic energy build-up [65]. After energy minimization of the solvent and equilibration of the solvated system for 10 ns using a Berendsen thermostat (τT = 1 ps) and Berendsen pressure coupling (τP = 1 ps) [66], simulations were carried out in the NPT ensemble at a temperature of 300 K and a pressure of 1 bar using the Bussi velocity-rescaling thermostat [67] (τT = 1 ps) and a Parrinello-Rahman barostat (τP = 1 ps) [68]. During minimization and heating, all the heavy atoms of the solute were kept fixed using positional restraints. The length of the simulation was 1000 ns. To assess the stability of the trajectory, we verified that the root mean square deviation (RMSD) and the number of hydrogen bonds were stable along the trajectory.
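The stability check described in the last sentence can be reproduced with standard trajectory-analysis tooling; a hedged sketch using MDAnalysis (the file names are placeholders, and the authors may have used GROMACS utilities instead):

import MDAnalysis as mda
from MDAnalysis.analysis import rms
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("dilcrz.tpr", "traj.xtc")        # hypothetical input files
R = rms.RMSD(u, u, select="nucleic").run()        # RMSD vs. the first frame
hb = HydrogenBondAnalysis(u, between=["nucleic", "nucleic"]).run()

# R.results.rmsd[:, 2] is the RMSD time series; a flat profile, together with
# a stable hb.count_by_time(), indicates an equilibrated trajectory.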
Definition of the Parameters
To get insights into the relationship between the structure and the reactivity, we computed several quantities along the MD simulation. First, we computed the hydrogen bonds and the stacking for each frame using the software MC-Annotate [69][70][71]. The hydrogen bonds were classified using the Leontis/Westhof classification [24,72], and we noted whether the HBs are formed between the O2′ and an atom on the Watson-Crick (W), Hoogsteen (H), or B side (a position between the W and H sides), or between OP and an atom on these sides [70].
Second, we computed the multiplets (triplets, quadruplets, etc.) and the secondary structure for each frame using the software DSSR (dissecting the spatial structure of RNA) v2 [73]. For each hydrogen bond and multiplet, we computed the percentage of occurrence.
To take into account the ribose conformation, we described the sugar ring using pseudorotation parameters. Although there are four possible pseudorotation parameters for a five-membered ring [74], two in particular are useful to characterize the sugar conformation: the so-called phase (Pha) and amplitude (Amp). While the amplitude describes the degree of ring puckering, the phase describes which atoms are most out of the mean ring plane. We calculated these parameters using the expressions of [75] (see the expressions sketched after this paragraph), with v_i the ring dihedral angle i. This approach has the advantage of processing the ring dihedrals v1 (C1′-C2′-C3′-C4′) to v5 (O4′-C1′-C2′-C3′) in an equivalent manner [76]. Conventionally, sugar ring puckers are divided into 10 families described by the atom which is most displaced from the mean ring plane (C1′, C2′, C3′, C4′, or O4′) and the direction of such displacement (endo for displacements on the side of the C5′ atom and exo for displacements on the other side). Using the Curves+ program [76], for each simulation trajectory and each nucleotide we computed the percentage of appearance of each family. In order to understand the interplay between the sugar conformation and the chemical reactivity, we grouped the sugar puckers into two large families. The sugar puckers C1′-exo, C2′-endo, C3′-exo, and C4′-endo belong to the B-like family, while C1′-endo, C2′-exo, C3′-endo, and C4′-exo belong to the A-like family.
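The expressions themselves were lost in extraction; the standard least-squares form from the pseudorotation literature, which treats the five ring dihedrals equivalently, reads (up to the sign/phase convention of [75]):

\[
\mathrm{Amp}\cos(\mathrm{Pha}) = \frac{2}{5}\sum_{i=1}^{5} v_i \cos\!\left[\frac{4\pi(i-1)}{5}\right],
\qquad
\mathrm{Amp}\sin(\mathrm{Pha}) = -\frac{2}{5}\sum_{i=1}^{5} v_i \sin\!\left[\frac{4\pi(i-1)}{5}\right],
\]

with Amp and Pha then recovered as the modulus and the argument (atan2) of this pair of sums.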
Finally, as in our previous work [14], we computed a set of distances, valence angles, and dihedral angles to capture the flexibility of the DiLCrz RNA structure. For each quantity, we computed the average value and its standard deviation.
Probing DiLCrz Structure in the Presence or Absence of Mg2+ Ions
DiLCrz RNA structure was first probed with different SHAPE reagents, namely NMIA, 1M7, and BzCN, in native-like conditions including 5 mM magnesium ions (Figure 1A). The reactivity maps obtained are discussed in the light of the X-ray structure previously reported [25]. We use the following nomenclature: Px are helices numbered as in Figure 1, Lx are the corresponding terminal loops, Jx/y are the nucleotides joining helix x to helix y following a 5′ to 3′ orientation, and 3WJ is the P2/P2.1/P10 junction. The exact nature of all the interactions evidenced in the 3D structure is summarized in Supplementary Table S1. Overall, the three reactivity patterns fit remarkably well with the three-dimensional structure. The crystal structure was accompanied by SAXS data that already showed the agreement of the conformation in solution with the crystal structure. All these remarks indicate that the dominant conformation of DiLCrz in solution corresponds to the reported three-dimensional structure. Indeed, most nucleotides predicted to be involved in cis Watson-Crick base pairing are not reactive. Exceptions to this trend correspond to the region U47-G49/A143 and to a few positions at the edge of helices (G12 (P2) or U128-A132 (P8)), which appear to react with NMIA, 1M7, or BzCN although they are located within a supposedly stable stem (Figure 1A). Very reactive nucleotides are located in loops, while moderately reactive positions are mostly involved in non-canonical or long-range interactions. However, the reciprocal is not true; for instance, not all single-stranded nucleotides are reactive (see for instance G74, G87, or A101), and the reactivity of many nucleotides involved in tertiary contacts is indistinguishable from the reactivity of the nucleotides involved in the secondary structure. Notably, most nucleotides involved in pseudoknots are not reactive, which is misleading for modeling algorithms that cannot predict this type of interaction at first intention. As previously described for NMIA and 1M6 with various RNAs [77,78], the reactivity profiles are very similar for the three probes, although some nucleotides appear to have different susceptibilities to the different reagents (see Figures 1A and S1). Yet, as discussed below, no systematic rule informative for modeling algorithms could be deduced from comparing the different profiles. Magnesium ions are known to be a key element for RNA tertiary structure stability and compaction [79][80][81][82]. In order to capture the magnesium-dependent folding of DiLCrz, the RNA was probed with the three SHAPE reagents in the absence of MgCl2. Here again, although not identical, the reactivity patterns obtained are very similar. Many of the nucleotides showing a reactivity change upon Mg2+ addition are identical for the three different probes (Figures 1B and S2). Most importantly, the vast majority of the nucleotides that are less reactive in the presence of Mg2+ (27 out of 32, i.e., 84%, for 1M7) are either single-stranded or involved in the pseudoknot, the kissing loop, or a non-canonical interaction according to the crystal structure (see Supplementary Table S1). Among the exceptions are nucleotides involved in the two base pair helix P6, which would be predicted to be very unstable on its own and is stabilized by the tertiary structure. A69 and G75, involved in helix closing base pairs, also appear more sensitive to the probe in the absence of magnesium ions, as does G134, which is also involved in a tertiary interaction.
Interestingly, a few positions appear more reactive in the presence of Mg2+, among which are nucleotides in L5 and L8 that have been observed to be the most mobile in the crystal structure. Forty-four nucleotides are only involved in tertiary contacts, i.e., participating in pseudoknot-type interactions or in non-canonical base pairs but not at the same time in a WC base pair (triple interaction). Among these, 19 are significantly less reactive in the presence of Mg2+ toward one probe or another (15 with 1M7). In contrast, the reactivity of only 7 (for the three probes) of the 102 nucleotides only involved in regular helices is sensitive to the presence of Mg2+. However, if probing in the absence of magnesium fairly clearly highlights nucleotides involved in the three-dimensional folding, there is one notable exception. Indeed, we noted that the reactivity of the 5′ strand of the P7 pseudoknot (G111-G115) remains essentially unaffected by the absence of magnesium, while the 3′ strand becomes very reactive. This suggests that the pseudoknot is destabilized in the absence of Mg2+ and that under such conditions the P7 5′ sequence adopts an alternative folding. Interestingly, unconstrained predictions using RNAfold [83] or RNAstructure [84] consistently yield models in which A112-G115 are base paired with C70-U74 (Figure S3).
Thus, albeit very informative, the comparison of SHAPE reactivity profiles in the presence and absence of magnesium proves, in this case, insufficient to pinpoint the nucleotides involved in kissing loops and pseudoknots.
Structure Characterisation by Thermal Denaturation
In order to evaluate the thermal stability of the various three-dimensional structural elements and highlight the involved nucleotides, we followed DiLCrz thermal unfolding. We first set up a melting experiment using the RiboGreen fluorescent dye as described [85]. RiboGreen fluoresces upon specific binding to single-stranded RNA. Upon temperature increase, the different structural elements unfold, more single-stranded RNA becomes available for RiboGreen binding, and the fluorescence signal consequently increases. However, in the meantime, RiboGreen binding to RNA is destabilized as the temperature increases. The fluorescence curve as a function of temperature therefore results from a non-specific constant fluorescence decay, inflected or even inverted by the local increase in RNA single-strand availability. As for UV thermal melting results, the curve inflections are determined by plotting the derivative of the relative fluorescence units (RFU) over temperature (−dRFU/dT). In such a plot, maxima represent the melting temperature transitions. Using this technology, we followed DiLCrz thermal denaturation at increasing MgCl2 concentrations ranging from 0 to 5 mM (see Figures 2A and S4). In the absence of Mg2+, fluorescence regularly decreases until 62 °C with only a slight inflection at 50 °C, followed by a significant increase peaking at 75 °C. The 50 °C inflection becomes more pronounced as [MgCl2] increases, shifts to 53 °C in the presence of 2.5 mM, and is finally individualized as a 57 °C peak at 5 mM MgCl2. In the meantime, the 75 °C peak gradually shifts to a higher temperature to reach 81 °C at 5 mM MgCl2, where a slight inflection at 70 °C can also be observed. As the 57 °C peak builds up with the presence of magnesium ions, we interpret it as the cooperative unfolding of the tertiary structure, while the large peak at the higher temperature would represent the cumulative melting of the multiple secondary structure elements. To confirm this hypothesis, we repeated the experiment using three variants designed to disrupt the tertiary structure: A90G, G111C, and A168G. Mutation A90G disrupts the triple interaction A90/G72-C94 that stabilizes P5-P4-P6 stacking, G111C destabilizes the P7 pseudoknot, and finally, A168 interacts with A110 and is at the heart of the catalytic center, since the branching reaction occurs between C167 and U169 [86]. As can be observed in Figure 2B, in the presence of magnesium the 57 °C peak is almost or even completely abolished for the three mutants, while the 80 °C peak is present for all three mutants and essentially similar to what is observed with the WT. In the absence of magnesium, the melting profiles of the WT and the three variant constructs are almost identical, showing a single major peak at 75 °C (Figure 2C). This strongly suggests that the 75-80 °C peak represents the secondary structure, stable in the absence of magnesium ions, while the 57 °C peak can be attributed to the cooperative melting of the tertiary structure, which is formed only in the presence of divalent ions. These results indicate that the thermal stabilities of DiLCrz secondary and tertiary structures fall in two distinct temperature ranges under our experimental conditions. This led us to measure the 1M7 SHAPE reactivity profiles along the thermal denaturation.
We thus established the reactivity profiles at five different temperatures, from the folded structure (37 °C) to the essentially denatured sequence (85 °C), going through the local minima 53 °C, 65 °C, and 74 °C (Figure S5). As expected for a progressive unfolding process, the average reactivity gradually increases while the dispersion of reactivity values decreases with temperature. This is consistent with all nucleotides becoming equally very reactive. Note that some nucleotides still appear poorly reactive at 85 °C. This is due to the reactivity calculation/normalization process and means that they are less reactive than the average, not that they are unreactive. In order to get a clearer view of the behavior of each nucleotide along the temperature curve, we clustered their reactivity profiles using a k-means algorithm (see Materials and Methods). Four clusters define the following categories. The first (grey in Figure 3A) gathers nucleotides that are very reactive at 37 °C and remain very reactive (although their absolute reactivity value goes down because of the reactivity normalization process [50]). The second cluster regroups nucleotides which become highly susceptible to 1M7 modification above 37 °C (red in Figure 3A), while the third cluster is constituted by nucleotides which become increasingly reactive with temperature starting at 53 °C (green in Figure 3A). Finally, the last category clusters positions that become increasingly reactive only above 65 °C (blue in Figure 3A). Annotating the DiLCrz structure scheme makes it visible that the four categories essentially coincide with the different types of structural elements (Figure 3A,B). Indeed, most nucleotides of the same structural element fall in one of the four categories, which can roughly be defined as single-stranded nucleotides (grey), nucleotides involved in tertiary contacts (red), secondary structure (green), and finally the highly stable helix P2.1 (blue). Note that although highly unstable, the U128-A132 base pair is attributed to the blue cluster; this is an artefact and a limitation of our clustering method. As general trends, the tertiary structure opens up cooperatively between 53 and 65 °C, while the secondary structure starts melting at 65 °C and some elements appear to be stable above 80 °C. Notable exceptions are nucleotides in P10 (U42, G171, G172), the 5′ part of P3 (G61), and the bottom of P15 (C142-A143), which cluster with the tertiary contact nucleotides (red in Figure 3A). Indeed, those nucleotides appear to be involved in peculiar or dynamic pairings (see below). As nucleotides clustered on the basis of a clear reactivity increase between 37 and 53 °C appear to be mostly involved in tertiary structure, we carried out a statistical analysis to identify the positions with a significantly increased reactivity over this temperature transition (see Figure 3C and Materials and Methods). Interestingly, this approach is more conservative than the clustering, highlighting most nucleotides involved in the tertiary structure (non-canonical base pairs and long-distance pairings involved in pseudoknots). Notably, nucleotides involved in both sides of the P7 pseudoknot and the P5-P2.1 kissing loop are emphasized by this strategy. In addition to information about the tertiary structure, SHAPE reveals that the different helices open up at different temperatures, but not always in the order predicted by thermodynamic calculations on isolated helices.
For example, P15 (ΔG° = −8.1 kcal/mol), which is predicted to be the second most stable secondary structure element after P2.1 (ΔG° = −10.7 kcal/mol), is essentially accessible at 65 °C, while nucleotides within P2.1 are not reactive before 85 °C. Even more surprising, P9 (ΔG° = −2.8 kcal/mol) or even the two base pair helix P6, which is predicted to be hardly stable when isolated, are not highly reactive below 85 °C. This is a clear example that thermodynamic calculations on isolated duplexes poorly describe helices that are part of a larger structure.
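For reference, isolated-helix stabilities of this kind can be computed with the ViennaRNA Python bindings; a minimal sketch with placeholder strands (not the actual DiLCrz sequences):

import RNA

# Hybridization of two short strands treated as an isolated duplex; the
# returned energy is the predicted Delta-G in kcal/mol for the duplex alone.
duplex = RNA.duplexfold("GGCAGGCA", "UGCCUGCC")
print(duplex.structure, duplex.energy)

# As noted above, such isolated values can rank helices very differently
# from their effective stability inside the full tertiary fold.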
Molecular Dynamics Analysis
The vast majority of the high SHAPE reactivities observed in the study described above reveal single-stranded positions, while nucleotides involved in the secondary structure are essentially unreactive. However, we were puzzled by the reactivity of some nucleotides involved in stable Watson-Crick pairings, which also appear to be differentially susceptible to the different SHAPE probes, or to be unexpectedly reactive at 53 °C or in the absence of divalent cations. In order to get insight into these unexpected SHAPE reactivities and better interpret the probing results, we resorted to molecular dynamics (MD). We combined the different quantities computed along the MD simulations to shed light on some particular regions of the DiLCrz structure. First, we focused on the nucleotides that show a significant difference in chemical reactivity for each pair of probes. The nucleotides U83 (the 3′ residue of L5) and A161 (the 3′ residue of L9) are both involved in non-canonical hydrogen bonds, and at some points of the MD simulation they can form a triplet (26% of the time) and a quadruplet (97% of the time), respectively. Figure 4A-C shows some distinct conformations for the pairing of the nucleotides U83 and A161. The ribose of U83 not only adopts several conformations, in particular C2′-endo and C3′-endo, but also some pseudorotation intermediates, which is a sign of medium to high reactivity [14]. Moreover, the global flexibility of the nucleotide can explain the high reactivity toward the three probes. On the contrary, in the case of A161, the ribose puckering changes from the stable C3′-endo conformation to an intermediate one (C2′-exo), and the nucleotide is less flexible than the previous one along the simulation.
Second, we analyzed the nucleotide G49 (P15), which has a low reactivity toward NMIA and 1M7 and a high reactivity toward BzCN. This result suggests that this nucleotide assumes a transient state before reaching a final and stable conformation. This hypothesis is supported by the observation that, along the MD simulation, on the one hand the puckering of G49 can briefly adopt a C2′-endo conformation, and on the other hand the local flexibility suggests the presence of two possible local conformations. A similar trend has been observed for the nucleotide G182, which also has a low reactivity with NMIA and 1M7 and a medium reactivity with BzCN.
Third, we focused on P10 and the nucleotides upstream and downstream of this helix (see Figure 4D). The change in reactivity for the region A41-G44 may reflect the ability of these nucleotides to form triplets along the MD simulation. Most of this region is not very dynamic; however, A41 can form several non-canonical hydrogen bonds while its ribose adopts a C2′-endo conformation, and the ribose of U42 often adopts intermediate conformations while its nucleobase stacks with A146 (a downward stacking between U42 and A146, which also forms an outward stacking with A175).
Finally, we analyzed the P15 region, G46-A53/U137-U144 (see Figure 4E,F). Both strands of the helix are relatively flexible; however, in the region G46-A53 the ribose moieties are on average more flexible than in a standard helix and often adopt the C2′-endo conformation. On the contrary, within the U137-U144 strand, only the ribose of U144 is in the C2′-endo conformation. Moreover, on the 5′ strand, the nucleotides G46 and G49 stack with the non-adjacent nucleotides C170 and A142, respectively. G46 also forms a non-canonical hydrogen bond with C118 (O2′/B sides) along the MD simulation. These observations could explain why the reactivity of this region increases up to 53 °C, although it is seen as a secondary structure pairing.
Using Probing Data for Secondary Structure Modeling
The probing results obtained with the different SHAPE probes at 37 °C, in the presence or absence of Mg2+ ions, were used as constraints for secondary structure prediction software. Most of the secondary structure was correctly predicted by RNAstructure and RNAfold, neither of which predicts pseudoknots. However, in most of the proposed models, the nucleotides of the P7 pseudoknot are incorrectly paired (see Figure S2). In many of the models, A112-G115 are paired to C70-U73, compromising not only P7 but also the prediction of P4 and P6. Such misprediction can be expected, as the P7 pseudoknot nucleotides are unreactive to the SHAPE reagents in native conditions, and only the 3′ part of the pseudoknot becomes reactive in the absence of Mg2+. In order to better inform the prediction software about the tertiary structure and pseudoknots, the reactivity profile obtained at 53 °C was integrated in two different ways. In a first approach, the 1M7 reactivity data collected at 53 °C were directly used as constraints. In a second approach, the nucleotides with significantly augmented reactivity upon raising the temperature from 37 to 53 °C were assigned a reactivity value of 10, while the reactivity of the other nucleotides was kept at the value obtained at 37 °C with 1M7, NMIA, or BzCN. This strategy prevents the formation of secondary structure pairings with the nucleotides involved in pseudoknots but does not improve the overall prediction. For instance, in some of the resulting models, P3 is destabilized to the benefit of spurious pairings.
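A minimal sketch of the second integration approach is given below; the file name, the dictionary-based input, and the two-column output format are assumptions chosen to mimic the usual SHAPE constraint files, not the exact scripts used here.

```python
# Hypothetical sketch: nucleotides whose reactivity increases significantly
# between 37 and 53 °C are forced to look single-stranded (value 10), all
# other positions keep their 37 °C reactivity. Output follows the common
# two-column (position, reactivity) SHAPE file convention.
def write_boosted_constraints(react_37, increased_positions, path):
    """react_37: dict position -> 1M7/NMIA/BzCN reactivity at 37 °C (1-based)."""
    with open(path, "w") as fh:
        for pos in sorted(react_37):
            value = 10.0 if pos in increased_positions else react_37[pos]
            fh.write(f"{pos}\t{value:.3f}\n")

# Toy usage with made-up values; real input would come from the probing pipeline.
write_boosted_constraints({1: 0.12, 2: 0.80, 3: 0.05}, {3}, "dilcrz_boosted.shape")
```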
We then evaluated the performance of our recently developed workflow IPANEMAP, which extends RNAfold-based probing-informed modeling with sampling and clustering steps that allow multiple sets of probing data to be taken into account [87,88]. When using 1M7 reactivities as constraints, IPANEMAP yields two almost equiprobable models. The first is the model described above, with scrambled P7 and P4, as obtained with RNAstructure or RNAfold. In the second model, most pairings are correctly predicted, including P4, and three out of the five nucleotides involved in the P7 pseudoknot (G111-C113) are left unpaired in the loop of an alternative P6 hairpin (Figure S7A). Running IPANEMAP with multiple sets of constraints representing any combination of the data obtained with the other reagents, and/or in any of the other conditions described, does not improve the models. Interestingly, when informed with the reactivity data obtained using NMIA in the presence or absence of Mg2+, IPANEMAP yields one model which captures most of the secondary structure, except for the two-base-pair helix P6 (Figure S7B).
In another approach, we sought to predict the P7 pseudoknot pairings from the experimental results. To this end, we generated artificial constraint files to indicate to the prediction software which nucleotides are potentially involved in pseudoknots. The reactivity of positions for which the SHAPE values vary significantly between two conditions was set to −0.2, while the value for the other nucleotides was set to 5. When using such artificial constraints derived from the reactivity with or without magnesium, no base pair was predicted, regardless of the prediction software used. In contrast, when following this strategy with the probing results obtained with 1M7 at 37 and 53 °C, the only pairings predicted form P7. We conclude that the temperature-differential SHAPE strategy allows for the prediction of the DiLCrz pseudoknot.
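The artificial-constraint strategy can be sketched as follows; the flagged positions in the usage example are hypothetical stand-ins for the differential positions identified experimentally.

```python
# Sketch of the artificial-constraint strategy: positions whose reactivity
# differs significantly between two conditions (37 vs. 53 °C, or +/- Mg2+) are
# marked as pairing-prone (-0.2), everything else as unpaired-prone (5), so
# that the folding engine can only pair the differential positions together.
def write_artificial_constraints(n_nt, differential_positions, path):
    with open(path, "w") as fh:
        for pos in range(1, n_nt + 1):
            value = -0.2 if pos in differential_positions else 5.0
            fh.write(f"{pos}\t{value}\n")

# Example with hypothetical pseudoknot positions flagged by the 37/53 °C comparison.
write_artificial_constraints(188, {74, 75, 76, 111, 112, 113}, "dilcrz_pk.shape")
```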
Discussion
Evaluation of the reactivity profiles against the X-ray crystal structure confirms that SHAPE probing very accurately reflects RNA folding. Highly reactive nucleotides are single-stranded, while positions not susceptible to modification are involved in canonical pairings. Finally, nucleotides showing a mild reactivity are most of the time engaged in dynamic tertiary pairings. Molecular dynamics simulations further show that the paradoxical reactivity observed for nucleotides within stable helices most probably reflects a fast equilibrium of these nucleotides between two conformations. We then followed two strategies to pinpoint nucleotides involved in tertiary structure. The differential Mg2+ SHAPE analysis selectively reveals many of the nucleotides involved in non-canonical interactions. As expected, nucleotides involved in pseudoknots are also more reactive in the absence of Mg2+, although they form WC cis base pairs. However, while the 3′ strand of P7 is well highlighted in the absence of Mg2+, the 5′ strand remains poorly reactive toward all probes. As suggested by some of the models obtained, this may reflect the involvement of this strand in an alternative structure in the absence of Mg2+, since this region mostly relies on tertiary interactions. The temperature-dependent SHAPE probing appears promising to spot residues involved in the long-range pairings that form pseudoknots.
Several studies have reported the effect of magnesium and/or temperature on RNA chemical reactivity [89][90][91][92][93][94][95]. We observed an asymmetric change in P7 reactivity when magnesium was omitted. This observation is reminiscent of what has been observed with the env25 pistol pseudoknot, in which the 5′ strand reactivity is rather insensitive to the presence of magnesium while the 3′ strand reactivity increases in the absence of magnesium [95]. It also resembles the temperature-dependent probing of tRNA-Asp, where one strand of the D-stem becomes reactive at a higher temperature than the other [92]. The authors mentioned that purines may maintain stacking interactions in the absence of base-pairing better than pyrimidines, which may be due to their bicyclic chemical structure. In line with this hypothesis, the purine-rich strand is the most stable in each of the three examples of asymmetric change in reactivity discussed here (DiLCrz P7, the pistol pseudoknot, and the tRNA-Asp D-stem). Although the P7 pseudoknot reactivity is asymmetrically affected by magnesium, it is symmetrically affected by the increase of temperature. In fact, probing upon temperature ramping and probing in the presence and absence of magnesium reflect quite distinct processes. While in the magnesium-dependency experiment the RNA is folded either in the presence or in the absence of magnesium, in the temperature-dependent experiment it is first folded at 37 °C and then progressively denatured. Thus, in the first case we observe two independent conformations, while in the second case the successive conformations depend on the initial conformation.
DiLCrz Hierarchical Thermal Unfolding
The four different temperature clusters suggest an unfolding model for DiLCrz. Below 37 °C, the reactivity of the P8 and P5 loops (L8 and L5) indicates intrinsic molecular motions that are also observed in the crystal structure. While there is no electron density for residues of L8 in the X-ray DiLCrz crystal structure, L5 is observed, since the kissing interaction with L2.1 somehow maintains the L5 structure, in spite of greater-than-average temperature factors (B factors), which express the uncertainty of the atomic coordinates. The highly reactive residues from J6/7 (C109-A110) also have higher temperature factors. The same observation applies to residues from J15/7 (G145, A146), which interact with the three-way junction (P2/P2.1/P10) and are also characterized by higher B factors. At the same time, L9 is also reactive, indicating a strong cooperativity between the tertiary interactions L9-P2, L2.1-L5, and the J15/7-3wj interaction. The 3wj acts as a receptor for A146; this interaction is instrumental in stabilizing the trans WC pair G46-U144. Heating beyond 37 °C provokes melting of this G-U pair and of other residues from cluster 2 (red). Opening of the P15 G-U pair is accompanied by melting of P10 and P7. Simultaneously, the tertiary interactions L9-P2 and L2.1-L5 are lost. The A-minor motif from J5/4 is also lost, breaking J3/4, which can no longer stabilize P6. Then, when 65 °C is reached, residues from cluster 3 (green) start to melt. It is worth noting that, at this stage, the secondary structure elements that are intimately linked to the tertiary structure, like P10 and P4-P6, have already been partially lost. The fourth cluster regroups residues that remain folded in spite of a high temperature (blue). Surprisingly, the most stable element is P2.1. Since P2.1 is on the 5′ side of the ribozyme, it is synthesized early during transcription and is presumably the first element of the ribozyme to be folded. Moreover, this element, together with P2 and P10, constitutes a three-way junction, which binds A146 in J15/7, a critical region for catalysis, thus potentially regulating the activity of the ribozyme [86]. The present data thus suggest that P2.1 may be a folding nucleation element toward the active structure of the ribozyme.
The present scheme is supported by previous studies of tRNA-Asp thermal unfolding using NMIA [92]. The authors identified five melting transitions that can be clustered into three main unfolding events. The initial event is described by the first two transitions (51 and 53 °C, respectively): first the melting of the tertiary interactions mediated by the D- and T-loops, followed by D-stem strand separation (53 °C). The second event, at 58 °C, mainly consists in the melting of the acceptor and anticodon regions, which reorganize by forming alternative pairings between the 3′ residues from the acceptor stem, the 5′ strand from the anticodon stem, and the 3′ end of the anticodon loop. The formation of this alternative secondary structure presumably compensates for the loss of tertiary structure. The third event occurs when the alternative stems melt individually as the temperature rises. The T-stem loop remains remarkably stable along the whole process and may be compared to P2.1 of DiLCrz. Although we cannot exclude that alternative secondary structures could form during the thermal unfolding of DiLCrz, the present SHAPE data do not show differential reactivities between cognate helical strands at temperatures <65 °C. The DiLCrz unfolding scheme thus appears to be a more hierarchical process than that of tRNA-Asp.
Molecular Dynamics Shed Light on Unexpected Reactivity
As previously reported, the different SHAPE probes show very similar but not identical reactivity patterns [77,96]. The analysis of the structural dynamics of DiLCrz shows that when significant differences are observed between the probes, the reactivities are medium to high and the nucleotides are most of the time involved in multiplets. For example, the nucleotide A161 is mostly involved in a quadruplet, and there are significant differences between each pair of probes. Another example is given by the nucleotide C171, which is involved in either a triplet or a multiplet with four other nucleotides.
In the case of G182, where a low reactivity toward both NMIA and 1M7 was observed while a high/medium reactivity was observed with BzCN, the nucleotide adopts a transient conformation before reaching a final and stable state (see Video S1). Indeed, along the MD simulation, the puckering of the nucleotide briefly adopts a C2′-endo conformation, changing from C3′-endo through intermediate states close to either the C2′-endo or the C3′-endo conformation. In addition, the local structural flexibility suggests the presence of two possible conformations.
Finally, our study confirms that the flexibility of the sugar is a key element for chemical reactivity, but that further investigations are still required to better characterize SHAPE reagent reactivity and clarify what appears to be a complex scenario. For instance, the high to medium reactivity observed with both NMIA and BzCN within P15 was quite puzzling and could lead one to question a model displaying P15. Molecular dynamics suggests that, albeit stable, this helix is flexible, showing transient alternative conformations and unusual stacking.
Informing Prediction Software to Integrate Pseudoknots and Other Tertiary Motifs
Although pseudoknots are present in many RNAs, predicting them is still a challenge in terms of modeling and computation. Thus, as exemplified in this study, failing to take them into account may lead to incorrect secondary structure models. While more general and exact methods seem precluded, predicting pseudoknots in two rounds has been applied successfully, most prominently in ShapeKnots [43]: a first round predicts several models of the secondary structure, and a second round searches for complementarity between the remaining single-stranded nucleotides; identified pseudoknots are rewarded with a thermodynamic bonus. Using the 1M7 reactivity pattern obtained in this study, ShapeKnots delivers ten possible models, among which the second most stable includes the P7 pseudoknot and is otherwise correct except for the absence of P6 and of the kissing loop between P5 and P2.1. Despite this unquestionable success, in the absence of additional data it is impossible for the experimenter to decide which of the ten models is the most likely. Data such as those presented here (Mg2+ or temperature differential SHAPE) undoubtedly indicate which model to choose. As shown above, the data obtained with the temperature differential SHAPE allow the unambiguous identification of the P7 pseudoknot, and when constrained with the 1M7 or the NMIA reactivity map, IPANEMAP yields models in which, respectively, part or all of the P7 sequences are left available. The combination of both steps would lead to a model identical to the one obtained with ShapeKnots. However, such a post-modeling evaluation for ShapeKnots, or semi-automated modeling for IPANEMAP, is feasible for a short RNA such as DiLCrz but does not appear realistic in the context of longer RNAs. In the near future we will extend the IPANEMAP workflow to infer tertiary interactions based on multi-condition probing data. The system will be informed of nucleotides with a high probability of forming tertiary structure. The software will at first exclude them from secondary structure modeling, which will allow it to identify pseudoknot helices and even recurrent tertiary structure motifs in a second modeling phase. For instance, comparing the profiles of the three probes on DiLCrz highlights A161, a nucleotide that is part of a GAAA tetraloop interacting with a receptor. Such motifs are frequent within RNA structures, and the combination of multiple SHAPE probings with the signature sequence (GNRA) could help to identify such structures ab initio. One of the challenges in modeling the DiLCrz structure is the prediction of the L2.1/L5 kissing loop. Both the temperature- and Mg2+-differential probing strategies detect most of the nucleotides involved in the kissing loop (4 and 5 out of 6 nt, respectively). However, the informed second round of modeling does not predict interactions as short as 2 bp, in order to avoid the prediction of spurious pairings. In contrast, predicting long-range or cross interactions of three base pairs should be possible, but it would require the prior detection of all the nucleotides involved. Comparing SHAPE data obtained in different conditions or with different probes reveals not only the secondary structure but also some elements of the tertiary contacts. In this context, it is crucial to further understand the mechanisms behind SHAPE reactivities and the interplay between SHAPE probes, their reactivities, and RNA structures, using methods at different length scales and by comparing diverse experimental data on an RNA benchmark dataset.
This will allow the derivation of more general rules to inform 2D and 3D modeling software. In particular, the integration of this new information into an innovative theoretical framework for molecular dynamics simulations will open the route to biasing three-dimensional RNA structures and to better predicting their structural motifs.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ncrna7040071/s1, Figure S1: Comparison of the reactivity obtained with the three probes, Figure S2: Reactivity profiles in the absence of Mg2+, Figure S3: Models obtained with RNAfold and RNAstructure using 1M7 reactivities as soft constraints, Figure S4: Melting profiles of the wild-type DiLCrz with error bars, Figure S5: 1M7 probing profiles at different temperatures, Figure S6: Clustering of nucleotides according to 1M7 probing profiles at different temperatures, Figure S7: Secondary structure models obtained with IPANEMAP, Video S1: Trajectory of the region around the nucleotide G182, Table S1: this file summarizes the pairing status of each nucleotide and some differential probing results (exposition (red) means the difference is positive; protection (green) means the difference is negative). We report the type of base pairing computed in the X-ray structure (columns B to E) and along the MD simulation (columns F to J) (presence >5%) using MC-Annotate [69][70][71]. The HBs in italics are non-canonical. c: cis. t: trans. W: Watson-Crick side. H: Hoogsteen side. B: B side (the position is between the W and H sides). S: sugar side. We also detail whether the HBs are formed between the O2′ and an atom on the Watson-Crick (W), Hoogsteen (H), or B side, or between OP and an atom on these sides [68]. PK: pseudoknot. KL: kissing loop. | 11,909.8 | 2021-11-05T00:00:00.000 | [
"Computer Science",
"Biology",
"Chemistry"
] |
The Jet-Disk Coupling of Seyfert Galaxies from a Complete Hard X-ray Sample
We analyze the jet-disk coupling for different subsamples from a complete hard X-ray Seyfert sample to study the coupling indices and their relation to the accretion rate. The results are: (1) the power-law coupling index ranges from nearly unity (a linear correlation) for radio-loud Seyferts to significantly less than unity for radio-quiet ones; this declining trend of the coupling index also holds from larger sources to compact ones; (2) the Seyferts with intermediate to high accretion rates (Eddington ratio $\lambda\sim$ 0.001 to 0.3) show a linear jet-disk coupling, but the coupling becomes shallower from near to super Eddington rates ($\lambda\sim$ 0.3 to 10), and the former are more radio loud than the latter; (3) the Seyfert 1s show a slightly steeper jet-disk correlation than the Seyfert 2s. In the linear coupling regime, the ratio of the jet efficiency to the radiative efficiency ($\eta/\varepsilon$) is nearly invariant, but in the low accretion or super accretion regimes, $\eta/\varepsilon$ varies with $\lambda$ in our model. We note that a radio-active cycle of accretion-dominated active galactic nuclei would be: from a weaker jet-disk coupling at $\lambda<0.001$ for low-luminosity Seyferts, to a linear coupling at $0.001<\lambda<0.3$ for radio-loud luminous Seyferts and powerful radio galaxies/quasars, and back to a weaker coupling at $0.3<\lambda<10$.
Introduction
Active galactic nuclei (AGNs) are the most powerful sources in the Universe and are believed to be powered by accretion onto their central massive black holes. At the same time, there is significant feedback from the AGN to its host galaxy; accretion and feedback together shape the co-evolution of the AGN and its host. In addition to AGN winds and slow outflows into the host galaxy, the radio jet (a collimated outflow) is a prominent channel of feedback from the AGN to the host galaxy and the intergalactic medium, although large AGN sample statistics indicate that the jetted phase covers only about 10% of the whole AGN life [1]. Thus, a big question is why an AGN is radio silent (non-jetted) during most of its lifetime, while it is active in the optical, and probably in X-rays, throughout. Most AGNs are identified in the optical band, from local Seyfert galaxies to distant quasars.
Owing to the complexity of the AGN structure, their electromagnetic emission is believed to be multi-component. The optical continuum (including the UV and part of the IR) and the hard X-rays are thought to arise mainly from the accretion disk and its corona. The bolometric disk luminosity is often scaled proportionally to the optical or hard X-ray emission from the disk, which is regulated by the mass accretion rate (or Eddington ratio) [2,3]. It is possible that an AGN evolves from low to high accretion rates, and then back from high to low rates, over a lifetime (or episodic) cycle. Since a jetted phase is present for only a fraction (∼10%) of an AGN life cycle, it is not clear with which part of the accretion cycle (i.e., with which accretion rates) the jetted phase is most correlated.
We know that the accretion disk luminosity is proportional to the mass accretion rate in the form L_disk = εṀc², with ε the disk radiative efficiency and c the speed of light. A jet power can be written as

P_j = ηṀc² = (η/ε) L_disk = (η/ε) λ L_Edd, (1)

where η is the jet efficiency, which depends on the jet production mechanism, λ = L_disk/L_Edd is the Eddington ratio, and L_Edd ≃ 1.26 × 10^38 M erg s⁻¹ is the Eddington luminosity, with M the black hole (BH) mass in solar mass units.
We can see that the jet-disk coupling depends on the ratio η/ε in Equation (1), which may not be constant for different types of sources at different accretion rates [4]. We absorb this unknown factor η/ε of Equation (1) into a power-law index µ, in the form

P_j = c1 (η/ε) L_disk = c2 (L_disk)^(1+q) = c3 (L_disk)^µ, (2)

assuming that η/ε is itself a function of the disk luminosity, η/ε = const × (L_disk)^q, so that µ = 1 + q, where c1, c2, and c3 are constants. With this model, we can fit the relation between jet power and disk luminosity with µ and derive q, in order to study the jet-disk coupling for the different subsamples. These subsamples should come from a well-defined complete sample. In the literature, there are discussions of the impact of the magnetic flux [5] and the black hole spin [6,7] on the jet-disk correlation, which may introduce some scatter in the correlation.
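In practice, µ and q can be estimated by a least-squares fit in log-log space. The sketch below uses synthetic luminosities to illustrate the procedure; it is not the fitting code used for the actual sample.

```python
# Minimal sketch of the fit behind Equation (2): ordinary least squares in
# log-log space gives the coupling index mu (and q = mu - 1).
# The luminosities below are synthetic placeholders, not the Panessa et al. data.
import numpy as np

rng = np.random.default_rng(1)
log_Ldisk = rng.uniform(42.0, 46.0, 40)                       # log10 L_disk (erg/s)
log_Pjet = 0.9 * log_Ldisk + 2.0 + rng.normal(0.0, 0.3, 40)   # fake P_j with mu ~ 0.9

mu, log_c3 = np.polyfit(log_Ldisk, log_Pjet, 1)               # slope = mu, intercept = log c3
q = mu - 1.0
print(f"mu = {mu:.2f}, q = {q:.2f}")
```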
In this paper, we reanalyze the complete hard X-ray Seyfert sample of Panessa et al. [8] in more detail, in order to investigate the jet-disk coupling for different subsamples. In the Panessa sample, the 2-10 keV X-ray flux is from Malizia et al. [9], and the radio data mainly come from the NRAO (National Radio Astronomy Observatory) VLA (Very Large Array) Sky Survey (NVSS; Condon et al. [10]) at 1.4 GHz, with additional complements from the Sydney University Molonglo Sky Survey (SUMSS; Bock et al. [11]) at 843 MHz; see Section 3 for more details. To avoid strong Doppler boosting effects (which would drive the measured luminosities far from their intrinsic values), sources like blazars are not included in this sample.
Some Previous Statistics and Models
There are several statistical findings that the radio power correlates almost linearly with the emission-line luminosity or accretion disk luminosity in powerful radio galaxies and quasars [12][13][14][15]. These results are often explained by the accretion-jet model [14,16]. On the other hand, statistical results for low-luminosity AGNs (LLAGNs) show shallower jet-disk couplings [17][18][19], with power-law indices of 0.4-0.7, as summarized in [20]. This relatively weak jet-disk coupling in the LLAGNs may be attributed either to inefficient accretion [21] or to the BH spin-jet model [20,[22][23][24].
Panessa et al. [8] presented a complete hard X-ray sample of relatively luminous Seyfert galaxies and found significant correlations, with slopes consistent with those expected for radiatively efficient accreting systems. This complete sample consists of several subsamples, which were not analyzed individually in [8].
The Subsample Analysis on Jet-Disk Coupling
The Panessa et al. sample [8] is a well-defined, complete hard X-ray sample of relatively high luminosity AGN at z < 0.36, including Seyfert 1s and Seyfert 2s, with X-ray (2-10 keV and 20-100 keV) and radio data from the NVSS at 1.4 GHz (for some sources, the 843 MHz flux density has been converted into a 1.4 GHz flux density assuming a spectral index of α = −0.7, with S_ν ∝ ν^α). The radio morphologies are classified as resolved (R), slightly resolved (S), and unresolved (U) (with a size smaller than one-half the restoring beam size at full width at half maximum, FWHM = 45 arcsec; e.g., for a source at a redshift of ∼0.1, typical in our study, the unresolved size is ≤ 23 kpc), and the rms noise in the images is about 0.5-1 mJy/beam. The black hole mass is available for most of the sources. Therefore, we can estimate the radio jet power, the bolometric disk luminosity (20L_X(2-10 keV), [8]), the Eddington ratio, and the X-ray radio loudness (R_X = L_R/L_X(2-10 keV), with Log R_X > −4.5 for radio loud and Log R_X < −4.5 for radio quiet, as defined in [30]). The mechanical radio jet power P_j (rather than the radio luminosity L_1.4 used by [8]) is adopted to account for the work done on the environment by the jet, expressed as P_j ∼ 2.05 × 10^7 (L_1.4)^(6/7) erg s⁻¹ [15,20], where the integrated flux density of the source is adopted. The error on the jet power is very small (not shown) in the log-log plot, considering an error on the radio flux density of < 10% for the VLA data. Here, we assume that the jet power model [15], which was derived mainly from massive BHs in FRI and FRII sources, can be applied to the relatively lower BH masses of the Seyfert galaxies in our sample.
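The following sketch collects the quantities defined above for a single source, assuming that the 1.4 GHz and 2-10 keV luminosities (in erg s⁻¹) have already been computed from the observed fluxes; the adopted numbers are illustrative.

```python
# Sketch of the per-source quantities: P_j ~ 2.05e7 * L_1.4^(6/7), and the
# log R_X = -4.5 radio-loud/quiet boundary quoted above.
import math

def jet_power(L_14):
    """Mechanical jet power (erg/s) from the 1.4 GHz radio luminosity (erg/s assumed)."""
    return 2.05e7 * L_14 ** (6.0 / 7.0)

def classify(L_14, L_X):
    log_RX = math.log10(L_14 / L_X)
    return ("radio loud" if log_RX > -4.5 else "radio quiet"), log_RX

L_14, L_X = 1.0e39, 1.0e43                 # illustrative luminosities
label, log_RX = classify(L_14, L_X)
print(f"P_j = {jet_power(L_14):.2e} erg/s, log R_X = {log_RX:.2f} ({label})")
```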
The fitted slopes are 1.13 ± 0.17, 0.84 ± 0.18, and 0.65 ± 0.15 for the resolved, slightly resolved, and unresolved sources, respectively, in Figure 2. The slopes for radio-loud and radio-quiet Seyferts are shown in Figure 2(c), with a nearly linear correlation (1.10 ± 0.15) for the radio-loud sources and a shallower correlation (0.63 ± 0.08) for the radio-quiet ones.
The slope for the subsample at near to super Eddington accretion rates (Eddington ratio 0.3 < λ < 10) and that at moderate to high accretion rates (0.001 < λ < 0.3) is 0.68 ± 0.18 and 0.96 ± 0.21, respectively, in Figure 2(d). It is surprising that the jet-disk coupling index is lower at the very high accretion rates.
The radio loudness versus the disk luminosity shows a bimodal-like distribution: the radio-loud sources (RLs) show a slightly positive correlation and the radio-quiet ones (RQs) a slightly negative correlation in Figure 3(a). These positive and negative correlations (with power-law index ρ) can be explained by the steeper index µ = 1.1 of the jet-disk coupling in the RLs and the shallower µ = 0.63 in the RQs (Figure 2(c)). For P_j ∝ (L_1.4)^(6/7) and P_j ∝ (L_disk)^µ = (20L_X)^µ, the radio loudness scales as

R_X = L_1.4/L_X ∝ (L_X)^ρ, where ρ = (7/6)µ − 1.

Figure 3. (a) Log(radio loudness) vs. Log(disk luminosity in erg s⁻¹), with the best linear fits y = (0.27 ± 0.15)x − 15.89 and y = (−0.13 ± 0.08)x + 0.61 for radio-loud (red) and radio-quiet sources (green), respectively. (b) Log(radio loudness) vs. Log(Eddington ratio) for radio-loud (red) and radio-quiet sources (green).
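A quick numerical check of this relation with the fitted coupling indices reads:

```python
# Consistency check of rho = (7/6) * mu - 1 with the indices from Figure 2(c).
for label, mu in [("radio loud", 1.10), ("radio quiet", 0.63)]:
    rho = (7.0 / 6.0) * mu - 1.0
    print(f"{label}: mu = {mu:.2f} -> predicted rho = {rho:+.2f}")
# -> +0.28 for the RLs (fitted +0.27 +/- 0.15) and -0.27 for the RQs
#    (fitted -0.13 +/- 0.08, consistent in sign and within the uncertainties).
```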
For the whole sample, the radio loudness shows a declining trend from an Eddington ratio of 0.001 to 0.3 (i.e., −3 to −0.5 in log scale) and no significant correlation with the accretion rate at 0.3 < λ < 10 (Figure 3(b)). Implications of Figure 3(b) are discussed in Section 4.
The slopes of the jet-disk coupling and the correlation coefficients (with probabilities) for the subsamples are summarized in Table 1. We discuss the results further in the following.
Discussion
We use 20L_X(2-10 keV) as the approximate bolometric disk luminosity. The higher-energy luminosity in the 20-100 keV band (L_HX in short) is also available in the Panessa et al. sample [8]; they plotted the radio luminosity L_1.4 versus either L_X(2-10 keV) or L_HX(20-100 keV), showing that the radio-X-ray correlation slope is steeper in the L_HX band than in the L_X band. This is caused by the non-linear correlation L_HX ∝ (L_X)^(0.84±0.04), as we fitted. The L_X(2-10 keV) is widely used to estimate the bolometric disk luminosity [e.g., 8,17,31], and the high-energy loss (or break) in the 20-100 keV band prevents L_HX from being a good estimate of the bolometric disk luminosity.
The hard X-ray selected sample is almost unbiased with respect to absorption, compared with the optical estimate of the bolometric disk luminosity, and the Seyferts are mostly located in the nearby Universe, reducing as much as possible the selection (e.g., distance) effects [9]. The Seyfert 1s, 2s, and the R, S, U sources are equally distributed in redshift [8]; K-corrections are ignored in our analysis since the redshifts of our Seyferts are relatively small (≤ 0.36), and the correlation results in Table 1 should not be caused by redshift or distance effects [32].
We obtained various jet-disk coupling indices (from 0.47 to 1.13) for the different subsamples of the complete hard X-ray Seyfert sample. The linear jet-disk coupling (µ ∼ 1), e.g., in the radio-loud Seyferts, leads to q = µ − 1 = 0, i.e., a constant ratio η/ε ∝ (L_disk)^(q=0) = constant. This implies that both the jet efficiency η and the disk radiative efficiency ε are invariant, or that they vary proportionally, in the radio-loud Seyferts. The linear correlation can be explained in Equation (2) with an invariant ratio η/ε; this case is often referred to as an accretion-dominated jet, as also shown in FRII quasars [14], and is thought to be regulated by a radiatively efficient accretion flow or a disk-corona model [25,28,29].
The Seyfert 1s are usually more face-on to us than the Seyfert 2s; however, the beaming effect should not cause a steeper index of the jet-disk coupling, although it can cause scatter in the coupling, as analyzed in [33]. There are slightly larger median values of P_j and 20L_X in the Seyfert 1s than in the Seyfert 2s (see column 3 in Table 1), probably implying a potential contribution to L_X from the jet base in Seyfert 1s. This effect should not be significant, as noted in [32].
The index of the jet-disk coupling, e.g., in the radio-quiet sources (though not radio silent), is shallower, with µ < 1 and q = µ − 1 < 0, i.e., η/ε ∝ (L_disk)^(q<0) ∝ (λM)^(q<0). This implies that the ratio of the jet efficiency η to the disk radiative efficiency ε varies with the disk luminosity (or with the accretion rate and BH mass) for the radio-quiet sources.
There is a declining trend of radio loudness versus Eddington ratio λ from 0.001 to 0.3, and no significant correlation with λ from 0.3 to 10, in Figure 3(b) for the Seyfert galaxies. The declining part, in the intermediate to high accretion regime, shows a linear jet-disk coupling in Figure 2(d), whereas the near to super Eddington regime shows a shallower index (µ = 0.68 ± 0.18) in Figure 2(d), with a low significance of correlation in Table 1. The result implies that very high accretion (near to super Eddington) may quench the jet, as noted in [34]. Similar declining trends between radio loudness and accretion rate have also been found in [2,5,35].
Furthermore, we used 20L_X(2-10 keV) as an approximation of the bolometric disk luminosity. We note that in Vasudevan and Fabian [36] the bolometric disk luminosity is about 20L_X(2-10 keV) at Eddington ratios less than 0.2, but increases sharply to ∼60L_X(2-10 keV) at λ > 0.2. We have tested the jet-disk correlations by using these two bolometric corrections of the disk luminosity in the two accretion regimes, respectively; the correlation results in Table 1 are almost unchanged. A future study should model the dependence of the bolometric correction on the accretion rate with a well-fitted function.
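The test described above can be sketched as a step-function bolometric correction; the λ = 0.2 break and the factors 20 and 60 follow the quoted values, while the function itself is only an illustrative stand-in for a future well-fitted dependence.

```python
# Step-function bolometric correction following the two regimes quoted from
# Vasudevan & Fabian: L_disk ~ 20 L_X below lambda ~ 0.2 and ~ 60 L_X above it.
def disk_luminosity(L_X, eddington_ratio):
    """Bolometric disk luminosity (erg/s) from the 2-10 keV luminosity."""
    correction = 20.0 if eddington_ratio <= 0.2 else 60.0
    return correction * L_X

print(f"{disk_luminosity(1e43, 0.05):.1e}  {disk_luminosity(1e43, 0.5):.1e}")
```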
On the other hand, the low-luminosity AGNs (Eddington ratio around λ ∼ 0.0001 or less) show smaller jet-disk coupling indices (0.4-0.7) [20], which could be explained by radiatively inefficient accretion flows (RIAF, [21]) and/or by the BH spin-powered jet (because the spin-jet power is only weakly correlated with the accretion rate, [20,23]).
In addition, we propose that a radio-active cycle of accretion-dominated AGN would be: from a weak jet-disk coupling (µ < 1, q < 0) at low Eddington ratio (λ < 0.001) in cosmic LLAGNs (excluding the local radio-loud LLAGNs, which we call 'pseudo' radio-loud ones in the sense of a distant universe), to a linear (or even steeper) correlation (µ ≥ 1, q ≥ 0) at intermediate to high Eddington ratio (0.001 < λ < 0.3) for radio-loud luminous Seyferts and powerful radio galaxies/quasars [14], and back to a weak coupling (µ < 1, q < 0) at near to super Eddington ratios (0.3 < λ < 10) (with a quenched/frustrated jet and a weak dependence of the ratio η/ε on the accretion rate).
In this scenario, most accretion-dominated AGNs may live through their radio-loud phase (∼10% of the AGN lifetime) in the middle regime of accretion rate with an efficient (linear or steeper) jet-disk coupling, especially the luminous Seyfert galaxies in this paper and the FRII quasars in [14]. In this idea we do not consider the local LLAGNs, which are about 50% radio loud [18,19], because that fraction would be much smaller for a distant universe owing to the radio undetectability of the local LLAGNs, and because their jets might not be accretion dominated but BH-spin dominated [20].
Moreover, another fundamental factor seems to be changes in the spectral energy distribution (SED), driven by changes in the type of accretion disk (RIAF, standard disk, superluminous disk) in the different regimes of accretion rate [2,37]. This may partly be related to the dependence of the ratio η/ε on the accretion rate and BH mass, and hence to the coupling of the mechanical (jet) and radiative (disk) powers, e.g., in Equation (2).
Summary and Conclusions
We reanalyzed the jet-disk coupling for various subsamples of a complete hard X-ray Seyfert sample in order to study the coupling indices and their relation to the accretion rate. The results are: i) the power-law coupling index ranges from nearly unity (a linear correlation) for radio-loud Seyferts to significantly less than unity for radio-quiet ones; this declining trend of the coupling index also holds from larger sources to compact ones; ii) the Seyferts with intermediate to high accretion rates (Eddington ratio λ ∼ 0.001 to 0.3) show a linear jet-disk correlation, and the coupling becomes shallower from near to super Eddington rates (λ ∼ 0.3 to 10), the former being more radio loud than the latter; iii) the Seyfert 1s have a slightly steeper jet-disk coupling than the Seyfert 2s.
A theoretical implication of the results is that, in the linear coupling regime, the ratio of jet efficiency to radiative efficiency (η/ε) is nearly invariant, whereas in the low accretion or super accretion regime, η/ε varies with λ in our model of Equation (2).
A radio-active cycle of accretion AGN would be: from a weak jet-disk coupling at low Eddington ratio (λ < 0.001) for LLAGNs, to a linear correlation at intermediate to high Eddington ratio (0.001 < λ < 0.3) for radio-loud luminous Seyferts and powerful radio galaxies/quasars, and back to a weak coupling at near to super Eddington ratios (0.3 < λ < 10). In this scenario, most accretion-dominated AGNs may live as radio-loud sources in the middle regime of accretion rate with an efficient (linear) jet-disk coupling, especially the radio-loud luminous Seyferts and quasars. | 4,194.4 | 2020-05-10T00:00:00.000 | [
"Physics"
] |
Snapshot of a magnetohydrodynamic disk wind traced by water maser observations
The formation of astrophysical objects of different nature and size, from black holes to gaseous giant planets, involves a disk-jet system, where the disk drives the mass accretion onto a central compact object and the jet is a fast collimated ejection along the disk rotation axis. Magnetohydrodynamic disk winds can provide the link between mass accretion and ejection, which is essential to ensure that the excess angular momentum is removed from the system and accretion onto the central object can proceed. However, direct observational proof of disk winds has so far been lacking. This work presents a direct view of the velocity field of a disk wind around a forming massive star. Achieving a very high spatial resolution of ~0.05 au, our water maser observations trace the velocities of individual streamlines emerging from the disk orbiting the forming star. We find that, at low elevation above the disk midplane, the flow co-rotates with its launch point in the disk, in agreement with magneto-centrifugal acceleration, where the gas is flung away along the magnetic field line anchored to the disk. Beyond the co-rotation point, the flow rises spiraling around the disk rotation axis along a helical magnetic field. We have performed (resistive-radiative-gravito-) magnetohydrodynamic simulations of the formation of a massive star and recorded the development of a magneto-centrifugally launched jet presenting many properties in agreement with our observations.
The simulations start from the gravitational collapse of a rotating cloud core threaded by a magnetic field; the magneto-centrifugally launched jet that develops around the forming massive star has properties matching many features of the maser and thermal (continuum and line) observations of our target. Our results are, presently, the clearest evidence for a magnetohydrodynamic disk wind, and show that water masers and forming massive stars provide a suitable combination of tracer and environment for studying disk-wind physics.
Magnetohydrodynamic (MHD) disk winds have been proposed to be the engines of the powerful jets observed on varying length scales in many diverse sources, from young stellar objects (YSOs) 32 to black holes 4. According to the classical model of an ideal MHD disk wind 4, in the reference frame co-rotating with the launch point the flow streams along the magnetic field line anchored to the accretion disk. An observer at rest sees magneto-centrifugal acceleration: the magnetic field keeps the flow in co-rotation with its launch point while its radial distance increases, until it reaches the Alfvén point, where the poloidal kinetic and magnetic energies are equal. Beyond the Alfvén point, the flow spirals outward along the rotation axis with a steadily increasing ratio of the streaming to the rotational velocity, until it eventually gets collimated into a fast jet 29,14. So far, the best observational evidence for an MHD disk wind has been the finding of line-of-sight velocity gradients transversal to the jet axis, which are interpreted in terms of jet rotation and the imprint of the magneto-centrifugal acceleration 3,9,17,1. However, that is indirect evidence, and the derivation of key parameters, such as the launch radius and the magnetic lever arm, can be seriously affected by systematic biases 39. On scales of ∼100 au, a few studies based on Very Long Baseline Interferometry (VLBI) maser (the laser equivalent in the microwave band) observations have revealed rotating disk-like 19,21,37 and conical 22 structures. In October 2020, we performed novel observations (see Fig. 2a) of the water maser emission in IRAS 21078+5211 by including all telescopes available in the VLBI network, with the aim of simulating next-generation radio interferometers, which will improve current sensitivities by more than an order of magnitude (see Appendix A). In the following, we show that these new observations prove that the water masers trace magnetized streams of gas emerging from the YSO's disk (see Fig. 2b and Appendix H). The maser emission concentrates in three regions, to NE, N, and SW, inside the three dotted rectangles of Fig. 2a. Along the jet direction, whose sky projection is known from previous observations of the maser proper motions and of the radio jet (see Appendix E), we observe two elongated structures, blue- and red-shifted (with respect to the systemic V_LSR of the YSO, V_sys = −6.4 km s⁻¹) to NE and SW, respectively. These structures are the opposite lobes of a collimated outflow from the YSO, which is located in between the two lobes; the disk axis (the black dashed line in Fig. 2a) intercepts the jet axis at the YSO position. From previous VLBA observations we know that the jet axis has to lie close to the plane of the sky, with an inclination ≤ 30°. According to the maser V_LSR, the jet is inclined towards us to NE and away from us to SW.
The jet and disk axes provide a convenient coordinate system to which the maser positions can be referred. In the following, we present the interpretation of the maser kinematics, based on the analysis of three independent observables: z, the elevation above the disk plane (or offset along the jet); R, the radial distance from the jet axis (or transversal offset); and the maser V_LSR. As discussed in Appendix A, the accuracy of the maser positions is ≈ 0.05 au, and that of the maser V_LSR is ≈ 0.5 km s⁻¹. Without loss of generality, we can express the maser velocities as the sum of two terms, one associated with the toroidal component or rotation around the jet axis, V_rot, and the other, V_off, associated with the poloidal component, including all the contributions owing to non-rotation. Since the jet axis is close to the plane of the sky and we observe the rotation close to edge-on (see Fig. 3), we can write:

V_LSR = V_off + V_rot sin φ, (1)
R = R_rot sin φ, (2)
φ = ω t, (3)

where R_rot is the rotation radius, φ is the angle between the rotation radius and the line of sight, and ω and t are the angular velocity and the time, respectively. Fig. 4c shows the remarkable finding that the spatial coordinates z and R of the maser emission in the SW flow satisfy the relation:

R = C sin(2π f_z (z − z_0)), (4)

where C, the amplitude of the sinusoid, f_z, the spatial frequency, and z_0, the position of zero phase, are fitted constants (see Table 1). In Appendix B we demonstrate that the masers in the NE flow can be separated into three different streams, each of them satisfying relation (4) (see Fig. 5c and Table 1). The comparison of Eqs. 2 and 4 leads to a straightforward interpretation of the sinusoidal relation between the coordinates by taking:

C = R_rot and ω t = 2π f_z (z − z_0).

The former equation indicates that the rotation radius is the same for all the masers; the latter shows that the motions of rotation around, and streaming along, the jet axis are locked together, which is the condition for a spiral motion.
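For illustration, the sinusoidal fit of Eq. 4 can be reproduced with a generic non-linear least-squares routine as sketched below; the synthetic coordinates, the initial guesses, and the assumed streaming velocity are placeholders, and the fitter is a stand-in for whatever scheme was actually used.

```python
# Sketch: fit R = C * sin(2*pi*f_z*(z - z0)) to the (z, R) coordinates of one
# maser stream, then convert the spatial frequency into an angular velocity.
import numpy as np
from scipy.optimize import curve_fit

def spiral(z, C, f_z, z0):
    return C * np.sin(2.0 * np.pi * f_z * (z - z0))

z = np.linspace(20.0, 130.0, 40)                       # elevations (au)
rng = np.random.default_rng(2)
R = spiral(z, 8.0, 0.01, 5.0) + rng.normal(0.0, 0.3, z.size)   # synthetic stream

(C, f_z, z0), _ = curve_fit(spiral, z, R, p0=[10.0, 0.012, 0.0])
omega = 2.0 * np.pi * f_z * 35.0                       # Eq. 5 with an assumed V_z (km/s)
print(f"C = {C:.1f} au, f_z = {f_z:.4f} au^-1, z0 = {z0:.1f} au, omega = {omega:.3f} km/s/au")
```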
Denoting with V_z the streaming velocity along the jet axis, we can write |z − z_0| = V_z t and, comparing with Eq. 3, we derive the relation between the rotation and streaming velocities:

ω = 2π f_z V_z. (5)

According to Eq. 5, the observation of a well-defined sinusoidal pattern requires that ω and V_z be directly proportional, or constant. The constancy of V_z implies that V_off is also constant because, if the rotation radius does not change, V_off is the projection of V_z along the line of sight. Following Eqs. 1 and 2, the constancy of ω and V_off would result in a tight linear correlation between V_LSR and the transversal offsets R. While a good linear correlation between V_LSR and R is observed for the SW and NE-1 spiral motions (see Figs. 4b and 5b, black symbols), the scatter in velocity is considerable for the NE-2 spiral motion (see Fig. 5b, red symbols). In Appendix D we investigate the physical reason for the observation of well-defined sinusoidal patterns despite the presence of a significant velocity scatter. Applying the equations of motion for an axisymmetric MHD flow, we find that the magnetic field configuration has to be helical over the maser emission region, and the motion along such a helical field line, in the reference frame co-rotating with the launch point, leads to the sinusoidal pattern of maser positions.
We now consider the N region (see Fig. 2a) and show that, in this region as well, the maser kinematics is consistent with the predictions for an MHD disk wind. The N masers have a larger separation from the jet axis than the NE and SW masers. A few nearby masers show quite different V_LSR, which could hint at distinct streams, as observed (see Fig. 5a) and discussed (see Appendix B) for the NE flow. In this case, however, only a single stream is reasonably well sampled in position and V_LSR by the masers, and we focus our kinematical analysis on it. The spatial distribution of this stream presents an arc-like shape: a subset of maser features draws a line at a small angle with the disk axis, and another group extends at higher elevations, about parallel to the jet axis (see Fig. 6a). Fig. 6c shows that the maser V_LSR increases linearly with R in the range −95 au ≲ R ≲ −70 au, up to an elevation z ≈ 60 au. The relatively large separation from the jet axis, the radial extent, and the arc-like shape of the maser distribution lead us to think that the N emission is observed close to the plane of the sky. In this case, the maser V_LSR should mainly trace rotation, which is also expected to be the dominant velocity component at low elevations above the disk. Then, the good linear correlation between V_LSR and R indicates that the masers co-rotate at the same angular velocity ω_N. Following Eqs. 1 and 2, the derivation of ω_N does not depend on the maser geometry. Therefore, the finding that the masers lie along a line intercepting the disk at ≈ −40 au provides an 'a posteriori' test of the assumption that the N emission is observed close to the plane of the sky.
The masers found at elevations z > 60 au appear to set aside from the arc-like distribution (see Fig. 6a), and their V_LSR are significantly more negative and do not follow the linear correlation with the radius (see Fig. 6c). A natural interpretation is that these masers trace gas beyond the co-rotation region. Defining

ω_e = ω − ω_K, (6)

where ω_K is the Keplerian angular velocity of the launch point, Eq. 6 shows that ω_e is the angular velocity of the spiraling trajectory as observed in the reference frame co-rotating with the launch point. Based on axisymmetric MHD models 31,39, the ratio ω_K/ω increases steadily from 1 up to a value ≈ 4 while the gas climbs from z_A to 10 R_K, where z_A is the elevation of the Alfvén point and R_K is the launch radius. Since ω ≤ ω_K, the negative value of ω_e indicates that the rotation angle of the maser positions decreases with z. Following the previous discussion, Eq. 5 has to be corrected by replacing ω with ω_e:

|ω_e| = 2π f_z V_z. (7)

A good test of the above considerations comes directly from our data. Assuming that all the masers move along a single trajectory and using the fitted values of ω and f_z in Eq. 5, we obtain implausibly small values for V_z: 31, 37, 17, and 49 km s⁻¹ for the SW, NE-1, NE-2, and NE-3 spiral motions, respectively. There is strong observational evidence that the derived streaming velocities are too small, which comes from comparing them with the values of V_off (see Table 1). Since V_off corresponds to the line-of-sight projection of V_z, we can write:

V_off = V_z sin(i_axi), (8)

where V_off is corrected for the systemic V_LSR of the YSO and i_axi is the inclination angle of the jet axis with respect to the plane of the sky. As we know that i_axi ≤ 30°, Eq. 8 allows us to derive a lower limit for V_z, reported in Table 2. Using the derived lower limit of V_z and the corresponding value of f_z (see Table 1), by means of Eq. 7 we can calculate a lower limit for ω_e. Finally, we use Eq. 6 and the fitted value of ω (see Table 1) to infer a lower limit for ω_K = ω + |ω_e| and, knowing the mass of the YSO, a corresponding upper limit for the launch radius R_K (see Table 2). The NE-1 stream, which extends the most in elevation (from 20 to 130 au, see Fig. 5a), includes a group of maser features at an elevation of ≈ 20 au, which should be located closer to the Alfvén point. In Appendix F, we study the change of V_LSR versus R internal to this cluster and obtain a lower limit ω_K ≥ 2.4 km s⁻¹ au⁻¹ that agrees well with the corresponding value reported in Table 2 for the NE-1 stream.
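The inference chain of Eqs. 6-8 can be summarized in a few lines of code; the input values below are illustrative, and the YSO mass is taken at the simulation value of ~5.24 M⊙, so the resulting number is only a consistency check against the ≤ 6-17 au launch radii quoted above.

```python
# Sketch of the Eqs. 6-8 chain for one stream: lower limit on V_z from V_off and
# i_axi <= 30 deg, then omega_e, omega_K, and an upper limit on the launch
# radius R_K from Kepler's law (omega_K^2 R_K^3 = G M).
import math

G = 6.674e-8                       # gravitational constant (cgs)
MSUN, AU, KM = 1.989e33, 1.496e13, 1.0e5

def launch_radius(V_off_kms, f_z_per_au, omega_kms_au, M_sun):
    V_z = V_off_kms / math.sin(math.radians(30.0))        # Eq. 8, lower limit on V_z
    omega_e = 2.0 * math.pi * f_z_per_au * V_z            # Eq. 7 (km/s/au)
    omega_K = omega_kms_au + abs(omega_e)                 # Eq. 6, lower limit
    omega_cgs = omega_K * KM / AU                         # convert to rad/s
    R_K = (G * M_sun * MSUN / omega_cgs**2) ** (1.0 / 3.0)
    return R_K / AU                                       # upper limit (au)

print(f"R_K <= {launch_radius(20.0, 0.01, 0.3, 5.24):.1f} au")  # ~8 au for these inputs
```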
The results shown in Table 2 are in general agreement with the theoretical predictions for MHD disk winds 34,33: the inferred lower limits on the streaming velocity V_z increase steeply with decreasing launch radius; for the SW and NE-3 streams, which are those with the relatively lower uncertainties in the measurement of both ω and R (see Table 1), the ratio between V_z and the rotation velocity ω R is ≥ 2-3.
In conclusion, our observations resolve, for the first time, the kinematics of an MHD disk wind on length scales of 1-100 au, allowing us to study the velocity pattern of individual streamlines launched from the disk. As represented in Fig. 2b, close to the disk rotation axis we observe flows spiraling outward along a helical magnetic field, launched from locations of the disk at radii ≤ 6-17 au. At larger separation from the rotation axis, we observe a stream of gas co-rotating with its launch point from the disk at a radius of ≈ 40 au, in agreement with the predictions for magneto-centrifugal acceleration. Our interpretation is supported by (resistive-radiative-gravito-) MHD simulations of the formation of a massive star that lead to a magneto-centrifugally launched jet whose properties agree with our maser and thermal (continuum and line) observations of IRAS 21078+5211. These results provide the best evidence for an MHD disk wind to date. Since water maser emission is widespread in YSOs, sensitive VLBI observations of water masers can be a valuable tool to investigate the physics of disk winds.
Table 1 notes. Column 1 denotes the maser stream; Cols. 2 and 3 provide the values of ω and V_off from the linear fit of the maser V_LSR versus R; Cols. 4, 5, and 6 report the amplitude, the spatial frequency, and the position of zero phase, respectively, of the sinusoidal fit of the maser coordinates R versus z.
a: The determination of this error, which is smaller than the value from the linear fit (1.7 km s⁻¹ au⁻¹), is discussed in Appendix C.
Table 2 notes. Column 1 indicates the maser stream; Col. 2 reports the estimated streaming velocity along the jet axis; Cols. 3 and 4 give the estimates of the angular velocity and the radius, respectively, at the launch point.
Figure caption (fragment): We plot a linear transformation of the radii to reduce the overlap and improve the visibility of each of the three streams; the dashed curves are the fitted sinusoids, whose parameters are reported in Table 1.
We estimate that the error on the absolute position of the masers is 0.5 mas. The spot positions are determined by fitting a two-dimensional elliptical Gaussian to their spatial emission. The uncertainty of the spot position relative to the reference maser channel is the combination of two terms:

Δθ_spot = (Δθ_fit² + Δθ_bandpass²)^(1/2).

The first term depends on the SNR of the data, following 35: Δθ_fit ≈ FWHM_beam/(2 SNR).
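A toy evaluation of this error budget, with placeholder values rather than the actual calibration numbers, reads:

```python
# Combine the Gaussian-fit term (scaling as beam/(2*SNR)) with the bandpass
# term in quadrature; all values are illustrative placeholders.
import math

def spot_uncertainty(beam_mas, snr, bandpass_mas):
    fit = beam_mas / (2.0 * snr)
    return math.sqrt(fit**2 + bandpass_mas**2)

print(f"{spot_uncertainty(1.0, 50.0, 0.01):.3f} mas")   # high-SNR spot
```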
Appendix B: Resolving the NE emission into three distinct streams
The masers in the NE emission are separated into three distinct streams, identified with different symbols: dots, triangles, and squares (see Fig. 5a). Each stream traces a sinusoid in the plane of the sky (see Fig. 5c), which is the signature of a spiral motion. The identification of the maser features belonging to each of the three sinusoids can be done with good confidence. First, we note that the masers at elevation z ≥ 90 au can be unambiguously divided into two streams, dots and triangles, on the basis of the very different V_LSR of nearby emissions. Both the dots and the triangles at z ≥ 90 au have R decreasing with z, suggesting that they could trace the descending portion of a sinusoid, and one can argue that the corresponding ascending portion would be traced by masers with z < 90 au. Next, we note that the group of masers with 30 au ≤ z ≤ 60 au draws an arc and, since these masers cannot be part of a sinusoid together with other masers at larger R, they have to trace a third sinusoid with maximum radius close to the radius of the apex of the arc, R_apex = 17.2 au. We have also verified that the blue-shifted cluster of masers located at elevation z ≈ 20 au (inside the dashed rectangle of Fig. 5a) cannot be accommodated within this third sinusoid. We have fitted the expression:

V_LSR = (ω R_apex − C) + C sin(2π f_z (z − z_0)),
fixing the maximum velocity to ω R_apex and fitting the zero level ω R_apex − C, with the amplitude C as a free parameter. Fig. C.1 shows that, apart from a minor fraction of scattered results, the radius estimated from the sinusoidal fit decreases regularly with ω and attains a value consistent with the observations, 17.5 ± 2.5 au, for ω = 2.0 ± 0.5 km s⁻¹ au⁻¹, thus effectively reducing the uncertainty to 0.5 km s⁻¹ au⁻¹.
Figs. 4b and 5b show that the maser V_LSR are linearly correlated with R in the SW and NE-1 streams. For the masers belonging to the NE-2 stream, the scatter around the linear fit of V_LSR versus R is considerable (with large fit errors, see Table 1) and appears too large with respect to the observed maser V_LSR. In the following, we investigate the physical reason for the observation of well-defined sinusoidal patterns despite the co-occurrence of a significant velocity scatter.
In a stationary, axisymmetric MHD flow, the two fundamental equations of motion 30,34,33 linking velocity and magnetic field along a field line are:

V_p = (k/ρ) B_p, (D.1)
V_φ = ω_K R + (k/ρ) B_φ, (D.2)

where V_p and V_φ, and B_p and B_φ, are the poloidal and toroidal components of the velocity and magnetic field, respectively, ρ is the gas mass volume density, ω_K is the Keplerian angular velocity at the launch point, R is the rotation radius, and k is the "mass load" of the wind, expressing the fixed ratio of the mass and magnetic fluxes along a given magnetic field line. Since V_φ − ω_K R is the toroidal velocity in the reference frame co-rotating with the launch point, Eqs. D.1 and D.2 lead to the well-known result that the velocity and magnetic field vectors are always parallel in the co-rotating reference frame.
Writing V_φ = ω R, where ω is the angular velocity of the trajectory at radius R, the two equations above can be combined into:

B_φ/B_p = (V_φ − ω_K R)/V_p = ω_e R/V_p, (D.3)

where we have used the definition of ω_e in Eq. 6. We can define the magnetic field helix angle α_B = arctan(B_φ/B_p), which is the angle with which a helical field line winds around the jet axis.
We have seen that the observation of well-defined sinusoids in the plane of the sky requires that the rotation radius R and the ratio of the effective angular velocity ω_e to the streaming velocity V_z remain constant (see Eq. 7). Since the poloidal velocity V_p is equal to V_z if R is constant, Eq. D.3 implies that α_B does not vary along each of the observed maser streams. However, the reverse argument holds too: if the magnetic field is sufficiently stable and has a constant helix angle (that is, a helical configuration), the motion along the field line (in the co-rotating reference frame) requires a constant value of the ratio ω_e/V_z. That preserves the maser sinusoidal pattern despite the concomitant presence of a relatively large velocity scatter. From Eqs. D.3 and 7, with V_p = V_z, we have:

tan α_B = ω_e R/V_z = 2π f_z R. (D.4)

Using the values reported in Table 1, Eq. D.4 then constrains the helix angle of the magnetic field along each stream. The interpretation of the water masers in IRAS 21078+5211 as external shocks at the cavity wall of a wide-angle wind also faces the difficulty of accounting for the huge maser V_LSR gradients, ≥ 40 km s⁻¹ over ≤ 10 au, transversal to the NE stream at elevations z ≥ 90 au (see Fig. 5a). If the masers, instead of delineating two streams at different velocities, traced shock fronts (seen close to edge-on) at the wind wall, it would be very difficult to explain such a large velocity difference for almost overlapping emissions.
We conclude this discussion on alternatives to the disk wind interpretation by showing that it is improbable that we are observing multiple outflows from a binary system. Inspecting Table 1, determined at larger elevations. That can be interpreted as evidence that the gas closer to the Alfvén point rotates faster than that at higher elevations, in agreement with the MHD disk-wind theory. As described in Appendix A, each maser feature is a collection of many compact spots for which we have accurately measured the position, and could indicate that the internal V LSR gradient is influenced by non-rotation terms, such as the increase of the streaming velocity with elevation. In any case, assuming that also for these two features the internal V LSR gradient is dominated by rotation, we obtain a consistent average value for the in-feature angular velocity of ω = 2.4 km s −1 au −1.
We stress that the internal motion of the maser features does not contribute to the features' V LSR used in our analysis of the maser velocity pattern on larger scales (see Table 2), where higher streaming velocities correspond to smaller launch radii. The southern features are distributed in three separate clusters (see Fig. 2a). The mass-to-flux ratio, which we take as 20 times the critical (collapse-preventing) value 27, corresponds to a relatively weak initial magnetic field. A constant opacity of 1 cm 2 g −1 was used to model the gas and dust, as well as an initial dust-to-gas mass ratio of 1%.
We used an axisymmetric grid of 896×160 cells in spherical coordinates, with the radial coordinate increasing logarithmically with the distance from the center of the cloud.
An inner boundary was set at 3 au, inside which the protostar forms through accretion. No flows are artificially injected from the inner boundary into the collapsing cloud.
The simulation starts with an initial gravitational collapse epoch. After ∼ 5 kyr, enough angular momentum has been transported to the center of the cloud to start forming an accretion disk, which grows in size over time. Roughly at the same time, we observe the launch of magnetically driven outflows. Magnetic pressure arising from the dragging of magnetic field lines by the rotating flow eventually overcomes gravity and seeds the formation of the outflow cavity, thrusting a bow shock in the process (Fig. H.1d), which propagates outwards as the cavity grows in size. Previous observations of IRAS 21078+5211 have uncovered the presence of a bow shock located at a distance of ≈ 36000 au from the forming massive YSO 26. The initial launch of the magnetically driven outflows provides a possible formation mechanism for the observed bow shock and, in turn, the propagation of the bow shock provides an estimate for the age of the system.
In the simulation, the protostar reaches a mass of 5.24 M⊙ (a value in the mass interval expected from observations) after 13.84 kyr of evolution. We estimate that the bow shock has propagated to a distance of ≈ 30000 au at that time, roughly in line with the observations. At the same time, the accretion disk has grown to about 180 au in radius, in agreement with the observational estimates 26 as well. The data reveal a magneto-centrifugally launched jet, similar to those reported in the literature 4,13; however, we see that the launching region of the jet is narrowed by the ram pressure of the infalling material from the envelope and by the presence of a thick layer of the accretion disk which is vertically supported by magnetic pressure. Material transported from large scales through the accretion disk reaches the launching region, located at z ≲ 100 au (see Fig. H.1b), where the centrifugal force is stronger than gravity and, simultaneously, where the flow is launched. We provide this comparison as an example that a magneto-centrifugally launched jet around a forming massive star yields a picture consistent with the observations. However, the coincidences should be taken with caution, as different combinations of initial conditions may yield similar results, which means that a wider and deeper investigation is needed to determine the conditions of the onset of star formation from the observational data.
| tokens: 6,150.2 | created: 2022-08-11 | fields: ["Physics", "Geology"] |
Modifications for Quasi-Newton Method and Its Spectral Algorithm for Solving Unconstrained Optimization Problems
In this paper, two modifications of a spectral quasi-Newton algorithm of BFGS type are proposed. In the first algorithm, named SQN EI, a certain spectral parameter is used at a step of the BFGS algorithm that differs from previously presented algorithms. The second algorithm, SQN Ev-Iv, has both a new parameter position and a new suggested value. In the SQN EI and SQN Ev-Iv methods, the parameters enter the search direction after the approximated Hessian matrix is updated. It is shown that the two methods are effective under some assumptions. Moreover, the sufficient descent property is proved, as well as the global and superlinear convergence of SQN Ev-Iv and SQN EI. Both of them are superior to the standard BFGS (QN BFGS) and to a previous spectral quasi-Newton method (SQN LC). However, SQN Ev-Iv outperforms SQN EI whenever it converges to the solution. Thus, the two modified methods compete to be the more efficient method in terms of fewer iterations and less CPU running time. Finally, numerical results are presented for the four algorithms by running a list of test problems with an inexact line search satisfying the Armijo condition.
Introduction
The optimization problem is a model, among many mathematical approaches, for solving real-life problems, both exactly and numerically, in different branches such as statistics, physics, and engineering. This motivates researchers to continuously present more efficient methods.
Now, consider the minimization of an unconstrained problem:
min f(x), x ∈ ℝⁿ, (1) where f: ℝⁿ → ℝ is a bounded-below and twice differentiable function. Many numerical methods are used for solving Eq. 1. Quasi-Newton approaches are among the most recommended methods for these types of problems, owing to their superlinear rate of convergence and global convergence property.
In the most popular such algorithm, the inverse of the Hessian matrix is approximated by an update formula, starting from an initial positive definite matrix H₀ and proceeding through the required steps. Preserving the positive definiteness of the Hessian approximation after any modification is a matter that many researchers deal with. For instance, Mahmood 2 gave a modified BFGS update formula in its inverse version and showed how it remains symmetric and positive definite, which is what makes the iteration converge to the solution in minimization problems. This issue is also serious in other quasi-Newton method types 3. The self-correcting property is a technique to overcome ill-conditioning; for this, Cheng and Li 4 scaled the quasi-Newton equation and suggested a new spectral scaling for the BFGS method with this property. Their method has the self-correcting property of the conventional BFGS, with more efficiency in correcting the large eigenvalues from which the Hessian matrix approximation might suffer. This leads to an improvement of the BFGS method. Specifically, in their work they use an exact line search to minimize a strictly convex problem of dimension n, on which the method terminates in n steps, as the steepest descent method does. Also, on uniformly convex problems, the method with the Wolfe condition is globally and R-linearly convergent. Nakayama et al. 5 studied a memoryless symmetric rank-1 quasi-Newton method with the spectral scaling parameter given in 4. Later, the global convergence of the formula was proved by Nakayama and Narushima 6. Finally, its hybridization with a three-term conjugate gradient method utilizing a spectral parameter was designed by Nakayama 7. Additionally, the efficiency of the memoryless BFGS method using the spectral scaling of 4 in minimizing the eigenvalues was proved by Lv et al. 8. Conjugate gradient methods (CGM) have also produced many studies in this area. First, a gradient parameter was used in proposing a new spectral scaling parameter 9. Another idea nested a spectral parameter within a CGM one and was given by Wang et al. 10, who presented its importance in solving large-scale problems. Furthermore, a fast CGM combining a new direction with a spectral parameter and the previous direction was proposed in 11. The convex-combination idea also has a place in this topic: a spectral scaling was defined as a convex combination of two CGM coefficients 12. For constrained optimization problems with bound conditions, there is a spectral parameter with the memoryless property for the Broyden class, presented by Nakayama et al.
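The update formula referenced above is not reproduced; the standard inverse BFGS update, which the text presumably refers to, reads as follows under the paper's later convention s_k = x_k − x_{k−1}, y_k = g_k − g_{k−1} (Eq. 5):

$$H_{k} = \left(I - \frac{s_k y_k^{\top}}{y_k^{\top} s_k}\right) H_{k-1} \left(I - \frac{y_k s_k^{\top}}{y_k^{\top} s_k}\right) + \frac{s_k s_k^{\top}}{y_k^{\top} s_k}.$$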
13. Eventually, as a real-life example, scientists use spectral algorithms to analyze problems such as drug-abuse problems; see 14. After all the presented works, the SQN EI and SQN Ev-Iv algorithms are proposed; they deal with a new position and a new value for the spectral scaling parameter. In more detail, the idea is to examine how moving the involved parameters out of the updating formula for the Hessian approximation in the BFGS method affects the optimization process. This aims at finding more efficient spectral scaling algorithms, given their importance in solving real-life problems. The article presents the two methods with their step-by-step algorithms in the next section. The third section proves the convergence of the two methods and the relationship between them, along with some mild assumptions. There is a proof of the sufficient descent property and of the global and superlinear-rate convergence. Finally, the numerical results are presented.
Materials and Methods
In this work, two spectral quasi-Newton methods have been suggested as follows: The SQN Ev-Iv Algorithm. In the first part, the SQN Ev-Iv algorithm is suggested, in which the acceleration parameter is involved in the search direction after the inverse Hessian approximation has been updated. In other words, the search direction contains all the required updates of the BFGS Hessian formula and is then multiplied by the spectral scaling parameter. Our newly suggested algorithm SQN Ev-Iv, in which the spectral parameter sits in the direction, is as follows, with the condition in Eq. 7. Thus, the algorithm steps are given as (see also the sketch after this list): Step 1 (Initialization): choose H₀ as the identity matrix or a positive definite matrix H₀ ∈ ℝⁿˣⁿ, with tolerance ε = 1 × 10⁻⁷. Step 2: Start with d₀ = −H₀g₀, k = 1. Step 3 (Termination criterion): if ‖g_k‖ ≤ ε or the maximum number of iterations is reached, stop.
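As an illustration of the kind of iteration described, the following is a minimal sketch assuming a Cheng-Li-type spectral parameter θ_k = sᵀs/sᵀy applied to the search direction after the BFGS update; the papers' exact parameter formulas and conditions (Eqs. 6-8) are not reproduced here, so this is a generic spectral-scaled BFGS, not the authors' precise method:

```python
import numpy as np

def spectral_bfgs(f, grad, x0, tol=1e-7, max_iter=1000):
    n = len(x0)
    H = np.eye(n)              # inverse-Hessian approximation H_0
    theta = 1.0                # spectral scaling parameter (assumed form below)
    x, g = np.asarray(x0, float), grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -theta * (H @ g)   # spectral parameter applied after the BFGS update
        # Armijo backtracking line search
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if y @ s > 1e-12:      # curvature condition keeps H positive definite
            rho = 1.0 / (y @ s)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # standard inverse BFGS update
            theta = (s @ s) / (s @ y)                # Cheng-Li-type parameter (assumption)
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(spectral_bfgs(f, grad, np.array([-1.2, 1.0])))
```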
The SQN EI Algorithm
This subsection is about another new algorithm, named SQN EI. The spectral parameter of Cheng and Li (2010) 4 is used, but at a different step of the algorithm. The direction is given by the formula in Eq. 8. So, the steps of the SQN EI algorithm are the same as in the previous subsection, but without Step 6, and in Step 8 the direction is defined as in Eq. 8.
The Convergence Analysis of SQN Ev-Iv and SQN EI. The convergence analysis of our algorithms is discussed in this part.
A list of Assumptions:
For conducting the analysis in Section 3, some assumptions are needed, as follows: (i) Assume that the objective function f is twice continuously differentiable. (ii) The Hessian matrix is Lipschitz continuous at x*, that is, it satisfies ‖∇²f(x) − ∇²f(x*)‖ ≤ c‖x − x*‖ for some positive constant c and all x in a neighborhood of x*. (iii) If the objective function f ∈ C² and Ω = {x : f(x) ≤ f(x₀)} is a convex level set, then there exist two positive constants m₁ and m₂ satisfying m₁‖z‖² ≤ zᵀ∇²f(x)z ≤ m₂‖z‖² for all z ∈ ℝⁿ and x ∈ Ω, where ∇²f(x) is the Hessian matrix of f.
The Relationship between SQN Ev-Iv and SQN EI Parameters
It is obvious that the SQN Ev-Iv parameter is a function of the SQN EI one. Hence there is a relation between them: as long as the condition for the SQN Ev-Iv parameter holds, then, multiplying both sides of Eq. 9 by the squared norm term and taking the square root of both sides, we obtain that the parameter of SQN Ev-Iv is less than the parameter of SQN EI.
Sufficient Descent Property of Two Algorithms
In this part, assume the direction is as given in Eq. 6 and Eq. 8; this is used to prove the descent property. It follows that the direction has the descent property for k = 0. Now we want to prove it for k ≥ 1.
Since in this part the search direction is given as in Eq. 6, the direction has the descent property in SQN Ev-Iv. In the same way, the property is obtained for SQN EI.
The Global Convergence Analysis
In order to prove the global convergence of the proposed algorithms, we use Lemma 7 in 15, which showed that a line search with the Armijo condition, together with the descent-direction property, satisfies one or both of the following inequalities; that is, if h(x_{k+1}) = f(x_{k+1}) − f(x_k), then one of the stated bounds holds. Since the level set is bounded, ‖g_k‖ is bounded; therefore, by Assumption 1 (iii), the required bound follows, where the constants involved are positive.
Remark 1: Theorem 1 (the global convergence) also holds when the direction of the SQN EI algorithm is used. Therefore, by Subsection 2, the same result is obtained.
Superlinear Rate of Convergence
In this section, the superlinear convergence is presented and proved. To do that, some preliminary results are needed; we recall Lemmas 4.9 and 4.10 from 16. Then, with Assumption 1 holding, there is a sequence of numbers {η_k} such that the required bounds hold, where s_k and y_k are given in Eq. 5. These all lead to the boundedness of the sequence of Hessian-matrix approximations and of their inverses, that is, of {B_k} and {B_k⁻¹}.
Results and Discussion
This section gives the results and presents all findings through performance-profile plots, i.e., cumulative distribution functions. In this way, significant differences show up only in the region of interest. The CPU running time and the number of iterations are the two criteria used to show the effectiveness of the suggested algorithms in numerical optimization, and this approach is used in this paper. For the comparison, 46 test functions are used, taken from 17 and 18, as listed in Table 1. Regarding dimensions, various dimensions are taken when the function is multivariate. That is, if n is the dimension, then for the first three functions in Table 1 (ID = 1, 2, 3) n = 2 is used, while for all other listed functions n = 2, 4, 6, 8, 15, 30 is used, except for Diagonal 2 (ID = 14), where the dimensions used were n = 2, 3, 4, 10, 15. As a final count, the overall data set comprises 260 runs for plotting the performance profile. Fig. 1 presents plots for the four procedures. In detail, Fig. 1(a) shows how the SQN Ev-Iv algorithm behaves in terms of iteration numbers for all test functions. SQN EI is preferable to the QN BFGS and SQN LC methods for reducing iteration numbers. However, this result does not hold against the SQN Ev-Iv algorithm once the condition in Eq. 7 is satisfied.
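For reference, a performance profile of the kind used here can be computed as in the following minimal sketch, assuming a matrix of costs (iteration counts or CPU times) with one row per problem and one column per solver, and failures encoded as infinity; the cost values shown are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, solver_names, tau_max=10.0):
    # costs[p, s]: cost of solver s on problem p (np.inf = failure).
    best = np.min(costs, axis=1, keepdims=True)      # best cost per problem
    ratios = costs / best                            # performance ratios
    taus = np.linspace(1.0, tau_max, 200)
    for s, name in enumerate(solver_names):
        # Fraction of problems solved within a factor tau of the best solver.
        rho = [np.mean(ratios[:, s] <= t) for t in taus]
        plt.plot(taus, rho, label=name)
    plt.xlabel("performance ratio tau")
    plt.ylabel("fraction of problems")
    plt.legend()
    plt.show()

# Hypothetical iteration counts for 4 solvers on 5 problems:
costs = np.array([[12, 15, 20, np.inf],
                  [30, 25, 40, 35],
                  [8, 9, 7, 10],
                  [100, 80, 120, 90],
                  [50, np.inf, 60, 55]], dtype=float)
performance_profile(costs, ["SQN_EvIv", "SQN_EI", "QN_BFGS", "SQN_LC"])
```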
Whenever the problem converges, SQN Ev-Iv is the most recommended among the four algorithms. Meanwhile, Fig. 1(b) reveals the CPU time consumed in running all the contesting algorithms. Again, the SQN Ev-Iv procedure is better than the others. For the analysis in Fig. 1, the number of test functions was 43; that is, all functions identified in Table 1 were utilized except the functions ID = 24, 37, 39. Furthermore, two functions, ID = 5 and 11, performed poorly for some dimensions; for instance, function ID = 5 with n = 6, 20, 30 and ID = 11 with n = 20, 30 were excluded from the analysis. In other words, 43 test problems were filtered with Eq. 7.
On the other hand, Fig. 2 demonstrates how the cumulative distribution lines change when all 46 problems named in Table 1 are involved but inequality Eq. 7 is ignored in the SQN Ev-Iv algorithm. Even with this relaxation, SQN Ev-Iv remains the preferred algorithm compared with QN BFGS and SQN LC, though not with SQN EI. Overall, SQN EI dominates all methods engaged in this study when the inequality Eq. 7 does not hold. For all programs, the MATLAB 2018a codes were written using an inexact line search satisfying the strong Wolfe condition, with tolerance ε = 1 × 10⁻⁷ or until the iteration count reached the maximum number, which was 1000, where n is the dimension of the objective function. Furthermore, Table 1 contains the names of all functions used in the comparison, with some suggested dimensions. For running the programs, the initial values suggested for those functions in 17 were used; however, for some test functions with no initial points, the border values of the defined region in 18 were used as initial points.
Conclusion
The paper suggested two new algorithms, SQN EI and SQN Ev-Iv. SQN EI depended on a new position of use for the spectral parameter defined in past work, while SQN Ev-Iv, besides the new placement, has a new value to accelerate the process of solving problems. In the past, one or two spectral parameters were used inside the updating formula of the Hessian approximation matrix. However, this new technique shows the effectiveness of the approaches in optimizing problems. That is to say, the SQN EI and SQN Ev-Iv algorithms were preferable in comparison to each of QN BFGS and SQN LC according to CPU running time and iteration numbers. In general, SQN EI is better than the others. However, there is a contest between the two proposed algorithms decided by a condition, which makes SQN Ev-Iv preferable whenever it converges. Here s_k = x_k − x_{k−1} and y_k = g_k − g_{k−1} (Eq. 5).
Figure 1. Performance profile for the four algorithms, with filtered functions: (a) number of iterations; (b) CPU running time.
Figure 2. Performance profile for the four algorithms, with all functions: (a) number of iterations; (b) CPU running time.
| tokens: 3,126.2 | created: 2023-11-19 | fields: ["Mathematics", "Computer Science"] |
The wandering subspace property and Shimorin’s condition of shift operator on the weighted Bergman spaces
In the present paper, we first study the wandering subspace property of the shift operator on the I_a type zero based invariant subspaces of the weighted Bergman spaces L_a²(dA_n) (n = 0, 2) via the spectrum of some Toeplitz operators on the Hardy space H². Second, we give examples to show that Shimorin's condition for the shift operator fails on the I_a type zero based invariant subspaces of the weighted Bergman spaces L_a²(dA_α) (α > 0).
where f(z) = ∑_{n=0}^∞ a_n zⁿ is the power series representation of f. Let 𝕋 be the boundary of the unit disk 𝔻. A function φ(z) in H² is called inner if |φ(z)| = 1 a.e. on 𝕋. The famous Beurling theorem [2] says that every invariant subspace of the multiplication operator T_z on the Hardy space H², other than {0}, has the form φH² for some inner function φ. Let dA be the normalized Lebesgue area measure on 𝔻, and let dA_α denote the weighted measure (recalled below). The space L²(𝔻, dA_α) consists of complex-valued functions f on 𝔻 whose corresponding norm is finite. It is well known that L²(𝔻, dA_α) is a Hilbert space with this norm. For any α > −1, we define L_a²(dA_α) as the subspace of analytic functions; then L_a²(dA_α) is a closed subspace of L²(𝔻, dA_α). These spaces will be called the weighted Bergman spaces. Let B_α denote the shift operator on L_a²(dA_α), which maps every f ∈ L_a²(dA_α) to zf. If α = 0, for convenience, the Bergman space L_a²(dA_0) and the Bergman shift B_0 will be denoted by L_a² and B, respectively. In 1996, Aleman, Richter and Sundberg [1] proved that the Beurling type theorem holds for B on L_a². Later, different proofs of the Beurling theorem for B on L_a² were given in [6,7,9]. In [9], Shimorin proved the following theorem: if (i) ‖Tx + y‖² ≤ 2(‖x‖² + ‖Ty‖²) for all x, y ∈ H, and (ii) ∩_{n=1}^∞ TⁿH = {0}, then T possesses the wandering subspace property on H.
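The weight is only referenced above; the standard normalization for these spaces, consistent with the conventions used later in the paper, is (a reconstruction):

$$dA_\alpha(z) = (\alpha+1)\,(1-|z|^2)^{\alpha}\, dA(z), \qquad \|f\|_\alpha^2 = \int_{\mathbb{D}} |f(z)|^2\, dA_\alpha(z).$$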
Remark 1.2 The above condition (i) is equivalent to saying that T is bounded and bounded below on H and satisfies the following equation:
If T satisfies condition (ii) on H, then we say that T is analytic on H. Definition 1.3 We say that Shimorin's condition for T holds on H if T satisfies the above conditions (i) and (ii) (see [12]).
Shimorin's theorem implies that T possesses the wandering subspace property on H if Shimorin's condition for T holds on H. As an application of Shimorin's theorem, it is proved that for any −1 < α ≤ 0 the Beurling type theorem holds for B_α on L_a²(dA_α), and then, as a corollary, a simpler proof of the Beurling type theorem on the Bergman space is obtained. One year later, Shimorin proved that for any −1 < α ≤ 1 the Beurling type theorem holds for B_α on L_a²(dA_α) (see [10]). The first step in solving Question 1 is to verify whether Shimorin's condition for the operator T holds on the invariant subspaces. It is always difficult to verify this directly according to Definition 1.3, for example in reproducing kernel Hilbert spaces with complicated kernel functions. If Shimorin's condition for the operator T fails on some invariant subspaces, then we must try other ways to solve Question 1.
The famous example is the zero based invariant subspaces of the weighted Bergman spaces L_a²(dA_α) (α > −1).
Definition 1.4
For any n ≥ 1 and α > −1, suppose the sequence A = {a₁, a₂, …, a_n, …} ⊂ 𝔻 (the points may repeat). Then I_A is an invariant subspace of B_α. The subspace I_A is called a zero based invariant subspace of L_a²(dA_α) (see [3]). When A = {a₁, a₂, …, a_n} is a finite set of points inside 𝔻, to distinguish the difference, we denote it by I_{a₁,a₂,…,a_n} = I_A. In particular, let A = {a}; the kernel function for the I_a type zero based invariant subspaces of L_a²(dA_α) is given in [3]. In the present paper, we give the following theorem to show that Shimorin's condition for the shift operator fails on the I_a type zero based invariant subspaces of the weighted Bergman spaces L_a²(dA_α) (α > 0).
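The defining formula for I_A is not reproduced above; the standard definition of a zero based invariant subspace, consistent with [3], is (a reconstruction, with zeros counted according to multiplicity when points repeat):

$$I_A = \{\, f \in L_a^2(dA_\alpha) : f(a_i) = 0 \ \text{for all } a_i \in A \,\}.$$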
We study the zero based invariant subspaces of the weighted Bergman spaces also because of the following facts. Hedenmalm and Zhu [4] have shown that for any α > 4 there exist some I_a type zero based invariant subspaces of L_a²(dA_α) that do not possess the wandering subspace property. So, in this case, the Beurling type theorem for B_α on L_a²(dA_α) fails. In [9], Shimorin conjectured that the critical value for the Beurling type theorem on the weighted Bergman spaces L_a²(dA_α) is α = 1. In 2004, Hedenmalm and Perdomo [3] essentially proved that the Beurling type theorem fails in L_a²(dA_α) for α > α_c, where α_c ≈ 1.04. It is natural to discuss the following question: Question 2 Does Shimorin's condition for B_α hold on all I_a type zero based invariant subspaces of L_a²(dA_α) with α ∈ (1, +∞)?
Theorem 1.5 gives a negative answer to Question 2. Then we must try other ways to solve the following natural question: Question 3 Does B_α possess the wandering subspace property on the I_a type zero based invariant subspaces of L_a²(dA_α) with α ∈ (1, +∞)?
In [12], the authors gave a positive answer for the case α = 2 of Question 3 and proved the case α = 2 of Theorem 1.5. To solve Question 3, let 𝔻² and 𝕋² be the bidisk and the torus, the Cartesian products of 2 copies of 𝔻 and 𝕋, respectively. The Hardy space H²(𝔻²) over the bidisk 𝔻² consists of holomorphic functions f on 𝔻² satisfying a square-integrability condition, where the measure is the normalized Haar measure on 𝕋². It is also well known that H²(𝔻²) is a Hilbert space. Let M be a closed subspace of H²(𝔻²); we say that M is a submodule if M is invariant under the multiplication operators T_z and T_w. In particular, for any subset X ⊂ H²(𝔻²), let [X] = clos span A(𝔻²)X, where A(𝔻²) is the bidisk algebra, that is, the closure of the polynomials in z and w under the norm of H^∞(𝔻²). Then [X] is a submodule, called the submodule generated by X. The orthogonal complement of a submodule is called a quotient module. Let (S_z, S_w) be the two-variable Jordan block, the compression operators defined on a quotient module N; more precisely, S_z = P_N T_z|_N and S_w = P_N T_w|_N, where P_N is the projection of H²(𝔻²) onto N. In particular, let K₁ = [z − w] be the submodule generated by z − w, and let N₁ = H²(𝔻²) ⊖ K₁ be the related quotient module. As is known, the quotient module N₁ plays a great role in many situations, since the compression operator S_z on N₁ is unitarily equivalent to the Bergman shift on L_a². In [11], by lifting the Bergman shift as S_z on N₁, Sun and Zheng gave a proof of the Beurling type theorem for B on L_a². In [12], the authors considered the submodule K₀ = [(z − w)²] and the related quotient module, and the following lemma [12] holds: S_z¹ and B₂ are unitarily equivalent.
The space L^∞(𝕋) is the collection of all essentially bounded measurable functions on 𝕋. For each φ(e^{iθ}) ∈ L^∞(𝕋), the Toeplitz operator with symbol φ(e^{iθ}) is the operator T_φ defined by T_φ f = P(φf), where P is the orthogonal projection of L² onto H². By some statements in [12], we get the following theorem. The last part of the proof of Theorem 8 in [12] has in fact proved that a ∉ σ(T_φ). We believe that Theorem 1.7 is a general phenomenon in the case of all zero based invariant subspaces of L_a²(dA_α) (α > 1). We also believe that B_α does not possess the wandering subspace property on some zero based invariant subspaces of L_a²(dA_α) (α > 1), which would substantiate Shimorin's conjecture. Our basic problem is to find and prove the general phenomenon, and then to prove the wandering subspace property of B_α on all zero based invariant subspaces of L_a²(dA_α) (α > 1) via the spectrum of some Toeplitz operators on the Hardy space H². It is obvious that Question 3 is a special case of our basic problem. In the present paper, we prove the similar phenomenon in the case of the Bergman space and reprove the wandering subspace property of the Bergman shift on the I_a type zero based invariant subspaces. The following is our main theorem. Through Theorems 1.7 and 1.8, we conjecture that the following proposition holds for the general case of the weighted Bergman spaces L_a²(dA_n) (n = 0, 1, 2, …). We believe that the proof of Conjecture 1 will make progress toward solving Question 3, but we cannot prove it in this paper. This paper is arranged as follows. In Sect. 2, we prove Theorem 1.7. In Sect. 3, we give an equivalent condition under which S_z does not possess the wandering subspace property on any fixed invariant subspace of S_z on N₁. In this part of the research, we obtain the same characterization of M₀₀ as in [11]. In Sect. 4, we give the proof of Theorem 1.8, and then, as a corollary, we reprove that the Bergman shift possesses the wandering subspace property on all I_a type zero based invariant subspaces. In Sect. 5, we prove Theorem 1.5.
In this paper, for a Hilbert space H and a bounded linear operator T on it, we denote by Lat(T) the lattice of invariant subspaces for T on H.
The proof of Theorem 1.7
Let the basis be defined as in [12]; then {ẽ₁, ẽ₂, …} is an orthonormal basis of H₀¹ (see [12]). In [12], the authors proved the following proposition: ... if and only if there exists a nonzero solution for the following equations, where for any i ≥ 0, and ẽ₁, ẽ₂, … is the orthonormal basis of H₀¹ mentioned above.
In this section, we first point out that Proposition 2.1 can be written in the following simple form:
… if and only if there exists a nonzero solution of the following equations in C_iᵏ, where for any i ≥ 0, and ẽ₁, ẽ₂, … is the orthonormal basis of H₀¹ mentioned above.
Proof The condition (i) in Proposition 2.1 is equivalent to C i 2 = 0 for any i ≥ 0 (see [12]).
Then by (iii) we can calculate the general case. In general, we can prove that (ii) and (iii) are equivalent to (2.2), that is, to (ii). If (ii) and (iii) hold, then (2.2) holds for k = 2. Assuming that (2.2) holds for any fixed k ≥ 2, the claim follows by (iii) and induction.
… if and only if there exists a nonzero solution of the following equations:
Proof of Theorem 1.7 By the proof of Theorem 8 in [12], we get that (2.5) has a nonzero solution {C_i³}_{i≥0} satisfying the stated condition, which is equivalent to a being an eigenvalue of a Toeplitz operator. By Lemma 2.3, we get Theorem 1.7. ◻
Beurling type theorem for S z on N 1
In this section, we use the techniques in [12].
Proposition 3.1 S z is left invertible and analytic on N 1 [5].
Proof Note that an orthonormal basis of N₁ is: where Then, we get the matrix A of S_z under the above basis. And one calculates A* and A*A, respectively; therefore, S_z is left invertible. Since, for any f ∈ N₁, there exists a sequence {a_n}_{n=0}^∞ such that: Then the coordinate vector of S_z f is A(a₀, a₁, …, a_n, …). Now, if g ∈ ∩_{n=1}^∞ S_zⁿN₁, there exists f_n ∈ N₁ such that g = S_zⁿf_n. For any fixed n ≥ 1, comparing the coordinate vectors of g and S_zⁿf_n, we get that the first n entries of the coordinate vector of g are zero; thus g = 0. It is easy to verify that M̃ is an invariant subspace for T_z in H²(𝔻²). In Theorem 6.8 of [8], Richter showed that the mapping M ↦ M̃ is a one-to-one correspondence between invariant subspaces of S_z and invariant subspaces of T_z containing [z − w]. Let L_M̃ be the wandering space of T_z on M̃; we easily get the following theorem. We have the following decomposition: Thus: The above implies that: Conversely, for any g in the stated space, since f ⟂ zK₁, then f ∈ K₁ ⊖ zK₁. By the above calculations, we have f = zP_{N₁}g − wg(w), and this completes the proof. Here m_k ∈ M₀ and u_n = −S_z B*_M S_z⁻¹ P_M T*_z f_n + f_n ∈ M₀₀. Since q ∈ N and q ⟂ ∨_{n≥0} S_zⁿM₀, taking the inner product of q with S_zᵏ m̃_k gives: This implies that m_k = 0 for all k ≥ 0. Thus each function q in N has the following form: Let Ñ = {∑_{n=0}^∞ zⁿu_n : u_n ∈ M₀₀}; by Corollary 3.6, we have N ⊂ Ñ. Let
Proof
Step 1. For any f in Ñ, let f = ∑_{n=0}^∞ zⁿu_n, where u_n ∈ M₀₀. On the one hand, by Theorem 3.2, we have f ∈ M̃. Thus f ∈ M ⇔ f ⟂ K₁. On the other hand, assume f ∈ M. Since for any k ≥ 0, m̃_k ∈ M₀, we have: This implies that: Thus, we have the following equivalence: Step 2. It is obvious that f ⟂ K₁ is equivalent to the stated condition. And it is easy to verify that: Since f_n⁰ ⟂ K₁, we get: Note that f_n ⟂ zK₁ and (z − w)z^{i−n}w^j ∈ zK₁ (n < i); we have: Hence, f ∈ N is equivalent to: Step 3. When j = 0, since 1 ∈ N₁, 1 ⟂ f_n (n ≥ 0); thus we get: On the other hand, it is easy to verify that: Then, in this case, (3.13) is equivalent to C_i⁰ = 0 (i ≥ 0). Step 4. When j ≥ 1, it is easy to verify (3.14) and the companion relation. Then, in this case, (3.13) is equivalent to (ii).
is a nonzero solution for the above equations.
Conversely, assume {C_iᵏ}_{k,i≥0} is a nonzero solution for the above equations. Let: As we know, the Beurling type theorem holds for S_z on N₁; thus we have the following corollary through Theorem 3.8, which reflects a common property of invariant subspaces of S_z. Corollary 3.9 Let M ⊂ N₁ and M ∈ Lat(S_z); then the following equations for C_iᵏ (k ≥ 0, i ≥ 0) have only the zero solution: where for any i ≥ 0, and ẽ₀, ẽ₁, … is the orthonormal basis of N₁ mentioned above.
The proof of Theorem 1.8 Let {E_k}_{k≥0} be the orthonormal basis of L_a² defined below. For any fixed a ∈ 𝔻, the reproducing kernel of L_a² is as stated. Define the operator U: L_a² → N₁ such that (4.10) and (4.14) hold. One calculates that, since the stated identity holds, it is equivalent to the following. Then, hence, and, by the above calculations, we can choose A_a to be any positive constant such that: Note that: where C_{a,k} is a positive constant depending only on a and k. Then, Proof For any fixed a ∈ 𝔻, let M = M_a in Theorem 3.8; we easily calculate that: and Then, (ii) in Theorem 3.8 becomes (4.27). If C_i⁰ = 0 (i ≥ 0), letting k = 1 in (4.27), we get (4.28). Hence, (4.27) is equivalent to the following equations (4.30): By induction, we easily get that Eq. (4.30) is equivalent to the equations (4.31), where x_{nm} = 1/aⁿ when 0 ≤ m ≤ n − 1, n ≥ 1, and x_{nm} = 0 when n − 1 < m.
where C is a positive constant, so we have: Then (4.32) is equivalent to the following form, which is also equivalent to the following matrix form, where the matrix is the matrix of the Toeplitz operator with symbol φ(e^{iθ}) ∈ L^∞(𝕋) with respect to the basis {e^{inθ}}_{n=0}^∞ of H², and, letting φ_k be the kth Fourier coefficient of φ(e^{iθ}), we get the following. Note that (4.36) has a nonzero solution {C_i¹}_{i≥0} satisfying the stated condition, which is equivalent to a being an eigenvalue of the Toeplitz operator T_φ (f = ∑_{i=0}^∞ C_i¹zⁱ is an eigenvector). This completes the proof. ◻ By Theorem 1.8, in the following we need to prove that a is not an eigenvalue of the Toeplitz operator T_φ for any 0 ≠ a ∈ 𝔻, where the symbol is as stated. Note that σ(T*) = σ(T_φ̄) = φ̄(𝕋). If a ∈ σ_p(T_φ), since a is real, then a ∈ σ(T*). Note that, where b = |a|² and x = 1/(1 − āz), the following holds: Since 0 < b < 1 and |āz| < 1 for all z ∈ 𝕋, (4.42) fails for any z ∈ 𝕋, i.e., a − φ̄(z) ≠ 0 on 𝕋. But a − φ̄(z) is continuous on 𝕋; hence this implies a ∉ φ̄(𝕋), i.e., a ∉ σ(T*), and this completes the proof.
that is: Repeating the same process as in the proof of Proposition 4.3, we get the following proposition:
| tokens: 4,622.8 | created: 2021-10-30 | fields: ["Mathematics"] |
COVID-19 as an Immune Complex Hypersensitivity in Antigen Excess Conditions: Theoretical Pathogenetic Process and Suggestions for Potential Therapeutic Interventions
Because of particular properties of SARS-CoV-2, such as a high infection speed, an antigenic nature evolutionarily unknown to the human immune system, and/or viral interference with the immune response mechanisms, this virus would elicit in subjects a delayed, anomalous (slow and/or low) immune response that is ineffective and, finally, self-damaging. The hypothetical pathogenetic process for covid-19 could occur in three phases: a) a viral phase, asymptomatic or weakly symptomatic, with a non-specific innate immune response; b) an immunological phase, intermediately symptomatic, with an anomalous specific immune response (delayed, slow and/or low synthesis of IgM and IgG) under antigen excess conditions, immune complex formation, and complement activation with tissue damage; c) a hemo-vascular phase, severely symptomatic, in which complement-mediated tissue damage would induce vascular inflammation and systemic alteration of coagulation homeostasis. This hypothesis is well supported by the immuno-histochemical and microscopic demonstration, in the lungs of severe patients, of co-localized viral spike proteins, terminal components of the activated complement system (C5b-9 membrane attack complex), and microvascular deposits of small fibrin thrombi. This picture could be aggravated by the involvement of neutrophils and macrophages, which release additional lytic and inflammatory factors. Thus, covid-19 would arise as a simple viral infection, develop as a diffuse immune complex hypersensitivity, and explode as a systemic hemo-vascular pathology. If this hypothesized process is real, suitable therapeutic interventions might be carried out, able to interfere with or block the critical factors in the various phases.
INTRODUCTION
Recent data on the therapeutic effects of "tocilizumab" (an anti-IL-6 mAb used in rheumatoid arthritis) in covid-19 cases suggest that the SARS-CoV-2 pathology is due to endogenous biological mechanisms rather than to specific viral activities. Thus, it could be hypothesized that the pathogenetic mechanisms in arthritis and covid-19 have common or correlated points; likely, IL-6 might be such a point. From data well established in the scientific literature, it is known that IL-6 is an important factor of the immune response, released from the beginning by macrophages, whose activity then involves neutrophils, B lymphocytes, and antibody (Ab) synthesis, as well as complement (C) activation via the classical pathway, with release of inflammatory factors (anaphylatoxins) and consequent potential lesions of the tissues where the target antigens (Ag) are located.
HYPOTHESIS OF COVID-19 AS IMMUNE COMPLEX HYPERSENSITIVITY IN ANTIGEN EXCESS CONDITIONS
It is known that in diseases such as rheumatoid arthritis, the pathological effect is the result of immune reactions occurring in vivo in an excess of Abs, which determines the formation of soluble Ag-Ab immune complexes (IC). These, not being removable by phagocytes, remain in circulation, settle, and persist in defined tissue sites (such as joints), where they induce repeated C activation via the classical pathway, with persistent inflammation and increasing tissue damage. I speculate that something similar might occur in covid-19, through a synergic convergence of various factors of viral, biological, and environmental nature. The main viral factors would be a high infecting load, very fast growth and propagation, an antigenic nature evolutionarily unknown to the human immune system (IS), and a possible viral interference with the immune response mechanisms. These factors would lead the IS to mount an anomalous immune response, delayed, slow, and/or weak, favoring increasing viral growth and thus determining Ag excess conditions. This would cause IC formation and, thus, immune reactions involving repeated C activation via the classical pathway, with persistent inflammation and tissue lesions, in a typical picture of IC hypersensitivity. This pathological picture could be strongly aggravated, made persistent and potentially lethal, by environmental pollutants. Stable virus adsorption on pollutant particles might lead to an increasing local virus concentration, able to induce a sort of "persistent viremia", which, over time, is known to determine a typical picture of IC hypersensitivity, where the classically activated complement cascade would be the crucial disease factor. Thus, it might be hypothesized that, in the absence of a very high viral load, the covid-19 course would normally evolve into a more or less severe pathological picture, resolvable by natural and/or pharmacological factors. On the contrary, it can explode when other biological or environmental factors concentrate the virus, favoring Ag excess conditions and thus massively triggering the aforesaid hypersensitivity mechanisms.
THEORETICAL PATHOGENETIC PROCESS FOR COVID-19
Several recent emerging reports, strongly supporting the hypothesis that complement activation is related to the pathogenesis of covid-19, suggest the following global scenario for this disease, which would develop in three distinct phases: viral, immunological, and hemo-vascular.
a. Viral phase: early on, the virus infection would localize in the upper respiratory tract, causing initial weak symptoms and inducing a non-specific innate immune response through germline pattern recognition receptors (PRRs) detecting pathogen-associated molecular patterns (PAMPs). This response immediately involves phagocytic cells (macrophages, monocytes, dendritic cells, neutrophils), release of cytokines (IL-1, IL-6, TNF-α), activation of the complement (C) system by the alternative and lectin pathways (1, 2), production of the C3d factor, and consequent attraction of B lymphocytes and neutrophils in loco. The result of this phase might be virus elimination, if the infecting load is low and the subject's innate immune response is effective, or virus persistence and proliferation, if the viral load is high and the subject's innate immune response is ineffective. b. Immunological phase: virus replication and propagation to the middle/lower respiratory ducts via ACE-2 receptors (ACE-2R) (3) would cause intermediate symptoms and induce the specific immune response, in particular a humoral response with involvement of B lymphocytes. Because of the peculiar immunological nature of the viral spike (S) antigen, an evolutionarily unknown target for the human immune system (IS), and the high infection-growth speed of the virus, the immune response would be anomalous. Namely, Ab production could be absent, delayed, or slow for IgM synthesis and anticipated or low for IgG synthesis (4,5).
In such a way, the virus would replicate fast, determining an Ag excess condition in vivo and the consequent formation of soluble Ag-Ab-Ag ICs. These could remain in circulation and settle at various tissue sites, particularly in the capillary endothelium (2, 6) of several organs, including lungs, heart, kidneys, brain, and skin.
Here, ICs would bind the C1q factor, thus triggering the classical C system pathway. In this process, release of the C4a, C3a, and C5a anaphylatoxins would determine mast cell degranulation with histamine release, generating a "cytokine storm" and thus promoting a systemic pro-inflammatory immune response. The consequent attraction and activity of neutrophils and monocytes/macrophages, together with the activity of the C5b-9 membrane attack complex (MAC), would cause cell lysis and severe tissue damage (1,2,7,8). Since pulmonary alveoli are rich in ACE-2Rs, the targets of the viral S glycoproteins, the lungs would be the first damaged organs, but a systemic involvement of different organs would then occur at the level of their capillaries, also very rich in ACE-2Rs. The final result of this phase could be a symptom regression, if a timely, quantitatively and qualitatively suitable Ab production leads in some way to low/absent IC formation, or otherwise a symptom progression, if high IC formation occurs. In addition to soluble circulating ICs, other ICs directly fixed on cell surfaces could form at the sites of ACE-2Rs. In fact, since ACE-2Rs and anti-S Abs are competitors for the RBD (receptor binding domain) present at the top of the S1 subunit of the spikes, and since some anti-S Abs are able to bridge two separate RBDs (9), ICs of the ACE-2R/RBD/Ab/RBD/ACE-2R type could form directly on cell surfaces, immediately activating C1q and the classical C pathway up to the C5b-9 MAC, with consequent tissue damage. c. Hemo-vascular phase: this phase would be a direct, immediate consequence of the C-mediated tissue damage that occurred in the immunological phase. It is well known that, in normal conditions, blood coagulation homeostasis is the result of a multifactorial equilibrium between the fibrin-synthesis and fibrin-lysis processes, thanks to a physiological release into circulation of tissue-platelet factors (TF, PF) and tissue plasminogen activator (TPA), respectively regulating the two processes, together with several other factors. Now, when a tissue damage occurs, TFs and PFs are immediately released into circulation and the aforesaid balance naturally shifts toward the fibrin-synthesis process, to control an eventual hemorrhagic status. Thus, C-mediated damage of the capillary endothelium would directly lead to a microvascular pro-coagulant thrombogenic status (1,8), which, from the emerging experimental reports, seems to be the main systemic pathological feature of the covid-19 disease. This scenario could be aggravated by pollutants, whose presence in loco and entrance into the circulation could induce systemic neutrophil and macrophage activity, with a consequent release of their inflammatory and lytic factors, further worsening the clinical picture in severe covid-19 patients (1, 2, 10-12).
DISCUSSION
a. The hypothesized IC formation in Ag excess conditions is an event that, in nature, does not occur for antigenic structures evolutionarily known to the IS. Nevertheless, in some cases of "persistent viremia" (and covid-19 might become such in the organism, owing to the aforesaid factors), ICs can form continuously, causing chronic inflammatory lesions in blood capillaries. This is what occurs in certain artificial immune reactions, such as "serum disease", where a given substance, inoculated in large amounts, acts early as an immunogen and then as a reactive antigen, precisely in Ag excess conditions. It seems interesting and important that, in "serum disease", perivascular inflammatory lesions occur in the renal glomeruli and coronaries after 7 to 14 days, with concomitant phenomena of increased permeability (extravasation), thrombus formation, C activation, and massive PMN infiltration: a histopathological picture very similar to that of the covid-19 syndrome (1, 10). b. With regard to the aspects of "immunization" arising in the covid-19 process, recent laboratory data (4) show Ab levels after the symptom onset with the IgG level simultaneous with, or slightly earlier and higher than, the IgM level. This appears anomalous for a canonical immune response, where usually IgM synthesis is earlier and higher than that of IgG (4,5). 3) In 26 patients initially seronegative, during the observation period, three seroconversion types occurred: synchronous IgM/IgG (9 patients), earlier IgM (7 patients), and earlier IgG (10 patients); in all cases, the IgG level was always higher than the IgM level, as previously observed in immune responses to SARS-coronavirus (5), where it is unclear whether some patients have an anomalous primary immune response (delayed or absent IgM synthesis) or an anticipated secondary immune response (accelerated IgG synthesis) (4, 5). c. IgM and IgG titers in a severe patient group were higher (mainly for IgG) than those in a non-severe patient group, 2 to 3 weeks after the symptom onset. Previous observations suggested that an IgG upsurge for SARS-CoV correlates with a clinical worsening of pneumonia (5). In a pre-print study, SARS-CoV-2 neutralizing Ab responses were more robust in 35 patients with severe disease, about 1 month after infection (13). Moreover, in five patients with severe covid-19, three potent neutralizing antibodies against multiple epitopes on the SARS-CoV-2 spike were detected: one anti-RBD, another anti-NTD (N-terminal domain), and a third able to bind two separate RBDs (9). These data, all obtained in severe disease patients, could suggest that, before the Abs become protective against the virus, they could constitute an initial crucial factor of disease. More particularly, the aforesaid third Ab type might bridge separate RBDs on spikes already bound to ACE-2Rs on cell surfaces, generating ACE-2R/RBD/Ab/RBD/ACE-2R complexes, equivalent to ICs already fixed on cell surfaces and thus able to directly activate C1q and the classical C pathway up to the C5b-9 MAC. These events might advance until the Ab titer becomes high enough to prevent spikes from binding ACE-2Rs.
In this regard, the question could arise about the effect of anti-hypertensive drugs involving ACE-2 (ACE-2 inhibitors) or ACE-2R (sartans): theoretically, by blocking the AT1/AT2 conversion by ACE-2, the ACE-2 inhibitors would increase the amount of ACE-2Rs free for spike, thus favoring the disease; on the contrary, sartans, competing with spike for ACE-2Rs, would decrease the number of free receptors for spike, thus hindering the disease. This might be supported by data from a report of the Italian ISS, where, among 1102 covid-19 individuals with pre-existing pathologies, 27% of deceased patients used ACE inhibitors and 16% used sartans as therapy (14). d. Many of the aforesaid highly neutralizing Abs, unexpectedly, show V(D)J sequences close to germline sequences, without extensive somatic hyper-mutations (9). Since it is known that somatic hyper-mutations generate many different B-cell clones, whose Abs bind the Ags with highly variable affinity, could these hyper-mutated Abs have Fabs unable to react adequately with the viral Ags, so that their Fc piece assumes a configuration not suitable to bind C1q? Otherwise, could they be unable to form circulating or fixed ICs? If so, could these be hypothetical reasons why only some of the patients, precisely those producing potent germline neutralizing anti-RBD Abs, become severe patients, who might then heal or die depending on the timeliness, quality, and quantity of Ab synthesis? e. In covid-19, the immune response appears to be often anomalous. Could this be due to genetic or epigenetic viral interference with the IgM/IgG isotype switch and lymphocyte affinity maturation? On this matter, it is known that some alterations of lymphocyte maturation (such as hyper-IgM syndrome, a-gamma-globulinemia, and some SCID forms) are linked to genes on the X chromosome, where ACE-2, immunity, inflammation, and coagulation genes are also located (15). Thus, the doubt could arise that the anomalous immune response in covid-19 might also be due to a viral interference with such human genes. If so, an explanation might be given for the fact that, in covid-19, severe patients are about 2/3 males and 1/3 females (14): presumably, the presence of 2 X chromosomes in females might protect the immune response functions better than 1 X chromosome in males, thanks to the mosaic activity of both maternal and paternal X chromosomes (15). f. Therapeutic use of serum Abs from healed subjects (plasma therapy) might have positive effects: recent unofficial data would indicate that therapy with hyper-immune sera in initial covid-19 appears to prevent disease progression, while for patients in advanced severe disease a significant improvement has been noted. This could indicate that Abs might have a favorable effect, but presumably only in high quantities (13), as occurs in hyper-immune sera therapy. In effect, besides directly neutralizing viral activities such as replication, which reduces the viral titer, high Ab amounts might also act by bringing the Ag/Ab ratio closer to equivalence, which reduces or avoids formation of soluble and fixed ICs. On the contrary, insufficient quantities of Abs (as often occurs early in covid-19 patients) (4) might result in an unfavorable increase of ICs and C activity. Thus, in covid-19, the use of mFabs or artificial micro-Abs could be more suitable than the use of whole Abs.
In fact, it has been shown that they can neutralize viral Ags (16) without C activation and without the undesired effects of heterologous sera in plasma therapy. g. Thus, I think that, in covid-19, C activation by ICs might be the crucial point of the disease. This hypothesis might be supported by recent observations that only a low proportion of allergic asthmatics is present among admitted covid-19 cases (3 out of 275 individuals) (17). If so, this would be in agreement with the proposed hypothesis, since at the secondary immune response in allergic subjects IgG synthesis does not occur, whereas there is synthesis of IgE, which are unable to bind C on their Fc piece. It is known that human IgG1-2-3 are highly reactive with C, while IgG4 is not: thus, it would be of interest to know whether correlations could exist, in the various patients, among their IgG class, C activation, and a different covid-19 evolution. Moreover, it could be interesting to see whether the anti-spike Abs endowed with hyper-mutated V(D)J sequences have a molecular structure unable to bind Ags in the correct way that confers on the Fc piece the configuration suitable to activate C1. If so, could an allergic condition (IgE synthesis) and hyper-mutated Abs be protective, while Abs with germline sequences favor the disease? h. The idea that C activation would be the crucial point in covid-19 is strongly supported by several reports on the role of C in the covid-19 disease (1, 2, 7, 10). In particular, it has been shown that: 1) C activation contributes to the covid-19 pathogenesis by regulating a systemic pro-inflammatory response: in a mouse model, mice deficient in C3 (C3−/−), the central component of the C system, exhibited significantly fewer respiratory disorders, fewer neutrophils and inflammatory monocytes, and lower cytokine levels than C57BL/6J control mice, despite similar viral titers detected in the lungs (2). 2) Although the alternative and lectin C pathways are also involved, the classical C pathway activation would appear to be mainly responsible for the covid-19 pathogenesis, as the release of the C4a, C3a, and C5a anaphylatoxins would indicate (1). Moreover, deposition of C components of the classical pathway has been observed in the lungs of SARS-CoV infected mice (2).
3) The C5b-9 MAC and the C4d factor have been detected as deposits in skin lesions and in the alveolar microvasculature, as well as in normally appearing skin of patients (1). Again, co-localization of SARS-CoV-2 spike glycoproteins with C4d and C5b-9 MAC in the inter-alveolar septal capillaries and cutaneous microvasculature of some examined covid-19 patients has been shown (1). 4) Such a co-localization of viral spike Ags with C components necessarily implies that Ag-Ab ICs are settled or directly formed in those structures, strongly supporting the hypothesis that covid-19 would be an immune complex hypersensitivity in antigen excess conditions, and that the C cascade induced by the ICs is responsible for severe tissue lesions (1). i. In concomitance with C activation, the following pathological features have been detected in the different organs of severe patients: a systemic pro-coagulant state with generalized microvascular thrombosis, fibrin deposition in the inter-alveolar septa and alveolar spaces, fibrin and thrombi in capillaries, red cell extravasation, neutrophil and monocyte collections in the septa and alveolar spaces, and extensive endothelial and subendothelial deposits of C5b-9 complexes in thrombosed arteries (1). Because of these pathological features in severe covid-19 patients, therapeutic interventions have been proposed utilizing anti-complement components (C3a, C5a) and/or anti-coagulant drugs (1,8,10). j. Recently, a pre-print paper (18) has reported number reduction and functional exhaustion of T-cells in covid-19 patients, directly correlated with the age and clinical severity of the subjects and inversely correlated with the serum concentrations of the IL-6, IL-10, and TNF-α cytokines (18). Significantly, these T-cells (CD8+ and CD4+) show an increasing expression of the PD-1 and TIM3 markers during the worsening of the disease. Since TIM3 and PD-1 are related to cell exhaustion and apoptosis, one could speculate that a T-cell response might be useful in the initial viral phase of the disease, destroying infected host cells and blocking virus replication. But then, when a B-cell humoral response becomes dominant, T-cells would be unnecessary or ineffective and would thus become exhausted and/or apoptotic. k. Kawasaki disease (KD) is a vasculitis of children associated with concurrent respiratory viruses (19,20). Recent laboratory data suggest a causal relationship between covid-19 and a Kawasaki-like hyper-inflammatory syndrome (21), presumably as a consequence of a post-viral immunological reaction, which could result in an acute myocarditis associated with a high IgG serum level (22). This picture seems similar to that of a "persistent viremia", where a condition of "pseudo-tolerance" of the virus by the host induces continuous hyper-production of IgG, thus causing an Ab excess that leads to formation of ICs responsible for the vasculitis and organ damage. This mechanism could be similar to that of some post-streptococcal diseases, where organ damage occurs when the pathogenic agent is already absent or present only in small foci. In conclusion, some important questions could arise: a) Could the Kawasaki-like disease in children be a sort of IC hypersensitivity due to a persistent viremia? b) Could covid-19 in adults become a persistent post-disease infection inducing a continuous Ab production and IC formation responsible for tardive post-covid-19
CONCLUSIONS AND SUGGESTIONS FOR POTENTIAL THERAPEUTIC INTERVENTIONS
In conclusion, the covid-19 syndrome would arise as a persistent respiratory viral infection, develop as a diffuse IC hypersensitivity, and finally explode as a systemic microvascular injury. Therefore, if the pathogenetic process proposed above for covid-19 is real, the following therapeutic interventions, aiming to block or control the critical factors in the successive phases, might be tested, as schematically suggested in Table 1. Note that the therapeutic interventions for the second phase might be suitably associated with those of the first phase, just as the interventions for the third phase might be associated with those of the preceding phases, with modes, doses, and timing to be defined.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Table 1. Therapeutic targets and interventions for the three phases of Covid-19.

1) Viral phase (about days 1-5). Clinical picture: inflammation of the first air ducts and eyes (rhinitis, pharyngitis, conjunctivitis); inflammation of the middle air tracts (tracheitis, bronchitis, fever, cough). Therapeutic targets: blocking inflammation factors and histamine (which amplify inflammatory effects); neutralizing the virus to prevent its replication and a consequent Ag excess condition. Therapeutic interventions: anti-inflammation drugs; anti-histamine drugs; anti-Spike (RBD, NTD) mFabs.

2) Immunological phase (about days 6-10). Clinical picture: inflammation of the lower air tracts and pulmonary alveoli, dyspnea, tissue and vascular C-mediated lesions. Therapeutic targets: blocking IC formation by neutralization of the viral Ags without involving the C system; blocking the complement cascade activation and its systemic effects. Therapeutic interventions: anti-spike (RBD) mFabs or anti-spike (RBD) micro-Abs; C1 INH (inhibitor of C1), already in use as a drug for hereditary angioedema and other extravasation diseases.

3) Hemo-vascular phase (about days 11-15 and beyond). Clinical picture: tissue and vascular lesions, thrombus formation, extravasation (interstitial pneumonia; renal, heart, brain, and skin damage). Therapeutic targets: preventing platelet aggregation and activation; blocking the fibrin synthesis process; dissolving the systemic micro-vascular thrombi. Therapeutic interventions: acetylsalicylic acid; anti-histamine drugs; anti-coagulant drugs; fibrinolytic agents.

| 5,111.4 | 2020-10-21T00:00:00.000 | ["Biology", "Medicine"] |
A Review on Speaker Recognition
— Automatic speaker recognition systems play a vital role in verifying identity in many e-commerce applications, as well as in general business interactions, forensics, and law enforcement. Today many employees access their company's information system by logging in from home, and Internet services and telephone banking are widely used by private and corporate sectors. Protecting one's resources or information with a simple password is therefore neither reliable nor secure in today's technological world. There are two major applications of speaker recognition technologies. If the speaker claims to be of an assured identity and the voice is used to verify this claim, the task is called verification or authentication. Identification, on the other hand, is the task of determining an unknown speaker's identity. This paper presents the fundamentals of automatic speaker recognition, concerning feature extraction and speaker modeling.
Fig. 1. Automatic Speaker Recognition System
Figure 1 shows the components of an automatic speaker recognition system. In speaker recognition the initial process is feature extraction. In the feature extraction module the raw signals are transformed into feature vectors in which speaker-specific properties are emphasized and statistical redundancies are suppressed. In the enrollment mode, a speaker model is trained using feature vectors of the target speaker. In recognition mode, the feature vectors extracted from the unknown person's utterance are compared with the speaker models in the system database and a similarity score is generated. The final decision is made by the decision module based on the similarity score. Virtually all state-of-the-art speaker recognition systems use a set of background speakers or cohort speakers. This is done to enhance the robustness and computational efficiency of the recognizer. In the enrollment phase, background speakers are used as the negative examples in the training of a discriminative model [4], or to train a universal background model from which the target speaker models are adapted. In the recognition phase, background speakers are used in the normalization of the speaker match score [5][8].
A. Classification of Speaker Recognition
Speaker recognition can be classified into a number of categories. Figure 2 below provides the various classifications of speaker recognition.
Open Set vs. Closed Set
Speaker recognition can be categorized into open-set and closed-set speaker recognition. This classification is based on the set of trained speakers available in a system [5].
Open Set: An open-set system can have any number of trained speakers; the number of enrolled speakers can be any value greater than one.
Closed Set: A closed-set system has only a fixed number of users registered to the system.
Identification vs. Verification
Automatic speaker recognition, comprising identification and verification, is often considered to be among the most natural and economical methods for preventing unauthorized access to physical locations or computer systems [4].
Speaker identification: Speaker identification is the task of identifying the speaker of a given utterance from a set of known speakers. The unknown speaker is identified as the speaker whose model best matches the input utterance [6]. Speaker verification: Speaker verification is the method of accepting or rejecting the identity claim of a speaker. Speaker verification is a more direct and converged effort leading to either acceptance or rejection of the claimed identity of a speaker. To be precise, this analysis concludes whether a speaker is the one who he/she claims to be [4]. It can be considered a true-or-false binary decision problem. It is basically referred to as an open-set problem, because the task requires distinguishing a claimed speaker's voice known to the system from those of potential impostors, which may be unknown to the system.
Text-Dependent vs. Text-Independent
Text-Dependent: In text-dependent recognition the test utterance is identical to the text used in the training phase. The test speaker has prior knowledge of the system.
Text-Independent:
In text-independent recognition the test speaker does not have any knowledge about the contents of the training phase and can speak anything [8].
III. FEATURE EXTRACTION
In speaker recognition, feature extraction is the process of retaining the useful, relevant information of the speech signal while rejecting redundant and irrelevant information. It is a process of analysis of the speech signal. Various techniques for extracting features for speaker recognition are Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding (LPC), Linear Predictive Cepstral Coefficients (LPCC), and Perceptual Linear Predictive Cepstral Coefficients (PLPCC) [9].

A. Mel Frequency Cepstral Coefficients (MFCC)

MFCC is one of the most popular feature extraction techniques used in speaker identification and verification. It is based on the human peripheral auditory system. Human perception of the frequency content of speech signals does not follow a linear scale: perception is approximately linear up to 1000 Hz and logarithmic above 1000 Hz, which is captured by the mel scale. Hence for each tone with an actual frequency, a subjective pitch is measured on this different scale, called the mel scale. The approximate mel value for a given frequency f in Hz is

mel(f) = 2595 * log10(1 + f/700)

The log mel spectrum is converted back to the time domain; the end result is the set of mel-frequency cepstral coefficients (MFCC). The human ear is responsive to both the static and the dynamic characteristics of a signal, whereas MFCCs mainly capture the static characteristics [8][9].
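To make the mapping concrete, the sketch below implements the Hz-to-mel conversion quoted above and a minimal single-frame MFCC computation (power spectrum, triangular mel filterbank, log, DCT). The frame length, sampling rate, filter count, and number of coefficients are illustrative choices, not values prescribed by this paper.

```python
import numpy as np

def hz_to_mel(f_hz):
    # Mel scale as given in the text: mel(f) = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + np.asarray(f_hz, dtype=float) / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

def mfcc_frame(frame, fs=8000, n_filters=26, n_coeffs=13):
    """MFCCs for a single windowed frame (parameter values are illustrative)."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2                    # power spectrum
    # Triangular filters spaced uniformly on the mel scale.
    edges_hz = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges_hz / fs).astype(int)
    fbank = np.zeros((n_filters, len(power)))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, mid):
            fbank[i, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fbank[i, k] = (hi - k) / max(hi - mid, 1)
    log_mel = np.log(fbank @ power + 1e-10)                    # log mel spectrum
    # "Convert back to time": DCT-II of the log mel energies, keep low-order terms.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return basis @ log_mel
```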
B. Linear Predictive Coding (LPC)
In Linear Predictive Coding the analysis of the speech signal is achieved by estimating the formants. LPC removes the effects of the formants from the speech signal and estimates the intensity and frequency of the remaining buzz. This removal of the formants is known as inverse filtering, and the remaining signal is called the residue. In the LPC technique, each sample of the speech signal is expressed as a linear combination of the previous samples [1][11]. This is called a linear predictor, hence the name linear predictive coding.
C. Linear Predictive Cepstral Coefficients (LPCC)
LPCC is a popular technique widely used to extract features from the speech signal. LPCC parameters can effectively describe the energy and frequency spectrum of sound frames. Because the cepstrum is obtained from the logarithm of the original spectrum, rapid variations in the frequency spectrum are suppressed and the representation becomes more compact, making it well suited to describing short-time spectral character and thus a good basis for acoustic spectrum modelling and pattern recognition. Among the most common short-term spectral measurements currently used are LPC-derived cepstral coefficients (LPCC) and their regression coefficients. LPCC reflects the differences in the biological structure of the human vocal tract and is computed through an iteration from the LPC parameters to the LPC cepstrum [11].
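The iteration from LPC parameters to the LPC cepstrum mentioned above follows a standard recursion; a minimal sketch is given below. It assumes the prediction coefficients a_1..a_p have already been estimated for the frame (for example by the Levinson-Durbin algorithm applied to the autocorrelation sequence).

```python
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """Convert LPC coefficients a = [a_1, ..., a_p] into n_ceps cepstral
    coefficients via c_n = a_n + sum_{k=1}^{n-1} (k/n) * c_k * a_{n-k}."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        # a_{n-k} exists only for n - p <= k <= n - 1
        for k in range(max(1, n - p), n):
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c
```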
D. Perceptual Linear Predictive Cepstral Coefficients (PLPCC)
This technique is based on the magnitude spectrum of the speech analysis window. MFCC and LPC are cepstral techniques, while PLPCC is a temporal technique in the feature extraction phase. The steps followed to calculate the PLPCC coefficients are: first, compute the power spectrum of a windowed speech frame. Second, for a sampling frequency of 8 kHz, group the results into 23 critical bands using Bark scaling. Third, to simulate the power law of hearing, carry out loudness equalization and cube-root compression. Fourth, perform an inverse Fast Fourier Transform (IFFT). Fifth, perform LP analysis using the Levinson-Durbin algorithm. The final step is to convert the LP coefficients into cepstral coefficients. The relationship between frequency in Bark and frequency in Hz is specified as in [12]:

bark(f) = 6 * arcsinh(f_Hz / 600)

IV. SPEAKER MODELING

In speaker modelling two types of models are extensively used in recognition systems: stochastic models and template models. The stochastic model exploits probability theory by treating the speech production process as a parametric random process, assuming that the parameters of the underlying stochastic process can be estimated accurately, in a well-defined manner. Parametric methods make assumptions about how the feature vectors are generated, whereas non-parametric methods are free from such assumptions. The template model (a non-parametric method) attempts to build a model of the speech production process for a particular user in a non-parametric manner, using sequences of feature vectors extracted from multiple utterances of the same word by the same person. Template models dominated early work in speaker recognition because they make no prediction about how the feature vectors are created, which makes them intuitively reasonable. However, recent work on stochastic models has shown them to be more flexible, allowing better models for the speaker recognition process. The state-of-the-art feature matching techniques used in speaker recognition include Gaussian Mixture Modelling (GMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), and ANNs.
In a speaker recognition system, the process of representing each speaker in an efficient and unique way is known as vector quantization. It is the process of mapping vectors from a large vector space to a finite number of regions in that space. Each region is called a cluster and is represented by its centre, called a code word. A codebook is the collection of all code words. Hence for multiple users there are multiple codebooks, each representing the corresponding speaker. The data is thus significantly compressed yet accurately represented [13]. Without quantization of the feature vectors, the computational complexity of a system would be very large, since there would be a large number of feature vectors. In a speaker recognition system, the feature vectors obtained from the feature extraction stage described above lie in a vector space. When the vector quantization process is complete, only a few representative vectors remain, and these vectors are collectively known as the speaker's codebook. The codebook then serves as a template for the speaker and is used when testing a speaker against the system [14][11].
A. Vector Quantization
Vector quantization provides template-based models for text-independent and text-dependent speaker recognition. A speaker recognition system must be able to estimate probability distributions of the computed feature vectors. Storing every single vector generated in the training mode is impractical, since these distributions are defined over a high-dimensional space. It is often easier to start by quantizing each feature vector to one of a relatively small number of template vectors, a process called vector quantization. The technique of VQ extracts a small number of representative feature vectors and is an efficient means of characterizing speaker-specific features [12].
The training features are clustered to generate a codebook for each speaker [13]. In the recognition stage, the tested speaker is compared with the codebook of each speaker and the distance is measured to identify the speaker. The problem of speaker recognition belongs to a much broader topic in scientific engineering called pattern recognition. The main goal of pattern recognition is to assign objects of interest to one of a number of classes. The objects of interest are broadly called patterns; here they are sequences of acoustic vectors extracted from input speech using the techniques described above, and the classes are the individual speakers. Since the classification procedure is applied to extracted features, it is also designated feature matching [13]. Figure 5 shows a conceptual diagram of the recognition process. In the figure, only two speakers and two dimensions of the acoustic space are shown. The circles refer to the acoustic vectors from speaker 1 and the triangles denote speaker 2. In the training phase, a speaker-specific VQ codebook is created for each known speaker by clustering his/her training acoustic vectors. The centroids are shown in the figure by circles and triangles for speakers 1 and 2, respectively. The distance from a vector to the nearest code word of a codebook is called the VQ distortion. In the recognition phase, an input utterance of an unknown voice is vector-quantized using each trained codebook and the total VQ distortion is calculated. The speaker corresponding to the VQ codebook with the smallest total distortion is recognized [13][14].
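A minimal sketch of this codebook training and matching procedure is shown below, using k-means from SciPy to build each speaker's codebook; the codebook size is an illustrative choice, and the feature matrices are assumed to come from any of the extraction methods of Section III.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_codebook(features, k=64):
    """features: (n_frames, dim) array of feature vectors for one speaker."""
    codebook, _ = kmeans2(features.astype(float), k, minit='++')
    return codebook

def avg_distortion(features, codebook):
    """Mean VQ distortion: distance from each frame to its nearest code word."""
    _, dists = vq(features.astype(float), codebook)
    return float(np.mean(dists))

def identify(features, codebooks):
    """codebooks: dict speaker -> codebook; pick the smallest distortion."""
    return min(codebooks, key=lambda spk: avg_distortion(features, codebooks[spk]))
```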
B. Dynamic Time Warping
Dynamic time warping is a template-based method that uses the principle of dynamic programming (the principle of optimality). It is used to compute the overall distortion between two speech templates. Comparing a template with incoming speech might be achieved via a pair-wise comparison of the feature vectors in each. The problem with this approach is that if a constant window spacing is used, the lengths of the input and stored sequences are unlikely to be identical. Moreover, within a word there will be variation in the length of individual phonemes. The matching process needs to compensate for length differences and account for the non-linear nature of the length differences within the words. This is achieved by the dynamic time warping algorithm, which finds an optimal alignment between two sequences of feature vectors while allowing for stretched and compressed sections of the sequences [7]. The two sequences of observations are placed on the sides of a grid, with the unknown sequence along the bottom and the stored template along the left side. Both sequences start at the bottom left of the grid. Inside each cell a distance measure can be computed comparing the corresponding elements of the two sequences [14][15].
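The grid-based recursion described above can be written down directly; the sketch below is a plain O(n*m) dynamic-programming implementation with the usual three local moves and a Euclidean local distance, without the slope or band constraints often added in practice.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences of feature vectors a (n, d), b (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            D[i, j] = cost + min(D[i - 1, j],            # stretch one sequence
                                 D[i, j - 1],            # compress the other
                                 D[i - 1, j - 1])        # match
    return D[n, m]
```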
C. Gaussian Mixture Model (GMM)
GMM is a parametric method well suited to modelling speaker identities, because Gaussian components have the capability of representing general speaker-dependent spectral shapes. The Gaussian mixture model (GMM) is the most popular model for both text-independent and text-dependent speaker recognition. According to the training paradigm, models can also be categorized into generative and discriminative models. Generative models such as GMM and VQ estimate the feature distribution within each speaker; discriminative models, such as artificial neural networks (ANNs) and support vector machines (SVMs), instead model the boundaries between speakers. A Gaussian mixture model is a parametric probability density function expressed as a weighted sum of Gaussian component densities. GMMs are commonly used as a parametric model of the probability distribution of continuous measurements in a biometric system, such as vocal-tract related spectral features in a speaker recognition system. GMM parameters are estimated from training data by the iterative Expectation-Maximization (EM) algorithm or by Maximum A Posteriori (MAP) estimation from a well-trained prior model [8][15][16]. A Gaussian mixture model is a weighted sum of M component Gaussian densities:

p(x|λ) = Σ_{i=1}^{M} w_i g(x|μ_i, Σ_i)    (1)

where x denotes a D-dimensional continuous-valued data vector (i.e., measurements or features), w_i, i = 1, ..., M, are the mixture weights, and g(x|μ_i, Σ_i), i = 1, ..., M, are the component Gaussian densities. Every component density is a D-variate Gaussian function of the form

g(x|μ_i, Σ_i) = (1 / ((2π)^{D/2} |Σ_i|^{1/2})) exp(-(1/2)(x - μ_i)' Σ_i^{-1} (x - μ_i))    (2)

with mean vector μ_i and covariance matrix Σ_i. The mixture weights satisfy the constraint Σ_{i=1}^{M} w_i = 1. The complete Gaussian mixture model is parameterized by the mean vectors, covariance matrices, and mixture weights of all component densities. These parameters are collectively represented by the notation

λ = {w_i, μ_i, Σ_i}, i = 1, ..., M.    (3)

GMMs are often used in biometric systems, especially in speaker recognition systems, due to their capability of representing a large class of sample distributions. One of the powerful attributes of the GMM is its ability to form smooth approximations to arbitrarily shaped densities. The classical uni-modal Gaussian model represents feature distributions by a position (mean vector) and an elliptic shape (covariance matrix), while a vector quantizer (VQ) or nearest-neighbour model represents a distribution by a discrete set of characteristic templates [12][13][14].
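As a concrete illustration of equations (1)-(3), the sketch below fits one GMM per enrolled speaker with scikit-learn's EM implementation and identifies a test utterance by the highest average log-likelihood; the component count and diagonal covariances are common but illustrative choices, not values taken from this paper.

```python
from sklearn.mixture import GaussianMixture

def train_speaker_gmm(features, n_components=16):
    """Fit lambda = {w_i, mu_i, Sigma_i} to one speaker's feature vectors by EM."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='diag',
                          max_iter=200, random_state=0)
    return gmm.fit(features)

def identify(features, speaker_gmms):
    """speaker_gmms: dict speaker -> fitted GMM; score() returns the average
    per-frame log-likelihood log p(x | lambda)."""
    return max(speaker_gmms, key=lambda spk: speaker_gmms[spk].score(features))
```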
D. Artificial Neural Networks
The artificial intelligence approach is a fusion of the acoustic-phonetic approach and the pattern recognition approach; it exploits the ideas and concepts of both. Knowledge-based approaches use linguistic, phonetic, and spectrogram information. The main benefits of ANNs include their discriminative training power, a flexible architecture that permits simple use of contextual information, and weaker hypotheses about the statistical distributions. The main drawbacks are that their optimal structure has to be selected by trial-and-error procedures, the need to partition the available data into training and cross-validation sets, and the fact that the temporal structure of speech signals remains complicated to handle. ANNs can be used as binary classifiers for speaker verification systems, separating the speaker and non-speaker classes, and as multi-category classifiers for speaker identification purposes [6], [8], [10].

V. SUMMARY

This paper presents an overview of automatic speaker recognition. The recognition accuracy of speaker recognition systems under controlled conditions is high. In feature extraction, high-level features highlight behavioral characteristics of speakers, such as prosody (pitch, duration, and energy), phonetic information, pronunciation, emotion, stress, idiolect, word usage, conversational patterns, and other acoustic events. These differences in speaking habits result from the manner in which people have learned to use their speech mechanism, but at the same time the sociolinguistic background, education, and socio-economic environment play a vital role in these differences. The main problem reported in studies of such systems is that they require more data for both training and testing compared to low-level feature systems, and they are also easily forged. In practical scenarios many negative factors are encountered, including mismatched handsets for training and testing, limited training data, unbalanced text, background noise, and non-cooperative users. Well-established techniques of robust feature extraction, feature normalization, model-domain compensation, and score normalization are therefore required for speaker recognition. Technological advancement, as documented by the NIST evaluations in recent years, has addressed several technical challenges such as text/language dependency, channel effects, speech duration, and cross-talk. However, many research problems remain to be addressed, and human-related error sources such as emotional variability, misspoken phrases, poorly recorded or noisy samples, an insufficient number of comparable words, extreme emotional states (e.g., stress or duress), changes in the physical state of the speaker, channel or recording mismatch, differing pronunciation speed, and the speaker's health and aging should be carefully analyzed before implementing a speaker recognition system.
VI. CONCLUSION
In today's technological world, security poses a great challenge for confidential information. Speaker recognition is a multi-disciplinary branch of biometrics which can be used for speaker identification and verification to protect confidential information. Therefore, in order to prevent unauthorized access, there is a need to develop a voice-based recognition system that provides a solution for financial transactions and personal data privacy and would reduce high-tech computer theft. In this review paper the various feature extraction and modelling techniques used in speaker recognition are discussed, which can be extended in the future to developing real-time speaker identification and verification applications for securing confidential data.

| 4,418.4 | 2017-06-30T00:00:00.000 | ["Computer Science"] |
High expression of oxidative phosphorylation genes predicts improved survival in squamous cell carcinomas of the head and neck and lung
Mitochondrial activity is a critical component of tumor metabolism, with profound implications for tumorigenesis and treatment response. We analyzed clinical, genomic and expression data from patients with oral cavity squamous cell carcinoma (OCSCC) in order to map metabologenomic events which may correlate with clinical outcomes and identified nuclear genes involved in oxidative phosphorylation and glycolysis (OXPHOG) as a critical predictor of patient survival. This correlation was validated in a secondary unrelated set of lung squamous cell carcinoma (LUSC) and was shown to be driven largely by over-expression of nuclear encoded components of the mitochondrial electron transport chain (ETC) coordinated with an increase in tumor mitochondrial DNA copy number and a strong threshold effect on patient survival. OCSCC and LUSC patients with a favorable OXPHOG signature demonstrated a dramatic (>2-fold) improvement in survival compared to their counterparts. Differential OXPHOG expression correlated with varying tumor immune infiltrates, suggesting that the interaction between tumor metabolic activity and tumor associated immunocytes may be a critical driver of improved clinical outcomes in this patient subset. These data provide strong support for studies aimed at mechanistically characterizing the interaction between tumor mitochondrial activity and the tumor immune microenvironment.
The role of mitochondria in tumorigenesis and cancer treatment response remains enigmatic. The initial description of the Warburg effect led to a parallel conclusion that mitochondria played a minor role in the maintenance and survival of tumor cells and might even be absent [1][2][3][4]. Over the intervening decades, we have learned that mitochondria are both present and functional in tumor cells and may in fact play a critical role in cancer development and response to treatment [1][2][3][4][5][6][7][8][9][10]. However, our existing understanding of the relationship between mitochondrial activity and tumorigenesis remains mixed, at best.
On one hand, mitochondrial activity appears to be critical to tumor cell survival. This conclusion is supported by the anti-tumor activity of mitochondrial inhibitors and the difficulty associated with generating tumor cells completely devoid of mitochondria 1,[9][10][11][12]. On the other hand, high mitochondrial activity and low glycolytic activity are associated with indolent tumor behavior and a relatively favorable response to treatment. Conversely, highly glycolytic tumors have been found to behave more aggressively across multiple tumor types including OCSCC 8,9,13,14. These findings are complicated by two experimental limitations. First, most functional metabolic studies targeting mitochondria can only be performed in the context of preclinical models 8,9,[12][13][14]. Second, data regarding mitochondrial activity in patient tumors are therefore largely inferred from evaluation of mitochondrial number, appearance, and mitochondrial (mt)DNA copy number and integrity. Unfortunately, mitochondrial heteroplasmy makes it nearly impossible to ascertain the functional implications of mtDNA mutations and variations in copy number 15,16.
We previously showed that high glycolytic activity, and impaired mitochondrial respiration are linked to OCSCC development through loss of activity of the tumor suppressor p53 8,9,13,14,17 . At the same time, we demonstrated that inhibition of residual mitochondrial activity resulted in substantial and significant potentiation of chemotherapy and radiation effectiveness in preclinical OCSCC models 8,9,13,14,17 . This paradoxical finding warrants further study for two reasons. First, it is important to understand how mitochondrial activity and tumorigenesis are linked in OCSCC, particularly how the former contributes to the latter. Second, in order to develop effective metabolic strategies, we must better understand the potential impact of mitochondrial targeting in OCSCC. In the current study, we sought to evaluate the relationship between inferred mitochondrial activity and tumorigenesis and treatment response in patients with OCSCC, in order to further contextualize our previous preclinical data. Given the inherent limitations associated with inferring mitochondrial function from analysis of mtDNA we chose to focus on nuclear encoded genes critical to mitochondrial functionality using the OCSCC dataset available in The Cancer Genome Atlas (TCGA). When nuclear encoded mitochondria genes were considered together with genes regulating glycolysis, a metabolic RNA profile was identified that was associated with prognosis in OCSCC and in lung squamous cell carcinoma (LUSC).
Materials/Subjects and Methods
Selection of metabolic genes. Genes involved in glycolysis, mitochondrial oxidative phosphorylation, and the pentose phosphate pathway (PPP) were initially manually curated from the literature (Supplementary Tables 1 and 2). Briefly, we focused on 98 genes encoding metabolic enzymes and/or direct regulators of metabolic enzymes. In subsequent analysis, Gene Ontology (GO) search terms glycolysis, glycolysis positive regulation, and oxidative phosphorylation were used to generate a network-based list of 118 genes involved in glycolysis and mitochondrial oxidative phosphorylation, referred to throughout the manuscript as OXPHOG (Supplementary Table 3).
Analysis of mutations and RNA expression for metabolic genes.
We utilized the two largest datasets of squamous cell carcinoma currently available within the TCGA database, specifically the OCSCC and LUSC datasets. A MAF file annotating mutations in the OCSCC TCGA cohort was downloaded from cBioPortal (http://www.cbioportal.org/), whereas mutations present in the TCGA LUSC cohort were obtained from Campbell et al. 18. Mutations were considered impactful if their SIFT 19 score was deleterious or their PolyPhen 20 score was probably or possibly damaging. RSEM-normalized gene expression files, as well as clinical parameters including tissue histology and survival data, were downloaded directly from the Broad Firehose site (https://gdac.broadinstitute.org/) for all cohorts. RNA-Seq data for noncancerous normal tissues including lung, esophagus, and skeletal muscle were downloaded from the Genotype-Tissue Expression (GTEx) database hosted on the University of California Santa Cruz Xena public data hub (https://xena.ucsc.edu/public-hubs/). Differential expression of the 118 OXPHOG genes between clusters 1 and 2 was analyzed with multiple t-tests using the Benjamini and Hochberg adjustment to control the FDR at 0.05. Chi-square tests to examine possible associations between mutations and patient clusters were performed using Microsoft Excel software (V1903), and adjusted P-values were calculated with the Benjamini and Hochberg correction at an FDR of 0.01.
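For illustration, the per-gene comparison described here (multiple t-tests between two patient clusters with Benjamini-Hochberg control of the FDR) can be reproduced with standard Python tooling, as sketched below; the paper itself used spreadsheet/Prism-style software, so this is an equivalent, not the authors' code, and the variable names are placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_expression(cluster1, cluster2, alpha=0.05):
    """cluster1, cluster2: (n_samples, n_genes) expression matrices.
    Returns BH-adjusted p-values and the significance mask at FDR = alpha."""
    _, pvals = stats.ttest_ind(cluster1, cluster2, axis=0, equal_var=False)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method='fdr_bh')
    return p_adj, reject
```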
Survival analysis and hierarchical clustering.
Kaplan-Meier curves, median survival (MS) times, and P-values (log-rank/Mantel-Cox) were generated with GraphPad Prism software (V.7). Hierarchical consensus clustering was based on a re-sampling model previously described by Monti et al. 21 with some modifications, with the optimal cluster number (i.e., "c") selected by a novel algorithm we developed that uses Euclidean distances to measure closeness to theoretical perfection (see Supplementary Methods and Supplementary Fig. 1).
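The survival comparison itself is straightforward to reproduce outside GraphPad; a minimal equivalent with the Python lifelines package is sketched below, where the column names ('months', 'event', 'cluster') are hypothetical placeholders for a TCGA-derived clinical table.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_cluster1_vs_rest(df: pd.DataFrame):
    """df columns (placeholder names): 'months', 'event' (1 = death), 'cluster'."""
    km = KaplanMeierFitter()
    for label, grp in df.groupby('cluster'):
        km.fit(grp['months'], grp['event'], label=f'cluster {label}')
        print(label, 'median survival:', km.median_survival_time_)
    c1, rest = df[df['cluster'] == 1], df[df['cluster'] != 1]
    res = logrank_test(c1['months'], rest['months'],
                       event_observed_A=c1['event'], event_observed_B=rest['event'])
    print('log-rank p =', res.p_value)
```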
Single sample gene set enrichment analysis (ssGSEA). The Broad GenePattern cloud website (https://cloud.genepattern.org/gp/pages/login.jsf) was used to run ssGSEA, using linear values of RNA expression as inputs, along with gene lists for muscle, hypoxia, or immune subsets (Supplementary Table 4). The hypoxia gene set file was downloaded from the Broad Hallmark pathway lists. Genes elevated in skeletal muscle were downloaded from the Human Protein Atlas (https://www.proteinatlas.org/humanproteome/tissue/skeletal+muscle) and manually filtered to 28 genes with at least a 10-fold differential expression between skeletal muscle and the maximum expression in 45 different non-muscle normal tissues available from the GTEx database. Gene lists for 15 different immune subsets were obtained from the publication by Senbabaoglu et al. 22 and filtered to remove genes with low specificity for leukocytes, as described in Supplementary Methods and illustrated by Supplementary Fig. 2 and Supplementary Table 5. Differences in ssGSEA values for each immune subset among different patient OXPHOG clusters were examined with a two-way ANOVA followed by a Tukey multiple comparison test with adjusted P values, using GraphPad Prism. To examine relative enrichment of CD8 or cytotoxic cells to Treg, ssGSEA values were transformed into Z scores and the Treg value was subtracted from the CD8 or cytotoxic value for each patient.
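The Z-score combination used for the CD8-to-Treg (or cytotoxic-to-Treg) comparison amounts to a simple column-wise standardization followed by a per-patient difference; a sketch is below, with the subset column names as placeholders for the ssGSEA output table.

```python
import pandas as pd

def relative_enrichment(ssgsea: pd.DataFrame, numerator='CD8', denominator='Treg'):
    """ssgsea: rows = patients, columns = immune-subset ssGSEA scores.
    Z-transform each subset across patients, then subtract the Treg Z score
    from the numerator's Z score for each patient."""
    z = (ssgsea - ssgsea.mean()) / ssgsea.std(ddof=0)
    return z[numerator] - z[denominator]
```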
Results
Metabolically targeted mutational and expression analysis. OCSCC tumors demonstrated a low mutational frequency in core metabolic genes (Supplementary Table 1; Supplementary Fig. 3A), either individually or when organized by basic metabolic pathway. Metabolic pathway alterations defined by mutation did not significantly impact survival of OCSCC patients (Supplementary Fig. 3B). However, there was a trend towards reduced survival in OCSCC patients with nuclear encoded mitochondrial gene mutations (MS = 20.73 months) compared to those without such alterations (MS = 53.91 months). The same analysis was performed in a separate dataset of LUSC. Overall, the mutational frequency of metabolic genes was higher (Supplementary Table 2; Supplementary Fig. 4A) compared to OCSCC tumors; there was no significant correlation between mutations and overall survival (Supplementary Fig. 4B), despite a trend for reduced survival in patients with impactful mutations in nuclear encoded mitochondrial genes. Transketolase like 2 (TKTL2) had a mutational frequency above 5% in LUSC, but failed to reach statistical significance by MutSig analysis (Q-value = 0.06) 18 and demonstrated a random missense mutational pattern across the length of the gene (Supplementary Fig. 5), making it unclear whether the gene was a driver or passenger.
We performed unsupervised two-way hierarchical consensus clustering to stratify OCSCC patients based on their expression of nuclear-encoded mitochondrial oxidative phosphorylation (OXPHOS), glycolysis, or pentose phosphate pathway (PPP) genes using TCGA RNA expression data. Using OXPHOS or glycolysis gene expression, 8 and 7 patient clusters were identified (Supplementary Fig. 6A), respectively, based upon optimal NED scores determined from their transformed similarity matrices (Supplementary Fig. 7 and Supplementary Methods). OXPHOS cluster 1 patients had increased expression of OXPHOS genes (i.e., black gene cluster) compared to patient clusters 3-8 (Supplementary Fig. 6A, left) and had significantly better MS than patients in clusters 2-4 (P < 0.04; Supplementary Fig. 6B, left). Patients with lower expression of glycolysis genes in cluster 1 (Supplementary Fig. 6A,B, right panels) also had significantly improved MS (169.2 months) compared to patients in cluster 2 (24.3 months, p < 0.003) or clusters 6 and 7 (54.9 and 29.0 months, p < 0.04). In contrast, clustering by PPP pathway genes (Supplementary Fig. 6A, bottom) revealed no survival differences (not shown). Because our data suggested that downregulation of glycolysis genes and upregulation of OXPHOS genes in tumors led to improved survival, we combined the gene lists and re-clustered OCSCC patients (Supplementary Fig. 8A). As expected, patient cluster 1, with the highest expression of mitochondrial genes and reduced expression of a subset of glycolysis genes, had significantly better survival compared to patient clusters 2, 3, and 4 combined (p < 0.04, Supplementary Fig. 8A, right), with a MS of 169 months compared to 32.6 months.
Robustness of the metabolic expression phenotype across disease site and gene list. LUSC shares some genomic drivers and pathway alterations with OCSCC that are common to squamous carcinomas 21. Analysis of the 115 mitochondrial and glycolytic genes in TCGA LUSC patients identified a similar cluster of patients with higher nuclear-encoded mitochondrial gene expression but reduced glycolysis gene expression in their tumors (patient cluster 1, Supplementary Fig. 8B). Cluster 1 LUSC patients also had significantly longer overall MS (103.4 months) compared to other patient clusters. For example, the MS for patient cluster 2, with slightly less mitochondrial gene expression, was only 39 months (p < 0.05), and patient cluster 7, with the lowest mitochondrial gene expression but higher glycolysis gene expression, had a MS of just 30.6 months (p < 0.002).
Next, we examined if increased survival in patient cluster 1 for both disease sites was highly dependent upon the specific metabolic genes chosen for analysis. Using the Gene Ontology (GO) knowledgebase and GO search terms we generated a second list of 118 genes involved in oxidative phosphorylation and glycolysis (OXPHOG) (Methods, Supplementary Table 3). The OXPHOG gene list derived from GO search terms only partially overlapped with the manually curated genes from our prior analysis (sharing just 65 genes) but included 53 new ones (Supplementary Fig. 9). We performed unsupervised hierarchical consensus clustering of OCSCC tumors with the new OXPHOG gene set (Fig. 1, Supplementary Table 6) and again identified a subset of patients (cluster 1, Fig. 1A) with elevated expression of nuclear-encoded mitochondrial genes (involved in oxidative phosphorylation) that demonstrated improved survival compared to their counterparts (Fig. 1B). The difference in MS for patient cluster 1 (169.2 months) was significant (p < 0.02) when compared to patient cluster 2 (36.6 months), cluster 3 (32.8 months), and cluster 6 (28.3 months). A prominent feature of patient cluster 1 was overexpression of multiple genes involved in oxidative phosphorylation. Patient survival did not have a simple linear relationship to expression levels of oxidative phosphorylation genes. Cluster 1 (highest expression) had the best survival, followed by patients in clusters 4 and 7 with intermediate expression; whereas patients with the second highest expression (cluster 2) and the absolute lowest expression (cluster 5) had poor survival, suggesting a threshold effect (Supplementary Fig. 10). There was no significant association between OXPHOG clustering and patients' TP53-mutational status or race (Fig. 1A).
As further validation, we repeated the OXPHOG gene analysis in the LUSC dataset (Fig. 2A). Again, patient cluster 1 had the highest expression levels of genes involved in oxidative phosphorylation and had significantly longer survival times (MS = 103.4 months) compared to other patients (>2-fold, Fig. 2B). The same threshold effect found for OCSCC was evident for LUSC. Patient cluster 2 (second highest oxidative phosphorylation gene expression) and the patients with the lowest expression (now cluster 7, Fig. 2) had significantly worse MS of just 44 months (p < 0.02) and 39.1 months (p < 0.04), respectively. Similar to OCSCC, LUSC patients with intermediate expression levels of oxidative phosphorylation genes had MS somewhere between these extremes. No significant association was seen between OXPHOG clustering and TP53-mutational status or race in LUSC either. OXPHOG clustering reached statistical significance on univariate analysis of both datasets and approached significance on multivariate analysis in both datasets (Supplementary Tables 8 and 9). The relative effect size of OXPHOG clustering (Exp(B)) was greater than that of the presence of extranodal extension in OCSCC (one of the most important prognostic indicators for survival in this disease site) and nearly as large as that of positive surgical margins in LUSC 23.
Oropharyngeal squamous cell carcinoma (OPSCC) localized to the base of tongue and tonsillar region is associated with the human papillomavirus (HPV), has distinct biology, and demonstrates considerably better survival compared to stage-matched OCSCC [24][25][26][27][28]. We tested the relationship between the OXPHOG gene set and survival in this subsite of HNSCC. Two-way hierarchical consensus clustering identified a subset of patients with high expression of oxidative phosphorylation genes, but surprisingly these patients had the worst survival at this site even though they were predominately HPV+ (data not shown). Therefore, we focused our cluster analysis solely on the HPV+ OPSCC patients (Supplementary Fig. 11A). Although only 5 patients were in cluster 1 with higher expression of oxidative phosphorylation genes, 2 patients experienced very early death (<5 months), leading to significantly worse MS (p < 0.0001, Supplementary Fig. 11B). Owing to the small number of patients, it is unclear if this was just a statistical anomaly. Consequently, we grouped all HPV+ OPSCC patients together with OCSCC patients to see where the former would cluster (Supplementary Fig. 12). Most of the HPV+ OPSCC patients with higher oxidative phosphorylation (i.e., OPSCC HPV+ cluster 1 from Supplementary Fig. 11) clustered together with OCSCC patients originally from patient cluster 2, with poorer survival, when just OCSCC was analyzed (Supplementary Fig. 12). However, closer inspection revealed that the major reason HPV+ OPSCC patients did not co-cluster with OCSCC patients in cluster 1 was differences in expression of a few glycolysis-related genes, rather than lower expression of oxidative phosphorylation genes (fold changes annotated in the vertical bar, Supplementary Fig. 12). Therefore, it is difficult to discern whether the anomalous results observed for HPV+ OPSCC are due to small sample size or to factors associated with the distinct biology of HPV.

Possible factors contributing to the OXPHOG gene signature. We performed a three-tier analysis of OCSCC and LUSC tumor sets to identify putative explanations for cluster 1 biological and clinical behavior. First, we evaluated possible contamination of cluster 1 specimens by higher levels of skeletal muscle tissue, which might explain differential expression of OXPHOG genes. We utilized a panel of 28 genes with greater expression in normal muscle than in 45 other normal tissues (Methods, Supplementary Fig. 13). Neither OCSCC nor LUSC cluster 1 specimens were enriched for skeletal muscle compared to the other clusters (Fig. 3, Supplementary Table 11). Second, we introduced matching normal tissue data into the analysis (Supplementary Tables 6 and 7) to examine whether contamination with normal squamous mucosa could be a factor and to better interpret gene expression levels. Patient cluster 1 was not enriched for normal tissue samples in either OCSCC (Fig. 4) or LUSC (Supplementary Fig. 14). The majority of genes with increased RNA expression (i.e., ≥1.4-fold) in OCSCC compared to normal samples were involved in oxidative phosphorylation, while the majority of genes with decreased expression in OCSCC compared to normal samples (i.e., ≥1.4-fold reduction) were involved in glycolysis (Fig. 4).
Collectively, the data demonstrate that OCSCC and LUSC patients in cluster 1 expressed supra-physiological levels of oxidative phosphorylation genes.
Mitochondrial copy number. Mitochondrial gene expression requires coordination between nuclear encoded components and mitochondrially encoded components. In order to contextualize the OXPHOG signature, we utilized previously published mitochondrial copy number data derived from OCSCC TCGA samples 15. There was a progression of mitochondrial DNA abundance across the clusters (Fig. 6), with cluster 1 demonstrating the highest abundance, which reached statistical significance for some patient clusters (i.e., p < 0.05 for clusters 3 and 4).
Analysis of gene expression differences. Earlier analysis of normal tissue suggested that patients in cluster 1 from both OCSCC and LUSC with better survival have supraphysiological expression levels of genes involved in oxidative phosphorylation. Because MS dropped precipitously in the adjacent patient cluster with moderately high expression of the same genes, we quantitatively analyzed the parameters of this threshold effect with respect to the OXPHOG genes. T-tests (FDR controlled, alpha = 0.05) identified 53 genes (i.e., 47 mitochondrial and 6 glycolytic pathway genes) differentially upregulated in OCSCC patient cluster 1 compared to OCSCC patient cluster 2, and 53 genes (i.e., 46 mitochondrial and 7 glycolytic pathway genes) increased in LUSC patient cluster 1 relative to LUSC patient cluster 2 (Supplementary Fig. 15A, Supplementary Table 12). A majority of genes (32 out of 35) commonly upregulated in patient cluster 1 in both cancer types were involved in the mitochondrial/oxidative phosphorylation pathway. Considerably fewer genes were downregulated in patient cluster 1, with none in common between cancer types. More than 75% of the differentially regulated oxidative phosphorylation genes in both cancer types were increased ≥1.4-fold in patient cluster 1 compared to cluster 2 (Supplementary Fig. 15B).
To identify possible candidate genes that may be contributing to the increased number of mitochondria and/or upregulation of mitochondrial genes, we searched for other cellular genes whose expression correlated with that of the oxidative phosphorylation genes from the OXPHOG gene list. This was done by first developing a ssGSEA score that robustly represented key genes in the OXPHOG phenotype, based on expression of the 32 commonly upregulated genes (Supplementary Table 4). As expected, the OXPHOG ssGSEA scores for the 7 different clusters declined with increasing cluster number for both cancers, with cluster 1 samples having the highest average ssGSEA values (p < 0.0001, Supplementary Fig. 16A). Next, we interrogated expression of >20,000 genes from each tumor sample for correlations with the OXPHOG ssGSEA score. Most of the genes highly correlated with OXPHOG ssGSEA for both cancers were involved in mitochondrial function. Notably, one of the highly correlated genes common to both was SSBP1 (single stranded DNA binding protein 1), a key protein involved in mitochondrial DNA replication, biogenesis, and copy number [29][30][31][32], which had a correlation of 0.73 in OCSCC and 0.68 in LUSC (p < 0.0001, Supplementary Fig. 16B).

Correlation of OXPHOG signature with tumor immune microenvironment (TIME). Using gene lists (Methods, Supplementary Table 4) 22,33, consensus clustering of the ssGSEA scores for 16 different types of leukocytes identified 7 OCSCC immune patient clusters (Supplementary Fig. 17A). The highest MS (156.4 months) was in patient immune cluster 6 (Supplementary Fig. 17B), which had the strongest expression of cytotoxic cells, Th1 cells, and CD56dim cells. Immune clustering in the LUSC cohort also produced similar patient clusters; however, there was no obvious association with MS (Supplementary Fig. 18).
We used ssGSEA scores for the 16 different leukocyte subtypes to examine whether differences in the tumor immune microenvironment (TIME) could be identified between OXPHOG patient clusters (e.g., Figs. 1 and 2) for both OCSCC and LUSC, which might account for differences in survival. In OCSCC, the only significant difference between clusters 1 and 2 was for cytotoxic cells (p < 0.0001, Supplementary Table 15), where a 110% decrease in the cytotoxic cell average ssGSEA score was observed in cluster 2 compared to cluster 1 (Fig. 7A). Levels of cytotoxic cells decreased even further among the remaining OXPHOG clusters in a gradient-like manner (Fig. 7A).
To examine whether the proportion of cytotoxic or CD8+ cells to regulatory T-cells (Treg) could also explain survival differences between OXPHOG clusters in OCSCC, individual ssGSEA values were transformed to Z-scores and the variables combined by subtracting the Treg Z score from either the cytotoxic or the CD8+ Z score for each sample. Differences in these combined Z scores across clusters were examined by ANOVA and a Tukey multiple comparison test (Supplementary Tables 16 and 17). In OCSCC, OXPHOG cluster 1 patients had significantly higher proportions of cytotoxic to Treg cells (p = 0.0001) and CD8+ to Treg cells (p = 0.0001) than all remaining OXPHOG patient clusters (Fig. 7B,C).
A parallel analysis performed for LUSC failed to detect any statistically significant differences between OXPHOG patient clusters 1 and 2 for any specific leukocyte subtype; however, there was a trend for decreased cytotoxic cells in cluster 2 that became highly significant for the remaining patient clusters (Supplementary Table 18) when compared to OXPHOG cluster 1. As in OCSCC, the proportion of cytotoxic cells to Tregs was significantly higher in cluster 1 compared to cluster 2 (p < 0.03) in LUSC, and this difference became highly significant compared to the remaining LUSC OXPHOG patient clusters (Supplementary Table 19, Fig. 7D). Although the difference in the CD8-to-Treg proportion between LUSC OXPHOG clusters 1 and 2 did not reach significance (Supplementary Table 20), there was a trend of decreasing values that was significant compared to the remaining clusters (Fig. 7E).
Increased T-cell infiltrate is unlikely to explain higher expression of oxidative phosphorylation genes. To examine whether expression of OXPHOG genes in infiltrating cytotoxic T cells could explain the elevated gene expression in OXPHOG patient cluster 1 tumor samples, we leveraged the single-cell RNA-seq (scRNA-seq) dataset published for HNSCC tumors available online (GSE103322) 34. First, we split the scRNA-seq samples into tumor cells or T-cells based on the published annotation. Then we used our previous immune gene list (Supplementary Table 4) to cluster individual T-cell samples and identify cytotoxic T-cell subsets (Supplementary Fig. 19). Next, we computed the average OXPHOG gene expression for cytotoxic T-cells and tumor cells and modeled the fold changes in gene expression expected to arise solely from differences in the presence of T-cells, under the very conservative assumption that the OXPHOG cluster 1 samples had 50% cytotoxic cell contamination compared to 0% in other clusters. The expected fold changes for each gene due to the presence of T-cells were then plotted against the actual fold changes observed and displayed graphically (Fig. 8). The actual expression levels of oxidative phosphorylation genes were on average >2-fold higher in OXPHOG cluster 1 patients compared to other patient clusters, whereas expression levels of glycolysis genes in OXPHOG cluster 1 showed the opposite trend and were on average at least 1.5-fold lower (Fig. 8). The majority of significant changes in OXPHOG gene expression found in OCSCC patient cluster 1 would not be explainable by differences in the presence of cytotoxic T-cells (i.e., yellow zones, Fig. 8).
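The mixture model used for Fig. 8 reduces to a per-gene linear combination; a minimal sketch of the expected fold change under the stated 50% contamination assumption is given below (the input vectors are assumed to be the per-population mean expression values derived from the scRNA-seq data).

```python
import numpy as np

def expected_mixture_fold_change(tumor_mean, tcell_mean, tcell_fraction=0.5):
    """Fold change expected if a cluster 1 sample were a mixture of
    tcell_fraction cytotoxic T cells and (1 - tcell_fraction) tumor cells,
    relative to a pure-tumor sample."""
    tumor_mean = np.asarray(tumor_mean, dtype=float)
    tcell_mean = np.asarray(tcell_mean, dtype=float)
    mixed = tcell_fraction * tcell_mean + (1.0 - tcell_fraction) * tumor_mean
    return mixed / tumor_mean
```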
Correlation of genotype with OXPHOG phenotype. We analyzed whether there was a difference in the distribution of the 48 driver mutations identified for HNSCC or the 22 driver mutations found in LUSC among clusters 1 and 2. A slight enrichment for mutations in KDM6A and KMT2B was found in OCSCC cluster 1 (Supplementary Table 20), but none of the other driver genes in either OCSCC or LUSC (not shown) had significantly different mutation frequencies between OXPHOG cluster 1 and the other clusters (Supplementary Tables 21 and 22).
Discussion
Ever since Warburg first described a differential metabolic phenotype in cancer cells, scientists have hoped to translate this basic fact of cancer biology into viable therapeutic strategies. The natural starting point was to inhibit tumor glycolytic activity, while minimizing normal tissue toxicity 3,35 . Despite extensive pre-clinical and clinical work, glycolytic targeting has failed to translate into clinical practice for the majority of solid tumors including SCC 8,9,[36][37][38] . Over the last decade, our group along with other investigators have shifted focus toward modulation of mitochondrial activity to generate de novo anti-tumor activity, or to potentiate chemo-radiation effects in solid tumors 10,13,39,40 . Although supported by preclinical studies and retrospective clinical data, inhibition of mitochondrial respiration to improve treatment response is somewhat counterintuitive, since high levels of mitochondrial activity are generally linked to improved clinical outcomes and indolent tumor behavior 9,14-16,39,41-43 .
Our study highlights the fact that the role of energy metabolism and mitochondrial function in cancer biology and response is complex. We found that OCSCC and LUSC mutations in metabolic genes encoding enzymes and proteins involved in either glycolysis or oxidative phosphorylation are infrequent and likely to be random, unlike in other tumor types 4,11,44. However, we did observe a strong association between expression levels of a subset of oxidative phosphorylation genes and patient outcomes in two independent cohorts of SCC, which included OCSCC and LUSC. Very high expression levels of genes encoding core proteins from all four ETC complexes, as well as regulatory and structural proteins involved in oxidative phosphorylation, identified subsets of OCSCC and LUSC patients with significantly better survival. Interestingly, patient survival did not have a simple linear relationship to expression levels of oxidative phosphorylation genes but varied predictably according to distinct thresholds.
Previous investigations have linked mitochondrial copy number to cancer (non-SCC) survival 15. We found a clear trend of increasing mitochondrial copy number with increasing expression of oxidative phosphorylation genes, although differences only reached statistical significance for OCSCC cluster 5, which had roughly one-third the mitochondrial copy number of cluster 1, the lowest expression of oxidative phosphorylation genes, and poor survival. Consistent with a role for increased mitochondrial copy number as a driver of differences in nuclear encoded OXPHOG genes, we found expression of SSBP1, which regulates mitochondrial biogenesis and copy number, to be highly correlated with the OXPHOG gene signature in both OCSCC and LUSC. The threshold effect we observed with respect to OXPHOG genes, MS, and associated mitochondrial copy number could easily explain why past studies do not agree regarding the prognostic significance of mitochondrial levels in tumors.
At present it is unknown exactly why tumors in cluster 1 had the best prognosis. It has been speculated that indolent tumors whose metabolic phenotype remains closer to that of normal tissue may display less aggressive behavior. Our data do not support this model, as normal tissue from lung and the oral cavity had substantially lower OXPHOG gene expression, except for the normal oral samples with evidence of high muscle contamination. However, cluster 1 tumors showed no elevated muscle contamination, arguing that mitochondria-associated genes were expressed at supra-physiological levels compared to normal squamous mucosa. This is further supported by the much lower mitochondrial DNA levels in the normal tissue. Moreover, no differences in normoxia as estimated by the hypoxia gene signature were found between clusters 1, 2, or 3 that would account for better survival in cluster 1.
Figure 8. Increased contaminating cytotoxic T-cells are unlikely to account for altered expression of OXPHOG gene levels in cluster 1 tumor samples. The fold change in geometric means for all 118 OXPHOG genes expected for a mixed theoretical sample consisting of 50% cytotoxic T-cells and 50% tumor cells, compared to a theoretical sample of pure tumor cells, based on single cell RNA-seq data (see Methods and Results), is plotted on the Y-axis (top and bottom panels). The actual observed ratio of group geometric means for tumors in cluster 1 compared to cluster 2 (top panel) or clusters 3-7 (bottom panel) for all 118 OXPHOG genes is plotted on the X-axis. Ratios for genes with symbols beneath the identity function (dotted line) in quadrants II and IV, or above the identity function in quadrants I and III, are highly unlikely to be explained by increased cytotoxic T-cells in cluster 1. Yellow zones in the plot are regions where contaminating T-cells could possibly contribute to differences in gene expression found for patient cluster 1. Blue symbols represent genes that were significantly different in cluster 1 compared to other clusters, and black symbols represent no significant differences in gene expression levels. Circles correspond to genes that function in oxidative phosphorylation, and square symbols represent genes that regulate glycolysis.
Tumor metabolic activity has previously been shown to be a powerful driver of immune infiltration, primarily through differential production of lactate, a critical suppressor of leukocyte differentiation and activity 45,46 . It is intriguing that expression of OXPHOG genes generated a differential tumor immune microenvironment (TIME) imputed from gene expression analysis. An increase in cytotoxic and CD8+ cells observed in OCSCC would be consistent with an improvement in patient survival based on our understanding of TIME effects in SCC 47 . Furthermore, in both OCSCC and LUSC, the ratio of cytotoxic cells to Treg cells was most favorable in cluster 1 tumors and declined with increasing cluster number, supporting the idea that the metabolic environment of the tumor can functionally impact the immunosuppressive nature of the TIME. If the metabolic profile of cluster 1 tumors is permissive for a preferentially immunoreactive TIME, it would represent an exciting finding. To date, there is no clear explanation for differential TIME across SCC, and there is no consistently applicable biomarker of effectiveness for existing immunomodulatory agents.
There are obvious limitations to our study, as gene expression may not completely reflect protein levels or enzymatic activity. However, the data summarized in the current study strongly suggest that nuclear-encoded mitochondrial genes may indeed be a good surrogate and that mitochondrial function and glucose utilization play a very complex role in SCC tumorigenesis and treatment response. Therefore, targeting of mitochondrial activity should be carefully considered from multiple perspectives.
"Medicine",
"Biology"
] |
Effect of Multiply Twinned Ag(0) Nanoparticles on Photocatalytic Properties of TiO2 Nanosheets and TiO2 Nanostructured Thin Films
Ag-decorated TiO2 nanostructured materials are promising photocatalysts. We used non-standard cryo-lyophilization and ArF laser ablation methods to produce TiO2 nanosheets and TiO2 nanostructured thin films decorated with Ag nanoparticles. Both methods share a common advantage in that they yield multiply twinned Ag(0) nanoparticles characterized by {111} twin boundaries. Advanced microscopy techniques and electron diffraction patterns revealed the formation of multiply twinned Ag(0) structures at elevated temperatures (500 °C and 800 °C). The photocatalytic activity was demonstrated by the efficient degradation of 4-chlorophenol and total organic carbon removal using Ag-TiO2 nanosheets, because the multiply twinned Ag(0) served as an immobilized photocatalytically active center. Ag-TiO2 nanostructured thin films decorated with multiply twinned Ag(0) achieved improved photoelectrochemical water splitting due to the additional induction of a plasmonic effect. The photocatalytic properties of TiO2 nanosheets and TiO2 nanostructured thin films were correlated with the presence of defect-twinned structures formed from Ag(0) nanoparticles with a narrow size distribution, tuned to between 10 and 20 nm. This work opens up new possibilities for understanding the defects generated in Ag-TiO2 nanostructured materials and paves the way for connecting their morphology with their photocatalytic activity.
Introduction
In 2016, the World Health Organization (WHO) reported that air pollution occupied the sixth position among all factors leading to death globally [1]. Heterogeneous photocatalysis based on titania (TiO2) and modified titania is believed to be an appropriate approach to such pollution. In this work, the cryo-lyophilization route produced differently shaped nanocavities in the Ti-O layers due to the removal of ice/water through sublimation. Likewise, the laser ablation technique yielded TiO2 (anatase) NSTFs decorated with Ag(0) NPs after 1 h heat treatment of the as-deposited film at a temperature of up to 500 °C. Additionally, the photocatalytic activity of Ag(0)_TiO2 nanosheets was tested under UV light in terms of its ability to degrade 4-chlorophenol (4-CP) and perform total organic carbon (TOC) removal. The photoelectrochemical (PEC) properties of Ag(0)_TiO2 NSTFs were examined under visible light irradiation. The photocatalytic activity of Ag(0)_TiO2 materials, even when prepared via completely different techniques, was found to be correlated with the multiply twinned structure formed in Ag(0) NPs with diameters in the 10-20 nm range. We evaluated the catalytic activities of the Ag(0)_TiO2 materials against the commercial TiO2 and standard TiO2_P25 catalysts.
Sample Preparation
Ag_TiO2 Nanosheets
In a typical experiment (see Scheme 1 and Figure S1), 4.80 g of titanyl sulfate (TiOSO4) was dissolved in 150 mL of distilled water at 35 °C (step 1). Then, 0.05 g of AgNO3 was added (a calculated 3 wt.% of silver), giving a colorless solution. This solution was cooled for approximately 1 h in the freezer, and thereafter the precipitation was carried out at ~0 °C in aqueous ammonia until the pH reached 8. The precipitate was filtered and washed several times to remove sulfate anions formed during the reaction. The precipitate thus obtained was transferred into a beaker and resuspended in 350 mL of distilled water. The pH of the resulting suspension was reduced by adding 20 mL of 30% hydrogen peroxide (H2O2), and it was stirred at ambient temperature until the solution turned from turbid yellowish to transparent yellow. The PPTA solution was added dropwise into Petri dishes immersed in liquid nitrogen (LN2) (step 5). Frozen droplets were immediately lyophilized at 10 mTorr and −54 °C for 48 h using a VirTis Benchtop K lyophilizer (Cole-Parmer, UK) (step 6). After lyophilization, a yellowish lyophilized precursor (a foam-like cake) was isolated (step 7) [25][26][27][28]. The lyophilized precursor, labeled Ti_Ag_LYO, was further heat-treated under air at 500, 650, 800, and 950 °C for 1 h in each case (at a rate of 3 °C/min), and four new samples denoted Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 were isolated (see Figure S1/Scheme 1).
Ag_TiO2 Nanostructured Thin Films (NSTFs)
The samples were prepared by ArF laser ablation (wavelength 193 nm, 100 mJ/pulse) of TiO2 and elemental Ag targets. The ablation of the sintered TiO2 was carried out in a turbomolecular vacuum (10^-3 Pa) using a focused laser beam (9 min, 10 Hz) to prepare a thin film. Subsequently, the thin film was covered by Ag NPs, prepared by laser ablation under argon (4 Pa). The deposits were grown on NaCl, quartz, and Cu substrates, and on FTO glass. The samples were annealed at 500 °C for 1 h under air to crystallize the TiO2 thin film and to form Ag NPs on the surface. The two samples, (1) as prepared before annealing, labeled Ag_TiO2_AP, and (2) annealed at 500 °C, labeled Ag_TiO2_500, were the subjects of further characterization.
Characterization Methods
Powder diffraction patterns of samples Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 were collected using a PANalytical X'Pert PRO diffractometer equipped with a conventional X-ray tube (Cu Kα, 40 kV, 30 mA, line focus) in transmission mode. An elliptic focusing mirror, a divergence slit of 0.5°, an anti-scatter slit of 0.5°, and a Soller slit of 0.02 rad were used in the primary beam. A fast linear position-sensitive PIXcel detector with an anti-scatter shield and a Soller slit of 0.02 rad was used in the diffracted beam. All patterns were collected in the 2θ range of 18-88° with a step size of 0.013° and 400 s/step, producing a scan of about 2.5 h. Qualitative analysis was performed with the HighScorePlus software package (PANalytical, Almelo, The Netherlands, version 3.0e) and the DiffracPlus software package (Bruker AXS, Karlsruhe, Germany, version 8.0) [29]. For quantitative phase analysis, we used DiffracPlus Topas (Bruker AXS, Karlsruhe, Germany, version 4.2) [30]. The crystallite size was estimated using the Scherrer formula [31] (Equation (1)).
D = Kλ/(β cos θ) (1)

where D is the crystallite size, K stands for Scherrer's constant, λ corresponds to the X-ray wavelength, β is the half-width (FWHM) of the diffraction peak, and θ corresponds to the scattering angle. The model was derived from the anatase TiO2 structure with Ti atoms replaced by 1 at.% Ag. Transmission electron microscopy (TEM) of Ag_TiO2 nanosheets was carried out on an FEI Tecnai TF20 X-twin microscope operated at 200 kV (FEG, 1.9 Å point resolution) equipped with an EDAX energy-dispersive X-ray (EDX) detector (FEI Company, Hillsboro, OR, USA). The microscope was used in scanning mode (STEM) with a high-angle annular dark-field (HAADF) detector. TEM images were recorded on a Gatan CCD camera with a resolution of 2048 × 2048 pixels using the Digital Micrograph software package. Selected-area electron diffraction (SAED) patterns were evaluated using the Process Diffraction software package [32]. For the TEM, the thin-film deposits were prepared on NaCl substrates (in the same experiment as the other substrates) to facilitate sample preparation, which was done simply by dissolving the substrate and placing the thin-film deposit on a copper grid.
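As a minimal numerical illustration of Equation (1) (not part of the original analysis), assuming the conventional shape factor K = 0.9 and the Cu Kα wavelength given above:

```python
import numpy as np

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  K: float = 0.9, wavelength_nm: float = 0.15406) -> float:
    """Crystallite size D = K*lambda / (beta*cos(theta)) in nm.

    fwhm_deg: peak full width at half maximum, in degrees 2-theta.
    two_theta_deg: peak position, in degrees 2-theta.
    """
    beta = np.radians(fwhm_deg)            # FWHM converted to radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers: anatase (101) reflection near 25.3 deg 2-theta
# with an assumed 0.23 deg FWHM gives ~35 nm, comparable to Table 1.
print(f"{scherrer_size(0.23, 25.3):.1f} nm")
```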
High-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images of Ag_TiO2 NSTFs were acquired using a cold-field-emission, double aberration-corrected JEOL JEM-ARM200CF microscope operated at 200 kV. The inner collection semi-angle of the HAADF detector was set to 90 mrad, and the probe convergence semi-angle was 22 mrad. X-ray energy-dispersive spectroscopy (EDS) mappings were acquired using a 0.98-steradian solid-angle windowless silicon drift detector JED-2300 (JEOL, Tokyo, Japan) mounted in the STEM instrument. For the atomic-resolution EDS, the probe current was set to 200 pA with a 0.1-0.2 ms pixel dwell time. Wiener filtering and sample drift correction were applied, and 5-6 sweeps were accumulated for the atomic-resolution EDS images.
The surface composition of the samples and the chemical and electronic states of the elements were inspected by XPS using a Kratos ESCA 3400 furnished with a polychromatic Mg X-ray source of Mg Kα radiation (energy: 1253.4 eV). The base pressure was kept at 5.0 × 10^-7 Pa. The spectra were fitted using a Gaussian-Lorentzian line shape, Shirley background subtraction, and a damped non-linear least-squares procedure. Spectra were taken over the Ti 2p, O 1s, C 1s, and Ag 3d regions. The samples were sputtered with Ar+ ions at 1 kV with a current of 10 µA for 60 s to remove superficial layers. Spectra were calibrated to the C 1s line centered at 284.8 eV.
The surface nanostructure of the synthesized Ag_TiO2 nanosheets was analyzed by atomic force microscopy (AFM) using an NTEGRA AFM system (NT-MDT). The ex-situ AFM measurements were carried out in tapping mode under ambient conditions (room temperature, air environment, normal atmospheric pressure). AFM measurements of Ag_TiO2 NSTFs were carried out at room temperature on an ambient AFM (Bruker, Dimension Icon, Ettlingen, Germany) in Peak Force Tapping mode with ScanAsyst Air tips (Bruker; k = 0.4 N/m; nominal tip radius 2 nm). The measured topographies had a resolution of 512 × 512 points.
UV-Vis spectroscopy was carried out using a Shimadzu UV 1800 spectrophotometer (Kyoto, Japan). The sample was dispersed in distilled water in a quartz cell (10 mm optical path) in an ultrasound bath for 2 min. The spectra were recorded in the range 190-800 nm. Diffuse reflectance spectra of the powders were measured after fixing the samples on double-sided carbon tape. The thin films were measured as prepared and annealed on quartz substrates.
Pin-hole small-angle neutron scattering (SANS) measurements of the samples Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 were conducted using the conventional SANS instrument V4 [33] located at the BER II reactor of Helmholtz-Zentrum Berlin für Materialien und Energie. Neutron scattering curves were measured using a collimated neutron beam with a wavelength of 0.5 nm (±10%) at sample-to-detector distances of 15.78 m, 6.8 m, and 2 m. The data were treated using a standard procedure with "empty cell" and "cadmium background" measurements. For the SANS measurements, the samples were installed in quartz cells with a flight path of 1 mm.
Photocatalytic Degradation of 4-CP
The prepared photocatalysts (0.15 g L^-1) were added to an aqueous 4-chlorophenol solution (4-CP, 1.0 × 10^-4 mol L^-1). The reactant mixtures were irradiated under identical conditions in a photoreactor equipped with 10 lamps (Sylvania Blacklight, 8 W, λmax = 368 nm, intensity 6.24 mW cm^-2). The volume of the reactant mixture was 175 mL. The photocatalytic activity of the samples was monitored by measuring the concentration of 4-CP in water (HPLC, Agilent Technologies 1200 Series, LiChrospher RP-18 column, 5 µm, methanol/water mobile phase, UV-Vis absorption detection) as well as the total organic carbon (TOC-LCPH, Shimadzu, Kyoto, Japan). The instrumental method employs incineration of the sample at 680 °C, resulting in the formation of CO2. Inorganic carbon present as carbonate is transformed with 0.1% HCl to CO2. The CO2 formed was detected by IR absorption spectroscopy. Water samples (20 mL) were taken periodically at 60, 120, and 240 min of irradiation.
The reaction rate of 4-CP degradation was fitted to the pseudo-first-order kinetic model (Equation (2)):

ln(C0/C) = kt (2)

where C and C0 are the 4-CP concentrations at time t and t = 0, respectively, and k is the pseudo-first-order kinetic rate constant [34]. UV-Vis diffuse reflectance spectroscopy was employed to estimate the band-gap energies of the prepared Ag-decorated TiO2 nanosheets. Diffuse reflectance UV-Vis spectra were recorded in the diffuse reflectance mode (R) and transformed to a magnitude proportional to the extinction coefficient (K) through the Kubelka-Munk function. A PerkinElmer Lambda 35 spectrometer equipped with a Labsphere RSA-PE-20 integration sphere was used, with BaSO4 as a standard. The band-gap energy Ebg [35] was calculated by extrapolation of the linear part of the absorption edge according to Equation (3):

λbg = 1240/Ebg (eV) (3)
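A minimal sketch of the Equation (2) fit, using invented HPLC readings in place of the measured concentrations; k is simply the slope of ln(C0/C) versus t:

```python
import numpy as np

# Hypothetical 4-CP concentrations (mol/L) at the sampling times (min);
# these are placeholder values, not data from the paper.
t = np.array([0.0, 60.0, 120.0, 240.0])          # irradiation time, min
C = np.array([1.0e-4, 7.2e-5, 5.3e-5, 2.9e-5])   # 4-CP concentration

# Equation (2): ln(C0/C) = k*t  ->  linear least-squares slope gives k.
y = np.log(C[0] / C)
k = np.polyfit(t, y, 1)[0]                       # min^-1
print(f"k = {k:.3e} min^-1 = {k / 60:.3e} s^-1")
```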
Photoelectrochemical Measurements
The photocatalytic activity for water splitting was tested by cyclic voltammetry on samples deposited on FTO glass substrates. Cyclic voltammetry (CV) was carried out under irradiation by a visible-light sun simulator (100 W, Oriel LCS 100). CV was performed at a scan rate of 20 mV/s between potentials of −0.5 and +1.5 V, with a Pt counter electrode and an Ag/AgCl reference electrode in 0.5 M H2SO4 as the electrolyte. To demonstrate the water-splitting performance, the samples were studied by linear sweep voltammetry (LSV) between −0.2 and +1.2 V, applying 10 s light/10 s dark cycles at a 1 mV/s scan rate.
Microstructure and Surface Characterization of Ag_TiO2 Nanosheets
X-ray powder diffraction analysis and Rietveld refinement were used for characterization of the microstructure and evaluation of the TiO2 phase transition. The characteristic parts of the XRD patterns of the precursor Ag_TiO_LYO and the post-synthesis annealing products Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 are shown in Figure S2. The XRD pattern of the lyophilized precursor Ag_TiO_LYO indicated amorphous material. The XRD patterns of samples Ag_TiO_LYO/500, Ag_TiO_LYO/650, and Ag_TiO_LYO/800 revealed the anatase phase only; the positions of the diffraction peaks and the distribution of intensities correspond to anatase TiO2 (ICDD PDF No. 21-1272) [28]. Increasing the temperature resulted in more refined reflection peaks, i.e., reflections became narrower as the crystallites grew from 35.37 nm at 500 °C to 78.41 nm at 800 °C (Table 1) [36]. The Ag_TiO_LYO/950 sample already contains 66.53% rutile (JCPDS PDF No. 21-1276) [28] and a small fraction (0.7%) of cubic Ag(0) (JCPDS PDF No. 04-783) [37]. The difference in the crystal structure of our 2D TiO2 materials with different anatase/rutile ratios is due to the thermal treatment and the spontaneous transformation from metastable anatase to stable rutile [38]. The XRD results confirmed that the metallic Ag(0) form of silver stabilized the anatase phase up to 800 °C and retarded the phase transformation in comparison with pristine TiO2 nanosheets prepared using the same method [39,40].
It was found that the lattice parameter a was not changed, whereas the lattice parameter c increased linearly with temperature. The visible extension along the c-axis and the resulting lattice expansion could depend on the nanoparticle size and the thermal treatment. A considerable increase in the anatase crystallite size was observed after annealing over the whole temperature interval of 500-950 °C. The crystallite size of sample Ag_TiO_LYO/950, as determined from the XRD, was 238.7 nm (anatase) vs. 251.1 nm (rutile), since the transformation of anatase to rutile is usually accompanied by crystal growth through a coalescence process [41]. The presence of metallic Ag(0) silver was evidenced by the high-resolution XPS spectra, as depicted in Figure 1.
Two well-defined peaks centered at 368.0 and 374.0 eV were observed in the Ag 3d region of all samples. The bands were narrow (FWHM = 1.2 eV), which means that silver was present in one chemical state. Both bands had an asymmetric shape, which is typical of the metallic Ag(0) form of silver. Loss features, usually observed on the higher-binding-energy side of each spin-orbit component, were not visible due to the low surface concentration of elemental Ag(0). Ag-O bonds, which could be expected as a result of exposure of the samples to ambient air, were not visible in either the as-prepared or the sputtered samples. Ag-O bands have binding energies about 0.5 eV lower than elemental Ag(0) and would have appeared as a broadened feature on the lower-binding-energy side of the Ag 3d bands.
The Ti 2p region showed broad bands, with the Ti 2p3/2 band separated into three contributions, the largest being typical of Ti4+ in TiO2 (458.8 eV). Lower-energy bands were ascribed to Ti3+ valence states in titanium suboxides. Titanium represents an element with a constant surface concentration, and the intensity of the Ag 3d peaks shows that the silver concentration changed significantly with annealing temperature. The surface concentration of silver was lowest at 500 °C (1.7% relative to Ti), while heating to 650 and 800 °C led to an increase in the Ag(0) surface concentration (1.8% and 6.9% relative to Ti, respectively). Heating to 950 °C again resulted in a lower Ag(0) surface concentration, probably due to the evaporation of elemental silver at the highest temperature. The O 1s spectra of the sample were deconvoluted into three peaks. The most intense one, centered at 530.2 eV, is typical of metal oxides. The higher-binding-energy contributions were assigned to hydroxide and carbonate (531.6 eV) and adsorbed water (533.2 eV).
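For readers unfamiliar with XPS quantification, relative surface concentrations of this kind follow from peak areas normalized by relative sensitivity factors (RSFs). The sketch below uses invented peak areas and RSF values, not the measured ones:

```python
# Hypothetical XPS quantification: atomic ratio of Ag to Ti from peak areas.
# Peak areas (arbitrary units) and RSFs below are illustrative placeholders.
area_ag_3d, rsf_ag_3d = 5200.0, 5.2   # Ag 3d region
area_ti_2p, rsf_ti_2p = 61000.0, 1.8  # Ti 2p region

n_ag = area_ag_3d / rsf_ag_3d   # intensity corrected for sensitivity
n_ti = area_ti_2p / rsf_ti_2p

print(f"Ag relative to Ti: {100 * n_ag / n_ti:.1f}%")  # e.g. ~3%
```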
The morphology of the annealed precursor Ag_TiO_LYO was observed by SEM, and the results for Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 are presented in Figure 2a-d. The annealing process led to the formation of network-like joined nanosheets, which are distinguishable by their shape and size [36]. SEM observation shows that crystallization took place only in the 2D direction, and 2D nanosheets with different degrees of crystallinity were acquired. Even at 950 °C, when the anatase-rutile transformation took place, the 2D nanosheet morphology did not collapse and was preserved (Figure 2d). In-depth morphology evaluation was performed by HAADF-STEM monitoring. Figure 3a verifies the nanosheet morphology of the precursor Ag_TiO_LYO. It is worth noting that the cubic Ag(0) NP structure was preserved; indeed, there was an evolution in particle shape from a spheroid to a spherical morphology during annealing at 650 °C. Ag(0) NPs oriented along their five-fold decahedral symmetry, with five merged subunits (Figure 4e), were tightly attached on the surface of well-crystallized TiO2 (Figure 4e). Indeed, the measured d(101) = 0.358 nm was attributed to tetragonal anatase with space group I41/amd (see Figure 4f) and PDF JCPDS 21-1272. Figure 4f1 shows the HRTEM image obtained by DigitalMicrograph analysis (from the area highlighted in yellow in Figure 4f), suggesting a lattice spacing of 0.202 nm, corresponding to the (101) lattice planes of cubic Ag(0). Additionally, the uniform d-spacing along different ⟨100⟩ directions is consistent with the face-centered cubic (fcc) phase of metallic Ag(0) NPs, confirming that our observations are in line with the statement that the fivefold twinning phenomenon is very common for metallic Ag [42][43][44]. Additionally, the HRTEM findings corroborated the observed XRD patterns of Ag_TiO_LYO/500 and Ag_TiO_LYO/650 (see Table 1 and Figure S2). Figure 5 shows the aberration-corrected bright-field (BF) and HAADF STEM micrographs of Ag_TiO_LYO/800 acquired at different magnifications. Two regions can be distinguished in the HAADF-STEM images in Figure 5a,b: a matrix with lower intensity, and well-crystallized spherical NPs with higher intensity. This contrast is associated with the atomic-weight dependence of the constituent elements. Because the brightness of an individual atomic column in a HAADF-STEM image (Figure 5b) is approximately proportional to the square of the average atomic number, the contrast of Ag (Z = 47) appears brighter than that of Ti (Z = 22) in the HAADF electron-scattering regime. Therefore, the spherical NPs that appeared brighter in contrast indicated enrichment in Ag(0).
As a result of increasing temperature, the size of the Ag(0) NPs changed, appearing as smaller, evenly distributed Ag(0) NPs on the surface of the matrix (TiO2 nanosheets), coexisting with a small number of larger Ag(0) NPs segregated at the grain boundaries between two adjacent TiO2 matrix grains. Indeed, the zoomed BF STEM image in Figure 5c, taken from the yellow boxed area in Figure 5a, comprises an atomic-resolution image of the matrix, verifying the well-crystallized nanograins (NGs) with an interlayer spacing of 0.35 nm between the (101) planes, as expected for anatase TiO2. In Figure 5d, we present the BF STEM view of a ⟨110⟩-oriented Ag(0) NP with the characteristic five-fold twinned nanostructure. Five subunits, labeled T1-T5, are joined together, sharing their {111} planes. In regular fcc metallic Ag(0), with five ideal single-crystalline grains joined in a fivefold twinned nanostructure without distortion, the angle between two (111) faces is known to be 70.53° [45]. However, by applying the CrystalMaker software [46], we found that the angle along the [110] zone axis was 74.5°. Therefore, a significant angular mismatch between T1-T5 occurred in the Ag_TiO_LYO/800 sample upon annealing at 800 °C. Such an angular mismatch should be compensated by simultaneous adjustment of the bond lengths and the introduction of lattice defects such as dislocations and stacking faults (SF) in the Ag(0) NPs [43]. If we accept the model that any of the T1-T5 subunits can be regarded as a single sub-crystal merged into a pentagonal bipyramidal geometry, we can suggest that lattice strain and distortion may be generated from the matching of adjacent {111} twin boundaries (TBs) upon annealing. From Figure 5c, it appears that the atomic arrangement of the TiO2 grains with d(101) = 0.35 nm is preserved, whereas the atomic arrangement of the Ag(0) NPs at the surface is highly distorted (Figure 5f).
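For reference, the ideal inter-twin angle follows from the fcc geometry of {111} planes, and summing five such wedges leaves the well-known 7.35° closure gap that must be absorbed by strain or lattice defects:

```latex
\theta_{\{111\}} = \arccos\left(\tfrac{1}{3}\right) \approx 70.53^{\circ},
\qquad 360^{\circ} - 5\,\theta_{\{111\}} \approx 7.35^{\circ}
```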
Detailed examination of the Ag-Ag bond lengths by CrystalMaker demonstrates the presence of distortion within the examined pentagonal bipyramidal core (highlighted by the red circle in Figure 5f). It was observed that the Ag-Ag bonds are longer (0.291 nm) than the standard 0.287 nm for bulk Ag(0). The analysis suggested that the geometric distortion in the decahedral core was compensated by concomitant distortions along the fivefold T1-T5 axis. Twinning by a mirror plane, where the twinned Ag(0) lattice is obtained by a homogeneous simple shear of the original cubic lattice, is clearly visible in subunit T2 (Figure 5f). The two mirror regions are referred to as twin domains 1 and 2, respectively. Therefore, the Ag(0) NPs showed a strong tendency to form multiply twinned face-centered cubic superlattices with decahedral symmetry when their size was reduced to 10 nm (Figure 5f) [47,48]. Figure 5f also includes subunit T5, where the stacking fault (SF) introduces the dislocation lines observed as a result of TB interaction. The HAADF-STEM examination confirmed that a higher temperature was able to facilitate the progressive coalescence of two Ag(0) NPs into one larger Ag(0) NP with a fivefold twinned structure [49]. Additionally, we suggest that the perfect anatase TiO2 (101) may provide nucleation sites for the growth of Ag(0) NPs with a pentagonal bipyramidal structure (Figure 5c), which, unlike the Ag(0) NPs with a planar (flat) geometry in Ag_TiO_LYO/500 (Figure 5a-c), could support the improved performance of Ag_TiO_LYO/800 as a photocatalyst [50]. The STEM image of Ag_TiO_LYO/950 with the corresponding SAED (Figure S3) confirms single rutile nanocrystals with a size larger than 200 nm, in line with the XRD results (Table 1). Figure S4 presents a visualization of the surface topology of the annealed Ag_TiO_LYO/800 material using the AFM technique. Taking into account the geometrical similarities of the NPs, it is reasonable to also expect their identical chemical origin. Careful analysis of the nanostructure allowed us to distinguish the formation of the 2D nanosheets between the NPs, suggesting the formation of a new phase, which is likely to be different from the phase of the NPs (see Figure S4b). Therefore, the performed analysis evidences the formation of two different phases. Figure S5 presents the quantitative analysis of Ag_TiO_LYO/800 using surface profile plots. An NP height of 100 nm was estimated.
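The quoted bond elongation corresponds to a tensile strain of roughly 1.4% relative to bulk Ag:

```latex
\varepsilon = \frac{d_{\text{Ag-Ag}} - d_{\text{bulk}}}{d_{\text{bulk}}}
            = \frac{0.291\,\text{nm} - 0.287\,\text{nm}}{0.287\,\text{nm}} \approx 1.4\%
```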
SANS analysis was further performed in order to examine the NP size in the Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 samples. The obtained results show a prominent local increase in intensity at Q ~ 0.2-0.35 nm^-1 (see Figure 6). The data were fitted using a model of spherical particles with a log-normal distribution of radius:

f(r) = [1/(√(2π) σ r)] exp(−ln²(r/r0)/(2σ²))

The fitted size parameters were r0 = 2.3 nm and r0 = 1.92 nm for the Ag_TiO_LYO/500 and Ag_TiO_LYO/650 samples, respectively. The calculated size distributions were quite wide (σ = 0.7-0.8), showing the high polydispersity of the scattering objects. In contrast, the SANS data for the samples treated at higher temperatures, Ag_TiO_LYO/800 and Ag_TiO_LYO/950, did not show any deviation from Porod scattering [39] (Figure 6b), and nanosized objects were not detected by SANS. This observation is in agreement with the XPS experiment, which confirmed that heating to 800 °C increases the Ag surface concentration to 6.9%, suggesting that Ag NPs, with their increased surface free energy during annealing, can subsequently promote sintering and agglomeration of Ag(0) on the surface of the 2D TiO2 nanosheets [41].
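A minimal sketch of this fitting model: the standard sphere form factor is averaged over the log-normal radius distribution, weighted by the squared particle volume. The functional forms are textbook expressions; only r0 and σ are taken from the fit quoted above, and the intensity scale is arbitrary.

```python
import numpy as np

def sphere_form_factor(q, r):
    """Normalized sphere form factor P(q,r) = [3(sin(qr) - qr cos(qr))/(qr)^3]^2."""
    qr = np.outer(q, r)
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

def lognormal_pdf(r, r0, sigma):
    """Log-normal distribution of radii with median r0 and width sigma."""
    return np.exp(-np.log(r / r0) ** 2 / (2 * sigma**2)) / (r * sigma * np.sqrt(2 * np.pi))

def sans_intensity(q, r0, sigma, scale=1.0):
    """I(q) ~ scale * <P(q,r) V(r)^2> averaged over the size distribution."""
    r = np.linspace(0.1, 20.0, 2000)                   # radius grid, nm
    weights = lognormal_pdf(r, r0, sigma) * (r**3) ** 2  # V(r)^2 ~ r^6
    return scale * sphere_form_factor(q, r) @ weights / weights.sum()

q = np.linspace(0.05, 1.0, 100)           # nm^-1, covers the observed bump
I = sans_intensity(q, r0=2.3, sigma=0.7)  # parameters of Ag_TiO_LYO/500
```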
Our detailed HAADF-STEM analysis further demonstrated that the geometrically non-ideal decahedral Ag(0) core and the defected T1-T5 sub-NCs progressively coalesce into one larger Ag(0) NP with a fivefold twinned structure in order to minimize the total potential energy of the Ag(0) NPs.
The UV-Vis light absorption spectra (Figure 7) revealed that the loading of Ag could enhance the light absorption of the TiO2 nanosheets. The UV-Vis spectra of the Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 samples were measured in distilled water, which shows no absorption in this range.
Figure 7. Diffuse reflectance spectra of the Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 samples, and the corresponding Tauc plot (inset).
All samples were highly homogeneous, and repeated measurements afforded reproducible spectra. The samples annealed at 500 and 650 °C show the most intense bands in the UV region, with maxima at 296 nm and 322 nm, respectively. Both samples are grey. The sample annealed at 800 °C, with a dark grey color, shows a broad absorption band at 350 nm, with pronounced absorption in the visible region. The last sample, annealed at the highest temperature, exhibited absorption only in the visible region, beginning at 395 nm. The differences observed are caused by structural changes in TiO2 and by both structural and concentration changes in the metallic Ag(0) NPs. Diffuse reflectance spectra of the samples (Figure 7) were measured against PTFE, which shows a reflectance greater than 99% throughout the measured range (220-700 nm). The spectra of the samples evidenced a major drop in reflection in the interval 350-420 nm, which is connected to structural changes due to the thermal treatment.
The diffuse reflectance spectra of the samples annealed at 650-950 °C consist of a flat, highly reflective region at long wavelengths that abruptly transforms into a steeply falling reflectance edge at shorter wavelengths. On the other hand, the sample annealed at 500 °C exhibited reflection spectra with two maxima, due to its low crystallinity, numerous defects, and the presence of carbon impurities. The diffuse reflectance spectra were transformed to a Tauc plot (Figure 7, inset), proving an indirect allowed transition in the prepared materials annealed at temperatures of 650 °C and higher. Samples Ag_TiO_LYO/650 and Ag_TiO_LYO/800 possessed band gaps at 3.23 eV, which is typical for an anatase structure, but the higher-temperature sample (Ag_TiO_LYO/800) had a much higher silver surface concentration, which explains its having the highest photocatalytic activity. A band gap of about 3.01 eV was observed in the Ag_TiO_LYO/950 sample due to the transformation to rutile, which is known for lower photocatalytic activity.
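A minimal sketch of the Kubelka-Munk/Tauc procedure described above, assuming a hypothetical reflectance array; the exponent 1/2 in the Tauc coordinate corresponds to the indirect allowed transition:

```python
import numpy as np

# Hypothetical diffuse reflectance data: wavelength (nm) and reflectance (0-1).
wavelength = np.linspace(320, 450, 14)
R = np.clip(np.linspace(0.05, 0.95, 14), 1e-3, None)  # placeholder values

E = 1240.0 / wavelength          # photon energy in eV, as in Equation (3)
F = (1 - R) ** 2 / (2 * R)       # Kubelka-Munk function F(R) ~ K/S

# Tauc coordinate for an indirect allowed transition: (F*E)^(1/2) vs. E.
tauc = (F * E) ** 0.5

# Extrapolate the steepest part of the edge to the energy axis.
i = np.argsort(np.gradient(tauc, E))[-5:]   # five steepest points
slope, intercept = np.polyfit(E[i], tauc[i], 1)
print(f"E_bg ~ {-intercept / slope:.2f} eV")
```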
Photocatalytic Decomposition of 4-CP and TOC
HPLC and TOC measurements were applied to obtain a better understanding of the photocatalytic degradation of 4-CP under UV irradiation. We observed that the highest 4-CP removal was obtained in the presence of the Ag_TiO_LYO/800 photocatalyst, followed by Ag_TiO_LYO/650, Ag_TiO_LYO/500, and Ag_TiO_LYO/950 (Figure 8).
The calculated degradation rate constants k (s^-1) are shown in Table 1. The obtained Eg values (Figure S6) were correlated with the UV-Vis analysis (Figure 7) via the Kubelka-Munk function.
The best performance in 4-CP photocatalytic decomposition was achieved with the Ag_TiO_LYO/800 material, which had the lowest Ebg and a redshift of the optical absorption (Figure S6). The Eg value of Ag_TiO_LYO/800 was lower than that of the reference anatase sample (Eg = 3.24 eV) and of the pristine 2D TiO2 nanosheets obtained by the same method, reported in our previous article [39]. Briefly, the Eg values for the pristine samples were estimated as TiO_LYO (3.19 eV), TiO_LYO/500 (3.24 eV), TiO_LYO/650 (3.24 eV), TiO_LYO/800 (3.24 eV), and TiO_LYO/950 (3.23 eV). The higher photocatalytic efficiency of the Ag_TiO_LYO/800 catalyst can be explained by the even distribution of Ag(0) on the TiO2 (anatase) nanosheet surface, with an average size of 5-10 nm, as well as by the fivefold twinned Ag(0), which has a clear positive effect on the photocatalytic activity under UV light irradiation [51,52]. Additionally, at a temperature of 800 °C, the Ag(0) NPs stabilized the anatase structure, decreasing the size of the crystallites compared to pristine TiO2 nanosheets. Further annealing at 950 °C leads to a decrease in the photocatalytic efficiency of the Ag_TiO_LYO/950 material under UV light. The differences in particle shape affect the redistribution of the Ag(0) NPs on the Ag_TiO_LYO/950 surface, resulting in its lower photo-efficiency. Annealing at 950 °C resulted in a full conversion from anatase to rutile and a change in all microstructural characteristics and in the photocatalytic activity. Herein, we used TOC removal as an inexpensive test to evaluate the Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 materials. The TOC removal over time is illustrated in Figure 9.
Degradation of 4-CP in the presence of the Ag_TiO_LYO/500, Ag_TiO_LYO/650, Ag_TiO_LYO/800, and Ag_TiO_LYO/950 photocatalysts is rapid in the first 120 min of treatment but slows thereafter. TOC removal in the range of 2.7 to 12% was achieved in the measured samples after 240 min of photocatalytic treatment. The highest degree of mineralization (12% TOC removal) was obtained in the presence of the Ag_TiO_LYO/650 photocatalyst.
Structural Analysis and Surface Characterization of Ag_TiO2_AP and Ag_TiO2_500 NSTFs
Nanostructured thin films (NSTFs) prepared by laser ablation have extremely high adhesion to all substrates. The as-prepared (Ag_TiO2_AP) deposit was greyish, but after annealing at 500 °C (Ag_TiO2_500), it changed color, and the samples became pink with a metallic luster.
To evaluate the microstructure and phase composition of the Ag_TiO2_AP and Ag_TiO2_500 NSTFs, the samples were studied using various TEM techniques: imaging (including HRTEM), SAED, STEM-HAADF, and EDX mapping. In addition, the surface and thickness of the deposits were investigated using AFM. Figure 10 shows TEM observations of Ag_TiO2_AP and Ag_TiO2_500 NSTFs acquired at low magnification, displaying the different morphologies of the nanoparticles on the surfaces of the NSTFs. Ag_TiO2_AP contains nanoparticles with irregular shapes and sizes within a wide range from 10 to 100 nm. In contrast, the annealed sample contained spherical nanoparticles with sizes within the narrower range of 20-40 nm, homogeneously distributed on the TiO2 surface. The corresponding SAED patterns were used to determine the presence of Ag and TiO2 and their structural form. The SAED examination of Ag_TiO2_AP (Figure 10b) confirmed a mixture of two phases: a broad halo coexisting with several sharp concentric diffraction rings. The disordered phase belonged to the amorphous TiO2 (matrix), whereas the diffraction rings corresponded to polycrystalline Ag(0) with a cubic structure in the Fm-3m space group and JCPDS PDF No. 87-0718 (Figure 10c). The SAED pattern of the annealed Ag_TiO2_500 sample (Figure 10e) also confirmed a mixture of two phases; however, these were slightly different from Ag_TiO2_AP. The diffraction rings corresponding to polycrystalline Ag (Figure 10f) were still present, but the broad halo of the amorphous TiO2 was replaced by intense spots corresponding to a single-crystal orientation of anatase viewed down [110]. In addition, the TiO2 film had crystallized into large single-crystal anatase grains that were slightly bent, as evidenced by the bending contours in Figure 10d. Therefore, it can be inferred that microstructural evolution took place during the annealing process; the Ag_TiO2_500 NSTFs consisted of well-crystallized anatase TiO2 (after the transition from amorphous TiO2) and uniformly dispersed Ag(0) NPs.
The TEM results were further corroborated by the STEM/HAADF images and STEM/EDX mapping of the Ag_TiO2_AP and Ag_TiO2_500 NSTFs (Figure 11). The white contrast in the HAADF images indicates the heavy Ag on the light support of TiO2, displayed in dark grey or black. Moreover, the EDX maps show the Ag, Ti, and O elemental distribution over the film in detail. It can be seen from the maps of Ti and O that both elements are interwoven and distributed all over the film, whereas the Ag map suggests that Ag is not interconnected with either Ti or O. This observation indicates that Ag forms a separate phase, which was also evidenced by the SAED observations (Figure 10).
A detailed understanding of the Ag_TiO2_500 NSTF growth and morphology was obtained by HRTEM analysis (Figure 12). The high-magnification HRTEM image in Figure 12b corresponds to Ag_TiO2_AP, in which Ag single-crystal NPs with a quasi-spherical shape and an average particle size close to 20 nm can be easily recognized. Figure 12d-f shows the results for Ag_TiO2_500. Upon annealing at 500 °C, the original Ag NPs were completely rearranged into spherical NPs. In addition, these spherical NPs were very often composed of the intergrowth of multiple individuals with {111} twinning planes. The most common was the well-known pentagonal intergrowth, with a common [110] zone axis for all the individuals [42][43][44]. The formation of such NPs could be the result of a "dissolution-recrystallization" process of Ag agglomerates formed by the gathering of smaller quasi-spherical Ag NPs upon annealing. This is further supported by the determination of the particle size redistribution using the ImageJ software (Figure S7).
The obtained results indicate that annealing at 500 °C yielded significantly different particle size distributions for Ag_TiO2_500 than for Ag_TiO2_AP. These different particle sizes may represent the mass balance between the constituent elements (crystallization of anatase from amorphous TiO2) and the formation of smaller, more stable spherical Ag NPs. Therefore, it can be concluded that the annealing process can modify the entire morphology of Ag_TiO2_AP.
Figure 10. (f) Interpretation of (e), showing the crystalline anatase film (spots in blue circles and blue marks) decorated with nanoparticles of cubic Ag (green arcs and green marks).
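A minimal sketch of the kind of size-distribution comparison performed with ImageJ (Figure S7); the diameters below are synthetic stand-ins for values digitized from the TEM images:

```python
import numpy as np

rng_ap, rng_500 = np.random.default_rng(0), np.random.default_rng(1)

# Synthetic particle diameters (nm): broad/irregular vs. narrow/spherical.
d_ap = rng_ap.uniform(10, 100, 200)   # stand-in for Ag_TiO2_AP
d_500 = rng_500.normal(30, 5, 200)    # stand-in for Ag_TiO2_500

for name, d in [("Ag_TiO2_AP", d_ap), ("Ag_TiO2_500", d_500)]:
    counts, edges = np.histogram(d, bins=np.arange(0, 110, 10))
    print(f"{name}: mean = {d.mean():.1f} nm, std = {d.std():.1f} nm")
    print("  counts per 10 nm bin:", counts.tolist())
```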
The surface topography of the Ag_TiO2_AP and Ag_TiO2_500 NSTFs was studied by AFM (Figure 13). It was observed that continuous and uniform films were produced without a visible change in topography. From the surface profile plots measured along the horizontal lines in the AFM images, it can be seen that annealing led to an increase in roughness and to a small increase in thickness from 40 to 43 nm in Ag_TiO2_500 (Figure 13c,f). The different colors of the NPs reflect their different heights, suggesting the formation of agglomerates upon annealing. Analysis of the magnified images reveals the size distribution of the NPs, with a lateral size of 18-19 nm for the Ag_TiO2_AP NSTF. Larger agglomerates with sizes of 26-27 nm appeared on the Ag_TiO2_500 NSTF surface. The thickness was measured at the sharp edge of the film, which was produced by applying a mask to the substrate during the deposition. The thickness was about 40 and 43 nm for the Ag_TiO2_AP and Ag_TiO2_500 NSTFs, respectively. The higher spatial density of the agglomerations in the Ag_TiO2_500 NSTF and the shift of the size distribution towards a larger NP size were further supported by the ImageJ software estimation (Figure S6). Ag_TiO2_AP and Ag_TiO2_500 NSTFs have distinct microstructures and phase compositions. Ag_TiO2_AP is composed of amorphous TiO2 decorated with cubic Ag particles of irregular shape, with sizes in the range 10-100 nm. In contrast, Ag_TiO2_500 is composed of a crystalline TiO2 (anatase) thin film decorated with spherical Ag nanoparticles with sizes in the range 20-40 nm, homogeneously distributed on the TiO2 surface (Figure 11). These particles very often exhibit multiple twinning on {111} planes (Figure 12). The film thickness was 40 nm for the Ag_TiO2_AP sample and 43 nm for the Ag_TiO2_500 NSTFs, as observed by AFM (Figure 13). The maximum height of the nanoparticles above the TiO2 film was 16 nm for the as-prepared sample and 26 nm for the annealed sample. Thus, it can be concluded that the annealing process not only leads to the phase transformation of TiO2 from amorphous to crystalline anatase and the redistribution of Ag into spherical nanoparticles with a more homogeneous size and spatial distribution on the surface of the thin film, but also to an increase in the overall thickness of the Ag_TiO2_500 NSTFs.
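A minimal sketch of how the RMS roughness and step-edge film thickness can be extracted from an AFM height map; the 512 × 512 array below is synthetic, standing in for the measured topographies:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 512 x 512 height map (nm): a 40 nm film edge plus roughness.
z = np.where(np.arange(512)[None, :] > 256, 40.0, 0.0) \
    + rng.normal(0, 1.5, (512, 512))

film = z[:, 300:]                                   # region on top of the film
rq = np.sqrt(np.mean((film - film.mean()) ** 2))    # RMS roughness
thickness = film.mean() - z[:, :200].mean()         # step height across the edge

print(f"Rq = {rq:.2f} nm, thickness = {thickness:.1f} nm")
```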
The UV-Vis spectra of the Ag_TiO2_AP (Figure 14a) and Ag_TiO2_500 (Figure 14b) NSTFs were measured on the quartz substrates using the transmission technique in the range of 190-1100 nm. The undoped TiO2 layer showed high absorption at 330 nm. The film with deposited Ag(0) NPs showed a shoulder at 320 nm, which can be regarded as interfacial charge-transfer absorption, and a very broad bump above 350 nm. The broad feature had a flat maximum centered at about 850 nm, which can be attributed to localized surface plasmon resonance (LSPR) absorption. As the LSPR is tunable and strongly dependent on the morphology (shape and size), composition, and interaction between the Ag(0) NPs and the substrate, the broad feature was regarded as a result of the size distribution. Both of these absorptions could be considered to have the potential to improve the photocatalytic properties of the Ag_TiO2 NSTFs. Indeed, in the Ag_TiO2_AP NSTF, the TiO2 layer showed an intensity maximum in the UV spectrum due to partial crystallization, while the Ag_TiO2_500 NSTF demonstrated intense LSPR at about 730 nm. The absorption band was narrowed because the annealed NPs are spherical and have a narrow diameter distribution, as proved by the STEM and ImageJ analyses (Figure S6). The band connected to interfacial charge transfer observed in Ag_TiO2_AP vanished due to the small contact area between the Ag NPs and anatase TiO2. The corresponding Ag 3d bands were broadened due to the nanosize effect (FWHM = 1.21 eV), and the shape of the elemental Ag 3d bands was asymmetrical, as seen in Figure 15. Photoelectrochemical measurements were conducted with deposits on FTO glass substrates. Cyclic voltammetry was studied between bias potentials of −0.5 and +1.5 V with a Pt counter electrode and an Ag/AgCl reference electrode in 0.5 M H2SO4 as the electrolyte. Under irradiation by visible light (100 W solar lamp), hydrogen generation on the Pt electrode and oxygen generation on the working electrode were visible, and the gas products were confirmed by mass spectrometry. Above 1.2 V, rapid degradation of the Ag_TiO2 samples was observed, and further LSV measurements were conducted below this potential (Figure 16) with fresh electrodes.
The corresponding photocurrent-potential relationships were tested under illumination and in the dark at a low scan rate (1 mV/s). Both the as-prepared and annealed samples exhibited an anodic current when exposed to light compared to experiments performed in the dark. The photocurrent of the Ag_TiO2_500 photoanode started at 0.23 V vs. Ag/AgCl, which corresponded to the water oxidation potential. A much higher photoelectrochemical activity was observed in the annealed Ag_TiO2_500 TNSF sample, as expected due to the intense LSPR optical absorption and better crystallinity, which are two important factors in improved PEC performance in TiO2 materials.
Figure 16. Results of PEC water splitting: LSV of Ag_TiO2_AP and Ag_TiO2_500 TNSFs.
The Key Role of Multiply Twinned Ag(0) NPs on the Photocatalytic Properties of TiO2 Nanosheets and TiO2 NSTFs
It has been well documented that when TiO2 particles are illuminated with UV light, electron/hole (e−/h+) pairs can be easily created. The separation of the (e−/h+) pairs is a crucial step, and the low quantum yield of any photocatalytic reaction is due to the high rate of recombination between the (e−/h+) pairs. In the absence of suitable (e−/h+) scavengers, recombination can occur within a few nanoseconds. A promising alternative for avoiding (e−/h+) recombination is surface defects, which are capable of trapping surface (e−/h+) pairs and thus preventing their recombination. The observed improvement in 4-CP degradation with Ag_TiO_LYO/800 and in PEC activity with Ag_TiO2_500 NSTFs was attributed to the crucial role of multiply twinned Ag(0) characterized by {111} TBs. We propose that the presence of TBs in decahedral Ag(0) leads to an increase in the number of immobilized Ag atoms with dangling bonds on the anatase TiO2 surface [53]. It is generally known that such immobilized atoms are simultaneously highly reactive and highly unstable. After UV-light illumination, a transfer of photoexcited electrons from TiO2 to the immobilized Ag could be achieved. The Ag could trap the electrons, and this interplay might (i) avoid (e−/h+) recombination, and (ii) contribute to the generation of •O2− and •OH radicals. Complementarily, the protonation of •O2− species would generate HO2• radicals, which are extremely unstable and, via UV-illuminated H2O2, could also give rise to hydroxyl •OH radicals [54]. The strong involvement of active •OH radicals could lead to the easy formation of [OH-mono-chlorophenol] intermediate species, which has been reported to be a major step in 4-CP photocatalytic degradation [55]. When Ag(0) is incorporated into the TiO2 nanocrystals through the defect twin boundaries, the interaction between oxygen and the electron cloud of the adjacent Ag atoms can positively shift the chemical state of the Ag atoms and activate the subsequent oxidation reaction. Therefore, the presence of multiply twinned Ag(0) on the surface of nanostructured TiO2, even when introduced by completely different synthetic routes, can provide a better opportunity for the adsorption and degradation of organic pollutants and can boost PEC activity.
Conclusions
In summary, the CLT and laser ablation techniques were used to form Ag_TiO2 nanosheets and Ag_TiO2 nanostructured thin films (NSTFs), respectively. The efficiency of both methods was compared in terms of morphology and photocatalytic activity. A common morphological feature is multiply twinned Ag(0) characterized by {111} twin boundaries. The Ag_TiO_LYO/800 nanosheets exhibited the most enhanced photocatalytic activity under UV light in the decomposition of 4-CP and TOC, followed by Ag_TiO_LYO/650. The superior photoactivity of Ag_TiO_LYO/800 could be explained by the evenly distributed multiply twinned Ag(0) NPs with pentagonal bipyramidal geometry and the reduced Ebg of 3.07 eV, compared with standard TiO2_P25, Ag_TiO2 nanosheets annealed at lower temperatures, and pristine 2D TiO2 nanosheets obtained by CLT. The Ag(0) stabilized defect-free anatase (101) TiO2 grains at temperatures of up to 800 °C, which in turn may serve as nucleation sites for the growth of Ag(0) NPs with pentagonal bipyramidal geometries. Ag-TiO2 NSTFs decorated with multiply twinned Ag(0) achieved improved photoelectrochemical water splitting in the visible-light region due to an additionally induced plasmonic effect. Both the CLT and laser ablation methods supported the formation of immobilized multiply twinned Ag(0) on the TiO2 surface, which was able to avoid (e−/h+) recombination and contribute to the generation of •O2− and •OH radicals, favoring the photocatalytic activity of the Ag_TiO2 materials.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12050750/s1, Figure S1: Synthetic route for the preparation of freeze-dried precursor for Ag-modified TiO2 nanosheets. Figure S2: Powder XRD diffraction patterns of Ag modified 2D TiO2 nanosheets. Figure S3: S/TEM images. Figure S4: AFM 3D-images of the Ag_TiO_LYO/800 sample deposited on a glass substrate. Figure S5: Profiling analysis of the 2D AFM image of the Ag_TiO_LYO/800 deposited on a glass substrate. Figure
| 13,806.8 | 2022-02-23T00:00:00.000 | [ "Materials Science", "Chemistry" ] |
Chitosan@Carboxymethylcellulose/CuO-Co2O3 Nanoadsorbent as a Super Catalyst for the Removal of Water Pollutants
In this work, an efficient nanocatalyst was developed based on nanoadsorbent beads. Herein, carboxymethyl cellulose-copper oxide-cobalt oxide nanocomposite beads (CMC/CuO-Co2O3) crosslinked using AlCl3 were successfully prepared. The beads were then coated with chitosan (Cs), giving Cs@CMC/CuO-Co2O3. The prepared beads, CMC/CuO-Co2O3 and Cs@CMC/CuO-Co2O3, were utilized as adsorbents for heavy metal ions (Ni, Fe, Ag and Zn). Using CMC/CuO-Co2O3 and Cs@CMC/CuO-Co2O3, the distribution coefficients (Kd) for Ni, Fe, Ag and Zn were (41.166 and 6173.6 mL g−1), (136.3 and 1500 mL g−1), (20,739.1 and 1941.1 mL g−1) and (86.9 and 2333.3 mL g−1), respectively. Thus, Ni was most strongly adsorbed by the Cs@CMC/CuO-Co2O3 beads. The metal ions adsorbed on the beads were converted into nanoparticles by treatment with a reducing agent (NaBH4), and the product was named Ni/Cs@CMC/CuO-Co2O3. Further, the prepared nanoparticle-decorated beads (Ni/Cs@CMC/CuO-Co2O3) were utilized as nanocatalysts for the reduction of organic and inorganic pollutants (4-nitrophenol, MO and EY dyes, and potassium ferricyanide K3[Fe(CN)6]) in the presence of NaBH4. Among all catalysts, Ni/Cs@CMC/CuO-Co2O3 had the highest catalytic activity toward MO, EY and K3[Fe(CN)6], removing up to 98% in 2.0 min, 90% in 6.0 min and 91% in 6.0 min, respectively. The reduction rate constants of MO, EY, 4-NP and K3[Fe(CN)6] were 1.06 × 10−1, 4.58 × 10−3, 4.26 × 10−3 and 5.1 × 10−3 s−1, respectively. Additionally, the catalytic activity of the Ni/Cs@CMC/CuO-Co2O3 beads was effectively optimized. The stability and recyclability of the beads were tested over five cycles of the catalytic reduction of MO, EY and K3[Fe(CN)6]. It was confirmed that the designed nanocomposite beads are ecofriendly and efficient, with high strength and stability, as catalysts for the reduction of organic and inorganic pollutants.
Introduction
Water pollution and its treatment is one of the most serious issues worldwide. Effluent discharge is produced by human activities, including industrialization, which introduces a considerable amount of wastewater into nature [1]. Organic contaminants are the most noticeable water pollutants (e.g., organic dyes and nitrophenol compounds), which have deteriorating effects on human health and other living organisms due to their carcinogenic effect on nature [2]. Besides organic contamination, inorganic pollutants, including heavy metal ions, are also considered among the most serious environmental problems. According to the literature, the accumulation of toxic metal ions in wastewater can cause genetic alteration, which can affect hormone metabolism and lead to serious diseases such as cancer and fetal damage [6].
The prepared nanocomposite beads were characterized by various analytical techniques, including FT-IR, SEM, EDX and XRD. The surface morphology of the prepared materials, CuO-Co2O3, CMC, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3, was examined by FE-SEM, as seen in Figure 1. Low-magnification and high-magnification images of the prepared nanocomposite beads are shown on the left and right sides of Figure 1, respectively. Figure 1a,b shows the particles of CuO-Co2O3. The pure CMC beads showed flat surfaces with low porosity [4,35], as seen in Figure 1c,d. On the other hand, the CMC/CuO-Co2O3 images illustrate that CuO-Co2O3 was well anchored on the CMC surface, as seen in Figure 1e,f. As seen in Figure 1g,h for Cs@CMC/CuO-Co2O3, chitosan coated and filled the porous surface of the CMC/CuO-Co2O3. Moreover, Figure 1i,j illustrates that Ni was embedded and dispersed on Cs@CMC/CuO-Co2O3, covering most of the Cs@CMC/CuO-Co2O3 surface. The surface area of the metal increased due to the presence of functional groups such as COO− on CMC, and -NH2 and -OH on chitosan [36].
Energy Dispersive X-ray (EDX) Analysis
To further confirm the composition of the prepared nanocomposite beads, EDX analysis was applied. As clearly seen from Figure 2, the Cu and Co elements were present in the CuO-Co2O3 powder and in all prepared beads (CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3), confirming the successful synthesis of the proposed materials. The carbon and oxygen elements were present in the EDX spectra of CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3 due to the chitosan and CMC functional groups [37]. The Al element was observed in all beads because AlCl3 was used as a cross-linking agent in the formation of the beads.
X-ray Diffraction (XRD) Analysis
The crystal structures and phase purity of the nanocomposite beads were examined by XRD analysis. The X-ray diffraction patterns were collected in the 2θ range of 10-80°, as indicated in Figure 3. As seen in Figure 3a, the XRD pattern of pure CMC showed a broad peak located at 2θ = 22° owing to the amorphous CMC structure [4]. In the XRD pattern of the CuO-Co2O3 nanocomposite, the diffraction peaks at (311), (400), (511), (220) and (440) correspond to the Co2O3 phase [38]. CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3 show the same diffraction peaks as the CuO-Co2O3 nanocomposite, indicating the successful preparation of the beads, as clearly seen in Figure 3b-d. Moreover, the XRD pattern of the Ni/Cs@CMC/CuO-Co2O3 beads showed peaks indexed to (111), (200) and (220), which are attributed to the Ni phase on the surface of the Ni/Cs@CMC/CuO-Co2O3 beads.
Fourier Transform Infrared (FT-IR) Analysis
Figure 4 represents the FT-IR spectra of all prepared beads, pure CMC, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3, along with the CuO-Co2O3 powder. The spectrum of CuO-Co2O3 (Figure 4a) showed a band at 400-600 cm−1, which was assigned to the metal oxides (M-O), besides a broad band for O-H (bending and stretching) [4]. The FT-IR spectrum of CMC (Figure 4b) exhibited broad bands at 3300-3500 cm−1, at 1422 and 1607 cm−1, and at 1000-1200 cm−1, which were assigned to the stretching of -OH groups, the symmetrical and asymmetrical stretching vibrations of the COO− groups, and -C-O stretching on the polysaccharide skeleton, respectively [4,39]. All these bands were present in CMC/CuO-Co2O3 (Figure 4c), while for the chitosan-coated beads Cs@CMC/CuO-Co2O3 (Figure 4d) new bands at 1155 and 1654 cm−1 appeared, which were attributed to the saccharide unit and the amine (-NH2) group of the chitosan polymer [40]. All the peaks that appeared in the Ni/Cs@CMC/CuO-Co2O3 beads are clearly seen in Figure 4e.
Metal Uptake Study
To evaluate the amount of metal adsorbed on the surface of the Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 beads, the distribution coefficient (Kd) and uptake capacity (qe) were calculated using Equations (1) and (2) [41]:
Kd = ((Ci − Ce)/Ce) × (V/m) (1)
qe = (Ci − Ce) × V/m (2)
where Ci and Ce are the concentrations of the metal ions before and after adsorption by the Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 nanocomposite beads, respectively, V refers to the solution volume (L), and m is the mass of beads (g). As can be seen from Table 1, Ni has the highest removal percentage with Cs@CMC/CuO-Co2O3 compared to the other metals (66, 86.06, 70 and 79% for Ag(I), Ni(II), Zn(II) and Fe(II), respectively). Comparing the adsorbed metals, Ag(I) has the highest adsorption (%) with CMC/CuO-Co2O3, up to 95%. This result indicates that Ag(I) was more adsorbed by the CMC/CuO-Co2O3 beads, and Ni(II) was more adsorbed by the Cs@CMC/CuO-Co2O3 beads. Overall, the Cs@CMC/CuO-Co2O3 adsorbent was more effective than the CMC/CuO-Co2O3 beads toward all metals, as shown in Figure 5. The reason for this is that chitosan has strong chelating properties toward metal ions due to the high content of amino and hydroxyl groups in its composition, which act as active sites [15,42,43].
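For concreteness, below is a minimal sketch of how Equations (1) and (2) translate into code. The concentrations, solution volume and bead mass are hypothetical placeholders, not values from this study.

```python
# Distribution coefficient Kd (Equation (1)) and uptake capacity qe (Equation (2)).
# All numbers below are hypothetical; substitute measured values.

def distribution_coefficient(c_i, c_e, volume_l, mass_g):
    """Kd = ((Ci - Ce) / Ce) * (V / m), in L/g (multiply by 1000 for mL/g)."""
    return ((c_i - c_e) / c_e) * (volume_l / mass_g)

def uptake_capacity(c_i, c_e, volume_l, mass_g):
    """qe = (Ci - Ce) * V / m, in mg/g when concentrations are in mg/L."""
    return (c_i - c_e) * volume_l / mass_g

c_i, c_e = 5.0, 0.7   # mg/L before and after adsorption (hypothetical)
V, m = 0.005, 0.005   # 5 mL of solution, 5 mg of beads

print(f"Kd = {1000 * distribution_coefficient(c_i, c_e, V, m):.1f} mL/g")
print(f"qe = {uptake_capacity(c_i, c_e, V, m):.2f} mg/g")
```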
Effect of Initial Concentration of Ni(II) Solution
As shown in Figure 6a, the influence of different concentrations of Ni(II) ions was tested to evaluate the adsorption isotherms. According to the literature, the adsorption efficiency decreases with increasing Ni(II) concentration, as shown in most studies on heavy-metal removal. The likely explanation is that at low concentrations the adsorbent surface sites are sufficient to accommodate the metal ions in solution, so the sorption rate is fast; however, when the metal concentration is increased, the adsorbent surface sites are no longer sufficient to capture all the metal ions present in the solution [44].
Effect of pH of Ni(II) Solution
The pH was the most significant controlling parameter in the adsorption study. Adsorption of heavy metals depends on the pH and the type of ion solution. For heavy metal adsorption, a pH range of 5.0-8.0 is usually sufficient. Due to the decrease in H+ concentration, heavy metal ions exist as free ions in the initial pH range of 4.0-5.0 and can be adsorbed onto chitosan at higher pH values. Because the H+ concentration is high at lower pH values, protonation of the amino groups can cause electrostatic repulsion between the protonated groups and the heavy metal ions. The net negative charge on the surface of chitosan increases when the pH value rises, and ionizable ligands such as the -COOH, -OH and -NH2 groups become free, enhancing binding with the heavy metal ions. It has been reported that chitosan carries strong cationic charges at pH values below 6.5 and strong anionic charges at pH above 6.5. In our adsorbent beads, the carboxylic (-COOH) and amino (-NH2) groups present in the beads are responsible for the binding of Ni(II). Therefore, various pH values of the Ni ion solution were examined using the adsorbent Cs@CMC/CuO-Co2O3 nanocomposite beads, as illustrated in Figure 6b. It was clearly found that the adsorption of Ni ions increased with an increase in pH from 3 to 7 (neutral); the removal efficiency was 1%, 29% and 83% for pH values of 3, 5 and 7, respectively. However, the adsorption then decreased greatly with an increase in pH from 7 to 9. The reduction in adsorption at high pH may be due either to aggregation of the chitosan polymer as its amino groups lose their protonation, or to the precipitation of the Ni ions in the alkaline medium as Ni(OH)2 [45]. Therefore, neutral pH was selected as the optimal condition for further experiments. The same effect was reported in the literature [26,46,47].
Effect of Ni(II) Adsorption Contact Time
The contact time is a significant factor in an adsorption study. In fact, the adsorption performance depends strongly on the time required for equilibrium between the adsorbent and adsorbate. The adsorption of Ni ions was carried out at different contact times (10, 30, 60, 120 and 240 min). As clearly seen from Figure 6c, 48.6% of the Ni(II) was removed in 10 min, and removal reached 88.7% at 60 min before gradually decreasing to 62% at 240 min. The explanation for this behavior is that within the first 60 min the surface of the Cs@CMC/CuO-Co2O3 became filled with Ni(II) until the whole surface was occupied, and the adsorption then gradually decreased after 60 min due to the saturation of the active sites on the beads' surface. This result is in accordance with previously published findings [48].
Effect of Adsorbent Dose
The adsorbent dose is also an important factor in adsorption. To evaluate the effect of the adsorbent amount on Ni removal, three different doses of Cs@CMC/CuO-Co2O3 beads (2.5, 5 and 10 mg) were used for a fixed initial Ni(II) concentration (5 mg L−1) at 25 °C with a contact time of 60 min. As clearly seen in Figure 6d, the removal percentage of Ni(II) increased from 53% to 83% when the amount of beads was increased from 2.5 to 5 mg. However, the removal percentage then decreased with a further increase in bead dosage from 5 to 10 mg. The best explanation for this behavior is that at 5 mg the Cs@CMC/CuO-Co2O3 beads offered more active sites, which remained unsaturated during the adsorption procedure. Therefore, 5 mg was fixed as the optimum adsorbent dose for further study.
Moreover, the isotherm and kinetic adsorption behavior was studied as follows. The two isotherm models, Langmuir and Freundlich, given in Table 2 (Equations (4) and (5)), were used to model the adsorption process of our system. According to the correlation coefficient (R²) values obtained for the isotherm models, the Langmuir model was found to be suitable for representing the sorption system, which assumes a monolayer of analyte established on a homogeneous surface of the adsorbent. Linearity of the plot of Ce/qe vs. Ce was achieved with an R² of 0.9433. The Langmuir constant (qm) was calculated to be 12.00 mg g−1, which is close to the experimental value of the adsorption capacity (11.00 mg g−1). The Langmuir constant (b) is equal to 0.08 L mg−1, which indicates the strong affinity of the Ni(II) ions for the adsorbent beads. The essential factor RL was calculated using Equation (3) [44]:
RL = 1/(1 + b·C0) (3)
where C0 is the initial concentration of Ni(II) (mg L−1), and b is the Langmuir constant. The calculated RL was found to be 0.50, which is in the range 0 < RL < 1, indicating favorable adsorption. This confirms that the adsorption of Ni(II) ions by Cs@CMC/CuO-Co2O3 is favorable, as shown in Table 3.
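As a quick consistency check on Equation (3), the reported b = 0.08 L mg−1 and RL = 0.50 imply an initial concentration of 12.5 mg L−1 (a value inferred here, not stated in the text); a short sketch:

```python
# Separation factor RL = 1 / (1 + b * C0) (Equation (3)).
b = 0.08    # Langmuir constant, L/mg (from the text)
c0 = 12.5   # initial Ni(II) concentration, mg/L, implied by RL = 0.50

r_l = 1.0 / (1.0 + b * c0)
print(f"RL = {r_l:.2f}")  # 0.50 -> favorable, since 0 < RL < 1
assert 0 < r_l < 1, "adsorption would not be favorable"
```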
Table 2. Mathematical equations of the isotherm and kinetic models [49-51] used in this study.

Model | Linear equation | Plot
Langmuir | Ce/qe = 1/(qm·b) + Ce/qm (4) | Ce/qe vs. Ce
Freundlich | log qe = log KF + (1/n) log Ce (5) | log qe vs. log Ce
Pseudo-first-order | log(qe − qt) = log qe − (k1/2.303)·t (6) | log(qe − qt) vs. t
Pseudo-second-order | t/qt = 1/(k2·qe²) + t/qe (7) | t/qt vs. t

Table 3. Data of the isotherm models for Ni(II) adsorption using Cs@CMC/CuO-Co2O3.

Metal ion | Langmuir model: qm (mg g−1), R², RL, b (L mg−1) | Freundlich model: R², KF, n
Ni(II) | 12.00, 0.943, 0.50, 0.08 | 0.553, 9.10, 5.06

The pseudo-first-order and pseudo-second-order models were applied to the adsorption system to describe the kinetics, as shown in Table 2 (Equations (6) and (7)). The slope and intercept were calculated from the plots of log(qe − qt) vs. t and t/qt vs. t for the pseudo-first-order and pseudo-second-order models, respectively. Based on the calculated values, the pseudo-first-order model described the adsorption well, while the pseudo-second-order model gave capacities close to those obtained experimentally; the Langmuir isotherm and pseudo-second-order kinetic models were compatible, as illustrated in Table 4. The obtained data were compared with other studies on the removal of Ni(II), as presented in Table 5.
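To illustrate how the kinetic parameters in Table 2 are extracted, below is a sketch of a pseudo-second-order fit (the line t/qt vs. t, Equation (7)). The uptake series is synthetic, not the study's data.

```python
import numpy as np

# Pseudo-second-order: t/qt = 1/(k2*qe^2) + t/qe  (Equation (7)).
# A linear fit of t/qt against t gives slope = 1/qe and
# intercept = 1/(k2*qe^2).
t  = np.array([10.0, 30.0, 60.0, 120.0, 240.0])   # contact time, min (synthetic)
qt = np.array([2.4, 3.6, 4.3, 4.1, 3.8])          # uptake, mg/g (synthetic)

slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2 = 1.0 / (intercept * qe_fit**2)
r2 = np.corrcoef(t, t / qt)[0, 1] ** 2
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.3f}")
```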
Adsorption Mechanism
The possible adsorption mechanism is illustrated in Figure 7. The adsorption of heavy metals by Cs@CMC/CuO-Co2O3 can be attributed to the strong attraction of metal ions to the nanocomposite beads, which contain active sites (COO−, -OH, Cu-O, Co-O, -O- and -NH2). These groups can easily attract and combine with metal ions. The amino group of chitosan has a particularly significant role in the adsorption because the chitosan completely coats the surface of CMC/CuO-Co2O3. The chemical nature of chitosan, its hydrophilicity due to the large number of -OH groups, and the presence of -NH2 groups determine the adsorption of chitosan toward heavy metals. According to the literature, the adsorption of heavy metal ions by chitosan functional groups can occur through different mechanisms (e.g., electrostatic attraction and chelation). The chitosan -NH2 groups are responsible for the adsorption of metal cations by a chelation mechanism. In fact, the adsorption can be affected by the pH of the metal ion solution, where the -NH2 group (free electron doublet on nitrogen) can attract cations at neutral pH [24,54,55].
Catalytic Reduction Study
The catalytic ability of all prepared catalysts, including CuO-Co2O3, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3, was examined for the reduction of two anionic dyes (MO and EY). MO was chosen as a model dye for this study. MO solution (0.01 mM) was placed into a UV cuvette and mixed with the reducing agent NaBH4. Each catalyst was then added to the mixture of MO and NaBH4, as mentioned previously. Afterward, the reduction of MO was monitored by UV-Vis spectrophotometry every minute as the reaction proceeded. Initially, the spectrum of pure MO solution (0.01 mM) was recorded by UV-Vis, and two absorbance bands appeared at λmax = 460 nm and 270 nm, as shown in Figure 8. No change was observed when only the reducing agent (NaBH4) was added, because NaBH4 cannot reduce the dye even in excess, as reported in the literature [56]. However, after the addition of both the reducing agent and a catalyst, the color of the MO dye faded gradually from orange to colorless. The reason for this is that MO was converted to hydrazine derivatives by breaking the azo bond (-N=N-) and transforming it into -NH2 (amino) groups. During the reduction, the peak at λmax = 460 nm decreased gradually [57,58]. The reduction percentage of MO was 92% in 4 min with Ni/Cs@CMC/CuO-Co2O3, while it reached 85%, 90% and 33.5% with CuO-Co2O3, CMC/CuO-Co2O3 and Cs@CMC/CuO-Co2O3 in 5, 20 and 14 min, respectively. Moreover, the reduction of MO was studied using Ag/CMC/CuO-Co2O3 under the same conditions; it was found that this catalyst can reduce 88% of MO in 6 min.
The kinetic behavior of the four prepared catalysts toward the reduction of MO dye was evaluated by applying pseudo-first-order kinetics. The rate constants were calculated from the slope of ln(Ct/C0) vs. time, as seen in Figure 8f. The rate constant K (per second) and correlation coefficient R² for the decolorization of MO dye with Ni/Cs@CMC/CuO-Co2O3 were 1.02 × 10−2 s−1 and 0.964, respectively, which is higher than for the other catalysts: Cs@CMC/CuO-Co2O3 (4.4 × 10−4 s−1 and 0.953), CMC/CuO-Co2O3 (2.6 × 10−3 s−1 and 0.915) and CuO-Co2O3 (6.11 × 10−3 s−1 and 0.868). This clearly indicates that Ni/Cs@CMC/CuO-Co2O3 is the most active of the prepared catalysts toward the reduction of MO dye.
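The rate constants quoted above come from the slope of ln(Ct/C0) against time; a minimal sketch of this regression using a synthetic absorbance decay (the data points are illustrative, not measured values):

```python
import numpy as np

# Pseudo-first-order: ln(Ct/C0) = -k*t. Ct is proportional to the
# absorbance at the dye's lambda_max, so At/A0 substitutes for Ct/C0.
t = np.arange(0, 300, 60)                      # time, s (synthetic)
a = np.array([1.00, 0.55, 0.30, 0.17, 0.09])   # normalized absorbance (synthetic)

y = np.log(a)                                  # ln(At/A0), with A0 = 1 here
slope, _ = np.polyfit(t, y, 1)
k = -slope                                     # rate constant, 1/s
r2 = np.corrcoef(t, y)[0, 1] ** 2
print(f"k = {k:.2e} s^-1, R^2 = {r2:.3f}")
```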
Moreover, the catalytic reduction was tested toward the degradation of EY. The decolorization of EY was conducted using the same procedure described previously for the catalytic reduction of MO (Figure 9). EY had an absorbance band at 510 nm, which gradually decreased. During the reduction, the EY color changed from orange to pale yellow and then turned colorless, indicating the formation of ESH2 [59]. Ni/Cs@CMC/CuO-Co2O3 was the most efficient catalyst toward EY. According to the data obtained, around 90% of the EY was decolorized in 9 min by Ni/Cs@CMC/CuO-Co2O3, while reductions of 71%, 94% and 35% were obtained in 24, 15
Figure 10 shows the possible reduction mechanism of MO and EY. The reduction occurs mainly through the transfer of electrons facilitated by the nanocatalyst. Firstly, NaBH4 dissociates into BH4− ions and Na+, in which BH4− acts as a source of e− and H+. Further, the catalyst Ni/Cs@CMC/CuO-Co2O3 transfers e− from the BH4− ion to the dye molecules for catalytic reduction. For MO dye, the azo bonds are activated by the electrons transferred from the BH4− ion via the Ni/Cs@CMC/CuO-Co2O3 nanocomposite beads. The MO molecules are bound to the nanocomposite beads through their oxygen and sulfur atoms. Thus, the first step is the conversion of the -N=N- bond into an -HN-NH- bond, followed by bond breaking to form aromatic amines. This happens because e− are accepted from the nanocomposite bead catalyst and H+ from BH4−. Thus, the orange color of the MO dye turns colorless, indicating the completion of the MO reduction. On the other hand, EY is adsorbed on the surface of the Ni/Cs@CMC/CuO-Co2O3 nanocomposite beads because of the electrostatic attraction between the Ni/Cs@CMC/CuO-Co2O3 nanocomposite and the anionic dye. Afterward, the electron is transferred by Ni/Cs@CMC/CuO-Co2O3 from BH4− to EY for its catalytic reduction [4].
Additionally, an evaluation of CuO-Co2O3, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3 as catalysts toward the catalytic reduction of 4-NP was performed. Using the same procedure mentioned previously, the reaction of 4-NP was carried out utilizing CuO-Co2O3, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3 as catalysts in the presence of NaBH4.
In the beginning, the absorbance band of 4-NP appeared at λmax = 317 nm. As observed, the yellow color of 4-NP changed directly to dark yellow in the presence of NaBH4 (0.5 mL), with a new UV-Vis band appearing at 400 nm. This indicates the transformation of 4-NP to 4-nitrophenolate. Then, the CuO-Co2O3, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3 catalysts were added and tested separately for the reduction of 4-nitrophenol. The band at λmax = 400 nm disappeared gradually, and a new absorbance band appeared at λmax = 320 nm along with the disappearance of the dark yellow color, proving the formation of 4-AP through the reduction of 4-NP. It was found that among all the catalysts, the prepared Ni/Cs@CMC/CuO-Co2O3 was the most effective, because 4-NP was completely reduced to 4-AP in 13 min, while it was reduced in 19 and 20 min using CuO-Co2O3 and CMC/CuO-Co2O3, respectively. The reduction of 4-NP by Cs@CMC/CuO-Co2O3 also took 20 min (Figure 11a).
The rate constant and R² were found to be 4.26 × 10−3 s−1 and 0.912, 17 × 10−5 s−1 and 0.824, 2.62 × 10−3 s−1 and 0.842, and 2.7 × 10−3 s−1 and 0.855 for Ni/Cs@CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3, respectively, as shown in Table 6 and Figure 11b. The data for the 4-NP reduction were compared with other catalysts, as illustrated in Table 7.
The catalytic reduction of K3[Fe(CN)6] was also examined to evaluate the catalytic activity of CuO-Co2O3, CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3 and Ni/Cs@CMC/CuO-Co2O3. The UV-Vis absorption was monitored every minute to check the progress of the K3[Fe(CN)6] reduction. As the catalytic reaction proceeded in the presence of NaBH4 and the catalyst, the absorption band of K3[Fe(CN)6] at λmax = 420 nm gradually decreased within 6 min when using Ni/Cs@CMC/CuO-Co2O3, along with the disappearance of the yellow color, indicating the reduction of K3[Fe(CN)6] to K4[Fe(CN)6] [69]. In contrast, the reduction reaction took longer times of 8, 13 and 18 min when using CuO-Co2O3, CMC/CuO-Co2O3 and Cs@CMC/CuO-Co2O3, respectively. The most efficient transformation of K3[Fe(CN)6] to K4[Fe(CN)6] was obtained with the Ni/Cs@CMC/CuO-Co2O3 catalyst (91%), while 85%, 73.5% and 83% reduction were obtained using Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3, respectively, as shown in Figure 12a. Based on these findings, the catalytic reduction of K3[Fe(CN)6] follows pseudo-first-order kinetics, as seen in Figure 12b. Subsequently, the rate constant and R² were found to be 5.1 × 10−3 s−1 and 0.975, 1.8 × 10−3 s−1 and 0.909, 1.69 × 10−3 s−1 and 0.835, and 3.6 × 10−3 s−1 and 0.964 for Ni/Cs@CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3, respectively, as shown in Table 6.
The possible mechanism for the reaction of K3[Fe(CN)6] in the presence of both the catalyst beads and NaBH4 is illustrated in Figure 10. Based on reported studies, the catalytic reduction of [Fe(CN)6]3− to form [Fe(CN)6]4− is an electron-transfer route, [Fe(CN)6]3− + e− → [Fe(CN)6]4− [3].
The effect of concentration was examined for all compounds (4-NP, EY, MO and K3[Fe(CN)6]) using a single catalyst, Ni/Cs@CMC/CuO-Co2O3, since it is more effective than the others (Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3). As clearly seen in Figure 13, when the concentration of the pollutants was increased, the time taken for the reduction increased. This finding indicates that the concentration of the pollutants plays an essential role, and Ni/Cs@CMC/CuO-Co2O3 was found to be a more efficient catalyst at low concentrations of MO, EY, 4-NP and K3[Fe(CN)6]; a similar effect was reported in the literature [70].
Effect of NaBH4 Concentration
The impact of the NaBH4 concentration on the catalytic reduction of pollutants is a very important parameter. Therefore, a range of NaBH4 concentrations (0.2, 0.1 and 0.05 M) was used to evaluate its effect on the reduction of the target pollutants in the presence of Ni/Cs@CMC/CuO-Co2O3 beads as a catalyst. As the data demonstrate, the reducing agent has an important role in the reduction of pollutants in the presence of an effective catalyst; however, the reducing agent alone has no ability to reduce the toxic compounds even at high concentrations, so an effective catalyst must be added to enable the reduction. A high concentration of NaBH4 (0.2 M), in addition to an effective catalyst such as Ni/Cs@CMC/CuO-Co2O3, promoted the reduction of MO at a faster rate, decolorizing it in only 2 min, as shown in Figure 14a. However, when the NaBH4 concentration was decreased, the reduction rate also decreased, meaning the reaction required more time to complete. Indeed, MO was reduced in 4 min and 12 min when the NaBH4 concentration was 0.1 and 0.05 M, respectively (Figure 14a). EY was likewise reduced in 6 min using the high NaBH4 concentration, while the reduction took 9 min and 14 min using 0.1 and 0.05 M NaBH4, respectively, as shown in Figure 14b. The same effect was observed for K3[Fe(CN)6] (Figure 14c) and 4-NP (Figure 14d), and a similar impact was reported in the literature [70].
Effect of the Amount of Ni/Cs@CMC/CuO-Co2O3 Beads
The influence of the amount of Ni/Cs@CMC/CuO-Co2O3 was tested by utilizing three different amounts of the bead catalyst (3, 5 and 8 mg) in the presence of the reducing agent (0.2 M NaBH4). This effect was tested using 0.01 mM MO and 0.05 mM K3[Fe(CN)6], as seen in Figure 15. In fact, the amount of catalyst is an important factor in the reduction reaction. The results demonstrate that a larger catalyst amount (8 mg of Ni/Cs@CMC/CuO-Co2O3) enhanced the reduction of MO, causing 98% decolorization in 2 min, versus 92% in 2 min using 5 mg. However, MO was reduced by 97% in 6 min using the lowest amount of catalyst (3 mg), as shown in Figure 15a. A similar effect was found for K3[Fe(CN)6], as clearly seen in Figure 15b.
Recyclability of Ni/Cs@CMC/CuO-Co2O3 Beads
Recyclability of the catalyst is a significant factor in a catalytic reduction study; most catalysts become deactivated after the first or second use. In our study, Ni/Cs@CMC/CuO-Co2O3 could be reused up to five times without deactivation or any loss of the catalyst beads. Consequently, the Ni/Cs@CMC/CuO-Co2O3 beads were tested in the reduction of MO, EY and K3[Fe(CN)6] several times to check the recyclability of the catalyst. As mentioned previously, the same procedures were followed, except that the beads were washed with deionized water, then MeOH, followed by deionized water several times, and then dried for the next use. This process was repeated five times to assess the reusability of the Ni/Cs@CMC/CuO-Co2O3 beads. Figure 16 shows the time taken for each reduction cycle of MO, 4-NP and K3[Fe(CN)6] using the Ni/Cs@CMC/CuO-Co2O3 beads.
Application to Real Samples
The catalytic activity of the Ni/Cs@CMC/CuO-Co2O3 beads was also assessed in four types of real samples spiked with MO (0.06 mM). The real samples used for this study were full-fat milk and three juice samples (orange, pineapple and apple), obtained from a local market (Jeddah, Saudi Arabia). The real samples were prepared by taking around 1 mL of each sample and diluting it in 100 mL of deionized water individually. Further, 2.5 mL of each real sample was placed into a UV-Vis cuvette, and 0.5 mL of 0.06 mM MO was added, followed by the addition of 0.5 mL of 0.1 M NaBH4. Finally, 5 mg of Ni/Cs@CMC/CuO-Co2O3 was added. The catalytic degradation of MO was monitored by a UV-Vis spectrophotometer. As clearly seen from the data presented in Table 8, full-fat milk was the only sample that took a longer time (15 min), with a very low reduction percentage (65%). This is due to the high interference found in the milk, which can influence the reduction of MO. In contrast, the reduction of MO in the three juice samples occurred in 5-6 min with 91-97% reduction. The data confirm that Ni/Cs@CMC/CuO-Co2O3 is effective and reliable, since it was able to decolorize and effectively reduce MO in real samples.
4-Nitrophenol (4-NP, ≥99%) was obtained from Sigma-Aldrich. Potassium hexacyanoferrate(III) (99%) and the reducing agent sodium borohydride (99%) were also obtained from Sigma-Aldrich. In all preparations, deionized water was utilized.
Preparation of CuO-Co2O3 Nanocomposite
The CuO-Co2O3 nanocomposite was prepared by a co-precipitation method. Firstly, 0.1 M CuSO4·5H2O was mixed with 0.1 M CoSO4·6H2O in a 50:50 ratio. Then, NaOH was added dropwise to adjust the pH to 10-11. The preparation was carried out at 80 °C with stirring for 4 h. Finally, the precipitate was collected by filtration, washed several times with deionized water and dried overnight. Afterward, the CuO-Co2O3 nanocomposite was calcined at 500 °C for 5 h.
Synthesis of Cs@CMC/CuO-Co2O3 Beads
The novel nanocomposite beads were synthesized in two main steps, following a method reported by our group with some modifications [4,21]. In the first step, CMC (0.5 g) was dissolved in 25 mL of deionized water with stirring for 2 h at 50 °C. Meanwhile, 60 mg of CuO-Co2O3 powder was dispersed in 5 mL of deionized water and sonicated for around 10 min to obtain a suspension of CuO-Co2O3. Subsequently, the dispersed CuO-Co2O3 solution was added to the CMC solution with continuous stirring for 1 h at 50 °C and half an hour at RT (24 °C). The mixture was transferred to a 3 mL syringe and dropped into 0.2 M AlCl3 solution for crosslinking and formation of the beads. The beads were kept in the AlCl3 solution for 12 h and then collected and washed three times with deionized water. Secondly, a Cs solution was prepared in a 1% acetic acid-distilled water mixture and stirred for 3 h at 50 °C. The washed beads were transferred to the Cs solution and kept there for 1 h. Finally, the Cs@CMC/CuO-Co2O3 beads were separated and dried at RT for 24 h. The dry CMC/CuO-Co2O3 beads were flat, while the dry Cs@CMC/CuO-Co2O3 beads had a rounded shape due to their chitosan coating (Figure 17).
Metal Uptake Adsorption
In order to evaluate the selectivity of the prepared nanocomposite beads (Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3), adsorption of metal ions was carried out for selected metal ions, including Ni(II), Ag(I), Zn(II) and Fe(II). Fixed amounts of Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 (5.0 mg) were added individually into 5 mL of a 5 ppm sample solution of the selected metal ions for 1 h at RT (25 ± 1 °C). The beads were then separated from the solutions and dried at RT. Inductively coupled plasma-optical emission spectroscopy (ICP-OES) was employed to determine the concentration of each metal ion before and after adsorption on the Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 beads. Optimization of the parameters for the selected metal ion Ni(II) was carried out as described later. Moreover, the Ag(I)-loaded beads were also kept after adsorption and converted to NPs for further studies.
For the isotherm study, 5 mg of each adsorbent (Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 beads) was added to 5 mL of Ni solution with initial concentrations from 5 to 100 mg L−1. The pH of the Ni solution was adjusted to 7, with mechanical shaking for 60 min.
Moreover, for the kinetic study, 5 mg of each adsorbent (Cs@CMC/CuO-Co2O3 and CMC/CuO-Co2O3 beads) was added to 5 mL of a 5 mg L−1 Ni solution. The pH of the Ni solution was adjusted to 7. The concentration of Ni ions was tested at different times (10, 30, 60, 120 and 240 min).
Formation of Zero-Valent Nanoparticles
Ni(II)/Cs@CMC/CuO-Co2O3 and Ag(I)/CMC/CuO-Co2O3 beads were collected and dried. These dried beads were later utilized for the synthesis of nanoparticles (Figure 18) by loading them into a freshly prepared 0.05 M NaBH4 aqueous solution for 20 min in order to complete the reduction of Ni(II) and Ag(I) to Ni(0) and Ag(0) zero-valent nanoparticles, respectively, as shown in Equations (9) and (10).
Catalytic Reduction Experiments
The catalytic ability of the prepared catalysts, Ni/Cs@CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3, was examined as follows. The catalytic procedure was performed by placing 2.5 mL of each pollutant (MO, EY, 4-NP and K3[Fe(CN)6]) in a UV cuvette, and 0.5 mL of a freshly prepared solution of NaBH4 (0.1 M) was added. Then, 5 mg of the prepared catalyst was added to the mixture in the cuvette. The catalytic activity was continuously monitored via a UV-Vis spectrophotometer at 1 min intervals. The recyclability of the Ni/Cs@CMC/CuO-Co2O3 beads was tested in the catalytic reduction of MO, EY and K3[Fe(CN)6]; the catalyst was used up to five times after washing with deionized water and MeOH and drying for the next cycle.
The conversion (%) of all compounds was calculated using Equation (11):
Conversion (%) = ((C0 − Ce)/C0) × 100 (11)
where C0 (mg L−1) is the initial concentration of the compound, and Ce (mg L−1) is the final concentration. The rate constant K and the corresponding R² values were determined from pseudo-first-order kinetics as described in Equation (12):
ln(Ct/C0) = −Kt (12)
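As a small illustration of Equation (11), below is a sketch with hypothetical concentrations; the values are placeholders, not measurements from this work.

```python
def conversion_percent(c0, ce):
    """Conversion (%) = ((C0 - Ce) / C0) * 100 (Equation (11))."""
    return (c0 - ce) / c0 * 100.0

# Hypothetical initial and final MO concentrations, in mM:
print(f"{conversion_percent(0.01, 0.0008):.1f} %")  # -> 92.0 %
```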
Characterization
The morphologies and structures of Ni/Cs@CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3 were characterized by scanning electron microscopy (SEM; JEOL JSM-7600F, Japan). For the SEM analysis, the samples were individually fixed on the stub using carbon tape as a binder and then sputtered with platinum for 15 s. In addition, X-ray diffraction (XRD) was employed to examine the phase structure of all prepared catalysts. Elemental analysis of Ni/Cs@CMC/CuO-Co2O3, Cs@CMC/CuO-Co2O3, CMC/CuO-Co2O3 and CuO-Co2O3 was performed with an energy dispersive spectrometer (EDS). An FT-IR spectrometer was used to record the spectra of all prepared materials. The catalytic reduction studies were followed by UV-Vis spectroscopy (Thermo Scientific Evolution 350 UV-Vis spectrophotometer).
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
| 11,975.6 | 2022-02-01T00:00:00.000 | [ "Environmental Science", "Chemistry", "Materials Science" ] |
China’s GDP forecasting using Long Short Term Memory Recurrent Neural Network and Hidden Markov Model
This paper presents a Long Short Term Memory Recurrent Neural Network and Hidden Markov Model (LSTM-HMM) to predict China's Gross Domestic Product (GDP) fluctuation state within a rolling time window. We compare the predictive power of the LSTM-HMM with other dynamic forecast systems within different time windows, namely the Hidden Markov Model (HMM), the Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) and the LSTM-HMM, each with an input of monthly or quarterly Consumer Price Index (CPI) within 4-year, 6-year, 8-year and 10-year time windows. The forecasting models employed in our empirical analysis share the basic HMM structure but differ in the generation of the observable CPI fluctuation states. Our forecasting results suggest that (1) among all the models, the LSTM-HMM generally performs better than the other models; (2) model performance improves when the model input changes from quarterly to monthly; (3) among all the time windows, models within the 10-year time window have the best overall performance; (4) within the 10-year time window, the LSTM-HMM, with either quarterly or monthly input, has the best accuracy and consistency.
The revised part about the innovation points is on page 3. As for the findings of this paper, we give a comprehensive review of the experimental results across time windows and models, shown in Table 14 and Fig. 12. We find from these results that, among all the time windows, models within the 8-year time window have the best overall performance in accuracy and consistency, and that the LSTM-HMM with an input of monthly CPI generally has good precision; within the 8-year time window it has the best accuracy and consistency. The two most important reasons are that (1) an 8-year window, with 32 observations in each round of training, is long enough for us to observe all types of GDP fluctuation states and short enough to avoid the bias caused by too many observations of a certain type; (2) the LSTM takes more of the effects of historical CPI into the training process and tends to predict a suitable real-time CPI fluctuation state, which replaces the lagged observable states adopted by other models and helps in the prediction of GDP fluctuation using the HMM. The revised part is on page 25. All the revised parts are given as follows.
Revised part on page 3: Motivated by the usefulness of the HMM and LSTM in economic forecasting and the sparsity of literature on the application of the LSTM-HMM to GDP forecasting, this paper establishes an LSTM-HMM to predict GDP fluctuation states. We further compare the predictive power of the LSTM-HMM with the HMM and GMM-HMM using monthly or quarterly CPI within different time windows. There are three innovation points. First, in the LSTM-HMM, we innovatively utilise the LSTM to predict real-time CPI fluctuation states and feed this prediction into the forecast of real-time GDP fluctuation states. Second, we select the inflation indicator CPI as the model input and utilise the available monthly and quarterly CPI, respectively, in the LSTM-HMM. Third, we find from the empirical analysis of China's GDP fluctuation states that, among all the time windows of rolling prediction, models within the 8-year time window have the best overall performance in accuracy and consistency, and the LSTM-HMM with an input of monthly CPI generally has good precision, achieving the best accuracy and consistency within the 8-year time window.
Revised part on page 25 and page 26: ......There are several reasons for these model results: firstly, an 8-year window, with 32 observations in each round of training, is long enough for us to observe all types of GDP fluctuation states and short enough to avoid the bias caused by too many observations of a certain type, making it a proper data source for the LSTM-HMM; secondly, the LSTM-HMM takes more of the effects of historical CPI into the training process through the application of the LSTM and tends to predict a suitable real-time CPI fluctuation state, which replaces the lagged observable states adopted by other models and helps in the prediction of GDP fluctuation using the HMM;
Reviewer 1 COMMENT 3:
What is the motivation to introduce an LSTM?
Reply: Thank you very much for the comment. This is a very helpful suggestion. The LSTM is a key component of the LSTM-HMM and is characterized by the capability of processing tasks involving long time lags. There are studies on economic forecasting that utilise the LSTM (Zhang and Huang, 2021; Zahara et al., 2020), and there is also research on the application of the HMM to economic forecasting (Gregoir and Lenglart, 2000; Bellone and Gautier, 2004). However, there is no research on GDP forecasting that utilises an LSTM-HMM, in which the LSTM predicts the observable states for the HMM. This motivates our research, which tries to combine the potentials of the LSTM and HMM. We have detailed the motivation for introducing the LSTM to the HMM on page 2 and page 3. The revised parts are as follows: Revised part on page 2: Gross Domestic Product (GDP) is a key indicator of economic growth, which measures the total value of goods and services produced within a country in a year, not including its income from investments in other countries. There is a considerable literature on economic prediction in financial markets that utilises the Hidden Markov Model (HMM) (Gregoir and Lenglart, 2000; Bellone and Gautier, 2004) and the Long Short-Term Memory (LSTM) recurrent neural network (Zhang and Huang, 2021; Zahara et al., 2020). However, there is no study on the application of the LSTM-HMM to GDP forecasting that combines the potentials of these two models, which serves as the motivation for our research on China's GDP forecasting using the LSTM-HMM, based on the dynamic relationship between inflation and economic growth.
Revised part on page 3: Long Short-Term Memory (LSTM) is a type of recurrent neural network that works better on tasks involving long time lags by bridging huge time lags between relevant input events (Ronald J and Jing, 1990), and it has emerged as an effective and scalable model for time-series prediction. The LSTM has been applied to CPI prediction in Indonesia with multivariate input (Zahara et al., 2020). It has also been applied to optimal hedging in the presence of market frictions (Zhang and Huang, 2021), showing its usefulness in the empirical analysis of real option markets.
Motivated by the usefulness of the HMM and LSTM in economic forecasting and the sparsity of literature on the application of the LSTM-HMM to GDP forecasting, this paper establishes an LSTM-HMM to predict GDP fluctuation states ......
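To make the pipeline concrete, the following minimal sketch (our own illustration, not the paper's code; the matrices, state counts and probabilities are all hypothetical) shows how an observed sequence of CPI fluctuation states, such as the real-time states predicted by an LSTM classifier, can be pushed through the HMM forward recursion to produce a one-step-ahead distribution over GDP fluctuation states:

```python
import numpy as np

# Hypothetical 2-state model: GDP fluctuation states are hidden,
# CPI fluctuation states are the observations (all numbers illustrative).
A = np.array([[0.8, 0.2],      # hidden-state transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.3],      # emission: P(CPI state | GDP state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial GDP-state distribution

def forward_predict(obs):
    """HMM forward recursion over an observed CPI-state sequence,
    returning a one-step-ahead distribution over GDP states."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha @ A           # predicted next-period GDP-state distribution

# In the LSTM-HMM, this sequence would be the LSTM's real-time CPI-state
# predictions rather than the lagged observed states used by other models.
cpi_states = [0, 0, 1, 1, 0]
print(forward_predict(cpi_states))
```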
Reviewer 1 COMMENT 4:
In section 2.1, I found the notation confusing. The authors write that "S is a discrete set (...), where t stands for time.", but I do not see t before that. After this phrase, the authors start to use s_t; is this the same capital S defined before?
Reply: Thank you very much for the comment. We modified the definition of the discrete set S, which is a time series indexed by time t. The revised part is as follows: Revised part on page 4: ...... S = {s_1, s_2, ..., s_t, ...} is a discrete set of GDP fluctuation states, where t stands for time. ......
Reviewer 1 COMMENT 5:
In Section 3.1, when describing Fig. 5 the authors mention "the curve" and "the straight line". I suppose this is a typo.
Reply: Thank you very much for the comment. To distinguish the GDP and CPI growth rates, we replace Fig. 5 with a new figure, in which the GDP growth rate is represented by a solid line and the CPI growth rate by a dotted line. The revised part is as follows: Revised part on page 10: (See Fig. 1) We first describe the trend of the annual CPI and GDP data to gain a general understanding of the similarity of their growth rates. The trend of the annual GDP growth rate and CPI growth rate is shown in Fig. 1, where the dotted line represents the annual growth rate of CPI and the solid line represents the annual growth rate of GDP.
Reviewer 1 COMMENT 6: In section 3.2, the Granger test does not exactly measure causality; therefore it is better to use the term Granger predictive causality, and to clarify this in the text.

Reply: Thank you very much for the comment. The revised part is as follows: We conduct a test of Granger predictive causality with x_t and y_t to obtain the interaction relationship between CPI and GDP in the same period. Whereas correlation between two variables indicates comovement, Granger causality relates to the idea of the incremental predictive power of one time series for forecasting another, which is a statistically testable criterion based on the ideas of precedence and predictive power (Croux and Reusens, 2013; Yao et al., 2000). ......
Reviewer 1 COMMENT 7:
The results of the article are based on the comparison of numbers without any clear interpretation. Many of the numerical results have four to five decimal digits (e.g. γ). Is this precision really significant? For instance, the authors draw conclusions and compare models with an accuracy of 0.6406 and 0.6563. Is this difference really significant?
Reply: Thank you very much for the comment. We have revised the number of decimal digits to 2 for a uniform and significant description, since Bellone and Gautier (2004) used 2 decimal digits when conducting economic downturn analysis and HMM prediction. The revised parts are as follows: Revised part on page 11: (See Table 1) ... with high overall significance, since its P-value is less than 0.01. We then test the stationarity of the residuals e_t of the linear model, to avoid spurious regression, using a unit root test, where the P-value with lag order 4 is below the 5% significance level. The result of the unit root test shows the stationarity of the time series e_t involved in our analysis, and we then conduct a cointegration test using the method proposed by Phillips and Ouliaris (1990), which shows that, at the 10% significance level, we can reject the hypothesis of no cointegration relationship, since the statistical value of 29.83 is higher than the critical value of 27.43 at the right tail.
Revised part on page 12: (See Table 2) As we can see from Table 2, the correlation of x t and y t is 0.88 with a P-value less than 1 %.
Revised part on page 13: The 90% confidence intervals under different scales are also plotted (the dotted lines), so we can read off the asymptotic 90% confidence set [101.33, 102.67] from the graph, namely from where the test statistic crosses the dotted line; the threshold estimate γ is significant, taking the value 101.37.
Revised part on page 14: (See Table 3) As we can see from Table 3, the model allows the slope parameters to differ depending on the value of h_t. The slope coefficient θ_1 is 0.32 given h_t ≤ 101.37, while the slope coefficient θ_2 is 0.62 given h_t > 101.37. ......
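To illustrate the regime-dependent slopes being reported, here is a minimal sketch of a two-regime threshold regression at a known threshold (all numbers and names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: h is the threshold variable, y the response.
h = rng.uniform(99.0, 104.0, size=200)
y = np.where(h <= 101.37, 0.32 * h, 0.62 * h) + rng.normal(0.0, 0.5, 200)

def regime_slope(h, y, mask):
    """OLS slope of y on h (with intercept) within one regime."""
    X = np.column_stack([np.ones(mask.sum()), h[mask]])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return beta[1]

theta1 = regime_slope(h, y, h <= 101.37)   # lower regime
theta2 = regime_slope(h, y, h > 101.37)    # upper regime
print(round(theta1, 2), round(theta2, 2))
```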
Reviewer 2 COMMENT 1:
Throughout the text the author describes several properties without specifying what CPI means. Therefore, the author needs to define and make clear what CPI means.
Reply: Thank you very much for the comment. We have supplemented the meaning of CPI in the abstract and describe CPI as a key indicator of the inflation rate in the introduction. The revised parts are as follows: ...... Among these studies, the Consumer Price Index (CPI) has been utilised as the indicator of the inflation rate (Sarel, 1996; Hwang and Wu, 2011), while Gross Domestic Product (GDP) is also a key indicator of economic growth (Sarel, 1996; Gerlach-Kristen, 2009). ......
Reviewer 2 COMMENT 2:
In the last paragraph of the Conclusion Section the author states that "In practical application, we should try our best to reduce the impact of selection bias.". The author should clarify what he means by "practical application" and what kinds of selection bias is he talking about.
Reply: Thank you very much for the comment. We supplement the explanation of selection bias and also replace "practical application" with a more detailed explanation. The revised parts are as follows: Revised part on page 26: ...... In the empirical analysis of GDP fluctuation state forecasting, models encapsulated with the HMM tend to ignore the effect of sparse states, and a large difference in the numbers of states will result in selection bias. In this paper, we reduce the impact of selection bias through experiments within pre-selected lengths of time window, ranging from 4-year and 6-year to 8-year and 10-year, and find that the model performance is better within the 8-year window. ......
Reviewer 2 COMMENT 3:
The author must revise all figures and tables of the manuscript, with clear descriptions in the respective captions, in order to improve the quality and readability of the manuscript. For instance, there is a typo in Table 13.

Reply: Thank you very much for the comment. The revised parts are as follows: Revised part: (See Table 4)

| Null hypothesis | Lag period | Sample size | F-value | P-value |
| y_t is not the cause of x_t | 5 | 80 | 2.578 | 0.04* |
| x_t is not the cause of y_t | 2 | 80 | 9.79 | *** |

Revised part on page 14: (See Table 5; see also Table 6 and Table 7, which define the growth stages, e.g. the appropriate-speed growth stage versus the high-speed growth stage, with x > 101.37 indicating intensified inflation.) Figure 5: The structure of the LSTM we build to predict CPI fluctuation states using historical CPI series.
Revised part on page 16. Revised part on page 19: (See Table 9). Revised part on page 20: (See Fig. 7 and Table 10). Revised part on page 21: (See Fig. 8 and Table 11). Revised part on page 23: (See Fig. 10). (See Table 13 and Fig. 11.) (See Table 14.)

Reviewer comment: ...there is a clear spread in the ROC curves. I think that the author must provide some indication as to why this is so.
Reply: Thank you very much for the comment. We have supplemented the description of the performance of the ROC curves within each time window and tried to provide some indication of the reason. The revised part is as follows: Revised part on page 24 and page 25: For the 4-year time window in Fig. 11(a), the ROC curves do not vary much. Among the models, the LSTM-HMM has the better performance with both quarterly and monthly input. The LSTM-HMM and HMM perform better with quarterly input, but the GMM-HMM performs better with monthly input. For the 6-year time window in Fig. 11(b), there is a clear spread between the ROC curves of HMM(q), LSTM-HMM(q), GMM-HMM(m) and LSTM-HMM(m) and the ROC curves of GMM-HMM(q) and HMM(m). This spread may result from the model input, as the GMM-HMM with monthly input and a 6-year time window performs better than the GMM-HMM with quarterly input and a 6-year time window, while for the HMM with a 6-year time window the model with quarterly input performs better. For the 8-year time window in Fig. 11(c), there is also a clear spread between the ROC curves, similar to the spread in the 6-year time window and likewise resulting from the different model inputs. In addition, LSTM-HMM(m) performs best, with the highest AUC value.
For the 10-year time window in Fig. 11(d), the ROC curves share similar trends, where the LSTM-HMM and HMM perform better with quarterly input, but the GMM-HMM performs better with monthly input. | 3,453.8 | 2022-06-17T00:00:00.000 | [
"Economics",
"Computer Science"
] |
Intellectual capital management within the framework of the VBM concept
Currently, the process of forming new management concepts is ongoing in Russia. At the same time, on the way to the development, creation and implementation of modern concepts, approaches and management systems at domestic enterprises, there are several problems, one of which is the problem of intellectual capital management based on the VBM concept. Issues related to the definition of the concept of "intellectual capital", its assessment and its management are relevant. The article discusses the features of the VBM concept at the enterprise level, its advantages and disadvantages, and implementation problems. The analysis performed in the study made it possible to clarify the economic content of the concept of intellectual capital and to identify its key features, which allow it to be characterized and assessed with specific methods. The article describes the main methods and indicators for assessing intellectual capital, on the basis of which an approach to the assessment and management of the intellectual capital of an enterprise is proposed. The approach is based on a combination of two methods for calculating intellectual capital: CIV and MVAIC. The first technique makes it possible to assess the amount of intellectual capital of the enterprise; the second reveals the structure of intellectual capital. Their joint use yields a valuation of the intellectual capital of the enterprise and an evaluation of the effectiveness of investments in the development of its components. Criteria for assessing the effectiveness of investments in elements of intellectual capital are proposed.
Introduction
Currently, there is widespread development and implementation of new management concepts in enterprise development management. The most popular are concepts based on value management and on sustainable development [1-5, etc.]. The concept of cost-based management, or "VBM management", appeared in Russia at the end of the 20th century and today is widely used by many of the country's leading enterprises. The concept itself implies a form of enterprise management in which shareholders (investors) receive the maximum return on investment, while the enterprise itself, as an open socioeconomic system, must strive to maximize its value for the sake of its development. Management decisions, methods and techniques should be directed towards one main goal: to contribute to the growth of business value. The activities of the enterprise should be aimed at ensuring growth in its value; the indicator of value growth implies not only an increase in quantitative indicators but also the qualitative growth of the enterprise, i.e. its development, and acts as an integral criterion for the quality of management.
The VBM concept is based on the hypothesis that management entities can influence the results of an enterprise's activities, taking into account the cost of raising capital and comparing the profitability of the enterprise with alternative options for investing capital. This concept of enterprise management is not only based on actions and managerial decisions aimed at increasing current income, or income planned for the near future, but is also focused on obtaining higher profits (super-profits) in the more distant future. This, in turn, can increase the current and future value of the enterprise.
Features of the enterprise-wide VBM concept
The concept of company value management involves the construction of a management system, and the evaluation of the results of the enterprise's functioning, based on specially developed value indicators and on special management levers (functions and tools, often called drivers) built on these indicators. The feature distinguishing cost-based management from the classical management concept is that, under the VBM value management concept, management activity is aimed mainly at increasing the value of the enterprise, whereas under the classical concept the activity is aimed at generating profit.
The basic principles on which the VBM cost concept is based include [6]: the cash flow generated by the enterprise itself acts as the indicator for evaluating the activities of the enterprise; if profitability is higher than the cost of the raised capital, then new investments should be made to create new value; and the structure of the assets of the enterprise should be controlled in order to ensure its maximum growth.
The following factors contributed to the final implementation of the cost approach [Ibid.]: the emergence of large shareholders, represented by insurance and investment funds, for whom the value of the enterprise becomes the main indicator of activity; the global development of international financial institutions (investment, stock, insurance); the development of comprehensive competition in the global economy, covering not only consumer markets but also markets for resources, information, etc.; the emphasis of the classical school of management on the final production result of the enterprise, which does not allow the most effective ways of development to be identified; and the emergence of a new direction, value assessment, the need for which arises in many enterprise-management situations, including restructuring.
The advantages and disadvantages of the Value Based Management concept are presented in Table 1. Despite its several advantages, the introduction of the value concept into the activities of modern domestic enterprises encounters serious difficulties, associated with the following factors: 1. Increased subjectivity of the initiator of the assessment when choosing approaches and indicators of cost management, as well as when choosing methods for evaluating individual elements of the models.
2. Time limitations and the static nature of the methods and estimates. 3. The existing theoretical and methodological base of cost-based management proceeds from the premise that it is possible to determine and implement the best and most efficient use of property, ensuring the maximum flow of benefits, expressed in the growth of the value of the enterprise. 4. The lack of a universal decision-making technology at all functional levels of management, as well as the insufficient mutual coordination of these technologies in terms of goals and factors, encourages adherence to differing standards and the repetition of erroneous decisions.
5. Issues of the purchase and sale of an enterprise, involving a change of ownership of the enterprise and, as a result, a change of ownership of its capital, have not been fully investigated.
6. Studies related to the assessment of, and the consideration in the control loop of, information, social, organizational, structural and other types of capital have not been completed.
7. The problem of the hierarchical subordination of development management mechanisms at various levels (tactical, strategic, institutional) has not been resolved.
Table 1. Advantages and disadvantages of the VBM concept.

| Advantages | Disadvantages |
| The VBM concept can be used both internally and by external users, as it is quite understandable | The concept applies various types of indicators that require a special calculation technique, which is a laborious process |
| It can be used as a comparison tool, for example in benchmarking, comparing the effectiveness of performance results | For small businesses, using the VBM concept is difficult, as it is hard to make cost forecasts |
| It can be used in the formation and distribution of enterprise resources, since it helps to distinguish investments that create value from those that do not | Problems such as managerial costs may arise when introducing the system into enterprise management practice |
| It allows the strategy of the enterprise to be analyzed | Difficulties of mathematical calculation |
| Its use has a good impact on the results of the enterprise | Difficulty of translating accounting indicators into indicators that make economic sense |
| It allows company management to focus on and highlight the key factors that create value, enabling higher shareholder value | Technical difficulties |

The main element of cost management is the value of the enterprise. Value acts as the basis for quantitative ratios in equivalent exchange. Determining the value of the enterprise is possible using three generally accepted approaches: the comparative, cost and income approaches. Table 2 presents the advantages and disadvantages of these approaches to calculating the value of the enterprise. The criterion for choosing the optimal management decision is not only a positive investment return on the capital invested in the enterprise, but a certain level of value growth that keeps the invested capital in this field of activity.
Analyzing the above provisions, we can formulate the following definition of enterprise value in the context of development: it is the monetary aggregate flow of all benefits from the use of property, taking into account the prospect of additional income in the future, assessed at the time the management decision is made. In this case, the effective value of the enterprise in the context of development will be the value of its assets, equal to the positive difference between two values: the use value of the assets for the given business owner and the cost of their sale on the market.
Intellectual capital and its place in the VBM concept
Almost in parallel with the VBM concept, the concept of intellectual capital began to develop. The term "intellectual capital" was first used by J. Galbraith in 1969; wider distribution of the term dates to the first half of the 1990s. In 1993, the Swedish insurance company Skandia published data on its intellectual capital in its annual report, and a decisive role in the dissemination of the term was played by T. Stewart's article "Intellectual capital is the main wealth of your company" [7]. The study of intellectual capital is a new direction in enterprise management. The problem of its assessment arises because there is no single methodology for assessing and measuring intellectual capital, and current reporting does not allow a realistic assessment of intellectual assets. Exploring the economic essence of the category of "intellectual capital" [7-9, etc.], we can conclude: 1. In its economic essence, intellectual capital is an enterprise resource necessary for an economic entity to produce products or provide services; it adds value to the enterprise, thereby potentially contributing to profit. 2. The main difference between intellectual capital and other resources of the enterprise is the difficulty of uniquely identifying, evaluating and using it in full. 3. It is unique for each business entity and can be used an unlimited number of times in the process of producing goods and services.
Structurally, intellectual capital includes human, organizational, social, and managerial capital [10]. Intellectual capital also includes the intangible assets of an enterprise, which are rights to various types of intellectual activity, including exclusive rights to works of science, literature and art, exclusive rights to computer programs and databases, etc.
Analysis of various VBM models shows that most of them do not allow the influence of non-financial factors on the performance of an enterprise to be considered [11]. At the same time, the models that can take non-financial factors into account (MVA, Tobin's q) are so highly aggregated that they do not allow models of the relationship between these factors and the resulting indicator to be constructed. Even though some studies based on econometric models have revealed a high level of correlation between the value of intellectual capital and Tobin's q, the question of intellectual capital management remains open.
This leads to an obvious contradiction: why do enterprises of the same size and legal form, operating in the same markets, i.e. having the same costs of equity and borrowed capital, achieve different results? Within the framework of the value concept, an answer to such a question cannot be obtained. The cost management models proposed later, based on the interests of stakeholders, somewhat reduced the tension of this problem but did not completely solve it, due to certain limitations of the stakeholder approach.
A breakthrough in solving this problem became possible thanks to the introduction of the concept of intellectual capital. Analyzing the reasons for the emergence of this concept, the following can be noted. Intellectual capital, in its economic essence, reflects the possibility of an enterprise creating super-profits. From the standpoint of the resource concept of the enterprise, intellectual capital is a set of features of the connections in the mechanism of using diverse financial, material and intangible resources, both identifiable and unidentifiable, which are transformed into hard-to-copy competitive advantages that ultimately ensure the success of the enterprise.
It is important to understand that an increase in the value of a company is determined not by the intrinsic value of intangible assets as such, but by the ability of the company's management to use the intangible resources at its disposal effectively. The key to this should be a quantitative assessment of the impact of intangible assets on the value of the company, together with the subsequent formation of an algorithm for their use.
Thus, it becomes obvious that an approach to assessing and managing the value of an enterprise through the lens of intellectual capital appears productive when it is based on a combined valuation that considers both the total value and the structure of intellectual capital. The main methods and indicators for calculating intellectual capital are presented in Table 3.
The approach to the assessment and management of intellectual capital of the enterprise
One of the problems in assessing intellectual capital is that investments in its elements, for example in personnel (human capital), in marketing and advertising (consumer capital) and in the development of digital infrastructure (structural capital), require costs to be correlated with the cash-flow returns on these investments. However, the analysis of the methods considered in Table 4 shows that none of the presented approaches allows this. At the same time, the absence of such a technique makes it difficult to develop and analyze the results of the enterprise's activities in managing its intellectual capital.
To resolve this contradiction, a methodology was proposed for valuing the elements of intellectual capital based on the calculation of two indicators, one of which, CIV, takes into account the economic essence of intellectual capital as part of the assets generating excess profits and involves the discounting of the excess profit, while the second indicator, VAIC (MVAIC), takes into account the structure of intellectual capital. CIV is calculated from the excess pre-tax profit, i.e. the amount by which the enterprise's return on tangible assets exceeds the industry average; if this excess is positive, the after-tax excess profit is capitalized, where t is the three-year average tax rate. In the case of an infinite period of intellectual capitalization, the calculation simplifies to a perpetuity: CIV = excess profit × (1 − t) / discount rate. The VAIC is calculated as VAIC = HCE + SCE + CEE, where HCE shows how efficiently human capital is used (equal to the ratio of value added to labor costs); SCE shows how efficiently organizational capital is used (equal to the ratio of value added minus human capital to value added); and CEE shows how efficiently capital employed is used (equal to the ratio of value added to capital employed). The extended indicator adds one further component: MVAIC = HCE + SCE + CEE + RCE, where RCE shows how effectively relational (consumer) capital is used (equal to the ratio of the sum of the costs of selling, advertising and marketing to value added), and IRE denotes investments in the development of consumer capital. As for structural capital, in our opinion, the following approach can be used: all other investments not related to investments in fixed assets, and not in essence investments in IHC (human capital) or IRE, can be classified as investments in structural capital. Thus, the intellectual capital of the enterprise is created by the following elements: human capital, structural capital and consumer capital, with the coefficients of the components of intellectual capital defined accordingly. Estimates of the components of intellectual capital obtained using expression (9) can be used in the future to analyze the effectiveness of investments in elements of intellectual capital and to monitor the implementation of the development programs drawn up for them at the enterprise. This can be done by comparing the amount of investment in individual elements of intellectual capital with the growth rates of these elements (a computational sketch of these calculations is given below).
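A minimal computational sketch of the combined CIV/MVAIC calculation described above (our own illustration, not the paper's code; argument names and the numerical inputs are assumptions, and the CIV shown is only the simplified perpetuity case):

```python
def mvaic_components(value_added, labour_cost, capital_employed, marketing_cost):
    """Pulic-style efficiency ratios as defined in the text
    (all argument names are our own illustrative labels)."""
    hce = value_added / labour_cost                   # human capital efficiency
    sce = (value_added - labour_cost) / value_added   # structural capital efficiency
    cee = value_added / capital_employed              # capital employed efficiency
    rce = marketing_cost / value_added                # relational capital efficiency
    return {"HCE": hce, "SCE": sce, "CEE": cee, "RCE": rce,
            "MVAIC": hce + sce + cee + rce}

def civ_perpetuity(excess_pretax_profit, tax_rate, discount_rate):
    """Simplified CIV: after-tax excess profit capitalized as a perpetuity
    (the infinite-capitalization case mentioned in the text)."""
    return excess_pretax_profit * (1.0 - tax_rate) / discount_rate

print(mvaic_components(1000.0, 400.0, 2500.0, 120.0))
print(civ_perpetuity(150.0, 0.20, 0.10))
```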
Conclusion
In this work, based on a combination of the CIV and MVAIC methods, an approach to the valuation of the elements of the intellectual capital of an enterprise is proposed. This approach allows us to evaluate the contribution of each element to the growth of the intellectual capital of the enterprise, as well as to evaluate the effectiveness of investments in these elements. Of course, another approach could be used to calculate the structure; however, at the present time, generally accepted methods for valuing the components of intellectual capital have not been developed.
Consideration of intellectual capital as a key competence of the enterprise allowed the list of factors affecting the volume of sales, costs, profitability and value of the enterprise to be expanded. Elements of intellectual capital that are part of key competencies can be estimated based on the costs of their creation and maintenance. A methodology for assessing the effectiveness of investments in the intellectual capital of an enterprise is proposed, based on a combined assessment of the elements of intellectual capital using the CIV and MVAIC methods. The methodology proposes criteria for evaluating the effectiveness of investing in elements of intellectual capital, based on a comparison of the growth rates of the intellectual capital elements with the amount of investment in the intellectual capital of a trading company.
The reported study was funded by RFBR, project number 20-010-00942 A. | 3,964.8 | 2020-01-01T00:00:00.000 | [
"Economics"
] |
Ricci Soliton of CR-Warped Product Manifolds and Their Classifications
In this article, we derive an equality for CR-warped products in a complex space form which relates the gradient and Laplacian of the warping function to the second fundamental form. We derive necessary conditions for a CR-warped product submanifold in a Kähler manifold to be an Einstein manifold under the impact of a gradient Ricci soliton. Some classifications of CR-warped product submanifolds in the Kähler manifold are given by using the Euler-Lagrange equation, the Dirichlet energy and the Hamiltonian. We also derive some characterizations of Einstein warped product manifolds under the impact of the Ricci curvature and the divergence of the Hessian tensor.
Introduction
The evolution equation for a one-parameter family of Riemannian metrics g is characterized by

∂g(t)/∂t = −Ric(g(t)),   (1)

where Ric indicates the Ricci curvature. Equation (1) is known as the Ricci flow equation [1]. The Ricci flow equation is a weakly parabolic, highly nonlinear partial differential equation; it is strictly parabolic modulo the group of diffeomorphisms of the smooth manifold M, termed the gauge group [1-3], which has several applications in quantum physics, particle physics and general relativity. Numerous standard models in particle physics, involving particles such as neutrinos and leptons, are characterized by gauge symmetries. Yang-Mills theory, general relativity and electromagnetism are our greatest theories of nature, and each is based on a gauge symmetry at its foundation. The concept of a gauge symmetry is undoubtedly complicated; at its core, however, it is just an ambiguity in the words we use to explain physics. Why should nature enjoy such uncertainty? Understanding nature through a redundant set of variables is helpful because, although gauge symmetry makes our explanation of physics redundant, it keeps that explanation concise. Ricci flow has several applications in relativity and physics [4]. A fixed point of the Ricci flow is known as a Ricci soliton (for more details, see [5]), which is characterized by the relation

Ric + (1/2) L_X g = λ g,   (2)

where L_X denotes the Lie derivative along X and λ ∈ R can be any constant. The nature of a Ricci soliton depends on the scalar λ: if λ > 0, λ < 0 or λ = 0, the Ricci soliton is known as expanding, shrinking or steady, respectively. If, in relation (2), we replace the vector field X by the gradient of some smooth function f, then Equation (2) is transformed as follows:

Ric + (1/2) L_{∇f} g = λ g,   (3)

where (1/2) L_{∇f} g = ∇²f and ∇²f is the Hessian of f. Equation (3) is known as the gradient Ricci soliton equation and is equivalent to

Ric + ∇²f = λ g.   (4)

Equation (4) is the fundamental equation relating the Ricci tensor and the Hessian tensor. As a special case, if f is a constant function on a smooth manifold M admitting a gradient Ricci soliton, then (3) reduces to Ric = λg. This expression means that M is an Einstein manifold.
Furthermore, the geometry of warped products has significant uses in mathematics and physics. In physics, several solutions of the Einstein field equations have warped product structures, for example Schwarzschild's solution and the Friedmann-Lemaitre-Robertson-Walker solution. In string theory, the Randall-Sundrum (RS) model, a five-dimensional anti-de Sitter warped product manifold, plays a significant role. Such manifolds were first realized in 1969 by R. L. Bishop and B. O'Neill when they studied manifolds of negative curvature; they proved that there does not exist any Riemannian product manifold whose curvature is negative. After that, several authors studied warped product manifolds under different circumstances. The subject attracted further attention at the beginning of the twenty-first century, when authors such as those of [6-16] studied relevant topics and discussed geometric properties related to singularity theory, submanifold theory, etc. In 2002, B. Y. Chen derived the existence of CR-warped products of the form M_T ×_f M_⊥ in a Kähler manifold, where M_T is a holomorphic (complex) submanifold and M_⊥ is a totally real submanifold. After that, numerous geometers studied CR-warped product manifolds and their generalized classes in different ambient spaces [6,7,17-20].
The warped product manifold, denoted by M ×_f N, is the product of Riemannian manifolds M and N furnished with the Riemannian metric

g = g_M + f² g_N,

where g_M and g_N are the Riemannian metrics of the smooth manifolds M and N, respectively, and f : M → (0, ∞) is a smooth function known as the warping function [21]. If M ×_f N is a warped product manifold, then the standard warped-product relations hold (a sketch is recalled below), from which we deduce that M is a totally geodesic and N a totally umbilical submanifold of M ×_f N.
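For reference, a minimal sketch of the classical Bishop-O'Neill warped-product relations alluded to above (the notation X, Y for lifts of fields on M and V, W for lifts of fields on N is our own):

```latex
g = g_M + f^2 g_N, \qquad
\nabla_X V = \nabla_V X = \frac{X(f)}{f}\, V, \qquad
\operatorname{nor}(\nabla_V W) = -\frac{g(V,W)}{f}\, \nabla f .
```

The first connection relation shows that the leaves M × {q} are totally geodesic, while the last identifies the second fundamental form of the fibers {p} × N as proportional to the metric, i.e., the fibers are totally umbilical.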
Recently, Ricci solitons on warped products have been attracting more attention from geometers. Ricci solitons and gradient Ricci solitons with warped product structure were classified by different authors [22-28]. The author of [23] showed that if a warped product manifold admits a gradient Ricci soliton, then the fiber is necessarily Einstein and the potential function depends on the base manifold. In [24], the authors considered Ricci solitons on warped products; they derived useful results for such manifolds and applied them to different spacetime models. Recently, the authors of [29] derived some useful results on Sasakian manifolds admitting an almost ∗-Ricci soliton structure, and the authors of [30] extended the ∗-Ricci soliton to the ∗-η-Ricci soliton in Kenmotsu manifolds. Furthermore, the Ricci curvature of warped products is utilized in string theory and the general theory of relativity. In the well-known Einstein field equations (for more details, see [31]), the Ricci tensor establishes the connection to the matter distribution in the universe. Moreover, the Ricci tensor is a part of the curvature of spacetime that represents gravity in general relativity and also examines the degree to which matter will tend to converge or diverge in time. More generally, the Riemannian curvature is no more important than the Ricci curvature in physics. If a solution of the Einstein field equations is a Ricci-flat Riemannian (pseudo-Riemannian) manifold, this indicates that the cosmological constant is zero (for more details, see [32-34]). Due to the many applications of the Ricci soliton, or more generally the Ricci tensor, in physics, we are motivated to study Ricci solitons of CR-warped product submanifolds in a complex space form. We consider the following question and provide a partial answer to it: what are necessary and sufficient conditions for a warped product immersion in a complex space form to be an Einstein warped product manifold under the impact of a gradient Ricci soliton?
This article is arranged as follows: Section 2 includes some necessary information about the Kähler manifold and its submanifolds. In Section 3, we derive an equality for a CR-warped product in a complex space form, forming the relationship between the gradient and Laplacian of the warping function and the second fundamental form. We take the vector field X in relation (2) to be the gradient of the warping function and derive the condition for such a warped product to be Einsteinian; in Section 4, as applications of Theorem 1 and Lemma 2, we also derive some useful results via the Euler-Lagrange equation, the Dirichlet energy and the Ricci curvature.
Preliminaries
From the well-known literature of complex geometry, an almost Hermitian manifold is a smooth manifold M^{2n} that admits an almost complex structure J and a Hermitian metric g satisfying

J² = −I,  g(J U_1, J U_2) = g(U_1, U_2),

for every U_1, U_2 ∈ Γ(T M^{2n}), where Γ(T M^{2n}) and I stand for the sections of the tangent bundle of M^{2n} and the identity transformation, respectively (see [6,35-37]). With respect to g, the structure J is skew-symmetric, i.e., g(J U_1, U_2) = −g(U_1, J U_2). Definition 1. A Kähler manifold [6,35-37] is an almost Hermitian manifold M^{2n} satisfying

(∇̄_{U_1} J) U_2 = 0 for every U_1, U_2 ∈ Γ(T M^{2n}).

Here, ∇̄ indicates the Levi-Civita connection on M^{2n}.
Moreover, if the holomorphic sectional curvature of a non-flat Kähler manifold is constant, then M^{2n} is termed a complex space form, denoted in this article by M(c). The curvature tensor R of M(c) is characterized by

R(U_1, U_2, U_3, U_4) = (c/4)[ g(U_2, U_3) g(U_1, U_4) − g(U_1, U_3) g(U_2, U_4) + g(J U_2, U_3) g(J U_1, U_4) − g(J U_1, U_3) g(J U_2, U_4) + 2 g(J U_1, U_2) g(J U_3, U_4) ],

where c is the holomorphic sectional curvature. Let us assume that M is an m-dimensional Riemannian submanifold of a Kähler manifold M^{2n}. Let us denote by Γ(TM) the sections of the tangent bundle of M and by Γ(TM^⊥) the set of all normal vector fields of M, and by ∇ and ∇^⊥ the Levi-Civita connections on the tangent bundle TM and on the normal bundle TM^⊥, respectively. Then the Gauss and Weingarten formulas are described by

∇̄_{U_1} U_2 = ∇_{U_1} U_2 + h(U_1, U_2),  ∇̄_{U_1} ξ = −A_ξ U_1 + ∇^⊥_{U_1} ξ,

for all U_1, U_2 ∈ Γ(TM) and ξ ∈ Γ(TM^⊥), where A_ξ and h are the shape operator and the second fundamental form, related by

g(h(U_1, U_2), ξ) = g(A_ξ U_1, U_2).

The submanifold M is totally umbilical [6,7,19,20] if

h(U_1, U_2) = g(U_1, U_2) H,

where H is the mean curvature vector described by H = (1/m) trace(h). The covariant derivative of h enters the Gauss and Codazzi equations, valid for all tangent vector fields. We write tU_1 = tan(J U_1) (resp. tξ = tan(J ξ)) and nU_1 = nor(J U_1) (resp. nξ = nor(J ξ)) for the tangential and normal parts of J U_1 (resp. J ξ), where tan and nor are the orthogonal projections onto TM and TM^⊥. With the help of (9), (18) and (19), and by utilizing (8), (10), (12), (13), (18), (19) and (22)-(24), one obtains the usual relations among t, n, A and h, together with the covariant derivatives of J, t and n, for every U_1, U_2 ∈ Γ(TM). Consider a smooth function f : M → R; then the gradient and Laplacian are described by

g(∇f, U_1) = U_1(f),  Δf = Σ_i ( E_i(E_i(f)) − (∇_{E_i} E_i)(f) ),

for any U_1 ∈ Γ(TM) and a local orthonormal frame {E_1, ..., E_m}. Let H^f be the Hessian of f; the Laplacian and Hessian are related by Δf = trace(H^f). We will recall the above results for later use.
Curvature Inequality
In [36], Chen derived a general inequality for a CR-warped product submanifold M_T ×_f M_⊥ of a Kähler manifold, which connects the gradient of the warping function and the second fundamental form via the relation

‖h‖² ≥ 2β ‖∇(ln f)‖²,   (1)

where β = dim(M_⊥). The above relation establishes a relationship between an intrinsic invariant and an extrinsic invariant. He also derived a classification of such warped products in a complex space form through the solution of a special system of partial differential equations. Some time later, he derived a curvature-type inequality for a CR-warped product submanifold M_T ×_f M_⊥ in a complex space form, which can be expressed as

‖h‖² ≥ cαβ + 2β ‖∇(ln f)‖² − β Δ(ln f),

where Δ(ln f) denotes the Laplacian of ln f.
By the equality case of inequality (1), he classified CR-warped product manifolds in complex Euclidean space, up to rigid motion, by the partial Segre embedding φ^{pk}_a : C^k_* × S^p → C^{ap+k}, where C_* = C \ {0} and a, p, k ∈ N. With the help of the Hopf fibration, he derived conditions for CR-warped products in the complex projective space CP^n(4) and in the complex hyperbolic space CH^n(−4) to satisfy the equality sign in (1) (for more details, see Theorem 5.1 of [35]). Thereafter, several authors studied CR-warped products in different ambient manifolds. Definition 2. Let M be a Riemannian submanifold of a Kähler manifold M^{2n}. Then M is a complex (holomorphic) submanifold if J(TM) ⊂ TM and a totally real submanifold if J(TM) ⊂ TM^⊥. Definition 3. A CR-submanifold of a Kähler manifold M^{2n} is a submanifold whose tangent bundle decomposes as TM = D ⊕ D^⊥, where D is a holomorphic (invariant) distribution and D^⊥ is a totally real (anti-invariant) distribution; moreover, it is a CR-warped product if there is a Riemannian metric on M of warped product form. Example 1. Consider the 10-dimensional Euclidean space R^{10} with coordinates (x_1, ..., x_5, y_1, ..., y_5), the Euclidean metric g, and the standard almost complex structure J on R^{10}. Consider a subset M ⊂ R^{10} immersed as a submanifold; the tangent subspace of M at each point is spanned by the basis {Z_u, Z_v, Z_w, Z_δ, Z_θ}. A straightforward computation shows that the distribution spanned by {Z_u, Z_v, Z_w, Z_δ} is invariant and the distribution spanned by Z_θ is anti-invariant, so M is a warped product manifold with a suitable warping function. Now, we recall one lemma related to CR-warped products of Kähler manifolds for further use. Proof. By the application of (17) and the definition in (15), followed by the covariant differentiation property together with Definition 10, (12), (13), (14) and (39), the expression takes the required form; similarly one obtains (46) and (47). From (45)-(47), and using the fact that M is a CR-warped product submanifold in M(c) with m = 2α + β, let {E_1, ..., E_{2n}} be a basis of ν; using this frame in relation (48) and invoking (40), (41), (49) and (50), we obtain (43). Since the Laplacian of a function is the trace of its Hessian, we also obtain (44).
Then the second fundamental form satisfies (51). Proof. By the direct use of (30) in (32), we obtain (51).
Applications
The study of curvatures in differential geometry and physics is of great importance. The curvatures of a Riemannian (or pseudo-Riemannian) manifold are determined both intrinsically and extrinsically. Among the curvatures, the Ricci curvature and the scalar curvature are the most applicable in physics. For a local orthonormal frame {E_1, ..., E_m}, the Ricci curvature Ric and the scalar curvature ρ of M are defined by

Ric(U_1, U_2) = Σ_q g(R(E_q, U_1)U_2, E_q),  ρ = Σ_{1≤q<s≤m} K(E_q ∧ E_s),

where K(E_q ∧ E_s) is the sectional curvature of the plane spanned by E_q and E_s. Let G_k be a k-plane section of TM spanned by an orthonormal basis. In potential theory, Dirichlet energies have significant use. If f : M → R is a smooth function, then the Dirichlet energy is defined by

E(f) = (1/2) ∫_M ‖∇f‖² dV,

where E(f) and dV indicate the Dirichlet energy and the volume element, respectively. If the target R is replaced by a smooth manifold, then the sigma model evaluates the Dirichlet energy; the Euler-Lagrange equations of the sigma model have solutions that provide extremal Dirichlet energies.
In Lagrangian mechanics, the Lagrangian L of a mechanical system is T − V, where T is the kinetic energy and V the potential energy of the system. As a generalization to smooth manifolds, the Lagrangian of a smooth function f is determined by the Dirichlet integrand (1/2)‖∇f‖², and the Euler-Lagrange equation for this Lagrangian L is Δf = 0. Now, we recall some useful results for later use (Hopf's lemma): Let M be a compact, connected Riemannian manifold without boundary and let f be a smooth function on M such that Δf ≥ 0 (or Δf ≤ 0). Then f is a constant function.
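As a short check of the claim above that the Euler-Lagrange equation of this Dirichlet-type Lagrangian is Δf = 0 (a sketch, assuming M closed and Δ = div ∘ grad, so that integration by parts carries no boundary term):

```latex
\left.\frac{d}{ds}\right|_{s=0}\int_M \tfrac12\,\|\nabla (f+s\varphi)\|^2\,dV
  \;=\; \int_M g(\nabla f,\nabla\varphi)\,dV
  \;=\; -\int_M \varphi\,\Delta f\,dV ,
```

which vanishes for every test function φ exactly when Δf = 0.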
Moreover, if we apply Green's theorem on a compact orientable Riemannian manifold without boundary, then we obtain

∫_M div(X) dV = 0,

where div(X) indicates the divergence of X [7]. For a connected, compact Riemannian manifold with boundary, the well-known Hopf lemma takes the following form: Let M be a compact, connected Riemannian manifold with boundary and let f be a smooth function on M such that Δf = 0 on M and f|_{∂M} = 0; then f = 0.
The above theorem is also known as the uniqueness theorem for the Dirichlet problem. The Hamiltonian, denoted by H, represents the total energy of a mechanical system; on a smooth even-dimensional manifold, the Hamiltonian induces a symplectic structure, and the Hamiltonian on the manifold is characterized accordingly [7]. Proof (of Theorem 4). Suppose that ln f satisfies the Euler-Lagrange equation, i.e., Δ(ln f) = 0. Since ‖h‖² ≥ cαβ + 2β‖∇(ln f)‖² − βΔ(ln f), using this together with the given relation (60), the expression reduces to ‖∇(ln f)‖² ≤ 0. However, ‖∇(ln f)‖² is always nonnegative; therefore we must have ∇(ln f) = 0. This implies that f is constant, and thus the warped product is a Riemannian CR-product. This accomplishes the proof.
Corollary 1.
Let M = M_T ×_f M_⊥ be a CR-warped product in M(c) whose warping function is a solution of the Euler-Lagrange equation; then M is a Riemannian CR-product. Proof. By the direct application of (43) and proceeding by the same steps as in Theorem 4, we achieve (63).
Proof. By integrating (32), we obtain (65). Now, utilizing relation (57) in Equation (65), we obtain (66). By the application of (64) and (66), we observe that ∫_M β‖∇(ln f)‖² dV ≤ 0. This implies that ‖∇(ln f)‖² is nonpositive, i.e., ‖∇(ln f)‖² ≤ 0, which forces f to be constant. This completes the proof. Theorem 6. Let M = M_T ×_f M_⊥ be a compact, orientable CR-warped product submanifold in M(c) such that ∂M = ∅. Then M is a Riemannian CR-product if and only if (67) holds. Proof. By integrating (43), we obtain (68). Now, utilizing relation (55) in Equation (68), we obtain (69). By virtue of (69) and (67), we have ∫_M Δ(ln f) dV = 0. Applying the Hopf lemma, we conclude that f is constant.
Proof. By utilizing (59) in (43), we obtain (71). By virtue of (71), we observe that relation (70) holds if and only if Δ(ln f) = 0. Since M is a compact, orientable Riemannian manifold, the Hopf lemma then gives that f is constant. This completes the proof.
Application to Gradient Ricci Soliton
The Ricci soliton is a natural generalization of the Einstein manifold. Such structures are important in the study of warped product manifolds because any regular surface is Einsteinian and a surface of revolution is a warped product manifold; moreover, the two notions interact in a more general setting, as can be seen in current research. Other generalizations of the Einstein manifold are the almost Ricci soliton and the quasi-Einstein manifold. In this paper, we derive some characterizations of Einstein manifolds under the impact of the Ricci soliton. Theorem 8. Let M = M_T ×_f M_⊥ be a CR-warped product in M(c) admitting a shrinking gradient Ricci soliton. Then inequality (72) holds. Additionally, equality holds if M_T is a totally geodesic submanifold and M_⊥ a totally umbilical submanifold of M(c).
Proof. If M = M_T ×_f M_⊥ is a CR-warped product submanifold of a complex space form M(c) admitting a shrinking gradient Ricci soliton, then M fulfils relation (73) for every X_1, X_2 ∈ Γ(TM_T). Using the frame defined above for M_T in (73), we obtain (74). Replacing E_q by J E_q in (74), we obtain (75). As a consequence of (74) and (75), we obtain (76). Applying (76) to relation (51), we obtain the stated inequality. This accomplishes the proof.
Theorem 9. Let M = M_T ×_f M_⊥ be a CR-warped product in M(c) admitting a shrinking gradient Ricci soliton. Then M is Einsteinian if equality (78) holds. Proof. By virtue of (44), (75) and (76), we obtain (79). Suppose that (78) is satisfied in M; then from (79) we have ‖∇(ln f)‖² = 0, whence ∇(ln f) = 0 and f is constant. Therefore, by the gradient Ricci soliton equation, we easily observe that M is an Einstein warped product. This finishes the proof.
Corollary 2. Let M = M_T ×_f M_⊥ be a CR-warped product in C^n admitting a shrinking gradient Ricci soliton. Then M is Einsteinian if equality (80) holds. Proof. Since C^n is a flat space, c = 0, and the result follows by direct application of (78).
Theorem 10. Let M = M_T ×_f M_⊥ be a CR-warped product in M(c) admitting a steady gradient Ricci soliton. Then M is Einsteinian if equality (81), which involves the sum Σ_q Ric(E_q, E_q), holds. Proof. For a steady gradient Ricci soliton λ = 0, and we achieve the result by proceeding with steps similar to those in the proof of Theorem 9.
Theorem 11. We give a necessary condition for a compact CR-warped product submanifold M = M_T ×_f M_⊥ in M(c) to be a CR-product. Proof. Since the warping function f is smooth, τ = ln f is also a smooth function. Applying τ in the well-known Ricci identity, we have, for any vector fields X_1, X_2, X_3 tangent to M_T, the corresponding commutation relation. Because τ is smooth, the curvature tensor acts as a derivation, and with the help of the property that dτ is closed we can easily prove ∇²dτ(X_2, X_1, X_3) = ∇²dτ(X_1, X_2, X_3). Now, for an orthonormal frame {E_1, E_2, ..., E_{2α}} on M_T, we have ∇_{E_i} E_j(p) = 0 at a fixed point p ∈ M_T. Choosing ∇_{E_i} X_1 = 0 for any X_1 ∈ Γ(TM_T), taking the trace with respect to X_2 and X_3 in the relation ∇²dτ(X_2, X_1, X_3) = ∇²dτ(X_1, X_2, X_3), and using (83), we conclude that

div(Hess_τ) = Ric(∇τ, ·) − d(Δτ). (84)

Since M is a compact CR-warped product manifold with boundary, taking the integral of (84) we obtain the result.
4.1. Application to Euler-Lagrange Equation

Theorem 4. Let M = M_T ×_f M_⊥ be a CR-warped product in M(c) whose warping function is a solution of the Euler-Lagrange equation. Then M is a Riemannian CR-product if ‖h‖² ≥ cαβ.
Theorem 7. Let M = M_T ×_f M_⊥ be a compact, orientable CR-warped product submanifold in M(c) such that ∂M = ∅. Then M is a Riemannian CR-product if and only if the Hamiltonian H satisfies (70). | 5,115.6 | 2023-04-25T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Lethal and Sublethal Toxicity of Beta-Carboline Alkaloids from Peganum harmala (L.) against Aedes albopictus Larvae (Diptera: Culicidae)
Plant-derived agents are powerful bio-pesticides for the eco-friendly control of mosquito vectors and other blood-sucking arthropods. The larval toxicity of beta-carboline alkaloids against the Asian tiger mosquito, Aedes albopictus (Skuse) (Diptera: Culicidae), was investigated under laboratory conditions. The total alkaloid extracts (TAEs) and beta-carboline alkaloids (harmaline, harmine, harmalol, and harman) from Peganum harmala seeds were isolated and tested in this bioassay. All alkaloids were tested either individually or as binary mixtures, using the co-toxicity coefficient (CTC) and Abbott's formula analysis. The results revealed considerable toxicity of the tested alkaloids against A. albopictus larvae. When larvae were exposed to the TAEs, the mortality of all larval instars at 48 h post-treatment varied in a concentration-dependent manner. The second-instar larvae were the most susceptible to the different concentrations of TAEs, while the fourth-instar larvae were more tolerant of TAEs than the second-instar larvae. In particular, in third-instar larvae exposed to all alkaloids, all doses increased mortality at 48 h post-treatment, and the toxicities of the tested alkaloids in descending order were TAEs > harmaline > harmine > harmalol, with LC50 values of 44.54 ± 2.56, 55.51 ± 3.01, 93.67 ± 4.53, and 117.87 ± 5.61 μg/mL at 48 h post-treatment, respectively. In addition, all compounds were tested individually or in a 1:1 ratio (dose LC25/LC25) as binary mixtures to assess the synergistic toxicity of these combinations against third-instar larvae at 24 and 48 h post-treatment, respectively. The results demonstrated that, when tested as binary mixtures, all compounds (especially TAEs, harmaline, and harmine) showed synergistic effects, exceeding the toxicity of each compound alone. Interestingly, the data further revealed that TAEs at sublethal doses (LC10 and LC25) could significantly delay larval development and decrease the pupation and emergence rates of A. albopictus. These findings could be helpful for developing more effective control strategies against different notorious vector mosquitoes.
Introduction
Aedes albopictus (Skuse) (Diptera: Culicidae), commonly known as the Asian tiger mosquito, exhibits aggressive daytime human-biting behavior and is one of the most important nuisance mosquito species in the world [1]. A. albopictus is a proven infectious disease vector that can transmit a spectrum of at least 23 human pathogens, causing various diseases including dengue virus, West Nile virus, and chikungunya virus infections [2]. The World Health Organization (WHO) estimates that there are 68,000 clinical cases each year, with more than 3 billion people in Asia at risk of exposure [3,4]. It is well known that one way to reduce mosquito populations is to target mosquito larvae with chemical insecticides, such as pyrethroids, organophosphates, and insect growth regulators [5,6]. However, repeated use of these chemical insecticides can lead to the development of resistance in mosquitoes, affect human health, or have undesirable effects on non-target organisms [7]. For these reasons, there is an urgent need to develop new insecticides that are more environmentally safe, biodegradable, and targeted specifically against vector mosquitoes. In fact, plant extracts and plant-derived compounds from many families have been reported to possess larvicidal properties against Aedes, Culex, and Anopheles mosquitoes (Diptera: Culicidae) [8,9].
Peganum harmala L. (Zygophyllaceae), the so-called Harmal, is one of the most famous plants used in popular medicine. A perennial herb, it is widely distributed in Europe, North Africa, the Middle East, Central Asia, Northwest India, and Northwest China [10,11]. The whole plant, its aerial parts, seeds, and fruits have generally been reported as a traditional herbal medicine with various pharmacological activities, such as analgesic, antiproliferative, antiparasitic, antimicrobial, anticancer, and other activities [10,12]. The seeds of P. harmala have also been included in the Drug Standards of the Ministry of Public Health of the People's Republic of China, with a therapeutic dose of 4~8 g [13]. Furthermore, quality standards for the seeds, and for the total alkaloid extracts from the seeds, of P. harmala have been formulated in previous studies [14]. The seeds of P. harmala contain about 2% to 6% (w/w) pharmacologically active alkaloids, which are mostly beta-carbolines such as harmaline, harmine, harmalol, and harmane. In particular, harmine and harmaline together account for more than 50% of the total alkaloid extracts (TAEs) of P. harmala seeds [12,15]. Modern pharmacology has also revealed that P. harmala alkaloids can inhibit monoamine oxidase A (MAO-A), acetylcholinesterase (AChE), and butyrylcholinesterase (BChE), interact with the γ-aminobutyric acid (GABA) system, and induce apoptosis and DNA damage [11,12,16,17].
There have been several reports on the pesticidal bioactivity of TAEs and beta-carboline alkaloids, indicating their lethal and sublethal effects on many pests. We previously evaluated the insecticidal activity of TAEs and their alkaloids against a variety of pests under laboratory and field conditions [18][19][20]. Several studies have also reported on the effectiveness of alkaloid extracts of P. harmala against many insects and parasites [21][22][23][24]. In addition to lethal effects, several studies have revealed that P. harmala total alkaloids or their beta-carboline alkaloids can induce sublethal effects on insect development, reproduction, lifespan, and behavior, even at low doses [25][26][27][28][29][30]. These results demonstrate that TAEs and their beta-carboline alkaloids have a variety of insecticidal activities against pests, indicating that these alkaloids are promising plant-derived agents.
Synergistic effects between different chemical components in plants have been documented, with the toxicity or other bioactivity of one compound significantly increased in the presence of other compounds [9,30,31]. In fact, the optimal efficacy of a medicinal plant may be due not to one major active component but to the combined effect of distinct compounds originally present in the plant. For instance, the antibacterial and antifungal activities of beta-carboline alkaloids of P. harmala seeds were increased when tested as binary or total alkaloidal mixtures, showing a kind of synergism among alkaloids such as harmaline, harmine, harman, and harmalol [32]. Another study obtained similar results on the combined toxic effects of two alkaloids, harmine and ricinine, on the beet armyworm (Lepidoptera: Noctuidae) [33]. Because such interactions are likely to occur within the total alkaloid extract of P. harmala seeds, it is important to evaluate active synergistic mixtures for their pesticidal properties. Consequently, P. harmala alkaloids have potential in the search for new and eco-friendly pesticidal agents for the control of vector mosquitoes such as A. albopictus.
To the best of our knowledge, the insecticidal activity of harmala alkaloids against vector mosquitoes has not been reported. Therefore, this study aims to evaluate the lethal and sublethal toxicity of beta-carboline alkaloids from P. harmala against A. albopictus larvae, explore the synergistic toxicity of binary mixtures of the four isolated alkaloids at low-dose levels, and provide a reference for the further exploration and utilization of the plant as a botanical insecticide.
Materials and Methods
Material, Standards, and Reagents
P. harmala L. fresh seeds were collected in August 2020 from ripened plants in Huan County, Northwest China, and were dried to a constant weight at 60 °C. The raw material was identified by Prof. Xiaoqiang Guo, Life Science and Technology College of Longdong University, China. Harmaline, harman, harmalol, and harmine with label purities >95% were purchased from Fluka Chemical Co. and used as reference materials. The molecular structures of the four alkaloids are shown in Figure 1.
Total Alkaloid Extracts of P. harmala Seeds
The total alkaloid extracts of P. harmala seeds were prepared by the method we previously described [20]. Specifically, dried and powdered seeds (1 kg) were macerated four times with 1 L of methanol for 24 h. The extracts were combined and the solvent was evaporated to dryness using a rotary evaporator. The methanol extract of the seeds was successively partitioned with hexane and dichloromethane in 10% NH4OH medium (pH 9–10). The obtained fractions were concentrated, yielding the total alkaloid extracts (TAEs) of P. harmala seeds [14]. Fractionation of the individual alkaloids was performed according to the method of Kartal et al. [34]. Briefly, the TAEs were eluted on a silica gel column, initially with pure CHCl3 and then with increasing amounts of CH3OH. All fractions obtained were checked on precoated TLC plates using CHCl3/CH3OH (9:1, v/v). Similar fractions were combined and washed sequentially in CHCl3 and CH3OH. The crystallized samples were identified by comparing their retention times on high-performance liquid chromatography (Agilent, Santa Clara, CA, USA, 1290 Infinity II) with those of authentic samples, and by electrospray mass spectrometry (Thermo Scientific, Waltham, MA, USA, Orbitrap Exploris™ 240) [15]. All fractions were dried and stored in closed, dry bottles for further bioassays.
Insect
A. albopictus, maintained for more than 100 generations without exposure to any known insecticide, was obtained from laboratory colonies of the State Key Laboratory of Pathogen and Biosecurity, Institute of Microbiology and Epidemiology, Beijing. Following the World Health Organization's standards for larval susceptibility test methods [35], eggs were obtained from mated females that were provided with a 10% sucrose solution for 12 h and then allowed to blood-feed overnight on a rat placed in resting cages (25 × 25 × 35 cm). All rats were maintained and used in accordance with the Guidelines of the Animal Care and Use Committee of Fuyang Normal University (license number 083/2021). Larvae were fed a diet of dog biscuits, milk powder, beef liver, and yeast powder in a ratio of 2:1:1:1. The insectary room was maintained at a photoperiod of 14:10 (L/D) h, a temperature of 27 ± 1 °C, and a relative humidity of 75–85%.
Individual Toxicity of the Tested Alkaloids
The larvicidal activity of TAEs and the four major compounds isolated from P. harmala seeds, namely harmaline, harmine, harman, and harmalol, was evaluated according to the WHO protocol [35]. Based on previous pre-experiments, each compound was tested at 20, 40, 60, 80, 100, and 120 μg/mL. A stock solution (w/v) of each material was prepared in 1 mL of methanol and then diluted in 249 mL of distilled water to obtain each of the required concentrations. The control was prepared using 1 mL of methanol in 249 mL of distilled water. Thirty larvae were introduced into each solution, and five biological replicates were performed for each concentration, with 30 larvae per replicate (n = 150). Larval mortality was recorded at 24 h and 48 h post-treatment and corrected according to Abbott's formula: Corrected mortality (%) = [(Treated mortality − Control mortality)/(100 − Control mortality)] × 100. The 25% and 50% lethal concentrations (LC25 and LC50, respectively) were estimated by probit analysis [36].
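For transparency, the dose-response arithmetic can be scripted. The sketch below is illustrative only (not the authors' SPSS workflow): it applies Abbott's correction as written above and estimates the LC50 from a straight-line fit of probit-transformed mortality versus log10(concentration), using hypothetical example numbers.

```python
import numpy as np
from scipy.stats import norm, linregress

def abbott(treated, control):
    """Abbott-corrected mortality (%), as defined in the Methods."""
    return (treated - control) / (100.0 - control) * 100.0

# Hypothetical example data: concentrations (ug/mL) and raw mortalities (%)
conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)
treated = np.array([18, 42, 61, 74, 86, 93], dtype=float)
control = 2.0

corrected = abbott(treated, control)
probit = norm.ppf(corrected / 100.0) + 5.0          # classical probit convention
fit = linregress(np.log10(conc), probit)            # probit vs log10(dose)
lc50 = 10 ** ((5.0 - fit.intercept) / fit.slope)    # probit = 5 at 50% mortality
print(f"LC50 ~ {lc50:.1f} ug/mL (r^2 = {fit.rvalue**2:.3f})")
```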
Synergistic Toxicity of Binary Mixtures of Four Isolated Alkaloids
In this bioassay, the four isolated alkaloids (harmaline, harmine, harmalol, and harman) were combined pairwise in a 1:1 ratio (doses LC25/LC25), the binary combinations were assessed against the third-instar larvae, and TAEs served as the positive control group. Larval mortality was recorded at 48 h post-treatment and corrected using Abbott's formula. The co-toxicity coefficient (CTC) of the mixtures was calculated according to Alonso-Amelot and Calcagno [37].
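The exact CTC expression of ref. [37] is not reproduced in the text; as a purely assumed placeholder, the sketch below uses a common additive-expectation form, in which the expected mortality of an A+B mixture is derived Abbott-style from the two single-compound mortalities and CTC = observed/expected × 100, so that CTC > 100 suggests synergism.

```python
def expected_additive_mortality(m_a, m_b):
    """Expected mortality (%) of an A+B mixture under independent action;
    an assumed form, not necessarily the exact expression of ref. [37]."""
    return m_a + m_b * (1.0 - m_a / 100.0)

def co_toxicity_coefficient(observed, m_a, m_b):
    """CTC = observed / expected x 100; CTC > 100 suggests synergism."""
    return observed / expected_additive_mortality(m_a, m_b) * 100.0

# Hypothetical numbers: each compound at its LC25 gives ~25% mortality alone,
# while the mixture gives 79.3% (cf. the harmaline/harmine result reported below).
print(round(co_toxicity_coefficient(observed=79.3, m_a=25.0, m_b=25.0), 1))  # ~181.3
```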
Sublethal Effects of the Tested Alkaloids on Larval Growth
In this experiment, TAEs were used to evaluate the sublethal effects. The LC10 and LC25 values for the second-instar larvae, determined by bioassay at 48 h post-treatment, were selected as the sublethal concentrations. The second-instar larvae were placed in a 250 mL glass beaker and exposed to the LC10 or LC25 dose for each test. More than 30 second-instar larvae were introduced at each concentration, and five biological replicates were performed per concentration, with more than 30 larvae per replicate (n > 150) [35]. At 48 h after treatment, dead larvae were counted and only living larvae were transferred to glass beakers through a small filter; the larvae were provided with the larval food mixture at a concentration of 50 mg/L until pupation. Larval development was surveyed daily until all larvae had either pupated or died. Pupae from each treatment were removed daily and transferred into a cup with distilled water until the adults emerged. The development of mosquito larvae, pupae, and emerging adults was recorded daily.
Statistical Analysis
The data from the larval mortality tests were subjected to an analysis of variance (ANOVA of square-root-transformed percentages). Differences between treatments were determined by Tukey's multiple range tests at the p < 0.05 significance level. The sublethal and lethal dosages of each material for the tested larvae were calculated by probit analysis. All data were analyzed using the SPSS Statistical Software Package version 16.0. Results with p < 0.05 were considered statistically significant.
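A minimal sketch of this analysis pipeline (illustrative only; the study itself used SPSS 16.0) with hypothetical replicate data:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical % mortality per replicate for three treatments
groups = [np.array([62, 58, 65, 60, 63], float),   # e.g. TAEs
          np.array([48, 51, 45, 50, 47], float),   # e.g. harmaline
          np.array([30, 28, 33, 31, 29], float)]   # e.g. harmine

# Square-root-transform the percentages (as in the Methods) before ANOVA
transformed = [np.sqrt(g) for g in groups]
F, p = f_oneway(*transformed)
print(f"F = {F:.2f}, p = {p:.4g}")  # if p < 0.05, follow up with Tukey's test
```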
Results
Contents of the Isolated Alkaloids of P. harmala
Relative to the dry weight of the seeds, the w/w extraction yield of the total alkaloid extracts (TAEs) of P. harmala seeds from the combined hexane and dichloromethane fractions was 4.83%. Harmaline and harmine were the main alkaloids, with contents of 2.62% and 1.59% (w/w), respectively, higher than those of the others (harmalol, 0.32%; harman, 0.15%). The chemical structures of these four major compounds are shown in Figure 1. According to the literature, the total alkaloid content of P. harmala seeds varies between 2% and 6% [14,15], and the ratio of alkaloids in P. harmala may vary during different stages of growth.
Larvicidal Activity of the Tested Alkaloids
The total alkaloid extracts (TAEs) of P. harmala seeds exhibited a larvicidal effect against the second- to fourth-instar larvae of A. albopictus at 48 h post-treatment, and the mortality of all larval instars varied in a concentration-dependent manner (Figure 2a). The second-instar larvae were the most susceptible to the different concentrations of TAEs, and the fourth-instar larvae were more tolerant than the second-instar larvae (p < 0.05). For example, the mortality of the second-instar larvae exceeded 80% at a concentration of 60 μg/mL, while that of the fourth-instar larvae exceeded 80% only at 100 μg/mL (Figure 2a). The third-instar larvae were therefore used in subsequent toxicity assays unless otherwise specified. Furthermore, for the third-instar larvae exposed to all tested alkaloids (TAEs and the four compounds), all doses resulted in increased mortality at 48 h post-treatment, although harmalol and harman showed relatively low larvicidal activity. The larvicidal activity of the tested alkaloids in descending order was TAEs > harmaline > harmine > harmalol > harman (Figure 2b).
Comparison of the Lethal Concentrations (LC50) for Third-Instar Larvae
To further characterize the toxicity of the tested alkaloids (TAEs, harmaline, harmine, harmalol, and harman) to the third-instar larvae of A. albopictus, a bioassay was analyzed by the probit method. The results revealed that the mortality of the third-instar larvae increased with increasing concentrations (ranging from 20 to 120 μg/mL) of the tested alkaloids at 48 h post-exposure, as shown in Figure 3. There was a good linear relationship between the logarithm of each compound concentration and the probit value of the mortality at 48 h post-treatment. From the fitted equations, the LC50 value of each compound was calculated. The results revealed significant larvicidal effects of TAEs, harmaline, harmine, and harmalol, with LC50 values of 44.54 ± 2.56, 55.51 ± 3.01, 93.67 ± 4.53, and 117.87 ± 5.61 μg/mL, respectively, while the LC50 value of harman exceeded the highest concentration tested (120 μg/mL).
Synergistic Toxicity of Binary Mixtures
Four isolated alkaloids (harmaline, harmine, harmalol, and harman) were tested individually or in a 1:1 ratio (dose LC25/LC25) as binary mixtures to assess the synergistic toxicity of these binary combinations against the third-instar larvae at 24 and 48 h post-treatment, as shown in Table 1. The results revealed a considerable synergistic effect: toxicity was significantly higher when the compounds were examined as binary mixtures than when each compound was examined individually. A mixture of harmaline and harmine at their LC25 against the third-instar larvae of A. albopictus resulted in 59.2% and 79.3% mortality at 24 and 48 h post-treatment, respectively. The binary mixtures of harman with harmine or harmalol showed only weak synergistic effects. Notably, TAEs also showed excellent activity against the third-instar larvae at 24 and 48 h after treatment. Thus, the activity of the tested alkaloids was significantly increased when tested as binary mixtures or as TAEs, indicating a kind of synergism among these alkaloids. (Table 1 note: the CTC of the mixtures was calculated according to [37]; each datum represents the mean of three replicates; in the same column, means followed by the same letters are not significantly different, p > 0.05.)
Sublethal Effects of Total Alkaloid Extracts (TAEs) on Larval Development
For newly emerged second-instar larvae exposed to sublethal concentrations of TAEs for 48 h, the results revealed that TAEs at the LC10 and LC30 concentrations exhibited sublethal effects on larval development time, as well as on pupation and emergence rates (Table 2). The larval stage lasted 183.9 h and 201.3 h in the LC10 (p < 0.05) and LC30 (p < 0.01) treatments, respectively, significantly longer than in the control. The pupal stage was also significantly prolonged, to 67.2 h and 83.5 h in the LC10 and LC30 treatments, respectively. Additionally, the pupation and emergence rates in the LC30 treatment were significantly lower than in the control (p < 0.05). These results indicate that sublethal concentrations of TAEs can delay the development of A. albopictus larvae and decrease the reproductive potential of A. albopictus individuals.
Discussion
With the increasing demand for more eco-friendly products for mosquito control, plants may be valuable sources of mosquito control products. Plants produce a vast repository of secondary compounds with a wide range of biological activities, including insecticidal activity [8]. Plant-derived alkaloids, either as plant-derived insecticides or as pure compounds, provide ample opportunities for new and selective pesticide discovery because of their varied insecticidal targets and unmatched chemical diversity [9]. In this study, the total alkaloids and four isolated alkaloids of P. harmala exhibited considerable larvicidal activity against A. albopictus larvae in a dose-dependent manner. In particular, the TAEs exhibited strong larvicidal activity against all larval instars of A. albopictus at 48 h post-treatment (Figure 2a). Comparing the obtained LC50 values, the larvicidal activity of the tested alkaloids in descending order was TAEs > harmaline > harmine > harmalol (Figure 2b). Similarly, Shang et al. [22] observed acaricidal activity of the total alkaloids from P. harmala against Psoroptes cuniculi; they also found that extracts of three bioactive alkaloids exhibited excellent acaricidal activity, as confirmed by our previous study [18] as well as by this one. Miao et al. [24] and Abbassi et al. [21] reported observations consistent with ours; for example, harmaline and the extract of P. harmala seeds were toxic to Caenorhabditis elegans and Spodoptera exigua. Therefore, extensive bioassays on the larvicidal properties of the alkaloid components are needed to reveal possible synergisms, especially in cases where the total alkaloids are more active than their individual main components such as harmaline and harmine.
It is well known that secondary metabolites always exist in plants in the form of simple or complex mixtures; there are thus many hypotheses about so-called phytochemical redundancy [38]. Accordingly, evaluation of the pesticidal activity of specific phytochemicals must be supplemented by appropriate testing of relevant mixtures and whole extracts [31][32][33]. In this study, doses matching the estimated LC25 were used to evaluate the biological activity of simple or complex mixtures of the tested alkaloids; the results showed that a single alkaloid usually produced evidently lower larval mortality than the expected 25%. Despite this, when included in binary mixtures, some combinations resulted in larval mortality greater than 70% (Table 1). This phenomenon was observed in binary mixtures of harmaline/harmine or harmaline/harmalol, indicating a kind of synergism among these compounds. Rizwan-ul-Haq et al. [33] obtained similar results with the two alkaloids harmaline and ricinine, isolated from the seeds of P. harmala and Ricinus communis, respectively. They found that the insecticidal activity of these alkaloids on Spodoptera exigua larvae was significant when applied alone, while a mixture of both alkaloids at the same dose was more effective. Another study found that the antibacterial and antifungal activities of isolated alkaloids (i.e., harmine, harmaline, and harmalol) or total alkaloids were significantly increased when tested as binary or total alkaloidal mixtures, indicating a synergistic effect among these compounds [32]. Synergism between different chemical components in plants has been documented, in which the toxicity or other effects of one compound are significantly increased in the presence of other compounds [31]. The optimal efficacy of a medicinal plant may be due not to one major active ingredient but to the combined effect of different compounds in the plant [8]. Because such responses commonly occur in plant-derived extracts, it is important to select active synergistic mixtures with insecticidal properties. Thus, it can be concluded that the total alkaloids or binary mixtures from P. harmala have potential as new and selective insecticidal agents for mosquito vector control. However, the mechanism behind the synergism among these beta-carboline alkaloids remains to be explored.
Recently, plant-derived biopesticides have been used against many insect pests, especially vector mosquitoes, because plant-derived compounds (e.g., alkaloids, essential oils, and terpenoids) are safer to use and have a wide range of sublethal effects, such as abnormal biochemical metabolism, growth and/or reproduction inhibition, adult repellency, and oviposition deterrence [9]. In the present study, the TAEs significantly delayed the development of A. albopictus larvae and pupae when applied at sublethal concentrations (LC10 and LC25). In addition, the pupation and emergence rates of the surviving individuals decreased significantly under the LC25 exposure dose (Table 2). Our previous research found that total alkaloids and harmaline from P. harmala could significantly inhibit the absorption of Na+, K+, and glucose by Sf9 cells, as well as the number of blood cells of Spodoptera frugiperda larvae exposed to the LC50 dose [20]. Jbilou et al. [26] found that P. harmala extracts exhibited many sublethal effects on Tribolium castaneum, including effects on α-amylase activity, larval development, and progeny production. Similarly, two other studies observed sublethal effects of harmaline on Spodoptera exigua and Plodia interpunctella larvae, involving nutrient metabolism and α-amylase activity, among other effects.
"Biology"
] |
Structural phase transitions and photoluminescence mechanism in a layer of 3D hybrid perovskite nanocrystals
Although structural phase transitions in single-crystal hybrid methyl-ammonium (MA) lead halide perovskites (MAPbX3, X = Cl, Br, I) are common phenomena, they have never been observed in the corresponding nanocrystals. Here we demonstrate that two-photon-excited photoluminescence (PL) spectroscopy is capable of monitoring the structural phase transitions in MAPbX3 nanocrystals because nonlinear susceptibilities govern the light absorption rates. We provide experimental evidence that the orthorhombic-to-tetragonal structural phase transition in a single layer of 20-nm-sized 3D MAPbBr3 nanocrystals is spread out within the 70–140 K range. This structural phase instability range arises because, unlike in single-crystal MAPbX3, free rotations of MA ions in the corresponding nanocrystals are no longer restricted by a long-range MA dipole order. The resulting configurational entropy loss can be further enhanced by the interfacial electric field arising from charge separation at the MAPbBr3/ZnO heterointerface, extending the orthorhombic-to-tetragonal structural phase instability range from 70 to 230 K. We conclude that the weak sensitivity of conventional one-photon-excited PL spectroscopy to the structural phase transitions in 3D MAPbX3 nanocrystals results from the structural phase instability providing negligible distortions of the PbX6 octahedra. In contrast, the intensities of two-photon-excited PL and electric-field-induced one-photon-excited PL remain sensitive to weak structural distortions due to the higher-rank tensor nature of the nonlinear susceptibilities involved. We also show that room-temperature PL originates from the radiative recombination of optical-phonon vibrationally excited polaronic quasiparticles, whose energies might exceed the ground-state Fröhlich polaron and Rashba energies due to the optical-phonon bottleneck.
I. INTRODUCTION
Hybrid methyl-ammonium (MA) lead halide perovskites (MAPbX3, X = Cl, Br, I) represent a class of materials offering an illustrative platform for studying the relaxation dynamics of photoexcited carriers and their transport phenomena in novel, highly efficient solar cells for solar energy harvesting technology. [1][2][3][4][5][6][7][8][9][10][11] One of the most important specific features of these hybrid materials is that their crystalline structure can be viewed as two alternating sublattices. Specifically, the inorganic sublattice is composed of corner-sharing PbX6 octahedra, which are responsible for forming the valence band (VB) maximum and conduction band (CB) minimum of these materials. 7,8 Consequently, the initial relaxation of photoexcited carriers, their recombination, and transport phenomena all occur in the inorganic sublattice. Alternatively, the organic MA sublattice acts as a medium modifying the electrostatic potential of the inorganic sublattice; it contributes less significantly to charge screening and localization effects but nevertheless provides an ultralow thermal conductivity caused by the long-range MA dipole order. 12 The structural peculiarities of these materials allow for three structural phase transitions occurring in the temperature range of T ~140–240 K, which usually appear in single-crystal MAPbX3 and its polycrystalline thin films, [13][14][15][16][17][18][19][20][21][22][23][24][25] whereas they have never been observed in MAPbX3 nanocrystals. 21 Because arrays of colloidal nanocrystals are known to be promising alternatives to single-crystal semiconductors for electronics, optoelectronics, and solar energy harvesting applications 26 and because the properties of single-crystal MAPbX3 differ significantly between structural phases, [13][14][15][16][17][18][19][20][21][22][23][24][25] we comprehensively explored the structural phase transitions in a fully encapsulated single layer of 20-nm-sized 3D MAPbBr3 nanocrystals using one-photon-excited and two-photon-excited PL spectroscopy. The effect of the technologically important MAPbBr3/ZnO heterointerface on this phenomenon has also been studied here. We show that two-photon-excited PL spectroscopy and electric-field-induced one-photon-excited PL spectroscopy are capable of monitoring the structural phase transitions in 3D MAPbBr3 nanocrystals more precisely than conventional one-photon-excited PL spectroscopy, since nonlinear susceptibilities govern the light absorption rates. Consequently, one can recognize that the orthorhombic-to-tetragonal structural phase transition in 3D MAPbBr3 nanocrystals, unlike in single-crystal MAPbX3, is spread out over the broad temperature range of T ~70–140 K. This extension of the structural phase transition defines the structural phase instability range, within which the local field fluctuations arising from free rotations of MA ions are no longer restricted by long-range polar order. 27,28 The resulting configurational entropy loss and the corresponding liquid-like motion of MA cations 1 can be further enhanced by the interfacial electric field when charge separation at the MAPbBr3/ZnO heterointerface occurs, extending the orthorhombic-to-tetragonal structural phase instability range from T ~70 to 230 K. The latter effect is found to depend on the ZnO layer thickness and the photoexcited carrier density.
Finally, we conclude that the stepwise shift of the PL band with temperature observed for single-crystal MAPbX3, and assigned to structural phase transitions, no longer appears in 3D MAPbX3 nanocrystals because of the negligible distortions of the PbX6 octahedra in the structural phase instability regime. On the contrary, the nearly monotonic blue shift of the PL band with increasing temperature in a fully encapsulated single layer of 20-nm-sized 3D MAPbBr3 nanocrystals seems to result from the heating effect under the TO/LO-phonon bottleneck [2][3][4] rather than from progressive PbX6 octahedra distortions. Consequently, room-temperature PL is expected to originate from the radiative recombination of optical-phonon vibrationally excited polaronic quasiparticles, whose energies might exceed the ground-state Fröhlich polaron and Rashba energies due to the optical-phonon bottleneck. Because of the small masses and large radii of these vibrationally excited polaronic quasiparticles, their high mobility and long-range diffusion become possible.
II. EXPERIMENTAL
A. Sample fabrication
The size-controlled CH3NH3PbBr3 nanocrystals were synthesized by a ligand-assisted reprecipitation (LARP) technique. 29 The CH3NH3Br precursor was synthesized by adding hydrobromic acid (HBr, 48 wt.% in H2O, 99.99%; Sigma-Aldrich) dropwise to a stirred solution of methylamine (CH5N, 30~33 wt.% in ethanol) at 0 °C, followed by stirring for 1 h. Upon drying at 100 °C in air, white CH3NH3Br powder was formed in quantitative yield. After being washed with diethyl ether (Shanghai Lingfeng Chemical Reagents) and recrystallized with ethanol, the CH3NH3Br powder was dried for 24 h in a vacuum furnace and, along with the other precursors, was added into N,N-dimethylformamide (DMF, C3H7NO, anhydrous, 99.8%). Specifically, CH3NH3Br and lead (II) bromide (PbBr2, powder, 98%) were dissolved in 100 μL DMF, forming a mixture with a concentration of 0.1 mM, and then 200 μL oleic acid (C18H34O2; Aladdin) and 20 μL oleylamine (C18H37N, 80~90%; Aladdin) were added into this mixture. The oleic acid/oleylamine ligand ratio was selected for tailoring the size of the nanocrystals. The 100 μL mixture of the various precursors was afterwards injected into 3 mL chloroform (Shanghai Lingfeng Chemical Reagents) as an antisolvent, yielding a yellow-greenish colloidal solution. For further purification, 1.5 mL of a toluene/acetonitrile (CH3CN, anhydrous, 99.8%) mixture with a volume ratio of 1:1 was added into the solution, and the sediment was dispersed in hexane after centrifuging at 9000 rpm for 2 min.
To prepare the fully encapsulated layer of 3D CH3NH3PbBr3 nanocrystals, the sapphire plates (10×10×0.3 mm; Jiangsu Hanchen New Materials) were cleaned by successively soaking them in an ultrasonic bath with deionized water, acetone, and isopropanol for 10 min each and dried with nitrogen. The sapphire substrates were afterwards transferred into the atomic layer deposition (ALD) system (PICOSUN™ R-200) to grow a ZnO film. Diethyl zinc (DEZn, Zn(C2H5)2) and H2O were used as precursors. High-purity nitrogen with a dew point below −40 °C was used as a purging and carrier gas. The reactor chamber pressure was set to 1000 Pa during the growth. When the growth temperature of 200 °C was reached, DEZn was introduced into the reactor chamber with a flow rate of 150 sccm, followed by purging with nitrogen to remove the residues and byproducts. The H2O precursor with a flow rate of 200 sccm was afterwards introduced into the reactor chamber to start the ZnO layer growth. The number of ALD growth cycles was selected to grow ZnO layers with thicknesses of 30 and 100 nm, which were also verified by other methods.
Closely packed and uniformly distributed CH3NH3PbBr3 nanocrystals were then spin-coated at an optimized spin speed of 1500 rpm onto either the clean sapphire (Sa) plate or plates initially ALD-coated with a ZnO layer of 30 nm or 100 nm thickness. The resulting structure was covered by another sapphire plate, leaving an air gap of ~1 μm above the nanocrystal array, and the sapphire plates were glued together at the sides with UV adhesive. The obtained samples are referred to below as MAPbBr3/Sa, MAPbBr3/ZnO(30nm), and MAPbBr3/ZnO(100nm), respectively.
B. Scanning electron microscopy (SEM) imaging.
The cross-sectional SEM images of the sandwiched samples were acquired using a ZEISS Gemini 300 field emission scanning electron microscope in secondary electron mode after cleaving the samples with a diamond scriber.
C. Transmission electron microscopy (TEM) imaging
The crystallinity of the synthesized CH3NH3PbBr3 nanocrystals was confirmed by TEM imaging (Tecnai F30 field-emission TEM) operated at 300 kV and at room temperature.
D. X-ray diffraction (XRD) characterization
The XRD patterns of the synthesized CH3NH3PbBr3 nanocrystals were measured using a Rigaku SmartLab X-ray diffractometer equipped with a Cu Kα radiation source (wavelength 1.542 Å). The samples were scanned over 10° < 2θ < 60° at a rate of 10°/min.
E. Conventional ultraviolet-visible absorption and PL characterization
The conventional absorption spectra were measured at room temperature using the Beijing Spectrum Analysis 1901 Series spectrometer. To study PL spectra, the Ocean Optics QE 65 Pro spectrometer equipped with a 365 nm excitation source was used with a spectral resolution of ~1.0 nm.
F. Temperature-dependent PL measurements
To study temperature-dependent PL spectra, the commercially available temperature controller (Lakeshore 336) with a temperature range of 20–295 K was used. The PL spectra were measured using the Andor Shamrock SR750 spectrograph equipped with a CCD detector. Figure 1a shows a sketch of the experimental setup for measuring the temperature-dependent PL from a layer of 3D MAPbBr3 NCs. Three laser wavelengths were used to excite PL: 325, 442, and 800 nm, emitted by a CW He-Cd laser (325 and 442 nm) and a femtosecond laser (Astrella-Tunable-V-F-1K) with a pulse width of 100 fs and a repetition rate of 1.0 kHz (800 nm). The laser beam incidence angle was 30°. The sample holder was designed to allow the excitation laser light to pass through the sample and the cryostat windows, being blocked afterwards outside the cryostat. Such an arrangement eliminates any additional spectroscopic features in the PL spectra associated with the scattering and reabsorption of laser light from the sample holder. The laser spot diameter was ~250 μm for all three excitation wavelengths. The excitation power was varied by a variable neutral density filter (Thorlabs). The averaged laser power varied from 0.03 to 20 mW for the CW He-Cd laser and from 2.0 to 30 mW for the pulsed laser. For the experimental conditions applied, 1.0 mW average laser power corresponds to a laser light intensity (power density) of 2.04 W/cm² for the CW laser and 20.5 GW/cm² for the pulsed laser. Taking into account the measured one-photon absorption coefficients (~4.0 × 10⁴ cm⁻¹ and ~1.9 × 10⁴ cm⁻¹ for 325 and 442 nm laser light, respectively), the two-photon absorption coefficient of ~8.6 cm/GW, 30 and the reflectance coefficient of 0.37, and estimating the power density absorbed within the CH3NH3PbBr3 nanocrystal array, 31 the corresponding photoexcited carrier densities were calculated to be ~7.5 × 10¹⁷ cm⁻³ (~7.5 × 10⁻⁴ nm⁻³), ~4.9 × 10¹⁷ cm⁻³ (~4.9 × 10⁻⁴ nm⁻³), and ~1.7 × 10¹⁸ cm⁻³ (~1.7 × 10⁻³ nm⁻³) for 325, 442, and 800 nm laser light, respectively. Assuming cubic nanocrystals of the same edge length of ~20 nm (corresponding volume ~8 × 10³ nm³), the average electron-hole pair occupancy per nanocrystal can be estimated as ~6.0, ~3.9, and ~13.6, respectively. However, multiple excitons photoexcited in a nanocrystal are believed to be non-interacting because of the small effective exciton Bohr radius in MAPbX3 perovskite materials (2.0–4.0 nm) compared, for example, to GaAs (~12 nm), 32 as will be discussed in detail further below.
III. RESULTS AND DISCUSSION
A. Sample characterization
Figure 2(a) shows the TEM image of the as-grown colloidal cubic-shaped MAPbBr3 nanocrystals together with the corresponding histogram of the nanocrystal size distribution, which peaks at ~19.8 ± 1.7 nm. The high-resolution TEM image [Fig. 2] confirms the crystallinity of the nanocrystals, and the measured layer thickness suggests that no more than one layer of the closely packed MAPbBr3 nanocrystals was deposited. Because 3D MAPbX3 nanocrystals are known to be the basic building blocks for growing the corresponding nanoplates and nanowires, that is, 2D and 1D structures, 35,38 our samples can also be identified as quasi-2D arrays of 3D CH3NH3PbBr3 nanocrystals, to distinguish them from the 2D layered counterpart of hybrid perovskites [(C4H9NH3)2PbBr4]. 38 Figures 2(h) and (i) show the room-temperature conventional absorption and PL spectra of the MAPbBr3/Sa, MAPbBr3/ZnO(30nm), and MAPbBr3/ZnO(100nm) samples identified in Figs. 2(d)-(f).
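As a quick arithmetic check, the quoted occupancies follow from multiplying the photoexcited carrier density by the nanocrystal volume; the snippet below (illustrative, using only the densities and the ~20 nm cube quoted above) reproduces the ~6.0, ~3.9, and ~13.6 electron-hole pairs per nanocrystal.

```python
# Average electron-hole pair occupancy per nanocrystal: <N> = n * V
densities_nm3 = {"325 nm": 7.5e-4, "442 nm": 4.9e-4, "800 nm": 1.7e-3}  # nm^-3
volume_nm3 = 20.0 ** 3  # ~8e3 nm^3 for a ~20 nm cubic nanocrystal

for wavelength, n in densities_nm3.items():
    print(f"{wavelength}: <N> = {n * volume_nm3:.1f} pairs per nanocrystal")
# -> 6.0, 3.9, 13.6, matching the values quoted in the text
```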
The absorption spectrum of the MAPbBr3/Sa sample reveals two contributions associated with electronic transitions from the VB to two CBs. 40,41 The ZnO layer additionally contributes to the absorption spectra in the UV range for the MAPbBr3/ZnO samples [Fig. 2(h)]. 42 The Stokes shift, estimated as the photon energy difference between the 1s free-exciton (FE) peak in the absorption spectra and the PL peak, is ħΔω_Stokes = λ_e + λ_h ≈ 60 meV, where ħ is the reduced Planck constant, Δω_Stokes is the frequency difference between the 1s FE absorption peak and the PL peak, and λ_e and λ_h are the corresponding reorganization energies 43 for electrons and holes, respectively. The latter quantities characterize the band-gap renormalization appearing as the energetic difference between the unrelaxed (non-equilibrium) and relaxed (equilibrium) carriers, which can be estimated in the frame of the Fröhlich large-polaron model 9,44 as λ_e ≈ 32.6 meV and λ_h ≈ 39.2 meV for the longitudinal-optical (LO)-phonon contribution. The intensity of the 1s FE absorption peak decreases in MAPbBr3/ZnO due to interfacial-electric-field-induced FE dissociation, a process which balances the relative densities of free carriers and FEs. 19 The more prominent suppression of the 1s FE absorption peak in the MAPbBr3/ZnO(30nm) sample compared to the MAPbBr3/ZnO(100nm) sample suggests that the interfacial electric field in the former is stronger than in the latter. The exciton dissociation process is also accompanied by a blue-shift of the PL peak (~10 meV), which is likewise greater in the MAPbBr3/ZnO(30nm) sample [Fig. 2(i)]. These facts, together with the good agreement between the reorganization energies and the Stokes shift, all confirm the FE nature of the band-edge light emission at room temperature. The latter statement is also consistent with the large polaronic exciton binding energy in MAPbBr3 (~35 meV), 17,23 which substantially exceeds the room-temperature thermal energy k_B T ≈ 25.7 meV, where k_B is the Boltzmann constant and T is the temperature. We also note that because the size of the MAPbBr3 nanocrystals (~20 nm) substantially exceeds the exciton Bohr radius (~2.0 nm), 45 any quantum-confinement-induced effects are expected to be negligible. The typical phonons contributing to the temperature-dependent dynamics include the PbBr6 octahedra twist mode (TO-type) with frequency ~40 cm⁻¹ (~5 meV) and the distortion mode (LO/TO-type) with frequency ~58 cm⁻¹ (~7.2 meV). 46,47 The interaction between MA cations and PbBr6 anions results in a broad MA torsional (MAT) mode peaked at ~300 cm⁻¹ (~37.2 meV), which governs the orientational dipole order of the MA cations in the whole crystal. 46 MA internal (MAI) modes have much higher frequencies of ~900–3200 cm⁻¹ (112–397 meV). 46,47 However, owing to the global lattice compression in nanocrystals, 36 the frequency of the LO-phonon mode observed for MAPbBr3 nanocrystals increases to ~150 cm⁻¹ (~18.6 meV). 48,49 Because the lattice compression varies with nanocrystal size, the TO/LO-phonon energy is expected to be spread over a few meV. 49 Consequently, we will refer below to the following low-energy lattice vibrations: (i) the TO-phonon mode with averaged energy ⟨E_TO⟩ ≈ 5.0 meV, (ii) the LO-phonon mode with averaged energy ⟨E_LO⟩ ≈ 18.6 meV, (iii) the MAT-phonon modes with averaged energies ⟨E_MAT⟩ ranging between ~35 and ~90 meV, and (iv) the MAI-phonon modes and their combinations with averaged energies ⟨E_MAI⟩ ranging between ~100 and ~700 meV.
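A quick numerical cross-check of these energy scales (illustrative, using only the values quoted above):

```python
k_B = 8.617e-5                           # Boltzmann constant, eV/K
lambda_e, lambda_h = 32.6e-3, 39.2e-3    # reorganization energies, eV

# Sum of reorganization energies vs the ~60 meV Stokes shift (rough agreement)
print(f"lambda_e + lambda_h = {(lambda_e + lambda_h) * 1e3:.1f} meV")  # ~71.8 meV
# Room-temperature thermal energy vs the ~35 meV exciton binding energy
print(f"k_B T (298 K) = {k_B * 298 * 1e3:.1f} meV")                    # ~25.7 meV
```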
B. Structural phase transitions in 3D MAPbBr3 nanocrystals
The structural phase transitions in MAPbX3 have been monitored for single-crystal MAPbBr3 13-18,20,21 and MAPbI3 16,19,20 using the XRD, 13,14,18 absorption, 16 reflection, 17 one-photon-excited PL, 13,14,18,19,21 two-photon-excited PL, 15 and dielectric response 20 techniques. Polycrystalline films, 21,22,23 microplate crystals 24 and nanowires 25 of MAPbBr3 23 and MAPbI3 21-25 have also been studied using the XRD, 22 absorption, 21,23 charge transport 24 and one-photon-excited PL 21,22,24,25 techniques to recognize the structural phase transitions on the nanoscale. The structural phase transition in MAPbX3 usually appears as a stepwise shift of the corresponding spectral band, [13][14][15][16][17][18][19][20][21][22][23][24][25] whereas the band intensity is less suitable for this purpose. 18,19,21,22,24 Three structural phase transitions were found to occur at T ~145 K [orthorhombic (O) to tetragonal (T1)], at T ~155 K [tetragonal (T1) to tetragonal (T2)], and at T ~237 K [tetragonal (T2) to cubic (C)], which usually appear in single-crystal MAPbX3 and its polycrystalline thin films. 13,15 However, even a shift of the absorption and PL bands was incapable of revealing the structural phase transition in MAPbX3 nanocrystals. 21 The reason that a stepwise shift of the absorption and PL bands is no longer observable in MAPbX3 nanocrystals has been suggested to arise from the configurational entropy loss upon suppression of the long-range MA polar order. 21,27,28 To study the structural phase transition in a layer of 3D MAPbBr3 nanocrystals, we measured the temperature dependences of PL from the aforementioned three samples using three laser excitation regimes of photon energy: (i) 3.81 eV (λexc = 325 nm), above the ZnO and MAPbBr3 band gaps (Eg ≈ 3.37 and ~2.3 eV, respectively); (ii) 2.81 eV (λexc = 442 nm), below the ZnO band gap but above the MAPbBr3 band gap; and (iii) 1.55 eV (λexc = 800 nm), below both the ZnO and MAPbBr3 band gaps [Fig. 1(b)]. Figure 3 shows PL spectra measured as a function of temperature using the three different laser excitations, as indicated for each of the panels. Additionally, Figure 4 shows PL spectra measured at temperatures T = 50, 150, and 285 K, which correspond to the orthorhombic, tetragonal, and cubic structural phases of single-crystal MAPbBr3, respectively. All PL spectra demonstrate a characteristic ≤100 meV blue-shift with increasing temperature from T ~20 to 295 K (Figs. 3 and 4). The position of the PL peak in the low-temperature spectra (T = 50 K) varies slightly with excitation photon energy, in the range of ~30–40 meV. This variation sets the scale of the inhomogeneous broadening, which is believed to be due to nanocrystal structural imperfection, since the MAPbBr3/ZnO heterointerface does not significantly affect the position of the PL bands or their full width at half maximum (FWHM). Moreover, the LO-phonon sideband 48 is red-shifted from the PL peak by ~18 meV for the MAPbBr3/Sa sample when two-photon excitation is applied (λexc = 800 nm) (Fig. 4). As temperature increases, both the PL peak position variations and the LO-phonon sideband are masked by homogeneous broadening due to dominant carrier scattering with optical phonons. The bandwidth (FWHM) of the room-temperature PL bands reaches ~90 meV, indicating that homogeneous and inhomogeneous broadenings are comparable. The PL broadening dynamics with increasing temperature will be discussed in detail in the next section.
There are two general tendencies characterizing the temperature-dependent dynamics of one-photon-excited PL (λexc = 325 nm or 442 nm). Specifically, the PL peak intensities [Fig. 3] decay with increasing temperature, and this behaviour cannot be ascribed simply to the photoexcited carrier density, since an at least 10-fold higher carrier density was photoexcited under two-photon excitation compared to one-photon-excited PL. The temperature dependences of the PL peak position for the MAPbBr3/ZnO(30nm) sample also demonstrate minor features when approaching room temperature [Fig. 5(d)-(f)], pointing to more complicated dynamics in this case. The fairly monotonic blue-shift with increasing temperature is well consistent with that reported for MAPbI3 nanocrystals and was suggested to provide evidence that MAPbI3 nanocrystals do not undergo the bulk phase transitions. 21 Alternatively, we show that although the temperature dependences of the PL peak position reveal a fairly monotonic behaviour, the temperature dependences of the PL intensity can be either monotonic or non-monotonic depending on the PL excitation regime applied. Figure 5(a)-(c) clearly demonstrates this dual behaviour for the MAPbBr3/Sa sample. Specifically, although one-photon-excited PL decays nearly monotonically with increasing temperature, there are two distinct peaks for two-photon-excited PL, the positions of which (T ~140 K and ~245 K) closely match those known for the orthorhombic-to-tetragonal and tetragonal-to-cubic phase transitions in single-crystal MAPbBr3. [13][14][15][16][17][18][19][20][21] The temperature dependences of the integrated PL intensity can be fitted using the multiple Mott equation, 50 which for one-photon-excited PL takes into consideration phonon-assisted PL quenching in all the structural phases (three terms), as well as in the phase-transition regions (two terms),

I(T) = Σ_{i=1}^{5} I_i(0) / [1 + a_i exp(−E_ai / k_B T)],    (1)

where I_i(0) is the PL intensity at T = 0 for each of the terms, a_i are the pre-exponential factors characterizing the relative probabilities of non-radiative decay, and E_ai are the corresponding activation energies. We note that all five terms in Eq. (1) are positive for one-photon-excited PL, characterizing the overall phonon-assisted PL quenching, while partial contributions from the specific components can be only weakly recognized [Fig. 6(c)]. However, the situation changes dramatically when switching to two-photon-excited PL. Phonon-assisted PL quenching in each of the structural phases (three positive terms) still contributes to the temperature-dependent dynamics, but now together with a PL intensity increase when the structural phase changes towards the higher-symmetry one (two negative terms) [Fig. 6(a)-(c)]. This observation demonstrates a higher sensitivity of the nonlinear absorption coefficient to the crystalline lattice symmetry and suggests that specific phonon modes participate in the structural phase transitions 51 similarly to PL non-radiative decay. 50 Consequently, the temperature dependences of both one-photon-excited and two-photon-excited PL intensities can be fitted using the same a_i and E_ai parameters; nevertheless, the amplitudes I_i(0) of the two terms governing the structural phase transition dynamics change sign when switching to two-photon-excited PL. Specifically, PL quenching in the orthorhombic phase involves TO-phonons (E_a1 ≈ 5 meV), while PL quenching in the tetragonal/cubic phases involves MAI-phonons (E_a3 ≈ 204 meV, E_a5 ≈ 413 meV).
The orthorhombic-to-tetragonal phase transition is a phonon-assisted process which occurs owing to MAT-phonon activation (E_a2 ≈ 45 meV), whereas the tetragonal-to-cubic phase transition involves MAI-phonons (E_a4 ≈ 615 meV). We note also that the orthorhombic-to-tetragonal structural phase transition in 3D MAPbBr3 nanocrystals is spread out over the T ~70–140 K range, which is believed to be due to the configurational entropy loss and the corresponding structural phase instability when, unlike in single-crystal MAPbX3, free rotations of the MA ions are no longer strongly restricted by long-range polar order. 27,28 The resulting local field fluctuations in MAPbX3 nanocrystals and the liquid-like motion of MA cations 1 weaken and smooth the distortions of the PbX6 octahedra which are responsible for the band-edge electronic transitions, thus eliminating the stepwise shift of the corresponding absorption and PL bands.
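A least-squares fit of Eq. (1) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes SciPy's curve_fit on synthetic data, with the amplitudes I_i(0) allowed to go negative so that the two phase-transition terms can describe a PL intensity increase.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant, eV/K

def mott_multi(T, *params):
    """Five-term Mott law, Eq. (1): sum_i I_i(0) / (1 + a_i exp(-E_i/(kB T))).
    params = (I1, a1, E1, ..., I5, a5, E5); negative I_i(0) values model the
    PL intensity increase at the structural phase transitions."""
    out = np.zeros_like(T, dtype=float)
    for I0, a, E in np.reshape(params, (-1, 3)):
        out += I0 / (1.0 + a * np.exp(-E / (kB * T)))
    return out

# Synthetic example: a fake integrated-PL curve with 2% noise, then a refit
T = np.linspace(20, 295, 60)
p_true = [1.0, 5e1, 0.005, -0.4, 1e3, 0.045, 0.8, 1e4, 0.204,
          -0.3, 1e9, 0.615, 0.5, 1e6, 0.413]
rng = np.random.default_rng(0)
I_data = mott_multi(T, *p_true) * (1 + 0.02 * rng.standard_normal(T.size))

p_fit, _ = curve_fit(mott_multi, T, I_data, p0=p_true, maxfev=20000)
print(np.round(np.reshape(p_fit, (-1, 3)), 3))  # fitted (I0, a, E) triplets
```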
Although the temperature dependences of the integrated PL intensity for the MAPbBr3/ZnO(100nm) sample are quite similar to those of the MAPbBr3/Sa sample, they differ significantly for the MAPbBr3/ZnO(30nm) sample [Fig. 5(a)-(c)]. The dynamics can be associated with the effect of the interfacial electric field. 19 Specifically, one should distinguish between two principally different PL excitation regimes. The first (λexc = 325 nm) deals with the excitation of carriers in both MAPbBr3 and ZnO. The charge separation process at the MAPbBr3/ZnO heterointerface is in this case not efficient enough because, although the photoexcited holes in both materials tend to reside in MAPbBr3, the majority of photoexcited electrons in MAPbBr3 do not leave it, since the edge of the ZnO CB is filled by electrons photoexcited in ZnO [Fig. 1(b)]. The second regime involves the excitation of carriers exclusively in MAPbBr3 (λexc = 442 and 800 nm), and hence the interfacial electric field at the MAPbBr3/ZnO heterointerface forms with high efficiency, since electrons can freely move to the CB of ZnO whereas holes remain in MAPbBr3 [Fig. 1(b)]. One can hence vary the strength of the interfacial electric field by varying the photoexcited carrier density and λexc.
The thickness of the ZnO layer also significantly affects the interfacial electric field strength. Specifically, the interfacial electric field in the MAPbBr3/ZnO(30nm) sample is expected to be much stronger than that in the MAPbBr3/ZnO(100nm) sample. This statement can be clarified in the framework of two effects that can potentially occur at the MAPbBr3/ZnO heterointerface: (i) a strain-induced effect and (ii) a charge-separation-induced effect. The first takes into consideration that the ZnO layer was grown on the sapphire substrate, and hence the thicker the ZnO layer, the stronger the residual strain acting on the MAPbBr3 nanocrystals. 51 The second effect also depends on the ZnO layer thickness, but in the opposite way. Because the strength of the interfacial electric field is proportional to the carrier density separated at the MAPbBr3/ZnO heterointerface, the thicker the ZnO layer, the lower the carrier density in it and hence the weaker the interfacial electric field. This principal difference between the strain- and electric-field-induced effects can be distinguished by testing two samples of different ZnO layer thicknesses, as was done in the current study. Specifically, the non-monotonic behaviour observed for the MAPbBr3/ZnO(30nm) sample with one-photon excitation (λexc = 325 and 442 nm) points to the stronger interfacial electric field involved in this sample. Moreover, as the strength of the interfacial electric field increases (λexc = 442 nm), the shift of the structural phase transition towards the higher temperature range also progresses. Alternatively, the temperature dependences of the MAPbBr3/Sa and MAPbBr3/ZnO(100nm) samples demonstrate a similar monotonic behaviour, suggesting that the interfacial electric field in the MAPbBr3/ZnO(100nm) sample is as weak as that in the MAPbBr3/Sa sample and proving that the strain-induced effect is negligible [Fig. 5(a) and (b)]. This tendency is also confirmed using two-photon-excited PL (λexc = 800 nm), despite the non-monotonic temperature dependences for all the samples. Specifically, the temperature dependence of the two-photon-excited PL intensity for the MAPbBr3/ZnO(100nm) sample looks more like that for the MAPbBr3/Sa sample [Fig. 5(c)].
The orthorhombic-to-tetragonal phase transition in the MAPbBr3/ZnO(30nm) sample extends over the extremely broad temperature range of T ~70–230 K [Fig. 5(a)-(c) and Fig. 6(d)], confirming once again that the interfacial electric field in this sample is enhanced. The dynamics also demonstrate more clearly the existence of the T1 and T2 subphases. The corresponding activation energies are in the range of MAT- and MAI-phonons [Fig. 6(d)], confirming that the structural phase instability results from the MA dipole order suppression. Additionally, if the λexc = 800 nm excitation regime is applied, the cubic structural phase feature is not observed for the MAPbBr3/ZnO(30nm) sample even at temperatures up to T ~295 K [Fig. 5(a)-(c)], suggesting that the room-temperature structural phase in this case is most likely also unstable, being a mixture of the orthorhombic and tetragonal phases.
To estimate how far the interfacial electric field extends inward towards the MAPbBr3 nanocrystal core, we calculated the Thomas-Fermi screening length for the photoexcited carrier densities n = 1.0 × 10¹⁹ cm⁻³, n = 1.0 × 10¹⁸ cm⁻³, and n = 1.0 × 10¹⁷ cm⁻³ (see the Experimental Methods section) as 1/k_TF ≈ 2.7 nm, ~3.9 nm, and ~5.8 nm, respectively, with k_TF being the Thomas-Fermi wavevector, defined as k_TF² = 3ne²/(2εε₀E_F) with E_F = ħ²(3π²n)^(2/3)/(2m*) the Fermi energy of the photoexcited carriers, 19,53 where m* is the carrier effective mass (m_e* = 0.13 m₀ and m_h* = 0.19 m₀ for electrons and holes, respectively, with m₀ being the free-electron mass), e is the electron charge, ε = 21.36 is the static dielectric constant, and ε₀ is the permittivity of free space. 9 Consequently, the Thomas-Fermi screening length naturally decreases with increasing carrier density, indicating that the interfacial electric field tends self-consistently to be confined at the heterointerface when the photoexcited carrier density increases. Because the range of the orthorhombic-to-tetragonal structural phase transition also extends with increasing photoexcited carrier density, this behaviour implies that the strength of the interfacial electric field, rather than the field extension inward towards the nanocrystal core, mainly governs the structural phase instability in the whole MAPbBr3 nanocrystal. The short-range Thomas-Fermi screening length also suggests that the long-range MA dipole order, which is suppressed substantially in the whole MAPbBr3 nanocrystal, cannot be restored by the interfacial electric field.
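A numerical sketch of this estimate (illustrative; taking m* = m_h* = 0.19 m₀ is our assumption here, and it reproduces the quoted lengths):

```python
import numpy as np

hbar, e, eps0, m0 = 1.0546e-34, 1.602e-19, 8.854e-12, 9.109e-31  # SI units
eps_r = 21.36
m_eff = 0.19 * m0  # hole effective mass (assumption reproducing ~2.7/3.9/5.8 nm)

for n_cm3 in (1e19, 1e18, 1e17):
    n = n_cm3 * 1e6                                              # cm^-3 -> m^-3
    E_F = hbar**2 * (3 * np.pi**2 * n) ** (2 / 3) / (2 * m_eff)  # Fermi energy
    k_TF = np.sqrt(3 * n * e**2 / (2 * eps_r * eps0 * E_F))      # TF wavevector
    print(f"n = {n_cm3:.0e} cm^-3 -> 1/k_TF = {1e9 / k_TF:.1f} nm")
# -> ~2.7, ~3.9, ~5.8 nm; the screening length scales as n^(-1/6)
```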
We also note that the temperature dependences of the PL peak position observed for the MAPbBr3/ZnO(30nm) sample demonstrate some additional features when approaching room temperature [Fig. 5(d)-(f)]. We associate these features with FE dissociation caused by the stronger interfacial electric field in this sample. 19 Specifically, the interfacial electric field dissociates FEs in the tetragonal phase, giving rise to a blue-shift of the PL band because of progressive switching from FE to band-to-band recombination. The rate of this process strongly depends on the electric field strength and is thus maximised for the λexc = 442 nm and λexc = 800 nm excitation regimes. Once the tetragonal-to-cubic structural phase transition occurs, the FE dissociation process weakens, giving rise to a red-shift of the PL band, since the FE binding energy in the cubic structural phase is higher than that in the tetragonal phase. 54 It should be noted that under λexc = 800 nm excitation one can observe only the initial blue-shift of the PL band, because the tetragonal-to-cubic structural phase transition in this case occurs at temperatures higher than room temperature [Fig. 5(f)].
C. PL excitation mechanisms
The formation of interfacial electric fields of different strengths is also one of the key circumstances making the PL technique sensitive enough to the otherwise negligible structural distortions in 3D MAPbBr3 nanocrystals. Specifically, this sensitivity arises because PL excitation involves absorption rates governed by the second-order and third-order nonlinear susceptibilities, which, owing to their higher tensor rank compared to the first-order susceptibility, are known to demonstrate a higher spatial sensitivity to the lattice symmetry. [55-58] The situation emerging is known as electric-field-induced one-photon-excited PL and two-photon-excited PL, both involving nonlinear susceptibilities. 55 This behaviour is in stark contrast to conventional one-photon-excited PL, which loses sensitivity to the structural phase transition in 3D MAPbX3 nanocrystals because of the structural phase instability and the correspondingly negligible distortions of the PbX6 octahedra responsible for the light-emitting process. It is worth noting that all three structural phases in MAPbX3 materials are centrosymmetric. 59,60 This circumstance significantly distinguishes between the PL excitation regimes through the light absorption rate. Specifically, the PL intensity can be expressed as $I_{PL} \propto R$, 61 where $R \propto n_e n_h$ is the emission rate caused by carrier radiative recombination, with $n_e$ and $n_h$ being the densities of electrons and holes, respectively. The latter process is known as bimolecular recombination and appears mainly in electroluminescence (EL), where $n_e$ and $n_h$ can in general differ as a consequence of the specific structure of the samples, their doping type, and the carrier injection level. Additionally, for highly efficient light-emitters, $n_e$ and $n_h$ should be low enough to guarantee carrier wavevector conservation in the recombination process. 61 In contrast, in PL experiments one always excites equal numbers of electrons and holes, $n = n_e = n_h$ (each photon with energy exceeding the band gap energy excites two particles, an electron and a hole), with excess energies equal to one half of the difference between the photon and band gap energies. The photoexcited carrier density is usually much higher than the intrinsic carrier density (the doping level). Furthermore, PL resulting from recombination between nonthermalized (hot) carriers (hot PL) should also be negligible, since the wavevector is not strictly conserved for them. However, if wavevector conservation is not required, which may happen due to the trapping of carriers by defects or carrier interaction with phonons (including polaron formation), the rate becomes $R \propto n$ (monomolecular recombination) provided the photoexcited carrier density exceeds the intrinsic carrier density. 61 The latter proportionality indicates that two-particle recombination becomes a highly probable process emitting a single photon. Monomolecular recombination is hence a direct opposite of the PL excitation process, and its rate is known to be significantly enhanced compared to that of bimolecular recombination. 61
Because $I_{PL} \propto W \eta$, where $W$ and $\eta$ are the rates of light absorption and of carrier relaxation to the light-emitting states, respectively, 61 and because $\eta$ is expected to be constant for a fixed incident photon energy, as occurs in our case, the PL intensity can be expressed as $I_{PL} \propto W$, that is, as being predominantly governed by the absorption rate $W$ (in units of s^-1). The latter, in general, is a sum of several contributions associated with one-photon absorption $W^{(1)}$, two-photon absorption $W^{(2)}$ if the excitation light intensity $I$ is strong enough, and one-photon electroabsorption $W_E^{(1)}$ if an external or internal electric field is applied, so that $W = \sigma^{(1)} \frac{I}{\hbar\omega} + \sigma_E^{(1)} \frac{I}{\hbar\omega} + \sigma^{(2)} \frac{I^2}{(\hbar\omega)^2}$, where $\hbar\omega$ is the excitation photon energy and $\sigma^{(1)}$, $\sigma_E^{(1)}$ and $\sigma^{(2)}$ are the corresponding one-photon and two-photon cross sections. 55 If $\hbar\omega > E_g$, the $W^{(1)}$ and $W_E^{(1)}$ terms dominate the PL excitation dynamics. On the contrary, if $\hbar\omega < E_g$, the $W^{(2)}$ term completely governs the PL excitation mechanism. Consequently, the proportionalities $I_{PL} \sim I$ and $I_{PL} \sim I^2$ correspond to one-photon-excited and two-photon-excited PL, respectively. [62-65] These relations can be confirmed experimentally by analyzing the slope of the power dependences of $I_{PL}$ presented in a log-log plot (Fig. 7).
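To illustrate how the slope analysis of Fig. 7 works in practice, the sketch below fits the log-log power dependence for two synthetic (hypothetical) data sets; slopes of ~1 and ~2 identify one-photon-excited and two-photon-excited PL, respectively.

```python
import numpy as np

# Hypothetical excitation-power series (arb. units); in an experiment these
# would be the measured laser powers and integrated PL intensities.
I_exc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])

# I_PL ~ I for one-photon excitation, I_PL ~ I^2 for two-photon excitation
for label, I_pl in (("one-photon", 3.0 * I_exc),
                    ("two-photon", 0.5 * I_exc**2)):
    slope, _ = np.polyfit(np.log(I_exc), np.log(I_pl), 1)
    print(f"{label}: log-log slope = {slope:.2f}")
```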
However, the absorption rates of the one-photon and two-photon absorption processes are known to be proportional to the imaginary parts of the first-order and third-order optical susceptibilities, Im[χ(1)(−ω; ω)] and Im[χ(3)(−ω; ω, ω, −ω)], respectively. 55 Consequently, this approach is fully consistent with the aforementioned centrosymmetry restriction applied to MAPbBr3 crystals, according to which the second-order nonlinear process [χ(2)] is not allowed, whereas the linear [χ(1)(−ω; ω)] and third-order nonlinear [χ(3)(−ω; ω, ω, −ω)] processes should completely govern the one-photon and two-photon absorption in these materials, appearing through one-photon-excited and two-photon-excited PL, respectively. However, once the crystalline lattice is distorted, for example by an external or internal electric field, the centrosymmetry breaking allows the second-order nonlinear process to appear through the linear electro-optic effect, described by χ(2)(−ω; ω, 0), with χ(3)(−ω; ω, 0, 0) giving the corresponding quadratic electro-optic contribution. [55-58] We note that PL excitation involving χ(2)(−ω; ω, 0) and χ(3)(−ω; ω, 0, 0) still produces a one-photon-excited PL response, although the light absorption rates are characterized by nonlinear electro-optic susceptibilities. This brief discussion of nonlinear optics highlights an advantage of electric-field-induced one-photon-excited and two-photon-excited PL for monitoring structural phase transitions in hybrid perovskite nanoscale materials: these techniques exploit the higher sensitivity of nonlinear optical and electro-optical susceptibilities to crystalline lattice distortions compared to conventional linear optical processes. [55-58] Specifically, χ(1)(−ω; ω) is a second-rank tensor containing 9 elements, whereas χ(2)(−ω; ω, 0) and χ(3)(−ω; ω, 0, 0), χ(3)(−ω; ω, ω, −ω) are third- and fourth-rank tensors containing 27 and 81 elements, respectively. 55 This principal difference also indicates that both $W_E^{(1)}$ and $W^{(2)}$ mainly characterize PL excitation in the nanocrystal core, contrary to $W^{(1)}$, which characterizes PL excitation in both the nanocrystal core and the nanocrystal surface. Consequently, because the ratio of the surface-to-core PL contributions for nanocrystals is large enough, and because the surface states are less sensitive to the structural phase transitions in the core, the structural phase transitions in MAPbX3 nanocrystals can be significantly masked by PL from the surface states when conventional one-photon excitation is applied. This circumstance, together with the structural phase instability occurring within a broad temperature range, seems to be a reason why the structural phase transitions in MAPbX3 nanocrystals have never been observed before. It is also worth noting that the PL peak position remains almost unchanged with increasing laser power for all the samples.
D. PL broadening dynamics
To gain a deeper understanding of the phonon-assisted structural phase transitions in 3D MAPbBr3 nanocrystals, the temperature dependences of the PL band FWHM were analysed for all the samples (Fig. 8). All the dependences clearly demonstrate a two-stage homogeneous broadening process appearing at low (T ~ 20-140 K) and moderate (T ~ 140-295 K) temperatures. These two temperature intervals closely match those of the orthorhombic and tetragonal/cubic phases in single-crystal MAPbBr3, respectively. However, the PL broadening dynamics with temperature is expected to be governed by the various phonons involved rather than by the structural phase transitions themselves. The temperature dependences of the PL band FWHM also reveal additional features for the MAPbBr3/ZnO(30nm) sample when approaching room temperature. These features are due to the interfacial-field-induced FE dissociation, which, in addition to the blue-shift of the PL band [Fig. 5(d)-(f)], is accompanied by its narrowing. 19 Further broadening of the PL band with increasing temperature occurs when the tetragonal phase transforms into the cubic one, in which the FE binding energy is increased and hence the FE dissociation process is slowed down, as discussed in the preceding section for the PL peak position dynamics.
To analyze the PL band FWHM variations with temperature, we use a phenomenological approximation for phonon-induced broadening, 19,66,67 $\Gamma(T) = \Gamma_0 + \gamma_{ac} T + \gamma_{TO/LO}\, N(E_{TO/LO}, T) + \gamma_{MAT}\, N(E_{MAT}, T)$, with the Bose-Einstein phonon occupation number $N(E, T) = [\exp(E/k_B T) - 1]^{-1}$, where $\Gamma_0$ is the inhomogeneous broadening, γac is the electron (hole)-acoustic-phonon coupling strength, γTO/LO is the electron (hole)-TO/LO-phonon coupling strength, and γMAT is the electron (hole)-MAT-phonon coupling strength. The electron (hole)-acoustic-phonon coupling strength (γac ~ 3×10^-5 eV K^-1) turns out to be negligible [Fig. 8(a)-(c)], being of the same order as that previously reported. 19,66 The first stage of PL band broadening is due to the scattering of carriers by TO/LO-phonons, whereas the second stage can be attributed to the MAT-phonon effect. It should be especially stressed here that the MAT-phonon contribution becomes significantly enhanced for the MAPbBr3/ZnO(30nm) sample [Fig. 8(b)]. This behaviour confirms that a stronger interfacial electric field occurs in this sample and has a significant effect on the suppression of the MA cation dipole order in the whole nanocrystal. 45 We note that the fits are not necessarily unique; they allow one to determine the effective energy ranges of TO/LO-phonons as ⟨E_LO/TO⟩ ~ 4.0-17 meV and of MAT-phonons as ⟨E_MAT⟩ ~ 55-92 meV, which match well those discussed above in the sample characterization section.
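A minimal fitting sketch of this broadening model is given below. The temperature grid and the 'data' are synthetic placeholders generated from the model itself (the real inputs would be the FWHM curves of Fig. 8), so the script only illustrates the fitting procedure and parameter extraction; as noted above, such fits are not necessarily unique.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5  # Boltzmann constant, eV/K

def N(E, T):
    """Bose-Einstein phonon occupation number for phonon energy E (eV)."""
    return 1.0 / (np.exp(E / (kB * T)) - 1.0)

def fwhm(T, g0, g_ac, g_op, E_op, g_mat, E_mat):
    """Phenomenological phonon-induced PL broadening (energies in eV)."""
    return g0 + g_ac * T + g_op * N(E_op, T) + g_mat * N(E_mat, T)

# Synthetic stand-in for a measured FWHM(T) curve
T = np.linspace(20.0, 295.0, 30)
data = fwhm(T, 0.05, 3e-5, 0.005, 0.012, 0.40, 0.070)

p0 = (0.04, 2e-5, 0.004, 0.010, 0.30, 0.060)  # starting guesses
popt, _ = curve_fit(fwhm, T, data, p0=p0, maxfev=20000)
print("gamma_TO/LO = %.1f meV, E_TO/LO = %.1f meV" % (popt[2]*1e3, popt[3]*1e3))
print("gamma_MAT  = %.0f meV, E_MAT  = %.0f meV" % (popt[4]*1e3, popt[5]*1e3))
```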
We also found that the electron (hole)-MAT-phonon coupling strength (γMAT ~ 245-679 meV) is roughly 100-fold greater than the electron (hole)-TO/LO-phonon coupling strength (γTO/LO ~ 2.5-8.4 meV). This strong coupling of electrons (holes) to MAT-phonons highlights the main feature distinguishing carrier relaxation in MAPbX3 from that in conventional semiconductors: the photoexcited carriers relax not only through the TO/LO-phonon cascade, but also through MAT-phonon excitation. This behaviour is the reason why the TO/LO-phonon bottleneck occurs in MAPbX3. Specifically, because of the ultralow thermal conductivity between the sublattices, the organic sublattice heated during carrier relaxation keeps TO/LO-phonons in the inorganic sublattice at the temperature of the former, thus blocking their decay through acoustic phonons (the Klemens-Ridley anharmonic process) and allowing carriers to reabsorb TO/LO-phonons. [2-4] This process is expected to be enhanced in nanocrystals as a consequence of the additional reduction of thermal conductivity through the nanocrystal boundaries. Consequently, the nearly monotonic blue shift of the PL band with increasing temperature seems to result from the heating effect under the TO/LO-phonon bottleneck rather than from a progressive distortion of the PbX6 octahedra. This conclusion is also well consistent with the structural phase instability in MAPbX3 nanocrystals due to the configurational entropy loss. 21,27,28
E. PL mechanism
The PL mechanism in MAPbX3 is not trivial, mainly due to the polar crystal lattice and the ultralow thermal conductivity between the sublattices. The observed dominant PL blue-shift with increasing lattice temperature T [Fig. 5(d)-(f)] is opposite to the band gap variation $E_g(T)$ usually predicted for conventional semiconductors, 61,68,69 $E_g(T) = a - b\left[1 + \frac{2}{\exp(\Theta_{ph}/T) - 1}\right]$, where a and b are fitting parameters and $\Theta_{ph}$ is the mean temperature of the phonons taking part in the scattering process with carriers. It has recently been suggested that $E_g(T)$ can show either a decrease (red-shift) or an increase (blue-shift) depending on whether the slope parameter b is positive (phonon emission) or negative (phonon reabsorption), respectively. 18 The latter behavior points to non-equilibrium dynamics, being equivalent to the introduction of a negative absolute temperature. 70 To adapt this picture to the TO/LO-phonon bottleneck, we consider the Bose-Einstein phonon occupation numbers $N(E_{TO/LO}, T) + 1$ for spontaneous TO/LO-phonon emission and $N(E_{TO/LO}, T_{PB})$ for TO/LO-phonon reabsorption, where $T_{PB}$ denotes the temperature at which the TO/LO-phonon bottleneck occurs. This approach implies that upon photoexcitation, electrons and holes (and also FEs) cool down through the TO/LO/MAT-phonon cascade. Consequently, free carriers and FEs relax within a few-ps timescale, and their temperature in the light-emitting states prior to emission ($T_c$) is determined by the decay of TO/LO-phonons in the inorganic sublattice through the Klemens-Ridley anharmonic process, [2-4],30,31 which, however, is controlled by the organic sublattice temperature ($T_{MA}$). Because the further cooling of the organic sublattice through acoustic phonons is slower than that of the inorganic sublattice, owing to the more energetic optical phonons involved in the organic sublattice, the TO/LO-phonon bottleneck in the inorganic sublattice occurs at $T_c = T_{PB} > T$ and allows carriers in the latter to reabsorb TO/LO-phonons. The effect progresses with increasing T, since the organic sublattice cooling rate is reduced. Consequently, the TO-phonon bottleneck occurs predominantly in the orthorhombic phase, whereas the LO-phonon bottleneck dominates in the tetragonal/cubic phase. The resulting TO/LO-phonon-dressing process hence lowers the electron (hole, exciton) energies by the polaron (reorganization) energy. 1,7,10,71-73 Consequently, the stronger the electron (hole, exciton)-phonon coupling, the larger the number of TO/LO-phonons contributing to the polaronic effect.
According to this model, the resulting polaronic electron (pe), polaronic hole (ph) and polaronic exciton (PE) quasiparticles are involved in further recombination in MAPbBr3 nanocrystals, thus completely governing their PL and transport properties. Moreover, under the TO/LO-phonon bottleneck, polaronic quasiparticles can reabsorb TO/LO-phonons to form TO/LO-phonon vibrationally excited polaronic quasiparticles with reduced ground-state polaron energy. Consequently, polaronic quasiparticle recombination may occur in either the ground or the vibrationally excited polaron states [Fig. 9(a)]. This behavior gives rise to the ~100 meV blue-shift of the PL peak with increasing temperature, unlike the red-shift in conventional semiconductors. Because the blue-shift is observed over the entire temperature range applied, one can assume that $T_{PB}$ > 300 K, whereas it should apparently be less than the material melting point, $T_{PB}$ < 450 K. Owing to screening from other carriers and defects, polaronic quasiparticles possess a very small recombination rate (the PL decay time is very long) and are thus expected to demonstrate high mobility and long-range diffusion.
To treat the experimental results, we first consider the single electron (hole) polaron energy. The polaronic band gap renormalization is known to narrow the band gap by the reorganization energy, introducing the ground-state Fröhlich polaron energy for electrons and holes, which can be given as 71,74 $E_p^{e,h} = \langle\alpha_{e,h}\rangle \langle E_{TO/LO}\rangle$, where the Fröhlich coupling constant is $\langle\alpha_{e,h}\rangle = \frac{e^2}{4\pi\varepsilon_0\hbar}\left(\frac{1}{\varepsilon_\infty} - \frac{1}{\varepsilon_s}\right)\sqrt{\frac{m_{e,h}^*}{2\hbar\omega_{TO/LO}}}$. Using ⟨E_TO⟩ = 5 meV and ⟨E_LO⟩ = 18.6 meV, one obtains the Fröhlich polaron coupling coefficients ⟨α_e⟩ = 3.37, ⟨α_h⟩ = 4.07 and ⟨α_e⟩ = 1.75, ⟨α_h⟩ = 2.11 for TO and LO phonons, respectively, well consistent with those calculated using the Feynman-Osaka model. 9 The ground-state Fröhlich polaron energy for electrons and holes is hence temperature independent and can be calculated as $E_p^e$ ~ 16.9 meV and $E_p^h$ ~ 20.35 meV for TO-phonons and $E_p^e$ ~ 32.6 meV and $E_p^h$ ~ 39.2 meV for LO-phonons. These estimates imply that LO-phonons might govern the room-temperature ~60 meV Stokes shift discussed above in the sample characterization section. Specifically, assuming that the absorption and PL spectra manifest the unperturbed and BGR-induced dynamics, respectively, the corresponding band gap narrowing is $\Delta E_g = E_p^e + E_p^h$, and the Stokes shift is hence equal to $E_p^e + E_p^h$. To introduce the temperature effect into the dynamics, we use the Bose-Einstein phonon occupation numbers defined above, arriving at Eq. (12), which completely describes the variation of the polaronic band gap energy of MAPbX3 nanocrystals depending on whether the TO/LO-phonon bottleneck occurs. To apply this approach to PEs, one should consider the PE binding energy $E_B^{PE} \equiv E_{P.E.} - E_p^e - E_p^h$, where $E_{P.E.}$ is the PE ground-state energy [Fig. 9(a)]. 77 Because $E_B^{PE}$ for MAPbBr3 is similar to the exciton binding energy ($E_B^{PE} \approx E_B$) and because $E_B$ = 35 meV is higher than the room-temperature thermal energy $k_B T$ = 25.7 meV, 16,77 we consider PEs as the dominant contributors to PL in the temperature range of 20-295 K. Consequently, the exciton peak energy in the absorption and PL spectra varies with temperature through the same phonon occupation numbers, with the spontaneous-emission term describing the equilibrium relaxation dynamics and governing the PL peak red shift. As a result, the PE absorption and PL peaks both tend to blue-shift with temperature in the broad temperature range of T = 20-100 K, indicating a strong TO/LO bottleneck effect. However, this general trend is gradually weakened for the PL peak when the temperature approaches that at which the crystalline lattice becomes unfrozen enough to initiate the anharmonic three-phonon TO/LO-phonon decay process involving acoustic phonon branches (the Klemens-Ridley process). The resulting deviation of the PL peak temperature dependence from that of the PE absorption peak hence comes from the weakening of the TO/LO-phonon bottleneck, which mainly appears for the PL peak, since PE relaxation (cooling) towards the ground state is required prior to light emission. Alternatively, the PE absorption peak temperature dependence mainly reflects the non-equilibrium dynamics, which ignores any relaxation processes. The resulting Stokes shift progresses with increasing temperature, reaching ~60 meV at room temperature, a value which represents the energetic difference between the polaronic band gap and the PE ground-state energy. To verify whether Eq. (12) is relevant to the experimental observations, we re-plotted the temperature dependences of the PL peak position as $E_{PL}(T) - E_{PL}(20\,\mathrm{K})$, where the lowest-temperature data taken at T = 20 K are used instead of those at T = 0 K [Fig. 5(d)-(f)].
Alternatively, neglecting the spontaneous TO/LO-phonon emission, Eq. (12) describes the PL peak energy increase with increasing temperature [Fig. 9(a)]. Figure 5(f) confirms the latter tendency with numerically simulated results obtained for $T_{PB}$ = 450 K without any fitting parameters. One can clearly see that the TO-phonon bottleneck dominates in the orthorhombic phase, whereas the LO-phonon bottleneck controls the dynamics in the tetragonal/cubic phase. The inflection point is hence a signature of switching between these two regimes.
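As a consistency check on the coupling coefficients and polaron energies quoted above, the sketch below evaluates the standard Fröhlich expression. The high-frequency dielectric constant ε∞ = 4.4 is our assumption (a typical literature value for MAPbBr3, not stated explicitly in the text); with it, the quoted ⟨α⟩ and $E_p$ values are recovered.

```python
import numpy as np

hbar = 1.054571817e-34; e = 1.602176634e-19
eps0 = 8.8541878128e-12; m0 = 9.1093837015e-31

eps_inf, eps_s = 4.4, 21.36   # eps_inf is an assumed literature value
E_TO, E_LO = 5e-3, 18.6e-3    # mean phonon energies in eV, from the text

def alpha(m_eff, E_ph):
    """Froehlich coupling constant for a phonon of energy E_ph (eV)."""
    w = E_ph * e / hbar                          # phonon angular frequency
    return (e**2 / (4*np.pi*eps0*hbar)) * (1/eps_inf - 1/eps_s) \
           * np.sqrt(m_eff / (2*hbar*w))

for name, m in (("e", 0.13*m0), ("h", 0.19*m0)):
    for ph, E in (("TO", E_TO), ("LO", E_LO)):
        a = alpha(m, E)
        print(f"alpha_{name}({ph}) = {a:.2f} -> E_p = {a*E*1e3:.1f} meV")
# recovers alpha_e ~ 3.4/1.75 (TO/LO), alpha_h ~ 4.1/2.1, and the quoted
# polaron energies E_p ~ 16.9-39.2 meV
```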
It is worth noting that the band gap modification energy $E_g(T) - E_g(0)$ varies in the range of ~60-100 meV, which closely matches the polaronic band-edge energy $(E_p^e + E_p^h)$ ~ 70 meV for LO-phonons, thus suggesting that the blue shift of the PL band can be associated with the LO-phonon vibrationally excited polaronic quasiparticles. We note that this mechanism, which applies to a layer of MAPbBr3 nanocrystals clearly demonstrating a single PL peak, is completely different from that proposed for thick films demonstrating dual emission features, as in bulk single crystals. 19,78 The latter mechanism was associated with the thermal expansion of the lattice, 78 which seems irrelevant for nanocrystals, where the lattice is flexible enough due to the structural phase instability. Furthermore, because $E_p^e$ and $E_p^h$ for LO-phonons are of the same order as the Rashba energies ($E_R$ ~ 40 meV), 79 the polaronic nature of the edge states in MAPbX3 materials at room temperature should dominate over that associated with the Rashba effect.
To assess the effect of the TO/LO-phonon bottleneck on the pe/ph and PE masses, we consider again the process of TO/LO-phonon emission/reabsorption by hot carriers. The numerically simulated results for the LO-phonon-emission renormalization of the pe/ph masses point out that, if the TO/LO-phonon bottleneck is neglected ($N(E_{TO/LO}, T_{PB}) = 0$), the polaron masses increase with increasing temperature in all the structural phases, that is, the electron (hole, exciton)-phonon coupling is enhanced. The TO-phonon bottleneck significantly increases the polaron masses at T = 0 K, followed by their slight decrease with increasing temperature. This behavior indicates that the polaronic quasiparticles are strongly localized in the orthorhombic phase. Alternatively, the LO-phonon bottleneck dominating in the tetragonal/cubic phase decreases the polaron masses, making them almost temperature independent. We note that the pe/ph masses in the latter case are only slightly above the electron and hole effective masses, whereas the PE mass is below both.
The effect of the TO/LO-phonon bottleneck on the polaron radii can be considered starting from the radii of Fröhlich polarons at T = 0 K, 71 $r_p = \sqrt{\hbar/(2 m^* \omega_{TO/LO})}$. We use the aforementioned parameters to obtain the following equilibrium polaron radii at T = 0 K: ⟨r_pe⟩ = 7.66 nm, ⟨r_ph⟩ = 6.35 nm, ⟨r_PE⟩ = 9.95 nm when TO-phonons are involved, and ⟨r_pe⟩ = 3.97 nm, ⟨r_ph⟩ = 3.29 nm, and ⟨r_PE⟩ = 5.16 nm when LO-phonons are involved. These values match well those calculated using the Feynman-Osaka model. 9 The polaron radii are at least three times greater than the exciton Bohr radius $a_B = \frac{4\pi\varepsilon\varepsilon_0\hbar^2}{\mu e^2}$ = 1.17 nm, 80 which in turn is about twice the lattice constant (~0.59 nm). 9 These estimates prove the large-polaron nature (Fröhlich polaron) 71 of the quasiparticles in MAPbBr3 and imply that the lattice distortions spread over many lattice sites, thus allowing Fröhlich polarons to travel through the lattice as free quasiparticles. We note that the PE radius ⟨r_PE(0)⟩ is only slightly greater than ⟨r_pe,ph(0)⟩ because the overlapping polarization clouds of the pe and ph partially cancel each other. 81 The temperature effect on the polaron radii can be treated following the general consideration for polaronic quasiparticles in quantum dots. 81 If the TO/LO-phonon bottleneck is neglected [Eq. (18)], the polaron radii shorten with increasing temperature in all the structural phases, in agreement with the corresponding increase of the polaron masses. 71 The TO-phonon bottleneck slightly increases the polaron radii at T = 0 K [Eq. (19)]; nevertheless, they significantly decrease with increasing temperature in the orthorhombic phase. Once the LO-phonon bottleneck begins contributing to the dynamics in the tetragonal/cubic phase, the polaron radii become longer and significantly elongate with increasing temperature. The latter behavior demonstrates the weakening of the electron (hole, exciton)-phonon coupling. The resulting polaron diameters at room temperature exceed the MAPbBr3 nanocrystal size. This allows the LO-phonon vibrationally excited polaronic quasiparticles to travel through a layer of 3D MAPbBr3 nanocrystals without scattering on the electrostatic potential fluctuations associated with structural imperfections. Accordingly, the mobility and diffusion of polaronic quasiparticles in a layer of MAPbX3 nanocrystals at room temperature should be significantly enhanced due to the LO-phonon bottleneck.
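The T = 0 K radii quoted above follow directly from $r_p = \sqrt{\hbar/(2 m^* \omega)}$. The sketch below reproduces them; evaluating the PE radius with the reduced mass μ is our inference (it is the choice that reproduces the quoted PE values).

```python
import numpy as np

hbar = 1.054571817e-34; e = 1.602176634e-19; m0 = 9.1093837015e-31

E_TO, E_LO = 5e-3, 18.6e-3          # mean phonon energies (eV)
m_e, m_h = 0.13*m0, 0.19*m0         # electron and hole effective masses
mu = m_e*m_h/(m_e + m_h)            # reduced mass, assumed here for the PE

def r_p(m_eff, E_ph):
    """Froehlich polaron radius r_p = sqrt(hbar/(2 m* w)) at T = 0 K."""
    w = E_ph * e / hbar              # phonon angular frequency
    return np.sqrt(hbar / (2*m_eff*w))

for ph, E in (("TO", E_TO), ("LO", E_LO)):
    radii = ", ".join(f"{r_p(m, E)*1e9:.2f} nm" for m in (m_e, m_h, mu))
    print(f"{ph}: r_pe, r_ph, r_PE = {radii}")
# TO: 7.66, 6.35, 9.95 nm; LO: 3.97, 3.29, 5.16 nm, as quoted above
```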
IV. CONCLUSIONS
In this article we highlight several basic approaches that should be of interest to a broad audience of scholars exploring the unique PL and transport properties of MAPbX3 materials. One of them suggests that two-photon-excited PL spectroscopy and electric-field-induced one-photon-excited PL spectroscopy are required to study the structural phase transitions in 3D MAPbX3 nanocrystals. These techniques are capable of monitoring the structural phase transitions more precisely because the second-order and third-order nonlinear susceptibilities govern the light absorption rates.
Consequently, one can recognize that the structural phase transitions in 3D MAPbBr3 nanocrystals may occur at about the same temperatures as those in single-crystal MAPbBr3. However, the orthorhombic-to-tetragonal structural phase transition in 3D MAPbBr3 nanocrystals, unlike in single-crystal MAPbX3, is spread out over the broad temperature range of T ~ 70-140 K due to the structural phase instability induced by local field fluctuations when free rotations of MA ions are no longer strongly restricted by long-range polar order. The resulting configurational entropy loss and the liquid-like motion of MA cations in 3D MAPbBr3 nanocrystals can be even enhanced by the interfacial electric field arising from charge separation at the MAPbBr3/ZnO heterointerface, extending the range of the orthorhombic-to-tetragonal structural phase instability from T ~ 70 to 230 K and significantly shifting the tetragonal-to-cubic phase transition towards temperatures exceeding room temperature.
The latter effect is found to depend on the ZnO layer thickness and the photoexcited carrier density, thus allowing one to control the structural phase instability range in 3D MAPbBr3 nanocrystals. Finally, we conclude that a stepwise shift of the PL band with temperature, as observed for single-crystal MAPbX3, is no longer an indication of the structural phase transition in 3D MAPbBr3 nanocrystals because of the negligible distortions of the PbX6 octahedra under the structural phase instability regime. On the contrary, the nearly monotonic blue shift of the PL band with increasing temperature in a fully encapsulated single layer of 20-nm-sized 3D MAPbBr3 nanocrystals seems to result from the heating effect under the TO/LO-phonon bottleneck rather than from progressive PbX6 octahedra distortions.
Furthermore, we point out that two-photon-excited PL spectroscopy and electric-field-induced one-photon-excited PL spectroscopy mainly characterize PL excitation in the nanocrystal core, contrary to conventional one-photon-excited PL spectroscopy dealing with PL excitation in both the nanocrystal core and the nanocrystal surface. Consequently, because the ratio of the surface-to-core PL contributions for nanocrystals is large enough and because the surface states are less sensitive to the structural phase transitions in the core, the structural phase transitions in MAPbX3 nanocrystals can be significantly masked by PL from the surface states when conventional one-photon excitation is applied. This circumstance together with the structural phase instability occurring within the broad temperature range seems to be a reason why the structural phase transitions in MAPbX3 nanocrystals have never been observed.
We also confirmed that the photoexcited carriers responsible for the light-emitting and transport properties of a layer of 3D MAPbBr3 nanocrystals are polaronic quasiparticles, which can be TO/LO-phonon vibrationally excited to higher-energy states owing to the TO/LO-phonon bottleneck. Consequently, PL from MAPbBr3 nanocrystals results from the recombination of PEs, which can emit light either in the ground or in the TO/LO-phonon vibrationally excited states, thus giving rise to the ~100 meV blue-shift of the PL peak usually appearing in MAPbBr3 nanocrystals with increasing temperature. We note that this polaronic nature of the edge states in MAPbX3 nanocrystals becomes dominant exclusively at higher temperatures (including room temperature) simply because the energies of the TO/LO-phonon vibrationally excited polaronic quasiparticles (~100 meV) significantly exceed the ground-state polaron energies ($E_p^{e,h}$ ≤ ~40 meV) and the Rashba energies ($E_R$ ~ 40 meV). Alternatively, the Rashba spin-split nature of the edge states in MAPbX3 nanocrystals is expected to be dominant only at low temperatures, when the Rashba energy might exceed the energies of the TO/LO-phonon vibrationally excited polaronic quasiparticles.
Additionally, we showed that at room temperature, owing to the LO-phonon bottleneck, the polaron masses diminish and the polaron radii increase. This behavior creates unique conditions for the TO/LO-phonon vibrationally excited polaronic quasiparticles to travel long distances without scattering on the electrostatic potential fluctuations governed by structural imperfections.
"Materials Science",
"Physics"
] |
The Hypotheses Testing Method for Evaluation of Startup Projects
This paper suggests a new perspective on evaluating innovation projects and understanding the nature of startup risks. The authors consider five principal hypotheses that underlie every innovative project, each comprising a set of respective assumptions, so as to manage startup risks in a proactive manner. The suggested approach sheds light on a project's uncertainties and risks and its embedded investment and managerial options, and enables a more comprehensive and accurate evaluation of innovation. The Hypotheses Testing Method makes it possible to estimate the risks and attractiveness of a startup project in a clear and fast manner. It replaces opaque traditional techniques such as NPV and DCF, avoiding heavy cash flow modelling.
out from the perspective of six stages. While each stage is assumed to be designed to collect specific information in order to dispel uncertainty and risks, the Stage-Gate model relies heavily on a waterfall logic that precludes managing risks in a proactive manner via agile methodologies. By not suggesting any particular set of hypotheses intrinsic to a project, the model excludes the possibility of testing them from the very beginning, even when they relate to later stages. Moreover, the stages used in the Stage-Gate approach are quite general and do not anticipate the delivery logic of a project, which makes its evaluation more complicated. On top of all that, even though the Stage-Gate model is the closest existing methodology in terms of covering the uncertainties and risks of an innovation project, it still does not make it possible to assess the risks intrinsic to each stage or to evaluate the project's value.
This paper proposes to look at innovations from a conceptual point of view, concentrating on the primary stages of their creation and development towards product implementation and marketing from an incremental delivery perspective. Shifting the focus from end results to the complex delivery process, the authors propose a new approach to understanding innovation projects and their risks in a wider perspective than is customary. In proposing the Hypotheses Testing Method, this study suggests a set of hypotheses that cover any innovation project and all aspects of creating new products. The authors assume that, in its fullest configuration, an innovation project always consists of five high-level hypotheses that can be further decomposed into smaller assumptions. These are the team competency, technological capability, customer value, business model, and market depth hypotheses, which have convexities and overlap as the project progresses, depending on the degree of innovativeness, and thus enable managing risks in a proactive manner by testing them from the very beginning of a project's creation.
Understanding the decomposition of each hypothesis reveals a more detailed picture of a project's uncertainties and risks and of the variety of available investment and managerial options, and enables a more comprehensive and accurate evaluation of innovation. The Hypotheses Testing Method is practically applicable to any project, which makes it a conceptual basis for the creation and development of innovations by perceiving their risks in a much clearer manner and enabling assessment of their subtle value.
Method
At all times, the engine of progress and development has been the invention of something new that did not exist before. It is important to understand, however, that all innovations are designed to meet existing (and not yet existing) human needs. In fact, each innovation brings its own unique value to customers in terms of solving specific problems. Different approaches to assessing future development value have appeared with the general availability of investment in the development of innovations. In order to invest in the development of an innovation (the creation of new products), it is necessary to assess this opportunity. The existing assessment models are reduced to a rather narrow emphasis on the finished product; that is, the scientific literature shifts the emphasis to assessing the product, the result of an innovation, omitting the entire life cycle of creating the innovation with all its inherent risks and uncertainties.
When researching an innovation project as a form of innovation activity (Chesbrough, Lettl & Ritter, 2018) for the development and implementation of innovations, researchers often deviate from the specific distinctive features of the innovation project, which is a complex system of interdependent and interconnected resources, deadlines, and implementers of activities aimed at achieving specific goals (objectives) in the priority areas of science and technology (Frank, 2016; Hartwig & Mathews, 2020).
It is imprudent to assess the future cash flows of an innovative project using classical financial methods. By omitting the entire period of innovation development and evaluating only the product sales (scaling) stage, the standard Net Present Value (NPV) significantly understates the project's investment attractiveness and its embedded flexibility to minimize sunk costs and maximize upsides, and is not an appropriate method for evaluating projects subject to significant uncertainty and risks.
Admittedly, this method does consider a certain level of risk, but that risk is assumed constant throughout the entire project and is embedded in a single discount rate, which does not reflect the reality of all stages of an innovative project. Without dividing the project into risks of different natures, the assessment will be linear and may deviate significantly from the actual outcomes. Even more advanced methods like Decision Tree Analysis and Scenario Analysis rely on a constant discount rate. Why is this important? Due to its limitations, the discount rate cannot capture the risk of a complete abandonment of the project at the early stages, that is, at the working prototype and Minimum Viable Product (MVP) stages. In that case, the estimated income of the later project stages will be equal to 0 (after all, the product has not been launched, which means there is nothing to sell), and the losses will be equal to the amount of funds spent on the project. In fact, an NPV model based on a discount rate cannot incorporate the risks of the early stages of the project. Understanding the nature of the risks of an innovative project allows one to shift the emphasis in the assessment of innovation to all its stages, from the idea to scaling the finished product, thereby identifying and highlighting possible clusters of risks at the beginning of an innovative project. Focusing on all the risks of an innovative project regardless of its stage, and separating individual risks depending on their impact on the outcome of the stage and the project as a whole through testing the relevant hypotheses, will allow investors to understand the nature of the pertaining project risks and thus ensure managerial flexibility in innovation project risk management.
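To make this limitation concrete, the sketch below contrasts a flat-rate NPV of the 'all goes well' cash-flow stream with a staged expected value in which each stage is reached only if the preceding hypotheses survive. All numbers (cash flows, discount rate, survival probabilities) are hypothetical placeholders, not taken from the paper.

```python
# Flat-rate NPV of the success-path cash flows (hypothetical numbers)
rate = 0.30                 # single venture discount rate (assumed)
cash = [-50_000, -50_000, -50_000, -100_000, 2_000_000]  # idea .. scaling

npv = sum(cf / (1 + rate)**t for t, cf in enumerate(cash))

# Staged view: a stage's cash flow occurs only if every earlier gate was passed
p_survive = [0.6, 0.6, 0.7, 0.8, 1.0]   # hypothetical per-stage pass rates
p_reach, expected = 1.0, 0.0
for t, (cf, p) in enumerate(zip(cash, p_survive)):
    expected += p_reach * cf / (1 + rate)**t  # incurred only if stage reached
    p_reach *= p                              # survive this stage's gate

print(f"flat-rate NPV:       {npv:>10,.0f}")
print(f"staged expected NPV: {expected:>10,.0f}")
# the two figures differ by an order of magnitude: a single discount rate
# cannot stand in for stage-specific abandonment risk
```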
A characteristic feature of projects whose goal is to create an innovation is the dependence of uncertainty, and consequently of risks at the initial stages of creating the innovation, on the project's degree of innovativeness (Bowers & Khorakian, 2014; Keller, 2017; Chappin et al., 2019; Kock & Gemünden, 2020; Mathews & Russell, 2020; Young, 2020).
Strategic decisions within an innovation project are made in the face of uncertainty, which is an integral characteristic of innovation (Wang, 2017; Shestakov & Poliarush, 2019; Loch, 2007). Analyzing uncertainty, it should be noted that it relates to innovations directly through the process of their creation within the project, since it is usually not obvious what kind of methodologies, techniques, and tools should be used to create the innovation, and since the result of the innovation project is unknown and may deviate significantly from the expected one (Nagji & Tuff, 2012; Shestakov, 2018).
The 'uncertainty factor' of an innovation project generates a certain range of hypotheses that underlie the project's value and return on investment. Upon calculating the cash flows expected after the implementation of the project and applying the appropriate discount rate (Latimore, 2002; Zizlavsky, 2014), investors calculate the net present value (NPV) of the investment, which enables the decision whether or not to participate in the project (Smit & Trigeorgis, 2017). However, in the case of innovation projects, applying the classical method of cash flow estimation multiplies uncertainty, since the cash flow assumption adds yet another assumption to a set of other uncertainties. This is because the cash flow assumption is derived from the successful implementation of the project at such stages as idea development, prototyping, minimum viable product (MVP) and basic version development, business model testing, and market entry; only after these does the project move to the scaling stage (testing the market depth), at which it begins to generate cash flows. That is, to determine the degree of inherent risk, it is necessary to separate all types of risks between the stages of the project and evaluate them separately (both for the early stages and for the later ones).
The Stage-Gate model (SGM), as the closest existing methodology in terms of covering uncertainties and risks, is a project management methodology (Cooper, 2016; Cooper & Sommer, 2016). The Stage-Gate process is based on the belief that product innovation begins with ideas and ends once a product is successfully launched into the market. The first two stages of an innovative project under the SGM ('Scope' and 'Design') concern market research and detailed investigation involving primary research, and adopt the voice-of-customer approach in a rather narrow form, which means that there is no progress in reducing the level of uncertainty in the context of customer value - it is still a 'black box'. The third stage, 'Develop', combines all development; it is, in fact, the innovative project from the beginning of prototype development to the creation of a finished, fully functional product. Confirmation of customer value occurs only at the fourth stage of this method ('Scale Up'). However, in fact, all capital at risk is concentrated in stage 3 under the SGM, which means all funds allocated to the project may be lost if the customer value hypothesis is not confirmed. The last stage under the SGM, i.e. 'Launch', assumes the beginning of full-scale operations or production, marketing, and sales (the commercialization phase), which is really the final stage of any innovation project.
After the completion of each stage a gate follows, where the project leader or manager decides to start the next stage or abandon the project (a Go/Kill decision). However, it is worth considering that the risks of failing to confirm the hypotheses belonging to stages 4 and 5 must be analyzed not after the fact, but at the initial stages of the project. This will minimize the investor's losses and competently bring the project to completion if the hypotheses regarding team competence, technological capability, customer value, business model, and market depth are confirmed. The SGM does not fully reveal the depth of the risks of creating an innovation, but its focus on the process of creating innovation makes this model the closest to the approach proposed in this work.
In terms of project management, the innovativeness of the SGM is that it adds more flexibility to classical waterfall logic through the possibility of adding cyclicality within the stages of the project. It is more about project management, not about innovation project evaluation. In terms of risk management, the SGM is not aimed at reducing the level of uncertainty for competent project risk management and does not provide answers for a more substantive understanding of the risks of an innovation project.
We use proactive agile logic instead of waterfall logic (which reflects the SGM logic) to show the impact of particular risks on the result of each stage of the project in an interactive and iterative manner. Using this approach, it is possible to manage risks by analyzing in detail all the convexities of each hypothesis. This, in turn, enables the investor to re-estimate the probability of a negative outcome for each type of risk, thereby managing the uncertainties of creating an innovation more precisely. Understanding the risks of an innovation project from its very beginning makes it possible to work through them at each stage in iterations, which provides a methodologically clear systematization of risk management.
When the measurement process is done correctly, it becomes clear whether or not a company is moving the drivers of its business model. If it is not, that is a sign to pivot or make a structural course correction in order to test a new fundamental hypothesis about the product, strategy, and engine of growth.
However, it is worth noting one significant characteristic distinguishing the approach proposed in this article from the lean methodology. Lean is a methodology for quickly going through the early project stages, namely creating a working prototype and an MVP. This technique can indeed give an understanding of whether an innovative idea can be implemented, but it does not answer whether the product can be commercialized. In other words, lean touches on the technical side of the innovation project and bypasses the business side. This is the key difference: after all, a technically working version of the product, without promotion, a commercial component, and market conquest, will not exhibit exponential growth. In fact, the lean approach does not come close to understanding the business risks of the project. The approach proposed in this work corrects this bulge, completely covering all the main risks of going through all stages of an innovative project, both technical and business.
Instead of quantifying the depth of the innovative product's market and building a detailed financial cash flow model, we propose shifting the focus to a qualitative analysis of the key assumptions at each project stage, i.e. idea - prototype - MVP - market entry - product scaling, and analyzing how they can be checked as early as possible. Building on lean methodologies for the innovation process and considering all the possibilities and sides of the SGM approach, this paper proposes an independent approach. Below we outline the main hypotheses on the way to innovation project implementation and innovation creation.
Hypothesis 1: Team Competence (H1). The project team is a group of responsible people who form the basis for further innovation development activities, directly participate in innovation processes, and perform tasks for the creation, implementation, and realization of innovation products. The technical part of the team should be able to create the prototype, the MVP, and the basic version of the innovation product for further implementation. The business part of the team (entrepreneurship, management, marketing, business development, sales) should be able to formulate the concept of the product, manage its development effectively, and create and implement the strategy for sales and scaling of the product, not only with regard to customers but also to non-customers, i.e. potential customers of the given product who for some reason do not use the available alternatives.
The goals and specific roles of the innovation team depend on the type, size, and scope of the project as well as on the activity of the company. In order to innovate, companies need a driving force from within: an experienced team that thinks in innovation categories and implements the right technologies and best practices for innovation processes. 'In the R&D context, a critical set of roles are around leading teams to promote good team spirit, trust and support, and to build group dynamics and processes that encourage necessary teamwork to turn creative ideas into innovation products and services' (Paulsen, 2009).
The team competence hypothesis is present at every stage of innovation creation, from the idea to the scaling of the created product. This means that team competence generates end-to-end risks throughout the life cycle of the innovation project. Due to the different orientations of the technical and business parts of the team, as well as the diversity of activities throughout the implementation of the innovation, the risk impact differs across the stages of the innovation project. At the prototype, MVP, and basic version development stages, the technical team competence hypothesis (H1.2) has a greater impact than the business team competence hypothesis (H1.1), since the main tasks at these stages are creating the prototype, MVP, and basic version, respectively, and require technical expertise. The business-focused part of the team usually begins to dominate after the business model hypothesis (H4) testing during the project's basic version stage (see fig. 1), while the technical team, having developed the product at the previous stages, continues to focus mainly on supporting and refining it.
Hypothesis 2: Technological Capability (H2). An innovation is created in a certain time period, in which particular technological solutions and theories dominate and are available for use. Technology maturity can significantly affect the results of the innovation project and product implementation. The technological capability required to create and manage technical change includes skills, knowledge, and experience that often (but not always) differ significantly from those required to operate existing technical systems (Bell & Pavitt, 1995). 'The technology development capability allows the firm to choose and to use technology with strategic purposes, to create new methods, process and techniques, and mostly, to offer new [innovation] products.' (Zawislak, 2012).
The technological capability hypothesis also involves assessment of the technology's complexity, its practical applicability, and the development level of supporting technologies that in one way or another affect the use of the underlying technology. Moreover, an innovation product may consist of several technological solutions, which implies multilayered levels of complexity, practical applicability, and development of basic and supporting technologies. In fact, diversification across different technological solutions does not reduce the overall uncertainty of the innovation project, but rather creates multilevel risks that must be identified and managed by testing H2. One of the main success factors for innovation products is the technical component, which drives the high efficiency of the innovation product (Garcia & Calantone, 2002).
Incremental development of a product through prototyping and an MVP with a limited basic functional set means creating an initial version of the innovation so that potential customers can evaluate the key features of the newly created product and prove or refute its value (testing the customer value hypothesis (H3)), which allows managers to make a respective pivot or exit the project and so fix sunk costs at a minimum level. 'MVP allows entrepreneurs to focus more on knowing who their customers are, what habits they have, and how to attract and retain them.' (Trimi, 2012). Technological risk decreases as the project progresses through the development stages, which means that at the prototype stage the risk is the highest, while at the market entry and scaling stages it is the lowest.
It is obvious that the technologies used for the creation of a prototype, an MVP, and a basic version of a product can be complex, and their practical applicability and level of development can affect the technological capability to accomplish the necessary tasks and therefore to achieve the project targets.
Hypothesis 3: Customer Value (H3). 'Customer value has been widely recognised as a key factor in organisational management, marketing strategy and customer behaviour.' (Sánchez-Fernández & Iniesta-Bonillo, 2009). All processes of an innovation project are aimed at meeting existing customers' needs and/or creating a new, previously unknown demand. Pinto and Mantel, based on research into 97 failed projects, identified project value and customer satisfaction as two of the three main determinants of failure (Pinto & Mantel, 1990). The marketing component, reflected in the degree of customer commitment to the product, is one of the basic concepts of innovation success (Garcia & Calantone, 2002). The essence of the customer value hypothesis (H3) is to confirm the value of the newly created product and the willingness of targeted customers to use it.
A new product is valuable if it offers a better way to solve a problem. But even if the product has functional, emotional, or social benefits compared with other alternatives, a customer, before acquiring the product, will also take into account its cost and the time and effort needed to use it. That is, value is not just the qualitative and quantitative characteristics of a product against those available on the market; it is the customer's willingness to buy this product within the offered price model, which it is reasonable to test before market entry and as a part of the business model hypothesis testing.
Hypothesis 4: Business Model (H4). 'Technology by itself has no single objective value. The economic value of a technology remains latent until it is commercialized in some way via a business model.' (Chesbrough, 2010). 'The essence of a business model is in defining the manner by which the enterprise delivers value to customers, entices customers to pay for value, and converts those payments to profit' (Teece, 2010). Following the project logic, the purpose of the business model hypothesis is to confirm the willingness of customers to buy the product at the offered price model (based on a particular value chain) and through the designed marketing channels. Despite being competitive, an innovation product may fail to find its customers through a poorly designed business model.
The business model of the company working on the implementation of the innovation should give clear answers to the questions of how the product is created, how it is sold and delivered, how it is supported and maintained, how users are attracted, and how the company will earn from the innovation (the monetization model). In the absence of a proper business model, a technologically innovative product will hardly enter the market, let alone create a new one, and a disruptive innovation will not 'disrupt' the target market.
Hypothesis 5: Market Depth (H5). An investor always looks at an innovation project in terms of its commercial scalability. Innovations aimed at satisfying a narrow customer category generally do not have significant demand among venture capital investors.
The uncertainty about market depth relates not only to the unpredictable volumes of possible revenues, but also to the sources of revenue. The market depth hypothesis is tested with regard to (i) the type, level, form, and degree of the innovation whose creation is envisaged by the innovation project, since different combinations of innovation characteristics may imply significant differences in the breadth and depth of the target market; (ii) the value of the product being developed and its ability to meet the specific needs of customers; and (iii) the scalability of the newly created product and its access to new markets. Assumptions outlined at the market entry stage can be reviewed and pivoted upon incoming data, and so market risks can be mitigated. Fig. 1 below illustrates the five hypotheses that need to be tested for an in-depth understanding of innovation project risks.
Results
The five main hypotheses presented and described above combine most of the risks of implementing an innovation project. It is possible to re-evaluate an innovation project based on qualitative analysis alone; however, it is also possible to use more explicit calculations that help not only to compare the project value estimated with the hypotheses-based conceptual model against the project value estimated by the Discounted Cash Flow (DCF) method, but also to expand the horizon of risks, so as to minimize losses and focus on those project risk bulges that can lead to project abandonment.
Further, the real options method is used in this paper as a proxy for the quantitative assessment of an innovative project with the five hypotheses included at the different stages of the innovation project. The calculations below reflect the logic of the project evaluation process when the hypothesis testing approach is applied at each stage of innovation creation.
Having thoroughly analyzed the theoretical basis of the ROA (real options analysis) methodology in innovation project evaluation and interpreted the practical significance of the results obtained, this section presents a case study with a detailed explanation of all evaluation stages and of the results at each stage of the project life cycle. By detailing and quantifying the hypotheses suggested above, the managerial flexibility over the life cycle of an innovation project is estimated, so as to fully understand the competitive advantages of the valuation method used.
The Innovation Company, interested in investing in a breakthrough innovation project, considers that the product features and functionality to be developed will significantly outperform existing solutions and solve a set of specific customer problems more efficiently and at a lower cost.
According to the classic DCF approach to assessing investment attractiveness, the project team proposes to invest $500,000: namely, $150,000 in the development of the product's basic version during the 1st year, $100,000 in the market entry stage during the 2nd year, and $150,000 and $100,000 in marketing during the 1st and the 2nd half of the 3rd year, respectively. The DPP (Discounted Payback Period) is 2 years, the ROI (Return on Investment) for a 3-year period is 500 percent, and the NPV is $2,500,000. The liquidation value of the project is equal to the cost of its basic version development, which means the capital at risk in the case of investing in the project is $150,000, and the annual risk-free rate is 5 percent.
Due to customer demand uncertainty and uncertainty about the project team's ability to create the product and achieve the declared characteristics, the Company considers how to minimize investment risks by analyzing and exploiting the available contingencies of the project.
Assuming prototype, MVP, and basic version development costs of $50,000 each ($150,000 in total), and adding market entry and marketing costs, the project decision tree accounting for all possible decision options and related costs is created for a better understanding of how ROA works in the case of an innovation project (see fig. 2).
Figure 2. Innovation Project Decision Tree
S0 - the base point - is the stage at which the Company decides to invest in the project. If the team competence (H1) and technological capability (H2) hypotheses are refuted (in other words, the team is not able to create a working prototype), the Company will suspend the implementation of the project (S0f), fixing losses in the amount of the actually invested funds, that is, $50,000. If the hypotheses are confirmed, the Company will continue developing the project (S0s), investing in the next stage.
Investing in the MVP development, if at least one of the three hypotheses (H1, H2, H3) is refuted - that is, either the team is not able to create the MVP or the product value is not confirmed by its customers - the Company will terminate the project (S0f2), fixing $100,000 of losses. If all three hypotheses are confirmed, the Company will continue developing the project (S0s2), investing in the next stage.
If the hypotheses H2, H3, or H4 are refuted at the basic version stage, the Company will terminate the project (S0f3), fixing $150,000 of losses. If the hypotheses are confirmed, the Company will continue investing in the project (S0s3), putting in an additional $100,000 to launch a marketing campaign and enter the market.
Further, if both the H4 and H5 hypotheses are confirmed at the market entry stage, the Company will continue investing (S0s4), since uncertainty about market demand is then partially resolved, which allows expected cash flows to be estimated with more confidence. But if at least one of the hypotheses is refuted, the Company will suspend the project implementation (S0f4), fixing $250,000 of losses.
Having fully confirmed H4 and partially confirmed H5, it becomes reasonable to boost marketing costs in order to keep testing the market depth, with three possible outcomes: rapid growth (S0s5), moderate growth (S0m5), or weak growth (S0f5). The Company then has a choice (i) to invest a further $100,000 in the case of rapid (S0s5) or moderate (S0m5) growth, or (ii) $50,000 in the case of weak growth (S0f5).
Assessing the investment attractiveness of the project, the Company applies a multiplier-to-invested-capital approach to project valuation, with a multiplier that varies depending on the final results of testing H5:
▪ In case of rapid exponential growth (S0s5+) or rapid linear growth (S0s5-), the multiplier will be x40 or x20 respectively, which means the project value will be $20M or $10M.
▪ In case of moderate rapid growth (S0m5+) or moderate linear growth (S0m5-), the multiplier will be x10 or x5 respectively, and the project value will be $5M or $2.5M.
▪ In case of weak moderate growth (S0f5+), the multiplier will be x2.5, with a project value of $1.125M.
Whereas, if the market depth hypothesis is eventually refuted, the Company will have to terminate the project (S0f5-) with $450,000 of losses.
Next, based on the backward induction approach, the real options method, hypothesis testing, and the multiplier method taken together, the project's investment attractiveness can be assessed (see fig. 3).
Figure 3. Generalized block diagram for the Hypothesis Testing Method
In order to assess the investment attractiveness of an innovation project taking into account its uncertainties, risks, and decision-making flexibility, this paper proposes a new approach based on the Hypothesis Testing Method, which involves three steps:
1. Build a decision tree by decomposing the project contingencies along its evolution path, minimizing investments at earlier stages and increasing them upon successful testing of the underlying hypotheses.
2. Estimate the final scenarios depending on the innovative potential of the project, using the multiplier method or a cash flow forecast where applicable.
3. Calculate the option price at each node, starting from the last stage and moving back through the market entry, basic product, MVP, and prototype stages using backward induction logic and ROA.
After assessing the innovation project's value, the probability of financial losses (Value-at-Risk modeling) has to be estimated in order to map the project's risk landscape. The Hypothesis Testing Method proposed here is suitable for evaluating any innovation project, but its value increases significantly with the degree of innovativeness: the more innovative the project is (for example, disruptive or breakthrough), the more difficult it is to evaluate with existing methods.
The option price, which reflects the net project value at a node, is calculated using the following formula (Smit & Trigeorgis, 2004):

C = [p × C⁺ + (1 − p) × C⁻] / (1 + r),   (1)

where C is the option price, p is the risk-neutral probability, C⁺ = V⁺ − I and C⁻ = V⁻ − I are the net payoffs in the two outcomes, V⁺ is the highest project value, V⁻ is the lowest project value, I is the amount of investment required, and r is the risk-free interest rate (rate of return) for the stage, calculated from the annual risk-free rate r_a over n years:

r = (1 + r_a)^n − 1.   (2)

The risk-neutral probability is calculated as follows (Smit & Trigeorgis, 2004):

p = [(1 + r) × E(V) − V⁻] / (V⁺ − V⁻),   (3)

where E(V) is the expected value of the project, taken as the probability-weighted average of the two outcomes:

E(V) = q × V⁺ + (1 − q) × V⁻,   (4)

with q the subjective probability of the favourable outcome (q = 0.5 in the calculations below).
Using the multiplier method, the project value at the end nodes of the scaling stage is calculated first. Option prices are then calculated at each decision-tree node, starting from the end and moving backward to the left. To illustrate the estimation logic, the option price at the S0s5 node is calculated step by step.
For rapid growth (S0s5) the total investment (I) is $500,000, the highest project value (V⁺) is $20,000,000 (S0s5+) and the lowest project value (V⁻) is $10,000,000 (S0s5-). The risk-free interest rate for the product scaling stage, using formula (2), is

r = (1 + 0.05)³ − 1 ≈ 0.16.

Substituting these values into formulas (3) and (4), the risk-neutral probability is

p = [(1 + 0.16) × ($20,000,000 × 0.5 + $10,000,000 × 0.5) − $10,000,000] / ($20,000,000 − $10,000,000) ≈ 0.74.

The option price for the S0s5 node, using formula (1), is then

C = [0.74 × $19,500,000 + (1 − 0.74) × $9,500,000] / (1 + 0.16) = $14,568,081

(the final figure carries r and p at full precision). To put it another way, $14,568,081 is the net project value in the case of rapid sales growth, considering both the remaining uncertainty and the uncertainties already dispelled:
▪ the team's ability to create a working prototype, MVP and basic version;
▪ the complexity of the technologies used, and their practical applicability and maturity, which affect the technological ability to create the product;
▪ the unique value of the product and customers' willingness to use it;
▪ customers' willingness to buy the product under the offered price model;
▪ the scale of the problem solved, which ensures broad customer interest, and the marketing strategy used.
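To make the arithmetic above easy to reproduce, the following minimal Python sketch implements formulas (1)-(4) and re-derives the S0s5 node value. The function names are ours, not the paper's; the comments note where the paper's displayed figures are rounded.

```python
def stage_rate(annual_rate: float, years: int) -> float:
    # Formula (2): the annual risk-free rate compounded over the stage length.
    return (1 + annual_rate) ** years - 1

def risk_neutral_p(v_up: float, v_down: float, r: float, q: float = 0.5) -> float:
    # Formulas (3) and (4): E(V) is the q-weighted average of the two outcomes.
    ev = q * v_up + (1 - q) * v_down
    return ((1 + r) * ev - v_down) / (v_up - v_down)

def option_price(v_up: float, v_down: float, invest: float, r: float, p: float) -> float:
    # Formula (1): discounted risk-neutral expectation of the net payoffs.
    return (p * (v_up - invest) + (1 - p) * (v_down - invest)) / (1 + r)

r = stage_rate(0.05, 3)                          # ~0.1576, reported rounded as 0.16
p = risk_neutral_p(20_000_000, 10_000_000, r)    # ~0.7364, reported rounded as 0.74
c = option_price(20_000_000, 10_000_000, 500_000, r, p)
print(f"${c:,.0f}")                              # $14,568,081
```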
Using the same logic, the option values at all other nodes of the decision tree are calculated (see fig. 4). The resulting entry cost, i.e. the net project value at its early stage (S0) with all inherent uncertainties and managerial decision options (flexibility) related to hypotheses H1-H5, is $185,031.
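The node-by-node traversal can be written compactly as a backward-induction recursion over the decision tree. The sketch below is illustrative only: the two-level tree and its values are hypothetical placeholders, not the actual nodes of fig. 4, and the Node structure is our own device.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # A leaf carries a terminal (multiplier-based) scenario value; an internal
    # node carries the investment made when continuing plus its two branches.
    value: float = 0.0
    invest: float = 0.0
    up: Optional["Node"] = None
    down: Optional["Node"] = None

def backward(node: Node, r: float, q: float = 0.5) -> float:
    if node.up is None or node.down is None:
        return node.value                              # terminal scenario value
    v_up, v_down = backward(node.up, r, q), backward(node.down, r, q)
    ev = q * v_up + (1 - q) * v_down                   # formula (4)
    p = ((1 + r) * ev - v_down) / (v_up - v_down)      # formula (3)
    return (p * (v_up - node.invest)
            + (1 - p) * (v_down - node.invest)) / (1 + r)  # formula (1)

# Hypothetical two-level fragment with placeholder values (not fig. 4):
tree = Node(invest=100_000,
            up=Node(invest=500_000,
                    up=Node(value=20_000_000), down=Node(value=10_000_000)),
            down=Node(invest=500_000,
                      up=Node(value=5_000_000), down=Node(value=2_500_000)))
print(f"${backward(tree, r=0.05):,.0f}")
```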
Furthermore, since the E(V) used to calculate the risk-neutral probability (3) itself contains a probabilistic input that directly affects the option price (net project value), it makes sense to compare the results of an ordinary decision tree with those of the real options decision tree.
As shown in fig. 5, for the ordinary decision tree, where the event probabilities are fixed and the project value is reduced to a net present value via a discount factor, the expected project values are considerably lower. The estimated project value under the ordinary decision tree is $61,969, roughly 3 times less than the value obtained by the real options approach ($185,031, see fig. 4), which reduces the project's investment attractiveness as well as its chances of obtaining financing.
Applying sensitivity analysis, the impact of the probability factor on the expected results of the ROA approach and of the decision tree was compared. The net project value at node S0 in fig. 4 and the net present value at node S0 in fig. 5 were re-estimated allowing for deviations of the probability factors used in the decision tree and in the ROA risk-neutral probability at each node, drawn from a normal distribution (mean 0.5, standard deviation 0.05).
Evaluating financial risks, the Value-at-Risk (VaR), a statistical measure of the potential loss that could occur, needs to be estimated. The VaR for ROA is 12.6 percent: with probability 0.99, the investor's losses will not exceed 12.6 percent of the net project value at the early stage (S0). The VaR for the decision tree is 44.0 percent, so the investor's losses will not exceed 44.0 percent of the project value. Moreover, according to the decision tree, the project value at the early stage may go below zero with probability 15.2 percent, while under ROA the probability of going below zero is 1.8 percent.
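The sensitivity and VaR figures can be approximated with a small Monte Carlo experiment. The sketch below is schematic and uses our own simplifications: it perturbs only the subjective probability q of the single S0s5 node from the earlier sketch, whereas the paper perturbs every node of the full tree, so the printed numbers are not expected to match the 12.6 and 44.0 percent reported.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_value(q, v_up=20_000_000.0, v_down=10_000_000.0,
               invest=500_000.0, r=1.05**3 - 1):
    # One-node ROA valuation: formula (4), then (3), then (1).
    ev = q * v_up + (1 - q) * v_down
    p = ((1 + r) * ev - v_down) / (v_up - v_down)
    return (p * (v_up - invest) + (1 - p) * (v_down - invest)) / (1 + r)

# Perturb the subjective probability as in the sensitivity analysis:
# draws from Normal(mean 0.5, std 0.05), clipped to remain a probability.
q_draws = np.clip(rng.normal(0.5, 0.05, size=100_000), 0.0, 1.0)
values = node_value(q_draws)                 # vectorized over all draws

baseline = node_value(0.5)
losses = (baseline - values) / baseline      # relative shortfall vs. baseline
print(f"99% VaR: {np.quantile(losses, 0.99):.1%}")
print(f"P(value below zero): {(values < 0).mean():.1%}")
```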
Now let us return to the standard NPV calculated for the project, $2,500,000, which presumes successful product creation and market entry, i.e. confirmation of all the underlying hypotheses. The NPV approach, however, has no methodological means of accounting for the contingencies and the ups and downs inherent in an innovation project. This makes a direct comparison of the standard NPV with the net project value calculated with ROA, $185,031 in our case, misleading. To compare the standard NPV with the option price calculated with ROA, one should look at the net project value at the market entry stage, which is $5,651,272 (S0s4 in fig. 4), or 126.1 percent higher than the project's standard NPV calculated using the DCF approach (see fig. 7). The innovation project's investment attractiveness is thus more than 2 times higher when estimated with ROA, which significantly increases the chances of investors entering the project and of its practical realization. Another significant advantage is that ROA also values each step of innovation creation, which simple NPV cannot do. By omitting the entire period of innovation development and valuing only the product sales (scaling) stage, standard NPV significantly understates the project's investment attractiveness and the embedded flexibility to minimize sunk costs and capture upsides, and it is not an appropriate method for evaluating projects subject to substantial uncertainty and risk. The proposed approach, unlike existing methods of evaluating innovation opportunities, explicitly incorporates the project life cycle stages and their underlying hypotheses.
Discussion
Due to the inherent uncertainty and related risks of innovation, predicting future optimal decisions may look like a vague exercise, but that is no reason to exclude managerial flexibility from consideration when assessing innovation projects. Evaluating the investment attractiveness and profitability of an innovation opportunity, while considering its inherent contingencies and minimizing financial losses, should include the managerial options available over the project's lifetime.
Real option and decision tree analysis are useful tools for assessing the strategic landscape and the corresponding investment decisions. ROA corrects the NPV value by incorporating the flexibility of managerial decision making, which allows uncertain situations to be reconsidered and folded into simpler analytical structures. ROA itself, however, is a financial assessment tool that has been known for many years; for estimating innovation projects, this paper proposes a new approach that allows an investor to look at innovation development differently. The article describes a mechanism for reassessing the risks of an innovation at each stage of project development, which makes it possible to build risk clusters and to understand competently the likelihood of implementation failure.
The key advantages of the proposed approach are the following:
1. Independence from a discount factor.
2. Use of the risk-neutral probability instead of subjective probabilities.
3. Hands-on applicability of ROA, since it accounts for managerial flexibility over an innovation project's lifetime, unlike NPV.
4. Minimized impact of project duration on project value. In the DCF approach all calculations are subject to the time factor, yet the actual duration of an innovation project can differ significantly from the expected one, so a DCF estimate shifts significantly with it, whereas within ROA the deviation is minimized.
5. Minimized capital at risk, which is usually taken to be the whole investment cost.
6. Focus on phased success, which allows project-related risks to be reconsidered on the basis of information obtained once the uncertainties of the previous stage have been resolved.
The proposed innovation project evaluation method is a logical extension of real option theory as a competitive approach to assessing investment opportunities with a high level of uncertainty. By shifting attention from the cash flow that occurs only after successful product implementation (which presupposes the successful completion of all stages of product development, as well as the launch of marketing and sales) to testing hypotheses at each stage of the innovation project, all significant risks can be explored separately and reassessed iteratively for a deeper understanding of the investment opportunity. This is a new paradigm in the risk assessment of innovations.
Because of its inherent uncertainty, the evaluation of an innovation project should cover all stages of innovative product creation, from the idea to scaling on the market. Each stage is fraught with uncertainty and risks that can be delineated by testing the appropriate hypothesis. The Hypothesis Testing Method, set in the context of the evolution of an innovative product, allows project risks to be investigated, clusters of uncertainty to be isolated, and the differentiated risks of team, technology, customer value, business model and market depth to be identified, which in turn supports a more informed investment decision through reassessing risks and opportunities on the basis of the hypotheses inherent in each stage of the project.
Applying ROA to assess the investment attractiveness of an innovation project is conceptually sound, as it provides both the ability to value managerial flexibility and a departure from aggregating all risk into the discount rate.
Replacing classical valuation methods for innovations is a logical and necessary step, since the inclusion of strategic flexibility and the minimization of capital at risk through the Hypothesis Testing Method allow an investor to make more accurate management decisions and avoid missing breakthrough opportunities. The approach redirects the innovation assessment process from the financial component, which appears only after sales begin, to all stages of creating an innovation, starting from idea generation, so as to describe the main risks of the project as fully as possible. The method is flexible, which allows it to be used for projects in different areas; it can also be expanded by adding assumptions to the underlying hypotheses, or narrowed down if one or more hypotheses have already been tested and the uncertainty about those risks has been completely dispelled.
"Business",
"Engineering",
"Economics"
] |
Isolation and Characterization of Hydrocarbon Degrading Fungi from Used (Spent) Engine Oil Polluted Soil and Their Use for Polycyclic Aromatic Hydrocarbons (PAHs) Degradation
Fungi capable of effectively degrading and cleaning up hydrocarbons were isolated from soil samples contaminated with used engine oil at auto-mechanic workshops (at Mgbuka-Nkpor, Nigeria) using the vapour phase transfer method. The ability of the potential isolates to utilize used engine oil, diesel and petrol was assessed using the gravimetric method. The ability of both pure and consortium cultures of the best potential strains to degrade the polycyclic aromatic hydrocarbon (PAH) components of used engine oil, diesel and petrol was assessed using gas chromatography. A total of 8 fungal isolates were identified in this study based on their cultural and microscopic characteristics. Of these, 4 that showed high promise for hydrocarbon bioremediation in screen flasks were confirmed as Candida tropicalis, Rhodosporidium toruloides, Fusarium oxysporum and Aspergillus clavatus based on 18S rRNA gene sequencing. High biodegradation efficiency (>70%) was recorded for the PAH components of used engine oil, diesel and petrol with both the pure and consortium cultures of the best potential strains, C. tropicalis and A. clavatus, within 16 days of incubation at 28 °C. Moreover, there was complete (100%) depletion of some PAHs, such as anthracene, naphthalene, acenaphthene, acenaphthylene, phenanthrene and benzo(k)fluoranthene, in the hydrocarbon substrates with the pure and consortium cultures of the isolates within 16 days of incubation at 28 °C. Both the pure and consortium cultures of the isolates (Candida tropicalis and Aspergillus clavatus) could therefore be utilized in the bioremediation of soil contaminated with used engine oil, diesel and petroleum, as well as with PAHs.
Introduction
The development of the petroleum industry into new frontiers, the apparently inevitable spillages that occur during routine operations, and the record of acute accidents during transportation have called for more studies of oil pollution problems [1]. Oil pollution has been recognized as the most significant contamination problem [2]. The most notable oil spills at sea involve large tankers, such as the Exxon Valdez, which spilled thousands of tonnes of oil [3,4]. These oil spills can cause severe damage to sea and shoreline organisms [5]. Among those most responsible for the contamination are service stations, garages, scrap yards, waste treatment plants, sawmills and wood impregnation plants.
Engine oil is a complex mixture of hydrocarbons and other organic compounds, including some organometallic constituents [6]. It contains hundreds or thousands of aliphatic, branched and aromatic hydrocarbons [7,8], most of which are toxic to living organisms [9]. Used engine oil renders the environment unsightly and constitutes a potential threat to humans, animals and vegetation [10,11]. Fat-soluble components may accumulate in the organs of animals and may be enriched in the food chain, even up to humans [12]. Prolonged exposure and high oil concentrations may cause the development of liver or kidney disease, possible damage to the bone marrow and an increased risk of cancer [13][14][15]. In the long term, toxic and carcinogenic compounds can cause intoxication, disease, cell damage, developmental disorders and reproduction problems [9]. In addition to toxic effects, oil products can affect plants and animals physically. A thick layer of oil inhibits the metabolism of plants and suffocates them. Destruction of plants affects the whole food web and decreases the natural habitat of numerous species [16].
Microbial remediation of a hydrocarbon-contaminated site is accomplished with the help of a diverse group of microorganisms, particularly the indigenous bacteria present in soil. These microorganisms can degrade a wide range of target constituents present in oily sludge [17,18,15]. Hydrocarbon-degrading bacteria and fungi are widely distributed in marine, freshwater and soil habitats. Similarly, hydrocarbon-degrading cyanobacteria have been reported [19,20], although contrasting reports indicated that the growth of mats built by cyanobacteria on the Saudi coast led to the preservation of oil residues [21]. Typical bacterial groups already known for their capacity to degrade hydrocarbons include Pseudomonas sp., Marinobacter sp., Alcanivorax sp., Microbulbifer sp., Sphingomonas sp., Micrococcus sp., Cellulomonas sp., Dietzia sp. and Gordonia sp. [22]. Molds belonging to the genera Aspergillus sp., Penicillium sp., Fusarium sp., Amorphoteca sp., Neosartorya sp., Paecilomyces sp., Talaromyces sp., Graphium sp. and the yeasts Candida sp., Yarrowia sp. and Pichia sp. have been implicated in hydrocarbon degradation [20]. Fungal bioremediation has been successful for the clean-up of pentachlorophenol (PCP), a wood preservative, and polycyclic aromatic hydrocarbons [23]. The advantages associated with fungal bioremediation lie primarily in the versatility of the technology and its cost efficiency compared to other remediation technologies (such as incineration, thermal desorption and extraction) [24]. The application of the bioremediation capabilities of indigenous organisms to clean up pollutants is viable and has economic value [25]. This study was therefore undertaken with a view to isolating and characterizing fungi from used engine oil-polluted soil and to assessing the polynuclear aromatic hydrocarbon (PAH)-degrading potential of the isolates.
Collection of Soil Samples
Soil samples were collected randomly, using a pre-cleaned hand scoop, at a depth of 2-3 cm from 3 auto-mechanic workshops with heavy spillage of used engine oil (UEO) at Mgbuka-Nkpor (6°9′N, 6°50′E), Nigeria. The samples were pooled, mixed homogeneously to obtain a composite sample and placed in a sterile container. The hydrocarbon (used engine oil) used in this work was subsequently collected directly from the engine of a 911 lorry (at Mgbuka-Nkpor) in a sterile container. Samples were transported to the laboratory for analysis.
Isolation of Fungi with Used Engine Oil-Utilizing Abilities
Used engine oil-utilizing fungi were isolated from the soil samples obtained from auto-mechanic workshops on mineral salt agar medium with the composition listed in Ekpenyong and Antai [26]. Fifty micrograms per millilitre (50 µg mL⁻¹) of each of Penicillin G and Streptomycin was incorporated into the medium to inhibit interfering bacteria. The medium pH was adjusted to 5.5. The vapour phase transfer method was used, with used engine oil as the carbon and energy source supplied from the lid of the plates [27,28].
Each distinct colony on the oil-degrading enumeration plates was purified by repeated sub-culturing onto Sabouraud Dextrose Agar (SDA) (Merck, Germany). The isolates were characterized and identified by colonial appearance and microscopic characteristics based on the schemes of Barnett and Hunter [29] and Efiuvwevwere [30].
Screening Tests for Used Engine Oil Utilization by Fungal Isolates
The isolates were screened for engine oil utilization capabilities in mineral salt broth medium. Screen tubes were incubated at 28 °C for 16 days. Growth in the tubes was scored as high (+++), moderate (++), low (+) or no growth (−). Viable counts were taken at the end of the 16-day incubation period by plating onto Sabouraud Dextrose Agar and incubating at 28 °C for 48 hours. Fungal biomass was also quantified at the end of the 16-day incubation period at 540 nm [31].
Determination of Used Engine Oil Biodegradation by the Potential Isolates
The rate and extent of biodegradation of used engine oil by the four potential isolates, Candida tropicalis, Rhodosporidium toruloides, Fusarium oxysporum and Aspergillus clavatus (confirmed using 18S rRNA gene sequencing), were assessed using the gravimetric method of Odu and Isinguzo [32]. Degradation study flasks, as well as controls, were incubated in triplicate at 28 °C and 120 revolutions per minute (rpm) for 16 days. The amount of hydrocarbon remaining after the 16-day incubation was determined by extracting the residual oil with n-hexane (BDH Chemicals, England) in a separating funnel, reading the absorbance at 450 nm, and reading the concentration off a standard curve obtained from n-hexane extracts of used engine oil at different concentrations. Mean results were obtained and expressed as percentage weight loss of used engine oil. The whole process was repeated for diesel and petrol.
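As a schematic illustration of this data-reduction step (our own sketch, not the authors' code; the calibration points and absorbance readings are invented placeholders), the residual oil concentration can be read off a linear standard curve fitted to the calibration extracts, and the percentage degradation computed against the uninoculated control:

```python
import numpy as np

# Hypothetical calibration data: absorbance at 450 nm of n-hexane extracts
# of used engine oil at known concentrations (g/L). Placeholder values.
conc_std = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
abs_std = np.array([0.06, 0.11, 0.23, 0.46, 0.91])

# Fit the standard curve (absorbance = slope * concentration + intercept).
slope, intercept = np.polyfit(conc_std, abs_std, deg=1)

def concentration(absorbance: float) -> float:
    # Invert the standard curve to read a concentration off an A450 reading.
    return (absorbance - intercept) / slope

# Hypothetical readings after 16 days: control (no fungus) vs. test flask.
c_control = concentration(0.87)
c_test = concentration(0.12)

percent_degraded = (c_control - c_test) / c_control * 100
print(f"Biodegradation: {percent_degraded:.1f}%")
```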
Determination of PAH Degradation by the Isolates
Twenty-four-hour pure cultures, as well as a consortium, of the two best potential strains (Candida tropicalis and Aspergillus clavatus) were inoculated into mineral salt broth (100 ml in a 250 ml Erlenmeyer flask) containing 1% (v/v) used engine oil, and incubated at an ambient temperature of 28 °C at 120 rpm for 16 days. A control flask without the organisms was prepared accordingly. After 16 days, the extent of polycyclic aromatic hydrocarbon (PAH) degradation, using undegraded engine oil as the control, was determined by gas chromatography at Springboard Research Laboratory, Awka, Anambra State, Nigeria. The whole process was repeated for petroleum and diesel oil.
The Buck 530 gas chromatograph was equipped with a column oven, an automatic injector, a mass spectrometer (quadrupole, m/z 50 to m/z 400) and an HP 88 capillary column (30.0 m × 0.32 mm, film thickness 0.25 µm), CA, USA. The analytical conditions of the chromatography were as follows: detector temperature 250 °C, injector temperature 220 °C, integrator chart speed 2 cm min⁻¹. The oven temperature was programmed from 70 °C to 280 °C, with a holding time of 2-5 minutes. The carrier gas was helium (99.999%, 5.0-grade purity) at 5 psi, and the injection volume was 1 µL. The chromatograph was attached to an integrator.
Results
A total of eight hydrocarbon-utilizing fungi were isolated from the soil sample contaminated with engine oil. The fungal species identified, based on their cultural and microscopic characteristics, were Candida tropicalis, Rhodosporidium toruloides, Fusarium oxysporum, Aspergillus clavatus, Saccharomyces cerevisiae, Candida albicans, Microsporum gypseum and Trichophyton mentagrophytes (Table 1).
The hydrocarbonoclastic potentials of the selected isolates revealed that Candida tropicalis caused an 86.2% weight loss of UEO in 16 days. This was closely followed by the 85.0% weight loss caused by Aspergillus clavatus, while Rhodosporidium toruloides and Fusarium oxysporum recorded 79.3% and 80.5% weight losses, respectively (Fig. 1). Candida tropicalis and Aspergillus clavatus also caused the higher weight losses of 89.5% and 80.5%, respectively, in diesel oil, while weight losses of 66.8% and 64.2% were observed with the Rhodosporidium and Fusarium species, respectively (Fig. 1). Aspergillus clavatus caused a weight loss of 87.0% in petroleum oil, followed by the 81.8% weight loss caused by Candida tropicalis. Fusarium oxysporum and Rhodosporidium toruloides recorded weight losses of 74.4% and 68.8%, respectively, in petroleum oil (Fig. 1).
The results of the gas chromatographic analysis of the removal of PAHs in used engine oil by the two best potential strains, Candida tropicalis and Aspergillus clavatus, as well as by their mixed (consortium) culture, are presented in Table 3. Most of the PAH components of the used engine oil were completely removed by the single and mixed cultures of the isolates. Aspergillus clavatus and the mixed culture achieved 100% depletion of the PAH components phenanthrene, fluoranthene, pyrene and benzo(k)fluoranthene, while Candida tropicalis recorded 100% depletion of phenanthrene and benzo(k)fluoranthene, 96.76% depletion of fluoranthene and 99.27% depletion of pyrene. However, Candida tropicalis, Aspergillus clavatus and the consortium culture achieved 100%, 90.28% and 96.38% removal of dibenzo(a,h)anthracene, respectively, as well as 94.09%, 71.82% and 87.09% removal of benzo(a)pyrene, respectively (Table 3). Table 4 shows the results of the gas chromatographic analysis of the removal of PAHs in diesel by Candida tropicalis and Aspergillus clavatus as well as by their mixed (consortium) culture. Candida tropicalis achieved 100% removal of the components acenaphthene, acenaphthylene, phenanthrene, 1,2-benzanthracene, chrysene, benzo(k)fluoranthene, anthracene, naphthalene and fluorene, while benzo(a)pyrene and fluoranthene recorded 99.96% and 97.88% removal, respectively, by Candida tropicalis in 16 days. Aspergillus clavatus and the consortium culture achieved 100% removal of acenaphthene, acenaphthylene, phenanthrene, chrysene, benzo(k)fluoranthene, anthracene and naphthalene. Also, Aspergillus clavatus and the consortium recorded 98.71% and 80.75% removal of 1,2-benzanthracene, respectively, 100% and 92.74% removal of chrysene, 78.0% and 75.02% removal of benzo(a)pyrene, 95.12% and 89.94% removal of fluoranthene, and 78.07% and 100% depletion of fluorene in 16 days (Table 4). Table 5 indicates that most of the PAH components found in petroleum were also completely removed by both the single and mixed cultures of the two best potential strains, Candida tropicalis and Aspergillus clavatus. Candida tropicalis, Aspergillus clavatus and the mixed culture achieved 100% depletion of benzo(k)fluoranthene, anthracene and naphthalene in 16 days. Candida tropicalis and Aspergillus clavatus achieved 100% depletion of fluoranthene, while the consortium recorded 87.89% removal of fluoranthene. Candida tropicalis and the consortium achieved 100% removal of fluorene, while Aspergillus clavatus recorded 76.88% removal of fluorene over the 16-day period. Moreover, Candida tropicalis and Aspergillus clavatus recorded only 80.22% and 79.76% removal of benzo(a)pyrene, respectively, while the consortium achieved 99.88% removal of benzo(a)pyrene in 16 days (Table 5).
Discussion
Among the 4 isolates that showed high promise for hydrocarbon bioremediation (Table 2), Candida tropicalis and Aspergillus clavatus displayed the fastest onset and the greatest extent of biodegradation of used engine oil (Fig. 1); they were therefore selected for the PAH degradation studies. The high rate of hydrocarbon degradation by the two fungi could stem from their massive growth and enzyme production responses during their growth phases. This is supported by the reports of Bogan and Lamar [36], which showed that the extracellular ligninolytic enzymes of white-rot fungi are produced in response to their growth phases.
In this study, it was observed that the isolates utilized a comparatively smaller amount of diesel than of petroleum and used engine oil from the media. The higher degradation observed for petroleum and used engine oil compared to diesel may be attributed to the effect of compositional and structural complexity on the biodegradability of petroleum derivatives. Octane fuel has the simplest atomic structure and the fewest double bonds compared to diesel and kerosene, and thus does not resist microbial attack [37,38].
The results of the gas chromatographic analysis of PAH degradation in used engine oil, diesel and petroleum (Tables 3-5), showing that the two isolates Candida tropicalis and Aspergillus clavatus as well as their consortium culture exhibited biodegradation efficiencies above 70% after 16 days of incubation, further confirmed their high degradation potential. The ability of these organisms to oxidize polycyclic aromatic hydrocarbons (PAHs) can be attributed to the non-specific nature of their enzymes, especially the peroxidases, in degrading chemicals. The fact that both the pure and consortium cultures of the isolates degraded PAHs very effectively suggests that the degradation of the aliphatic moieties could be easier and faster than that of the polycyclic aromatic moieties. The implication of these two organisms in hydrocarbon degradation in our results is similar to the findings of April et al. [33]. The inability of either the pure or the consortium cultures of the isolates to achieve 100% depletion of benzo(a)pyrene (BaP) in used engine oil, diesel and petrol in this study (Tables 3-5) could be attributed to the physical and chemical characteristics of this PAH. Numerous genera of microorganisms have been observed to oxidize PAHs [39]. While there is a great diversity of organisms capable of degrading the low-molecular-weight PAHs such as naphthalene, acenaphthene and phenanthrene, relatively few genera have been observed to degrade the high-molecular-weight PAHs, such as BaP [40]. Bishnoi et al. [41] reported that the PAH-adapted fungal strain Phanerochaete chrysosporium, isolated from the soil of a petroleum refinery, has the ability to degrade phenanthrene, anthracene, acenaphthene, fluoranthene and pyrene in sterilized as well as unsterilized soil under optimum conditions.
Conclusions
This study revealed that the isolates Candida tropicalis and Aspergillus clavatus, as well as their consortium culture, have promising potential for the bioremediation of soil polluted with polynuclear aromatic hydrocarbons. They could also be utilized in the bioremediation of soil contaminated with used engine oil, diesel and petroleum.
"Biology",
"Environmental Science"
] |