Enhancing the laser power by stacking multiple dye-doped chiral polymer films.
We demonstrate a method for enhancing the laser efficiency by stacking multiple dye-doped chiral polymer films. No laser emission was observed from a single 8 μm film. By stacking two films together, the laser efficiency is dramatically enhanced, and further increasing the number of stacked films increases the output laser power further. It is also observed that the output laser power in the forward direction is almost the same as that in the backward direction for the two- and three-layered films. In the six-layered film, however, the output laser power is much stronger in the backward direction than in the forward one. This is attributed to the absorption of the laser dyes and the distributed feedback in the chiral polymer films.
Introduction
Mirrorless lasers in dye-doped cholesteric liquid crystals (CLCs) have been extensively studied in recent years owing to their photonic crystal properties and simple fabrication process [1][2][3][4][5][6][7][8][9][10][11][12]. A CLC cell is typically prepared by doping chiral agents into a nematic LC host. Due to the twisting power of the chiral dopants, the LC molecules form a self-organized periodic helical structure. Since the host LC is a highly birefringent medium, the periodic helical structure produces a periodic modulation of the refractive index. Consequently, a one-dimensional (1D) photonic band gap (PBG) is established whose central wavelength is λ = n p, where p is the helical pitch and n the average refractive index. The abrupt change of the optical density of states at the band edges provides the possibility of generating laser emission when laser dyes are doped into the CLCs.
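As a quick numerical illustration of the band-gap relation λ = n p, consider a CLC with an average refractive index of about 1.6 and a pitch of roughly 380 nm; these values are illustrative only and are not taken from the paper.

```python
# Illustrative values only (not reported in the paper): they place the
# band-gap centre near the emission band of a red laser dye such as DCM.
n = 1.6        # average refractive index of the birefringent LC host
p = 380e-9     # helical pitch in metres
lam = n * p    # band-gap centre wavelength
print(f"band-gap centre: {lam * 1e9:.0f} nm")   # ~608 nm
```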
So far, however, the output laser power is still relatively low, while most applications require high output power. Several methods have been developed for enhancing the laser efficiency. For instance, the laser efficiency can be improved by choosing laser dyes and liquid crystals with higher order parameters, [13] adding a mirror or a passive CLC reflector to form a laser cavity, [14] or sandwiching the active CLC layer inside a Fabry-Perot cavity. [15] In this paper, we demonstrate laser power enhancement by stacking multiple dye-doped chiral polymer films. It is known that a cavity laser with a longer optical gain medium can produce higher output power because more photons are available. In a mirrorless laser, by contrast, the optical gain medium, such as a dye-doped chiral polymer film, provides not only photons but also feedback amplification of the optical gain. Therefore, increasing the film thickness within a certain range can enhance the output laser power. However, because of the limited anchoring energy induced by the rubbing layer on the glass substrates, it is very difficult to sustain a uniform planar cholesteric structure in an LC cell thicker than 15 μm. The imperfect planar cholesteric structure would introduce an appreciable amount of light scattering and, consequently, dramatically decrease the lasing efficiency. In order to increase the effective film thickness while maintaining a perfect planar cholesteric structure, we stack multiple 8 μm chiral polymer films together. The output laser power is significantly enhanced as the number of stacked films increases. No laser emission can be observed from a single 8 μm film. By stacking two films together, the laser efficiency is dramatically enhanced; stacking more films increases the laser power further. Moreover, we find that the output laser power in the backward direction is much stronger than that in the forward direction for the sample stacked with six 8-μm films.
Sample preparation
The chiral monomer host mixture was prepared by mixing the reactive mesogen monomer RMM154, the reactive monomer RM82, and the chiral dopant CB15 (all from Merck). Afterwards, we doped 1 wt% of the highly fluorescent laser dye 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM, from Exciton) into the chiral monomer mixture. The whole mixture was thoroughly mixed before being capillary-filled into an empty LC cell (8 μm cell gap) in the isotropic state. The inner surfaces of the glass substrates were first coated with a thin transparent conductive indium-tin-oxide (ITO) electrode and then overcoated with a thin polyimide layer. The substrates were subsequently rubbed in antiparallel directions to produce a ~2-3° pretilt angle. The sample was slowly cooled down to 55 °C to reduce defect formation. A UV light then illuminated the sample for ~1 hr, while the temperature was kept at 55 °C, to polymerize the chiral monomer mixture into a chiral polymer film. We then separated the two glass substrates of the LC cell and peeled the chiral polymer film off one of the substrates. The film was cut into several pieces that were stacked together one by one, very carefully, to avoid surface corrugation.
Results and discussion
To test the laser performance, we used a frequency-doubled, Q-switched, linearly polarized Nd:YAG pulsed laser (λ = 532 nm, from Continuum) producing single 4-ns pulses as the pumping light source. All the measurements were performed at a 1 Hz repetition rate in order to reduce the accumulated thermal effect originating from dye absorption. A linear polarizer and a quarter-wave plate converted the linear polarization into left-handed (LH) circular polarization to avoid reflection from the reflection band of the cholesteric polymer film. A lens with 15 cm focal length focused the incident beam to a spot of ~160 μm diameter at the sample. The output laser emission from the sample was collected by a lens into a fiber-optic-based universal serial bus (USB) spectrometer (0.4 nm resolution; USB HR2000, Ocean Optics). The setup is illustrated in Fig. 1.

First, we measured the emission intensity in the backward direction (the propagation direction opposite to that of the pump laser) of the sample, as shown in Fig. 1. For a single 8 μm dye-doped chiral polymer film, only fluorescence emission was observed even when the pumping energy was increased to 140 μJ. However, by stacking two films together we detected laser emission at a pumping energy of around 11 μJ, as Fig. 2 shows. Stacking one more film on top of the two combined films decreases the threshold to 3.9 μJ and doubles the lasing efficiency. By stacking two 3-layered films together, the threshold pumping energy is further reduced to 2.5 μJ while the output laser power is tripled compared with that of the 3-layered film, as shown in Fig. 3. In Fig. 3, the laser emission was measured in the backward direction (as shown in Fig. 1) of the samples. When we measured the laser emission intensity in the forward direction of the samples as a function of pump energy, we found that the output laser power in the forward direction is not always equal to that in the backward direction. In our experiment, the forward output laser power of the two-layered and three-layered films is almost the same as the backward output. For the six-layered film, however, the output laser power in the forward direction is much lower than that in the backward direction, as Fig. 4 shows.

The physical mechanism is discussed as follows. It is known that laser emission can be generated through feedback amplification only when the optical gain inside the medium is larger than the optical losses. For chiral polymer films, the feedback amplification is provided by the internal distributed feedback of the photonic band gap. The feedback efficiency is determined by the bandwidth of the band gap and the sample thickness. A thicker sample with a wider bandwidth can more efficiently amplify the optical gain traveling in the sample to overcome the losses in the medium. In our experiment, all the samples exhibit the same photonic bandwidth of about 70 nm. In a single 8 μm film, the optical gain amplification provided by the feedback is not sufficient to overcome the optical losses; therefore, we cannot observe laser emission from a single 8 μm film. By stacking two 8 μm films together, the thickness is doubled. The feedback efficiency is dramatically enhanced due to the doubled internal distributed feedback, which provides sufficient optical gain amplification to overcome the associated optical losses. Therefore, laser emission occurs.
By stacking more films together, the optical gain can be amplified more efficiently owing to the longer internal distributed feedback, and the output laser power is therefore increased. If the sample had no absorption at the pumping laser wavelength, the output laser power from both sides of the sample would be equal. However, the laser dye employed in our experiments exhibits a relatively large absorption at λ = 532 nm. The pump intensity is gradually attenuated by the absorption of the laser dyes as the pump light propagates through the sample. If the sample is thin and the pump intensity is high enough, this attenuation can be ignored and does not noticeably affect the laser output, so the output laser powers from both sides of the sample are almost the same. When the pump laser passes through a thicker sample, however, most of the pump energy can be absorbed by the doped dyes. As a result, the pump intensity at the rear surface of the sample, where the pump laser exits, is much weaker than that at the front surface, where the pump laser enters. Consequently, far fewer dye molecules are pumped to the excited state near the rear surface than near the front surface. Since the internal distributed feedback feeds the photons back locally, the region with more photons attains a higher amplified optical gain and consequently a higher output laser power. Thus, the laser power in the backward direction is much stronger than that in the forward direction.
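A minimal sketch of this attenuation argument is given below using the Beer-Lambert law; the absorption coefficient is a hypothetical value chosen only to illustrate the trend, since the paper does not report one.

```python
import numpy as np

alpha = 4.5e4      # hypothetical absorption coefficient of the dye-doped film at 532 nm, 1/m
d_film = 8e-6      # thickness of a single chiral polymer film, m

for n_films in (1, 2, 3, 6):
    # fraction of the pump intensity that reaches the rear surface of the stack
    transmitted = np.exp(-alpha * d_film * n_films)
    print(f"{n_films} film(s): pump at rear surface = {transmitted:.2f} of the input")
```

With these illustrative numbers the two- and three-layer stacks still receive an appreciable pump intensity at the rear surface, whereas the six-layer stack is pumped almost entirely near the front (backward-emitting) side, consistent with the asymmetry described above.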
Conclusion
We have demonstrated a method to enhance laser efficiency by stacking multiple dye-doped chiral polymer films together. No laser emission was observed from a single 8 μm film. By stacking two films together, the laser efficiency is dramatically enhanced, and further increasing the number of stacked films increases the output laser power. It is also observed that the output laser power in the forward direction is almost the same as that in the backward direction for the two- and three-layered films. In the six-layered film, however, the laser power in the backward direction is much stronger than that in the forward direction, which is attributed to the absorption of the embedded laser dyes and the internal distributed feedback of the chiral polymer films.
"Materials Science",
"Physics"
] |
Malonate as a ROS product is associated with pyruvate carboxylase activity in acute myeloid leukaemia cells
Background: The role of anaplerotic nutrient entry into the Krebs cycle via pyruvate carboxylase has been the subject of increased scrutiny, and in particular whether this is dysregulated in cancer. Here, we use a tracer-based NMR analysis involving high-resolution 1H-13C-HSQC spectra to assess site-specific label incorporation into a range of metabolite pools, including malate, aspartate and glutamate, in the acute myeloid leukaemia cell line K562. We also determine how this is affected following treatment with the redeployed drug combination of the lipid-regulating drug bezafibrate and medroxyprogesterone (BaP). Results: Using the tracer-based approach, we assessed the contribution of pyruvate carboxylase (PC) vs. pyruvate dehydrogenase (PDH) activity in the derivation of Krebs cycle intermediates. Our data show that PC activity is indeed high in K562 cells. We also demonstrate a branched entry into the Krebs cycle of K562 cells, with one branch running counterclockwise using PC-derived oxaloacetate and the other clockwise from the PDH activity. Finally, we show that the PC activity of K562 cells exclusively fuels the ROS-induced decarboxylation of oxaloacetate to malonate in response to BaP treatment, resulting in further Krebs cycle disruption via depletion of oxaloacetate and malonate-mediated inhibition of succinate dehydrogenase (SDH), which leads to a twofold reduction of fumarate. Conclusions: This study extends the interest in PC activity in solid cancers to include leukaemias and further demonstrates the value of tracer-based NMR approaches in generating a more accurate picture of the flow of carbons and metabolites within the increasingly inappropriately named Krebs cycle. Moreover, our studies indicate that the PC activity in cancer cells can be exploited as an Achilles heel by using treatments, such as BaP, that elevate ROS production. Electronic supplementary material: The online version of this article (doi:10.1186/s40170-016-0155-7) contains supplementary material, which is available to authorized users.
Background
We have previously demonstrated the individual and combined anti-proliferative and pro-differentiating actions of a drug combination termed bezafibrate and medroxyprogesterone (BaP), consisting of the lipid-regulating drug bezafibrate (BEZ) and the steroid contraceptive medroxyprogesterone (MPA), against primary acute myeloid leukaemia (AML) cells and cell lines [1][2][3][4], Burkitt's lymphoma (BL) cell lines [5] and primary chronic lymphocytic leukaemia cells [6]. In addition, phase II trials of BaP in both AML and BL have demonstrated in vivo anti-tumour activity in the absence of toxicity [7,8] and, in the case of AML, generated significant haematological responses in some patients [7].

We previously demonstrated that BaP treatment of KG1α, K562 and HL60 AML cell lines was associated with excess generation of reactive oxygen species (ROS), inducing a range of metabolic changes involving pathways related to the Krebs cycle, specifically increased succinate/fumarate ratios. The importance of ROS for novel therapies has also been recognised by others [9][10][11], who linked ROS to iron homeostasis [10] and AKT phosphorylation [11]. Our observations also implicated direct ROS-mediated chemical conversion of metabolites, including the conversion of α-ketoglutarate (α-KG) into succinate and of oxaloacetate to malonate [4,12,13]. Malonate is known to inhibit succinate dehydrogenase [14], thus interrupting the conversion of succinate to fumarate.
Here, we present a detailed tracer-based analysis of metabolism in K562 AML cells with and without exposure to BaP to decipher the origin of malonate and the relative contributions of pyruvate carboxylase (PC) and pyruvate dehydrogenase (PDH) activity to the entry of nutrients into the Krebs cycle. In order to determine site-specific label incorporations, we used NMR 1H-13C HSQC spectra. In such spectra, every signal arises from a CH group in a metabolite. When acquired with sufficiently high resolution in the 13C dimension, one can resolve couplings arising from adjacent 13C atoms, providing crucial information about site-specific label incorporation. Previously, we have used this approach to study the distribution of 13C labels arising from [1,2-13C]glucose and from [3-13C]glutamine in K562 cells [15]. Here, we have used this analysis to trace the origin of malonate. Our analysis also sheds new light on pyruvate carboxylase activity in cancer cells, an issue that has been raised in various tumours [16][17][18].
Sample preparation
K562 cells were cultured and polar cell extracts were prepared as described before [4]. For tracer-based metabolic analysis, standard glucose- or glutamine-free RPMI-1640 medium (Gibco) was used and supplemented with 2 g/l [1,2-13C]glucose or 300 mg/l [3-13C]glutamine (Isotec, Sigma), respectively. 5 × 10⁷ exponentially growing K562 cells per control or treatment were pelleted by centrifugation at 8000 g for 5 min, resuspended in the relevant media with BaP or solvent controls, and incubated for 24 or 3 h at 37°C and 5% CO2 in a humidified incubator. For BaP treatment, bezafibrate 0.5 mM and medroxyprogesterone acetate 5 μM (Sigma), or the equivalent concentrations of DMSO and ethanol solvent control, were added to the media at T = 0 h.

For 3-h labelling of 24-h drug treatments, the media was exchanged for the last 3 h with media supplemented with labelled glucose or glutamine plus BaP or solvent control. All samples were dissolved in 100 mM phosphate buffer with 10% D2O and 500 μM trimethylsilylpropanoic acid (TMSP) added.
NMR data acquisition
All spectra were acquired at 298 K on a Bruker 600 MHz spectrometer with a TCI 1.7 mm z-PFG cryogenic probe. 1H-13C HSQC spectra were acquired using the Bruker sequence hsqcetgpsp.2 with added gradients during echoes, using 4096 points in the directly observed dimension for a sweep width of 13 ppm. A total of 4096 increments, two scans and an interscan delay of 1.5 s were used. The 13C carrier was set to 80 ppm and a spectral width of 159.0 ppm was used.
Processing of NMR spectra
All one-dimensional spectra were processed using NMRLab [19] in MATLAB and further analysed using MetaboLab [20] as described before [4]. In HSQC spectra, peaks were picked and assigned in a semi-automated manner using MetaboLab. To calculate percentage label incorporations, the cross peaks in labelled spectra and reference spectra were compared. The 13C isotope constitutes about 1% of naturally occurring carbon; therefore, for more concentrated metabolites, cross peaks could also be seen in the reference spectra. Peak intensities in control and reference spectra were used to calculate the percentage incorporation of label into particular carbons of metabolites as follows: the percentage incorporation of 13C into peak X of metabolite Y grown in 13C-labelled media is % = 100 N / (D × S), where N is the intensity of the signal of the metabolite in the labelled-media spectrum, D is the intensity of the same signal in the control spectrum, and S = mean(ScaleFactor), with ScaleFactor(i) = Nr/Dr, where Nr and Dr are the intensities of peak i of the reference metabolite in the numerator and denominator spectra, respectively.
A reference metabolite was chosen as one of a group of metabolites that did not change intensity significantly between BaP and control spectra or between spectra for enriched and natural abundance media. Results were similar using myo-inositol, valine, leucine or isoleucine as the reference metabolite. Labelling was not considered significantly changed unless BaP treatment changed percentage label incorporation by a factor of 2.
For signals in spectra arising from [1,2-13C]glucose-labelled cells that showed CC-couplings, the peak intensity was taken as the sum of the intensities of the split peaks. In some spectra, peaks could not be seen. In such cases, the peak intensity was set to the estimated noise level in that spectrum, evaluated by searching for the maximum of the absolute value of the intensity in a region devoid of real signals.

When calculating percentage incorporations of 13C, it is the natural-abundance reference spectrum peak intensity that may be missing. Substituting the dummy peak intensity tends to cause the percentage incorporation of 13C to be (conservatively) underestimated.
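A minimal sketch of this calculation, under the assumption that peak intensities have already been extracted, could look as follows; the function and variable names are ours, not from MetaboLab.

```python
import numpy as np

def percent_incorporation(N, D, ref_pairs, noise=None):
    """Percentage 13C incorporation for one cross peak, following the formula above.

    N         -- peak intensity in the labelled-media spectrum
    D         -- intensity of the same peak in the natural-abundance reference spectrum
    ref_pairs -- list of (Nr, Dr) intensities for the reference-metabolite peaks
    noise     -- estimated noise level, substituted when the reference peak is missing
    """
    if not D and noise is not None:
        D = noise                                   # conservative underestimate
    S = np.mean([nr / dr for nr, dr in ref_pairs])  # mean scale factor between spectra
    return 100.0 * N / (D * S)
```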
Malonate spiking
For spiking, a sample from a BaP-treated HL60 or K562 cell extract grown on unlabelled media was split into two. To the first sample, metabolomics buffer was added. To the second sample, an equal amount of metabolomics buffer containing 1 mM malonic acid was added. 1H 1D and 1H-13C HSQC spectra were acquired. Spectra were aligned on TMSP and lactate, respectively, for overlay of the 1D and 2D spectra.
Results
We employed [1-13C]glucose, [1,2-13C]glucose and [3-13C]glutamine nutrient sources to decipher the carbon flow in K562 cells and its changes in response to BaP treatment. Overall, we observe the largest amount of label incorporation into ribose moieties and lactate, as illustrated in Fig. 1 (see [15] and Additional file 1: Figure S1 for details on the label distributions).

NMR spectra indicate that large amounts of aspartate and malate are PC derived

As shown in Fig. 2, significant differences are observed between cells processing [1,2-13C]glucose for 3 vs. 24 h for the resonances of malate and aspartate. These differences manifest predominantly as a change in the NMR coupling constants. The size of these coupling constants depends on the nature of the adjacent carbon atom: a carboxylic acid group yields a much larger coupling constant than a CH2 group. The coupling constant for the 13C1-13C2 moiety in aspartate or malate is about 50-60 Hz, whereas the coupling constant for a 13C2-13C3 moiety is about 35-40 Hz.
For the 24-h labelling period, Fig. 2 shows apparent coupling constants of 53 Hz for C2 of malate and aspartate. This large apparent coupling constant shows that the coupling of C2 with the adjacent C1 carboxylic acid group predominates; this 13C1-13C2 coupling pattern is the one expected for the PDH-derived product.

For the short 3-h labelling period, smaller apparent coupling constants of 47 and 41 Hz are observed for C2 (Fig. 2 and Additional file 1: Figure S1), indicating that the coupling to the adjacent methylene C3 dominates. The presence of the 13C2-13C3 moiety at shorter labelling periods shows that the product arising from the PC activity dominates for shorter labelling periods.
It should be noted that the observed signal splittings do not represent the precise scalar coupling constants in a mixture of labelled compounds. Nevertheless, the observed change provides a clear indication of higher amounts of the PC product at shorter labelling periods in both malate and aspartate.
Further evidence for PC activity in subspectra of uracil

In de novo pyrimidine synthesis, uracil is formed from aspartate via carbamoyl-aspartate, orotate and dihydroorotate. Therefore, labelling patterns in aspartate should be reflected in the pyrimidine ring signals. Additional file 2: Figure S2 shows the expected label incorporations for uracil. When the uracil base in UDP is synthesised from aspartate, the destinations of the 13C labels are as follows: C1, C2 and C3 of aspartate become C10, C11 and C12 of UDP, respectively, while C4 is lost (see Additional file 2). In HSQC spectra, only C11 and C12 can be directly observed, as C10 does not have an attached proton.
All spectra showed both the singlet and doublet signals at C12 (Fig. 3). However, the intensity of the doublet relative to the singlet changed by a factor of three between the 3 and 24 h data, again indicating a shift from PC- to PDH-mediated labelling with time. The picture that emerges from several metabolites is that there is parallel activity of both PC and PDH, but the PC product is primarily channelled into products directly linked to oxaloacetate, yielding the high amounts of PC product observed in these metabolites at short labelling times. Only at longer labelling periods is the PDH product observed in the left branch of the Krebs cycle.
BaP-induced malonate is derived from downstream PC activity in K562 cells
We have shown that malonate can be formed from oxaloacetate by chemical conversion under the influence of hydrogen peroxide and suggested in our previous study that malonate accumulation in response to BaP treatment was driven by treatment-induced elevation of ROS acting upon oxaloacetate [4]. Samples were spiked with malonate to confirm our assignment of the malonate methylene 1H/13C resonances in HSQC spectra (see Additional files 3 and 4). Subsequently, in this study, our tracer-based approach has allowed us to consider the origins of the observed malonate.
As shown in Fig. 4a, ROS-derived malonate arising from oxaloacetate that had been formed directly from pyruvate by PC activity, using [1,2-13C]glucose as the nutrient source, would be expected to be labelled in positions C1 and C2. In the alternative situation, in which PDH activity converts pyruvate into acetyl-CoA upon entry into the Krebs cycle, ROS-mediated conversion of the resulting oxaloacetate into malonate would be expected to give rise to two products, with label in the C1 position or in the C1 and C2 positions in a 1:1 ratio, because the Krebs cycle passes through the symmetrical intermediates succinate and fumarate (see also Fig. 1). Figure 4b shows a representative 1D slice from HSQC spectra of 24 h BaP-treated [1,2-13C]glucose-labelled cells and clearly demonstrates drug-induced generation of a strong malonate signal. In the corresponding HSQC spectra of 24 h BaP-treated [1,2-13C]glucose-labelled cells, the malonate CH2 signal appears as a doublet (Fig. 4c, shown as red peaks) with a splitting of 58 Hz, indicative of a labelled CH2 group coupled to a carboxylic acid carbon. This is a clear indication of a C1,C2 (or C2,C3)-labelled malonate with label in only one of the two carboxylic acid groups. Malonate labelled in all three positions, as might arise from multiple passages through the Krebs cycle, would be expected to show a triplet at C2 in the 13C dimension of the HSQC spectra, as at least some fraction would be labelled in both COO groups (AX2 coupling pattern). Therefore, the absence of a central signal in the HSQC is highly indicative that malonate is derived from upstream PC activity. The absence of malonate derived from multiple Krebs cycle turns and PDH activity is also supported by the fact that malonate arising from 24 h labelling of BaP-treated cells with [1-13C]glucose showed only a singlet (Fig. 4c, shown as a blue peak).

Fig. 4 (a) Expected labelling patterns in malonate derived from oxaloacetate by decarboxylation and the expected signal patterns in directly observed 13C spectra. (b) Slices from HSQC spectra for malonate from [1,2-13C]glucose-derived samples with (red) and without (blue) BaP treatment. (c) Peak patterns observed for malonate in 1H-13C HSQC spectra; red for [1,2-13C]glucose-labelled cells, blue for [1-13C]glucose-labelled cells. (d) 13C-NMR spectra of the carboxylic acid region showing the spectrum arising from [1,2-13C]glucose with BaP in blue and the reference spectrum of unlabelled malonate in red. The lack of a centre peak shows that the labelled COO is always adjacent to a labelled CH2. The asterisk denotes a non-malonate-derived carbon atom.
In order to further prove that drug-induced malonate is derived downstream of PC activity, we acquired 13C 1D spectra in which the COO resonances of malonate could be observed directly (Fig. 4d). A reference spectrum of 10 mM malonic acid in the same (pH 7) buffer as used for the cell extracts confirmed the frequency of the COO resonance (Fig. 4d).
PC-mediated labelling is expected to yield a doublet at C1 (arising from [1,2-13C]malonate), whereas PDH-mediated labelling is expected to give a 50:50 mixture of a doublet arising from [1,2-13C]malonate and a singlet arising from [1-13C]malonate (Fig. 4a). Although noisy even after 24 h of acquisition, the spectra showed a clear doublet with no residual signal in the middle (Fig. 4d), confirming that the PC-derived labelling product is dominant. From this, we conclude that malonate is indeed derived from ROS-mediated conversion of oxaloacetate originating entirely, or at least predominantly, from PC activity and shows no contribution from a possible PDH product even after a 24 h labelling period. Control experiments using [3-13C]glutamine as a carbon source showed only minimal label incorporation into malonate. The malonate produced from [1,2-13C]glucose was shown to be predominantly labelled via PC. Together, these two facts strongly suggest that malonate is produced predominantly from glucose via glycolysis and PC-mediated entry of pyruvate into the TCA cycle.
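The expected signal patterns can be summarised in a small sketch; the route names and dictionary layout are ours, and the fractions simply restate the 1:1 argument above.

```python
# Expected malonate isotopomers from [1,2-13C]glucose, as laid out in Fig. 4a:
#   PC route : oxaloacetate formed directly from pyruvate -> only [1,2-13C]malonate
#   PDH route: label passes through symmetric succinate/fumarate
#              -> 50:50 mixture of [1,2-13C]- and [1-13C]malonate
expected = {
    "PC":  {"[1,2-13C]malonate": 1.0},
    "PDH": {"[1,2-13C]malonate": 0.5, "[1-13C]malonate": 0.5},
}

def coo_multiplet(isotopomers):
    """Qualitative pattern of the malonate COO resonance in a directly observed
    13C spectrum: a labelled COO next to a labelled CH2 gives a doublet, a
    labelled COO next to an unlabelled CH2 gives a singlet."""
    parts = []
    if isotopomers.get("[1,2-13C]malonate", 0) > 0:
        parts.append("doublet")
    if isotopomers.get("[1-13C]malonate", 0) > 0:
        parts.append("singlet")
    return " + ".join(parts)

for route, fractions in expected.items():
    print(f"{route}: COO region -> {coo_multiplet(fractions)}")
```

The doublet-only pattern observed in Fig. 4d corresponds to the PC case.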
Evidence of parallel PDH activity

Table 1 compares 1J(CC) coupling constants and multiplet intensity patterns seen in the 3 and 24 h labelled datasets. In both the 3 and 24 h data sets, labelling at glutamate's C4 is much greater than the labelling at C2, and the splitting seen at C4 indicates coupling to C5. This strongly suggests that PDH-mediated labelling is dominant for glutamate at both time points. The citrate C2 is split by a large coupling to C1, again strongly suggesting that PDH-mediated labelling is dominant. These labelling patterns indicate a clear dominance of PDH products in the right branch of the Krebs cycle, leading from citrate to glutamate.
Computational multiplet analysis
Slices from the 1H-13C HSQC spectra were also quantitatively analysed by simulating 13C-NMR spectra for a mixture of different isotopomers using the pyGamma software [21] from within the NMRLab software [19]. For glutamate, the multiplet analysis confirmed that PDH-mediated labelling was dominant at both 3 and 24 h, irrespective of BaP treatment. For aspartate, the multiplet analysis confirmed that there was a shift from PC-mediated labelling at 3 h towards PDH-mediated labelling at 24 h. The simulated spectra are shown in Additional file 5: Figure S4, and Table 2 confirms the qualitative results for aspartate and glutamate.
Discussion
This study sheds further light on the action of ROS in AML cell lines. The importance of ROS for the treatment of leukaemic cancers has previously been highlighted by us [4,22] and by others [9][10][11]. Our previous study also showed that high levels of ROS are associated with chemical conversion of oxaloacetate into malonate, and this phenomenon is common to a variety of AML cell lines [4].
A number of studies, including some very recent reports, have investigated PC activity in relation to cancer. DeBerardinis and coworkers showed that PC takes over as the alternative anaplerotic mechanism when glutaminolysis is silenced [23]. This is seen in the labelling pattern obtained for glutamate C2 in tracer-based metabolic analyses using glioblastoma cell lines. PC has been shown to be enhanced in human non-small cell lung cancers in xenograft mouse models [24]. It has also been shown that PC activity is high in human lung tumours [16,17,25] and critical for cell proliferation and colony formation in human non-small cell lung cancer cells [18]. Likewise, Phannasil et al. showed that PC was upregulated in breast cancer tissues, that PC expression was higher in cell lines with greater metastatic potential, and that proliferation, migration and in vitro invasion ability are PC dependent [26]. Recent work shows that pyruvate carboxylation diverts glucose-derived carbons into aspartate biosynthesis in succinate dehydrogenase (SDH)-ablated mouse kidney cells [27]. In analogy to non-small cell lung cancers, we also observe high PC activity in the AML K562 cell line.

The patterns of metabolite labelling we observed and the kinetics of their changes with time lead us to conclude that the Krebs cycle is disrupted in K562 cells. We have clearly demonstrated the branched uptake of pyruvate into the cycle via 'left-hand' PC-mediated and 'right-hand' PDH-mediated entry. The PC-mediated entry was readily observed at shorter labelling periods (3 h). At this time, the conversion of PDH products in the right-hand branch of the Krebs cycle into malate and aspartate was low but became more evident at 24 h.

However, continued PC-mediated entry into the Krebs cycle even at longer flux periods was confirmed by the origin of malonate following BaP treatment. Our earlier study identified malonate accumulation in response to BaP. However, [1-13C]glucose and [1,2-13C]glucose tracer labelling approaches have demonstrated that this malonate unexpectedly originates almost exclusively from PC-derived oxaloacetate.

The generation of malonate in response to BaP treatment, in turn, appears to drain the pool of oxaloacetate being formed from PC activity. At the same time, the accumulated malonate, which is known to block SDH activity, appears to further disrupt the Krebs cycle, as evidenced by a twofold reduction in the fumarate/succinate ratio following BaP treatment.
Importantly, we did not observe any visible contribution to the malonate pool from any PDH-derived products after one or several complete passages through the Krebs cycle providing remarkable evidence of direct conversion of oxaloacetate to malonate at the site of its formation by PC.
Whether the mechanisms described here require specific niche conditions, such as those in peripheral blood, cannot be answered from the existing study. Preclinical studies of BaP anti-cancer activities were demonstrated in non-hypoxic cultures [8] and translated to clinical efficacy in vivo including haematological responses as well as reduction in tumour load [7,8]. As malonate formation under high ROS has been observed in several AML cell lines and can be reproduced by treatment of cell extracts in vitro [4], we suggest that the role of malonate may be common to any cell when ROS is sufficiently high.
Conclusions
In a wider context, this study indicates that in the case of AML cells, malonate represents a marker of increased ROS, an observation that needs investigation in other cancer models. This is also important for the wider application of our findings to cancer treatments. As discussed above, there is a growing number of observations indicating that PC activity underpins the neoplastic characteristics of several cancers. Using BaP here as an example, we have shown that the PC activity of cancer cells can be exploited, as an Achilles heel, by fuelling ROS-generated production of malonate. On a broader perspective in cancer, this in turn has implications for the development of malonate derivatives as potential cancer therapeutics.

Additional file 3: Figure S3A. Malonate spiked sample. An unlabelled cell extract was split into two; to one sample, buffer containing malonic acid was added, and to the other sample, an equal volume of buffer was added. Regions from the resulting 1H NMR spectra for the original and spiked samples are overlaid in blue and red, respectively. (PDF 1.03 mb)

Additional file 4: Figure S3B. Malonate spiked sample. An unlabelled cell extract was split into two, and to one sample, buffer containing malonic acid was added and the pH adjusted to 7, and to the other sample, an equal volume of buffer was added. The regions of the resulting HSQC spectra containing the malonate and lactate resonances for the original and spiked samples are overlaid in blue and red, respectively. Lactate is shown as a sensitivity reference. (PDF 1.03 mb)

Additional file 5: Figure S4. Simulation of spectral multiplets arising from 1H-13C HSQC spectra for aspartate and glutamate. (PDF 383 kb)

Abbreviations: AML, acute myeloid leukaemia; BaP, bezafibrate and medroxyprogesterone; BL, Burkitt's lymphoma; HSQC, heteronuclear single quantum coherence; PC, pyruvate carboxylase; PDH, pyruvate dehydrogenase; ROS, reactive oxygen species; SDH, succinate dehydrogenase
"Biology",
"Chemistry",
"Medicine"
] |
Identifying phenotype-associated subpopulations through LP_SGL
Abstract Single-cell RNA sequencing (scRNA-seq) enables the resolution of cellular heterogeneity in diseases and facilitates the identification of novel cell types and subtypes. However, the grouping effects caused by cell–cell interactions are often overlooked in the development of tools for identifying subpopulations. We proposed LP_SGL which incorporates cell group structure to identify phenotype-associated subpopulations by integrating scRNA-seq, bulk expression and bulk phenotype data. Cell groups from scRNA-seq data were obtained by the Leiden algorithm, which facilitates the identification of subpopulations and improves model robustness. LP_SGL identified a higher percentage of cancer cells, T cells and tumor-associated cells than Scissor and scAB on lung adenocarcinoma diagnosis, melanoma drug response and liver cancer survival datasets, respectively. Biological analysis on three original datasets and four independent external validation sets demonstrated that the signaling genes of this cell subset can predict cancer, immunotherapy and survival.
INTRODUCTION
Human tumors are complex ecosystems composed of multiple cell types [1]. Fortunately, the increasing availability of omics data has provided important support for unraveling the complex features of tumors [2,3]. Bulk data represent the average measurement of the entire tissue, while single-cell RNA sequencing (scRNA-seq) offers advantages in identifying cell types and therapeutic targets by revealing intratumoral heterogeneity [1,4,5]. Cell types are typically annotated by marker genes [6], but determining the role of specific cells in driving sample phenotypes remains a challenge. Although scRNA-seq data can provide high-resolution cell type information, it frequently lacks adequate sample phenotypes and clinical information due to its high cost [1]. Conversely, publicly available databases such as TCGA [7] contain a large amount of bulk data with sample phenotypes and clinical information.
Integrating bulk and scRNA-seq data effectively leverages the benefits of both phenotype and single-cell information simultaneously. Using scRNA-seq data, significant genes were selected as features to build a predictive breast cancer prognosis model with bulk data [8]. To identify subpopulations associated with sample phenotype, Scissor was developed with a sparse regression model [9]. In addition, scAB was developed to detect clinically significant multiresolution cell states using a knowledge- and graph-guided matrix factorization method [10]. As biological processes depend on complex interactions among different cells, we contend that incorporating cell group structure into the model will facilitate the identification of subpopulations associated with the phenotype. The implementation of Scissor and scAB relies on a correlation matrix, which comprises Pearson correlation coefficients of shared genes from bulk and scRNA-seq data. The screening of differentially expressed genes (DEGs) may potentially influence the performance of these methods. Thus, integrating the cell group structure into the model is likely to bolster its robustness.
Feature grouping has been considered in previous studies. The group lasso (GL) method was introduced to select features at the group level while performing regression [11]. To achieve intragroup sparsity, the sparse group lasso (SGL) was formulated for applications in linear regression, logistic regression and Cox regression [12]. A fundamental requirement for successfully applying SGL to bioinformatics is to group features beforehand. Although weighted gene co-expression network analysis (WGCNA) has been successfully applied to gene grouping of cancer bulk data [13,14], it is not readily applicable to scRNA-seq data due to the large number of genes and cells. Therefore, identifying biologically meaningful group structures for scRNA-seq data is a challenging problem. Fortunately, community clustering algorithms such as Louvain [15] and Leiden [16] present promising avenues to solve this problem.
Inspired by the similarity between community connectivity and cell-cell interactions, we considered the cell communities obtained by the Leiden algorithm on scRNA-seq data as cell groups. We then proposed LP_SGL, which incorporates cell group structure to identify phenotype-associated subpopulations by integrating scRNA-seq, bulk expression and bulk phenotype data. The experimental results showed that LP_SGL outperformed Scissor and scAB on datasets related to lung adenocarcinoma (LUAD) diagnosis, melanoma drug response and liver cancer survival. The robustness of the three methods was tested on seven datasets, including six incomplete datasets obtained under different threshold conditions. The subpopulation identification performance of LP_SGL remained almost unchanged, while the latter two methods showed significant fluctuations. Furthermore, the biological analysis confirmed the effectiveness of the proposed method.
The structure of LP_SGL
LP_SGL is a specialized SGL [12] model that integrates scRNA-seq, bulk expression and bulk phenotype data. The model calculates the Pearson correlation coefficients between samples and cells over their shared genes, thereby integrating scRNA-seq and bulk expression data into a correlation matrix. The letter 'L' indicates the use of the Leiden algorithm to obtain the cellular community structure from the scRNA-seq data. The letter 'P' represents the use of phenotype information to construct sample labels. The LP_SGL workflow is presented in Figure 1.
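A minimal sketch of the correlation-matrix step, assuming the bulk and single-cell matrices have already been restricted to the same ordered set of shared genes (variable names are ours):

```python
import numpy as np

def sample_cell_correlation(bulk, sc_expr):
    """Pearson correlation between every bulk sample and every cell.

    bulk    -- (n_samples, n_shared_genes) expression matrix
    sc_expr -- (n_cells,   n_shared_genes) expression matrix, same gene order
    returns -- (n_samples, n_cells) correlation matrix used as input to LP_SGL
    """
    zb = (bulk - bulk.mean(axis=1, keepdims=True)) / bulk.std(axis=1, keepdims=True)
    zc = (sc_expr - sc_expr.mean(axis=1, keepdims=True)) / sc_expr.std(axis=1, keepdims=True)
    return zb @ zc.T / bulk.shape[1]
```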
The Leiden algorithm [16] partitions nodes in a graph based on their similarity, which is analogous to each cell group representing a collection of cells with similar characteristics or functions. Therefore, it is reasonable to consider the cell communities obtained by the Leiden algorithm on scRNA-seq data as cell groups. Before executing the Leiden algorithm, the shared nearest neighbor graph was first constructed. Then, cells were divided into communities by maximizing the following modularity score:

Q = (1/2m) Σ_{ij} [ A_{ij} − γ k_i k_j / (2m) ] δ(c_i, c_j),

where m stands for the total number of edges in the graph, A_{ij} represents the weight of the edge between cells i and j, γ > 0 is a resolution parameter, k_i and k_j are the degrees of cells i and j, respectively, c_i denotes the community to which cell i is assigned, and the δ function is 1 if c_i = c_j and 0 otherwise. The Leiden algorithm uses an iterative approach to improve the initial partition by moving cells between communities to maximize the modularity score. This process continues until no further improvement is achievable. The algorithm was implemented through the R package 'leidenAlg'.
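The modularity score being maximised can be written down directly; the small helper below (our own naming, operating on a dense adjacency matrix) is only meant to make the formula concrete, not to reproduce the 'leidenAlg' implementation.

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Resolution-scaled modularity Q of a partition of a weighted graph.

    A      -- (n_cells, n_cells) symmetric adjacency/weight matrix
    labels -- community assignment c_i for every cell (integer array)
    gamma  -- resolution parameter
    """
    m = A.sum() / 2.0                                      # total edge weight
    k = A.sum(axis=1)                                      # node degrees k_i
    same_community = labels[:, None] == labels[None, :]    # delta(c_i, c_j)
    contrib = (A - gamma * np.outer(k, k) / (2.0 * m)) * same_community
    return contrib.sum() / (2.0 * m)
```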
Let s be the number of the obtained cell groups, and p_l be the number of cells in the l-th group. Let x_i be the i-th row vector of the correlation matrix, and x_i^(l) be its subvector corresponding to the l-th group. LP_SGL can be described as

min_β  l(β) + (1 − α) λ Σ_{l=1..s} √(p_l) ||β^(l)||₂ + α λ ||β||₁,

where l(β) is a loss function that depends on the phenotype information, n represents the number of samples, λ > 0 and 0 ≤ α ≤ 1 are regularization parameters, β is the regression coefficient vector and β^(l) is its subvector corresponding to the l-th group. If the phenotype information on cancer diagnosis (or treatment response) is utilized, then the sample label y_i is encoded as 1 or 0, and the negative log-likelihood function is adopted:

l(β) = −(1/n) Σ_{i=1..n} [ y_i x_i β − log(1 + exp(x_i β)) ].

If the phenotype information on survival is utilized, then the following loss function is adopted:

l(β) = −(1/n) Σ_{i∈D} [ x_i β − log Σ_{j∈R_i} exp(x_j β) ],

where D is the failure index set of samples determined by the occurrence of events, and R_i is the index set of samples with survival time longer than that of the i-th sample. The β in the above problem can be solved through the R package 'SGL'. The regression coefficient reflects the cell's impact on the phenotype, with positive and negative coefficients indicating associations with the higher- and lower-value-encoded phenotypes, respectively. In cases where the phenotype represents survival information, positive coefficients correspond to cells that are consistently associated with worse survival outcomes. To simplify, we denote cells as LP_SGL+ cells (positive coefficients), LP_SGL- cells (negative coefficients) and background cells (coefficients equal to 0).
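To make the objective concrete, here is a small numpy sketch of the penalised logistic loss for the classification case; it mirrors the formula above but is not the solver used by the 'SGL' R package.

```python
import numpy as np

def lp_sgl_objective(beta, X, y, groups, lam, alpha):
    """Sparse-group-lasso objective of LP_SGL for a binary phenotype.

    X      -- (n_samples, n_cells) correlation matrix
    y      -- 0/1 phenotype labels
    groups -- list of index arrays, one per Leiden cell group
    lam    -- overall penalty strength (lambda); alpha balances lasso vs. group lasso
    """
    eta = X @ beta
    loss = -np.mean(y * eta - np.log1p(np.exp(eta)))           # negative log-likelihood l(beta)
    group_pen = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return loss + (1 - alpha) * lam * group_pen + alpha * lam * np.abs(beta).sum()
```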
During the implementation of the LP_SGL model, three parameters need to be determined: the resolution parameter γ and the regularization parameters α and λ. The γ acts as a threshold, requiring a minimum density of γ within each group; higher values of γ result in more groups being obtained. We used the sequence {0.3, 0.6, 0.9, 1.2, 1.5, 1.8} to test the impact of different γ values on the results, with detailed results presented in Supplementary Table 1 (see Supplementary Data available online at https://academic.oup.com/bib). Because the results fluctuated only minimally as γ changed for each dataset, we simplified the experimental process by setting γ to 0.6. The λ determines the overall strength of the penalty term, while α balances the lasso and GL penalties. We created a search list of {0.005, 0.05, 0.1, 0.2, ..., 0.8, 0.9, 0.95} in advance for α. For each fixed α, λ was determined through 5-fold cross-validation, and the optimal parameter pair (α*, λ*) was determined from the experimental results.
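The parameter search can be sketched as a simple nested loop; `fit_and_cv_score` below is a hypothetical helper standing in for the 5-fold cross-validated SGL fit (handled in the paper by the 'SGL' R package), so this is only an outline of the procedure.

```python
import numpy as np

# alpha grid as described in the text
alphas = [0.005, 0.05] + list(np.round(np.arange(0.1, 1.0, 0.1), 2)) + [0.95]

def select_parameters(X, y, groups, lambdas, fit_and_cv_score):
    """Grid search over alpha; for each alpha, lambda is chosen by 5-fold CV."""
    best = (None, None, -np.inf)
    for alpha in alphas:
        for lam in lambdas:
            # fit_and_cv_score is a hypothetical stand-in for the CV-evaluated SGL fit
            score = fit_and_cv_score(X, y, groups, lam, alpha, n_folds=5)
            if score > best[2]:
                best = (alpha, lam, score)
    alpha_star, lambda_star, _ = best
    return alpha_star, lambda_star
```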
Datasets
The LUAD scRNA-seq data were downloaded from ArrayExpress (accession numbers E-MTAB-6149 and E-MTAB-6653), comprising 29 888 cells and 8 cell types [17]: cancer cells, endothelial cells, T cells, B cells, myeloid cells, alveolar cells, epithelial cells and fibroblasts. The bulk data of LUAD were downloaded from TCGA-LUAD. In total there are 539 tumor and 59 normal samples, of which 508 samples have overall survival time and status. An external bulk validation set for LUAD diagnosis was downloaded from GEO (accession code GSE40419), including 87 tumor and 77 normal samples.

The melanoma scRNA-seq data (accession code GSE115978) contained 6879 cells and 9 cell types [18]: T cells, CD4+ T cells, CD8+ T cells, B cells, macrophages, malignant cells, cancer-associated fibroblasts (CAFs), endothelial cells and natural killer (NK) cells. In reference [18], cells were defined as T cells based on the overall expression of established cell type markers (CD2, CD3D, CD3E, CD3G). T cells were further classified as CD8+ or CD4+ T cells if they expressed CD8 (CD8A or CD8B) or CD4, respectively, while the rest were still labeled as T cells. The melanoma bulk dataset PRJEB23709 was downloaded from [19]. In total there are 46 treatment responders and 27 nonresponders. External bulk validation sets for melanoma and thymic carcinoma were downloaded from GEO (accession codes GSE91061 and GSE181815, respectively).

The liver cancer scRNA-seq data (accession code GSE125449) contained 8853 cells and 7 cell types [20]: CAFs, tumor-associated macrophages (TAMs), malignant cells, tumor-associated endothelial cells (TECs), cells with an unknown entity that express hepatic progenitor cell markers (HPC-like), T cells and B cells. TCGA-LIHC provides bulk data for 370 liver cancer samples with survival information, while GEO (GSE14520) provides another liver cancer bulk validation set with survival and recurrence information.
The gene expression values were averaged for genes with multiple occurrences of the same name during data preprocessing. For bulk data, a logarithmic transformation with a base of 2 was performed on the original count data. For scRNA-seq data, the R package 'Seurat' was used for preprocessing. Genes expressed in at least 400 cells were retained, and the filtered expression matrix was normalized using the 'NormalizeData' function. Highly variable genes between cells were identified using the 'FindVariableFeatures' function with the default 'vst' method. Subsequently, standardization and principal component analysis were performed using the 'ScaleData' and 'RunPCA' functions, respectively. The shared nearest neighbor graph was constructed based on the first 10 principal components using the 'FindNeighbors' function. Two-dimensional cell visualization was achieved using the 'RunUMAP' function.
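For readers working in Python, an approximately equivalent preprocessing pipeline can be written with scanpy; this is our analogue of the Seurat steps above, not the code used in the paper, and the highly-variable-gene flavour only approximates Seurat's 'vst' method.

```python
import scanpy as sc

adata = sc.read_h5ad("scrna_counts.h5ad")            # hypothetical input file

sc.pp.filter_genes(adata, min_cells=400)             # keep genes expressed in >= 400 cells
sc.pp.normalize_total(adata, target_sum=1e4)         # analogue of Seurat NormalizeData
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000) # analogue of FindVariableFeatures
sc.pp.scale(adata)                                   # analogue of ScaleData
sc.tl.pca(adata, n_comps=10)                         # analogue of RunPCA
sc.pp.neighbors(adata, n_pcs=10)                     # neighbour graph on first 10 PCs
sc.tl.leiden(adata, resolution=0.6, key_added="cell_group")
sc.tl.umap(adata)                                    # analogue of RunUMAP
```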
Testing and biological analysis
To assess the robustness of the model to incomplete or missing data, we deliberately removed some genes. We split the binary-phenotype bulk data into two groups and used the R package 'limma' to identify DEGs between the two groups, based on the filtering criteria of an absolute log fold change |log FC| greater than the threshold and a P-value (obtained by the default t-test) less than 0.05. We set the threshold sequence to {0.5, 0.6, 0.7, 0.8, 0.9, 1} to obtain six different gene sets. The differences in gene sets resulted in different correlation matrices when integrating bulk data with scRNA-seq data. We evaluated the model's robustness using these six incomplete datasets.
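A minimal Python sketch of this gene-set construction, using a plain two-sample t-test in place of the 'limma' moderated statistics used in the paper:

```python
import numpy as np
from scipy import stats

def deg_gene_set(expr, labels, logfc_threshold):
    """Select DEGs between two phenotype groups of log2-transformed bulk data.

    expr   -- (n_samples, n_genes) log2 expression matrix
    labels -- boolean array, True for samples in one phenotype group
    """
    group1, group2 = expr[labels], expr[~labels]
    logfc = group1.mean(axis=0) - group2.mean(axis=0)      # log2 fold change
    _, pvals = stats.ttest_ind(group1, group2, axis=0)
    return np.where((np.abs(logfc) > logfc_threshold) & (pvals < 0.05))[0]

# Six gene sets of increasing stringency, as in the robustness test:
# thresholds = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
```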
We conducted functional enrichment analysis on DEGs between LP_SGL+ cells and LP_SGL- cells. To assess the activity level of the over-expressed gene set across different samples, we employed the R package 'GSVA' to conduct gene set variation analysis (GSVA). We calculated a statistical test between the two types of samples using the t-test. Furthermore, we performed gene set enrichment analysis (GSEA) to investigate the enrichment of DEGs under different biological conditions. GSEA was implemented using the 'gseGO' and 'gseKEGG' functions in the R package 'clusterProfiler'. P-values were calculated based on the hypergeometric distribution, and the false discovery rate (FDR) was calculated using the Benjamini-Hochberg method.
The lasso-Cox model was implemented using the R package 'glmnet' based on DEGs between LP_SGL+ cells and LP_SGL- cells. Subsequently, multivariable Cox regression was performed using the R package 'survival' for genes with nonzero coefficients. Samples were then divided into high- and low-risk groups based on the median of the predicted prognostic scores. To assess the difference in survival time between the two groups, Kaplan-Meier (K-M) survival analysis was conducted using the R package 'survminer', with the log-rank test. In addition, the concordance index (C-index) was calculated to measure the predictive ability of the model. To avoid the contingency of the results, this procedure was repeated 10 times.
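A compact Python sketch of this survival workflow using the lifelines package (a stand-in for the glmnet/survival/survminer R toolchain used in the paper; file and column names are ours):

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

# df: one row per sample with DEG expression columns plus 'time' and 'status'
df = pd.read_csv("bulk_deg_expression_with_survival.csv")   # hypothetical file

# L1-penalised Cox model as a rough analogue of the lasso-Cox step
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="status")

risk = cph.predict_partial_hazard(df)                        # prognostic score per sample
high = risk > risk.median()                                  # median split into risk groups

# C-index of the risk score (higher risk should correspond to shorter survival)
cindex = concordance_index(df["time"], -risk, df["status"])

# Log-rank test and Kaplan-Meier curve for one of the groups
lr = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                  event_observed_A=df.loc[high, "status"],
                  event_observed_B=df.loc[~high, "status"])
kmf = KaplanMeierFitter()
kmf.fit(df.loc[high, "time"], df.loc[high, "status"], label="high risk")
```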
Identifying cell subpopulations associated with LUAD and normal phenotypes
We initially applied the LP_SGL method to the LUAD dataset in order to identify cells associated with either the LUAD or the normal phenotype. After preprocessing the data, 29 888 cells were assigned to 24 groups using the Leiden algorithm. The UMAP visualizations of the 24 cell groups and 8 cell types are presented in Figure 2A and Supplementary Figure S1a, respectively (see Supplementary Data available online at https://academic.oup.com/bib). Subsequently, 1317 LP_SGL+ cells and 775 LP_SGL- cells were selected by implementing LP_SGL. A bar chart of the distribution of LP_SGL+ cells and LP_SGL- cells with respect to cell groups is presented in Figure 2B, and the corresponding UMAP visualization is displayed in Supplementary Figure S1b (see Supplementary Data available online at https://academic.oup.com/bib); 63.25% (833/1317) and 36.45% (480/1317) of LP_SGL+ cells appeared in groups 12 and 21, respectively, while 100% of LP_SGL- cells were present in group 10. A bar chart of the distribution of LP_SGL+ cells and LP_SGL- cells with respect to cell types is presented in Figure 2C; 99.92% (1316/1317) of LP_SGL+ cells were cancer cells and 99.74% (773/775) of LP_SGL- cells were endothelial cells. The concentrated distribution of LP_SGL+ cells and LP_SGL- cells within both cell groups and cell types demonstrates the ability of LP_SGL to accurately identify phenotype-associated subpopulations by introducing cell group structure.
We then evaluated the robustness of LP_SGL, Scissor and scAB by using seven datasets (including six different incomplete datasets obtained under different thresholds). The line chart of the proportions of cancer cells contained in the LUAD-phenotype cells identified by these methods is shown in Figure 2D. In the original data, the proportion of cancer cells contained in the LUAD-associated cells identified by LP_SGL was 99.92%, which was 11.73 and 53.36% higher than that identified by Scissor and scAB, respectively. On the six incomplete datasets, the results obtained by LP_SGL remained almost unchanged, while the other two methods exhibited some degree of fluctuation.
To further reveal the biological significance of the identified cells, we performed differential expression analysis (DEA) between LP_SGL+ cells and LP_SGL- cells. A total of 210 upregulated and 89 downregulated genes were identified by setting |log FC| greater than 1 and the FDR less than 0.05. The volcano plot of the DEGs is shown in Figure 2E. Notably, some of these genes have been identified as important regulatory factors in LUAD, such as ENO1, which has been previously reported to promote tumor progression in LUAD [21]. Similarly, YBX1 has been shown to induce the migration of LUAD cells and contribute to tumor metastasis [22]. On the other hand, GPX3 has been found to play an inhibitory role in LUAD, with lower expression levels in tumors compared with normal tissues [23]. These findings demonstrate the potential of LP_SGL for identifying significant DEGs that may be used as diagnostic or therapeutic targets for LUAD.
To assess the clinical relevance of the 210 over-expressed genes identified by LP_SGL, GSVA scores were calculated for each sample in the bulk data. As shown in Figure 2F, the cancer samples exhibited significantly higher scores compared with the normal samples in the TCGA-LUAD dataset (P = 9.6e−16). The same trend was observed in another independent LUAD dataset, as depicted in Figure 2G (P = 1.7e−09). These results suggest that the identified upregulated genes are strongly correlated with LUAD. Furthermore, using survival information from the TCGA-LUAD dataset, 508 samples were divided into high- and low-risk groups based on the median predicted prognostic score. As presented in Figure 2H, the K-M survival curve indicated that samples with higher prognostic scores had significantly worse survival outcomes compared with those with lower scores. This analysis further supports the association of the identified LUAD-associated subpopulations with poor prognosis. As a result, we have successfully demonstrated the utility of LP_SGL in accurately identifying cell subpopulations associated with a particular phenotype.
Identifying T cell subpopulations related to immunotherapy
Understanding the mechanism behind the immune checkpoint blockade (ICB) response is crucial, as the therapy significantly improves the 10-year survival rate of melanoma patients despite not benefiting most treated patients [24]. To address this issue, we employed LP_SGL to analyze the melanoma data and identify T cell subpopulations associated with ICB response; 6879 cells were assigned to 17 groups via the Leiden algorithm. The UMAP visualizations of the 17 cell groups and 9 cell types are presented in Figure 3A and Supplementary Figure S2a, respectively (see Supplementary Data available online at https://academic.oup.com/bib). Then, 404 LP_SGL+ cells and 0 LP_SGL- cells were identified by implementing LP_SGL. A bar chart of the distribution of LP_SGL+ cells with respect to cell types is presented in Figure 3B, and the corresponding UMAP visualization is displayed in Supplementary Figure S2b (see Supplementary Data available online at https://academic.oup.com/bib). According to the statistics, 99.26% (401/404) of LP_SGL+ cells were present in group 1, showing the concentrated characteristic consistent with the experimental results on the LUAD dataset. It is noteworthy that 99.26% (401/404) of LP_SGL+ cells were T cells (CD8+ T cells: 92.82%, 375/404; CD4+ T cells: 2.72%, 11/404; T cells: 3.72%, 15/404), with the remaining 0.75% being NK cells. Recent research has highlighted the great potential of NK cells in cancer immunotherapy [25]. This result demonstrates that LP_SGL can accurately identify subpopulations related to ICB response, which has the potential to improve the effectiveness of immunotherapy for melanoma patients.
In addition, we tested the robustness of LP_SGL, Scissor and scAB by using seven melanoma datasets. The line chart in Figure 3C shows the proportions of T cells contained in the response-phenotype cells identified by these methods. In the original data, the proportion of T cells contained in the response-phenotype cells identified by LP_SGL was 99.26%, which was 16.92 and 38.49% higher than that identified by Scissor and scAB, respectively. On the six incomplete datasets, the results obtained by LP_SGL remained stable, while the other two methods exhibited significant fluctuations.
To gain a deeper understanding of the immunotherapy response mechanism, we performed DEA between LP_SGL+ cells and all other cells, as no LP_SGL- cells were identified. A total of 253 upregulated and 131 downregulated DEGs were identified by meeting the criteria of |log FC| greater than 3 and FDR less than 0.05. The volcano plot of the DEGs is shown in Figure 3D. Many of these genes have been confirmed to be closely related to melanoma: for example, reduction of the MITF level promotes melanoma invasion [26], tumor regression is abrogated by silencing CCL5 [27] and CST7 is significantly upregulated in melanoma patients who respond to ICB treatment [28]. These results demonstrate that LP_SGL has the ability to identify gene signals related to immunotherapy responses.
We subsequently calculated the GSVA score of each sample to evaluate the clinical relevance of the identified DEGs. As shown in Figure 3E, the responders in the melanoma dataset had significantly higher scores than the nonresponders (P = 1.5e−04). Moreover, the external melanoma validation set showed similar results, as depicted in Figure 3F (P = 2.4e−02). Interestingly, we also tested whether the immunotherapy-associated cell subpopulations identified from the melanoma dataset were applicable to thymic carcinoma samples. Surprisingly, as shown in Figure 3G, thymic carcinoma samples that responded to treatment had significantly higher scores than those that did not respond (P = 7.1e−04). Furthermore, the GSEA of the overall DEGs revealed overactivation of immune response processes and suppression of lipid transport processes, as shown in Figure 3H. The GSEA of the upregulated DEGs is shown in Figure 3I, while no significant enrichment of biological processes was observed for the downregulated DEGs. These findings are consistent with previous research demonstrating that inhibiting lipid transport to melanoma cells effectively reduces their growth and invasion [29]. In summary, LP_SGL identified cell subpopulations associated with ICB response, and the signal genes from these cells could reliably predict ICB response in melanoma and other types of cancer.
Identifying cell subpopulations associated with worse survival in liver cancer
To further evaluate the model's performance on survival phenotype data, we applied the LP_SGL method to the liver cancer dataset to identify cell subpopulations associated with poorer survival outcomes; 8853 cells were assigned to 16 groups via the Leiden algorithm. The UMAP visualizations of the 16 cell groups and 7 cell types are presented in Figure 4A and Supplementary Figure S3a, respectively (see Supplementary Data available online at https://academic.oup.com/bib), and 746 LP_SGL+ cells and 1243 LP_SGL- cells were identified. A bar chart of the distribution of LP_SGL+ cells with respect to cell types is presented in Figure 4B and the corresponding UMAP visualization is displayed in Supplementary Figure S3b (see Supplementary Data available online at https://academic.oup.com/bib); 91.68% (684/746) of LP_SGL+ cells were composed of tumor-associated cells (TAM, CAF, TEC and malignant cells). Additionally, the cells identified by Scissor and scAB were labeled as Scissor+ cells, Scissor- cells and scAB+ cells, respectively, following the conventions of their respective papers. We applied scAB and Scissor to the liver cancer dataset and obtained proportions of 85.14% (779/915) and 90.48% (19/21) tumor-associated cells in scAB+ cells and Scissor+ cells, respectively. LP_SGL thus identified a higher proportion of tumor-associated cells among the cells associated with the poorer-survival phenotype than Scissor and scAB: specifically, the proportion identified by LP_SGL was 1.2% and 6.54% higher than that identified by Scissor and scAB, respectively.
We conducted DEA between LP_SGL+ cells and LP_SGL- cells to explore potential biological mechanisms related to poorer survival. Figure 4C shows 77 upregulated and 12 downregulated DEGs that met the conditions of |logFC| greater than 1 and FDR less than 0.05. Most of these DEGs have been reported to be associated with liver cancer; for example, high expression of YBX1 and NUPR1 is associated with poor overall survival in liver cancer [30,31]. In addition, overexpression of IL32 has been found to inhibit cancer cell growth and may serve as a therapeutic target for various cancers, including liver cancer [32]. We also identified DEGs between scAB+ cells and other cells, as well as between Scissor+ cells and Scissor- cells, using the same criteria. Subsequently, we used the DEGs obtained from each method to construct lasso-Cox models. The average C-index of the 10 repeated experiments corresponding to each method is presented in Figure 4D, allowing LP_SGL, scAB and Scissor to be compared directly.

We then conducted a survival analysis on the TCGA-LIHC dataset. As depicted in Figure 4E, there was a significant survival difference between the two groups, with the high-risk group having an almost four times shorter median survival time than the low-risk group. To verify the generalization of the identified DEGs, we conducted a survival analysis on an independent external validation set following the same steps. The K-M survival curves of the high- and low-risk groups are shown in Figure 4F. We found that the high-risk group in the independent validation set still achieved worse survival outcomes. We also predicted the recurrence risk of the samples based on the DEGs, using the recurrence time and status of the samples. Figure 4G shows a significant difference in recurrence between the high- and low-risk groups. Furthermore, we performed GSEA based on the DEGs and found that the cholesterol metabolism pathway was significantly enriched (Figure 4H). As the liver is the main organ responsible for cholesterol metabolism, abnormal cholesterol metabolism has been associated with the occurrence of liver diseases [33].

We compared the proposed LP_SGL with the currently mainstream phenotype-associated subpopulation identification methods, Scissor [9] and scAB [10], where the data preprocessing and parameter settings of both methods were consistent with their respective original publications. LP_SGL selected the highest proportions of cancer cells and T cells when the three methods were applied to the LUAD diagnosis, melanoma drug response and liver cancer survival datasets, respectively. It is worth noting that, compared with LP_SGL and Scissor, scAB consistently selects the highest number of cells, which may be the reason why the cells it identifies contain a lower proportion of cancer cells or T cells. LP_SGL selected a larger number of cells than Scissor on both the LUAD and liver cancer datasets. On the melanoma dataset, LP_SGL identified 404 LP_SGL+ cells in the optimal results. Moreover, when LP_SGL identified 1406 LP_SGL+ cells, more than the 1212 Scissor+ cells identified by Scissor, the proportion of T cells in LP_SGL+ cells was 95.87%, still higher than the corresponding proportion in Scissor+ cells. These results indicate that LP_SGL has a more accurate and comprehensive ability to identify phenotype-associated subpopulations.
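The C-index used to rank the lasso-Cox models can be computed directly from predicted risk scores. A self-contained sketch of Harrell's concordance index; survival times, event indicators and risk scores below are placeholder values:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index: fraction of usable pairs in which
    the sample with higher predicted risk fails earlier."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is usable when i's event is observed and occurs first.
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

t = np.array([5.0, 8.0, 12.0, 20.0, 25.0])
e = np.array([1, 1, 0, 1, 0])               # 1 = death observed, 0 = censored
r = np.array([2.1, 1.5, 0.7, 0.4, 0.1])     # e.g. a lasso-Cox linear predictor
print(round(harrell_c_index(t, e, r), 3))   # 1.0 for perfectly ordered risks
```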
Flow cytometry is a prevalent experimental technique for identifying cell subpopulations [34]. It enables the segregation of target cells from a mixed cell population based on the fluorescence signal of cell surface markers [35]. However, since our research primarily focused on exploring phenotype-associated subpopulations using available transcriptomic data, no flow cytometry data were available for identifying cell subpopulations. In future studies, we plan to integrate flow cytometry data with our algorithm. Moreover, the patients who underwent bulk RNA-seq in this study are different from those who underwent scRNA-seq. This prevented us from scrutinizing the distribution of the identified cells in response and nonresponse samples. Nevertheless, the comparison of performance among LP_SGL, Scissor and scAB, along with extensive biological analyses, proved the credibility of the proposed LP_SGL. Utilizing data from patients who have undergone both bulk RNA-seq and scRNA-seq may be advantageous in identifying phenotype-associated subpopulations; this will be a focus of our future research.
Key Points
• We propose LP_SGL, a method for integrating scRNA-seq, bulk expression and bulk phenotype data.
• The group effects caused by cell-cell interactions were introduced into the model to guide the identification of phenotype-associated subpopulations.
• LP_SGL identified a higher percentage of cancer cells, T cells and tumor-associated cells than Scissor and scAB on the lung adenocarcinoma diagnosis, melanoma drug response and liver cancer survival datasets, respectively.
• The biological analysis on three original datasets and four independent external validation sets demonstrated that the signaling genes of these cell subsets have the ability to predict cancer, immunotherapy response and survival.
Figure 2. Experimental results on the LUAD dataset. (A) UMAP visualization of 24 cell groups obtained using the Leiden algorithm. (B and C) Bar charts of the distribution of LP_SGL+ cells and LP_SGL- cells with respect to cell groups and cell types, respectively. (D) Line chart of the proportions of cancer cells contained in the LUAD phenotype cells identified by LP_SGL, Scissor and scAB. (E) Volcano plot of DEGs between LP_SGL+ cells and LP_SGL- cells. (F and G) Box plots of GSVA scores for cancer and normal samples on the TCGA-LUAD and GSE40419 datasets, respectively. (H) K-M survival curves of high- and low-risk group samples divided by the median prognostic score in the TCGA-LUAD dataset.
Figure 3. Experimental results on the melanoma dataset. (A) UMAP visualization of 17 cell groups obtained using the Leiden algorithm. (B) Bar chart of the distribution of LP_SGL+ cells with respect to cell types. (C) Line chart of the proportions of T cells contained in the response phenotype cells identified by LP_SGL, Scissor and scAB. (D) Volcano plot of DEGs between LP_SGL+ cells and other cells. (E-G) Box plots of GSVA scores for response and non-response in the PRJEB23709, GSE91061 and GSE181815 datasets, respectively. (H) GSEA plots of upregulated and downregulated biological processes (BP) of the overall DEGs. (I) GSEA plots of upregulated BP of the upregulated DEGs.
Figure 4. Experimental results on the liver cancer dataset. (A) UMAP visualization of 16 cell groups obtained using the Leiden algorithm. (B) Bar chart of the distribution of LP_SGL+ cells with respect to cell types. (C) Volcano plot of DEGs between LP_SGL+ cells and LP_SGL- cells. (D) Bar chart of the average C-index of the results from 10 repeated experiments. (E) K-M survival curves of high- and low-risk group samples in the TCGA-LIHC dataset. (F and G) Survival and recurrence K-M curves of the high- and low-risk groups in the GSE14520 dataset, respectively. (H) Gene set enrichment analysis plots of the upregulated Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway of DEGs. | 6,459.6 | 2023-11-22T00:00:00.000 | [
"Biology",
"Computer Science",
"Medicine"
] |
Time Dependent Stochastic mRNA and Protein Synthesis in Piecewise-deterministic Models of Gene Networks
We discuss piecewise-deterministic approximations of gene network dynamics. These approximations capture in a simple way the stochasticity of gene expression and the propagation of expression noise in networks and circuits. By using partial Ω expansions, piecewise-deterministic approximations can be formally derived from the more commonly used Markov pure jump processes (chemical master equation). We are interested in time-dependent multivariate distributions that describe the stochastic dynamics of the gene networks. This problem is difficult even in the simplified framework of piecewise-deterministic processes. We consider three methods to compute these distributions: direct Monte-Carlo simulation, numerical integration of the Liouville-master equation, and the push-forward method. This approach is applied to multivariate fluctuations of gene expression generated by gene circuits. We find that stochastic fluctuations of the proteome, and to a much lesser extent those of the transcriptome, can discriminate between various circuit topologies.
Piecewise-deterministic approximations can be formally derived from the chemical master equation by means of a partial van Kampen Ω (system size) expansion [51] or, equivalently, the central limit theorem [28,16]. The reactions R_D act on X_D (the corresponding γ_i have non-zero coordinates only on the discrete species, γ_i^D ≠ 0) and have propensities depending on X_D only. The reactions R_C act on X_C (the corresponding γ_i have non-zero coordinates only on the continuous species, γ_i^C ≠ 0) and have propensities depending on X_C only. The reactions R_DC and R_CD act on X_C and X_D, respectively, and their propensities depend on both X_D and X_C.

In this paper we consider gene network models. For each gene, we model the transitions between promoter states, as well as other processes such as transcription, translation, protein folding, and protein and mRNA degradation. We will consider that the mRNA molecules and proteins are in sufficiently large copy numbers to justify continuous approximations. The only discrete variables are in this case the promoter states. The set R_D contains transitions between discrete promoter states whose rates do not depend on regulatory proteins. The set R_CD contains transitions between promoter states whose rates depend on concentrations of regulatory proteins. The set R_C contains translation, maturation (folding), and degradation reactions. The set R_DC contains transcription initiation reactions that depend on the promoter state. We further consider that the copy numbers of the continuous species X_C and the propensities of the reactions in the sets R_C and R_DC are "extensive", in other words, scale with the system size Ω, X_C = Ω x_c; the propensities of the remaining reactions do not scale with the system size, unless they are proportional to copy numbers of activator or repressor proteins. For a more complete discussion of these scaling relations we refer to [8].
Using a first-order Taylor expansion of the propensities with respect to x_c, we obtain the Liouville-master equation (3.1); between transitions of the discrete variables, the mRNA and protein variables follow ODE dynamics.

For the sake of illustration, let us consider the simple model of a single constitutive gene controlled by a two-state (ON/OFF) promoter. We denote the states of the promoter by 1 and 0, respectively. The transition rate from 0 to 1 is f and from 1 to 0 is h. The protein and mRNA concentrations are x and y, respectively. The transcription initiation rate in state 1 is k_1 and in state 0 is k_0 ≪ k_1.
The translation rate is b. The mRNA and protein degradation rates are ρ and a, respectively. The protein and mRNA concentrations follow the ODEs dy/dt = k_s − ρ y and dx/dt = b y − a x, where k_s equals k_0 or k_1 according to the promoter state. The probability distribution of the promoter state s results from the dynamics of the two-state Markov chain dp_0/dt = −f p_0(t) + h(1 − p_0(t)), where p_0 = P[s = 0] = ∫ p(0, x, y) dx dy and p_1 = P[s = 1] = ∫ p(1, x, y) dx dy. We also define the asymptotic occupancy probabilities p̄_0 = h/(h + f) and p̄_1 = f/(h + f), representing the probabilities, at steady state, that the promoter state is OFF and ON, respectively.
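The telegraph equation above has the closed-form solution p_0(t) = p̄_0 + (p_0(0) − p̄_0) e^{−(f+h)t}. A quick numerical cross-check, with arbitrary illustrative rates:

```python
import numpy as np
from scipy.integrate import solve_ivp

f, h = 2.0, 1.0                     # promoter switching rates (illustrative)
p0_bar = h / (h + f)                # asymptotic OFF occupancy

def rhs(t, p):                      # dp0/dt = -f p0 + h (1 - p0)
    return [-f * p[0] + h * (1.0 - p[0])]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], dense_output=True)
t = np.linspace(0.0, 5.0, 6)
numeric = sol.sol(t)[0]
analytic = p0_bar + (1.0 - p0_bar) * np.exp(-(f + h) * t)
print(np.max(np.abs(numeric - analytic)))  # small, up to solver tolerance
```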
The single constitutive gene model and the advection fluxes of the Liouville-master equation are illustrated in Figure 3. More complex two-gene circuit models are represented in Figure 4, and their Liouville-master equations are given in Appendix 1.

The PDP Monte-Carlo method is based on the direct simulation of the PDP process. A simple algorithm has been proposed in [8]. For the sake of completeness we recall here the main steps of this algorithm.
(3) Integrate the system of differential equations obtained by adding to (3.2) the equation for the survival function F of the waiting time to the next Markov chain transition.
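Since steps (1)-(2) of the algorithm are not reproduced here, the sketch below implements only the generic scheme for the single-gene model, where the switching rates are constant and the waiting times are therefore exponential; for rates depending on the continuous variables one would instead integrate the survival function F alongside the ODEs, as in step (3). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

f, h = 2.0, 1.0            # promoter 0->1 and 1->0 switching rates
k0, k1 = 0.5, 10.0         # transcription rates in states 0 and 1
rho, b, a = 1.0, 2.0, 0.2  # mRNA degradation, translation, protein degradation

def simulate(T=50.0, dt=1e-3):
    """Direct PDP simulation: exponential waiting times between promoter
    jumps (jump times resolved to the dt grid), forward-Euler integration
    of the mRNA/protein ODEs in between."""
    t, s, y, x = 0.0, 0, 0.0, 0.0
    t_next = rng.exponential(1.0 / (f if s == 0 else h))
    ts, ys, xs = [], [], []
    while t < T:
        if t >= t_next:              # promoter jump
            s = 1 - s
            t_next = t + rng.exponential(1.0 / (f if s == 0 else h))
        k = k1 if s == 1 else k0
        y += dt * (k - rho * y)      # mRNA dynamics
        x += dt * (b * y - a * x)    # protein dynamics
        t += dt
        ts.append(t); ys.append(y); xs.append(x)
    return np.array(ts), np.array(ys), np.array(xs)

ts, ys, xs = simulate()
print("mean mRNA:", ys[len(ys)//2:].mean(), "mean protein:", xs[len(xs)//2:].mean())
```

Averaging many such trajectories gives the time-dependent marginals that the other two methods compute deterministically.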
In this paper we have used a finite-difference predictor-corrector scheme [47] to compute the solution of the Liouville-master equation.
Let us introduce our model beginning with the MEs describing the dynamics of the first switch, dP(t)/dt = H_1 P(t), where P(t) = (P_0(t), P_1(t))^T is the probability occupancy vector whose entries are the probabilities to find the first switch in the OFF state (P_0(t)) or in the ON state (P_1(t)). The infinitesimal stochastic matrix H_1 is given by H_1 = [[−f_1, h_1], [f_1, −h_1]]. This is a basic telegraph process where the rates f_1 and h_1 control the OFF→ON and ON→OFF transitions, respectively. The mRNA dynamics obey dy_1/dt = k(s_1(t)) − ρ y_1, where y_1 is a random variable representing the copy number of mRNA in the cell coming from the first gene.
The random variable s_1(t) follows the switch statistics, meaning that with probability P_1(t), s_1(t) = 1 at time t, and s_1(t) = 0 with probability P_0(t), again at time t. The production rate of mRNA is a function of the random variable s_1(t), k(s_1(t)) = K_0 (1 − s_1(t)) + K_1 s_1(t), where K_1 is the highest level of mRNA production and K_0 is the basal one. The third equation describing the activity of the first gene is for the random variable x_1 representing the protein density associated to it, dx_1/dt = β y_1 − α x_1, where α is the protein degradation rate and β is the translation rate. The last equation of the coupled gene model is the one governing the probability occupancy of the second gene, dQ(t)/dt = H_2(t) Q(t), where Q(t) = (Q_0(t), Q_1(t))^T encodes, in its entries, the information about the probability to find the second gene ON (Q_1(t)) or OFF (Q_0(t)); the matrix H_2(t) is given by (4.10). In the model at hand, the main source of stochasticity is the switching ON and OFF of the gene.
This noise is transmitted to the mRNA synthesis process through the rate k(s_1(t)), which is a function of a random variable (s_1(t)) and, thus, a random variable itself. The first step of the push-forward method is to compute the time-dependent probability distribution of mRNA molecules y_1(t) (which is perturbed by the random variable s_1(t)) once the probability distribution of the perturbation is known. To do so, we begin by presenting the solutions of Eqs (4.5), P(t) = exp[H_1 (t − t_0)] P(t_0), where P(t_0) encodes the initial configuration (given at t = t_0) of the switch and the matrices entering the explicit form are given in (4.12). Explicitly, the solutions are a constant part plus an exponentially decaying part, where p̄_0 and p̄_1 are the asymptotic occupancy probabilities to find the gene OFF and ON, respectively. Going on, we present the formal solution of the RDE governing the mRNA dynamics,

y_1(τ) = y_1(τ_0) e^{−(τ−τ_0)} + ∫_{τ_0}^{τ} k(s_1(τ′)) e^{−(τ−τ′)} dτ′,   (4.14)

where we have rescaled time t by the mRNA degradation rate, introducing the new time parameter τ = tρ and the dimensionless parameters k_0 = K_0/ρ and k_1 = K_1/ρ. Note that the integral in (4.14) is a basic Riemann integral, such that, if we consider a sufficiently fine partition [τ_j, τ_{j+1}] on which s_1(τ) is approximately constant, the integral in (4.14) is approximated by a finite sum over the partition. The same procedure is applied to obtain the time-dependent probability distribution of the protein density, in an analogous way as for mRNA; the integral that must be partitioned is then the one over τ in the formal solution for the protein, where we have used the definition k(s_1(τ)) = k_0 (1 − s_1(τ)) + k_1 s_1(τ) to simplify the notation. As before,
we have illustrated our method by calculating the protein density for the same two regimes of switch flexibility.
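A minimal sketch of the push-forward idea for the mRNA marginal: hold the promoter state constant on each subinterval of the partition, enumerate the 2^J promoter paths with their telegraph probabilities, and push each path through the exact per-interval solution of the linear mRNA equation. The stationary initial law for the switch and all parameter values are illustrative assumptions:

```python
import numpy as np
from itertools import product

f, h = 2.0, 1.0             # switching rates (illustrative values)
k = np.array([0.5, 10.0])   # transcription rates in states 0 and 1
rho = 1.0                   # mRNA degradation rate
T, J = 4.0, 10              # horizon and number of partition intervals
dt = T / J
eps = f + h
pbar = np.array([h / eps, f / eps])   # stationary (OFF, ON) occupancies
edt = np.exp(-eps * dt)
decay = np.exp(-rho * dt)

def trans(si, sj):
    """Telegraph transition probability P(s_{j+1} = sj | s_j = si) over dt."""
    if si == sj:
        return pbar[sj] + pbar[1 - sj] * edt
    return pbar[sj] * (1.0 - edt)

ys, ws = [], []
for path in product((0, 1), repeat=J):    # all 2^J piecewise-constant paths
    w, y = pbar[path[0]], 0.0             # start the switch in its stationary law
    for j, s in enumerate(path):
        if j > 0:
            w *= trans(path[j - 1], s)
        y = y * decay + (k[s] / rho) * (1.0 - decay)  # exact linear-ODE update
    ys.append(y)
    ws.append(w)

hist, edges = np.histogram(ys, bins=20, weights=ws, density=True)
print("weighted mean mRNA:", float(np.dot(ws, ys)))  # path weights sum to 1
```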
To analyze the influence of the first gene on the second one, we have assumed that the action of the first gene is to activate the second (see (4.10)). To do so, instead of solving the RDE describing the activity of the second gene (it is an RDE because the perturbation x_1(τ) is a random variable), we have analyzed the mean value of the occupancy probability of the second gene, whose dynamics follows from (4.10), with ⟨Q_1(τ)⟩ = 1 − ⟨Q_0(τ)⟩; the general solution for ⟨Q_0(τ)⟩ is given by (4.21). In Appendix 3 we show, in detail, how to obtain the exact functional shape of ⟨x_1(τ)⟩. Nevertheless, its structure is ⟨x_1(τ)⟩ = r_0 + r_1 e^{−τ} + r_2 e^{−aτ} + r_3 e^{−ϵτ} and, because of this, the integral in (4.21) cannot be evaluated analytically and a numerical evaluation must be performed. This is also the case for the conditional probabilities, in which we set Q_i(τ_{j−1}) = ⟨s_2(τ_{j−1})⟩ (with i = 0 or 1), expressing the fact that the state of the second switch is conditioned on its value at the instant of time τ_{j−1}.

The comparison is shown in Figure 10 for the two gene circuits G_1 → G_2 and G_1 ⊣ G_2, which differ by the sign of the interaction; one can notice that the protein-fluctuation-based distance is significant, whereas the mRNA-fluctuation distance is not, both for slow/slow and fast/fast genes. We have also tested the significance of the correlation computed from bivariate mRNA or protein distributions. A significant (probability p < 5%) protein/protein correlation is obtained for moderate cell populations (N_c > 100 for p < 5%, see Figure 10). In order to obtain a significant mRNA/mRNA correlation one has to use very large numbers of cells (N_c > 1000 for p < 5%, see Figure 10). This is possible for single-cell sequencing. As seen in Section 2, our methods also work for promoters with more than two states.
As an application of our numerical methods, we tested the capacity of mRNA and protein copy-number fluctuations to discriminate between gene circuit topologies. In the circuit notation below, → and ⊣ stand for activation and repression, respectively.
The corresponding Liouville-master equations are the following:
For the circuit G 1 → G 2 .
For the circuit G 1 ⊣ G 2 .
For the circuit G 1 G 2 G 1 .
For gene circuits, Eqs (4.2) take a form in which c_i(s), i ∈ [0, n_g], are positive constants depending on the discrete Markov chain state s.
The solution of this system is straightforward. The constants c_i, i ∈ {0, 1, 2}, for the single constitutive gene and the two-gene models are given in Table 2. For a general gene network, the constants table is precalculated symbolically and used to generate the simulation code automatically.
In order to obtain an expression for ⟨x_1(τ)⟩ we must solve a set of ordinary differential equations for the means, d⟨y_1⟩/dτ = ⟨k(s_1(τ))⟩ − ⟨y_1⟩ and d⟨x_1⟩/dτ = b⟨y_1⟩ − a⟨x_1⟩, where ⟨k(s_1(τ))⟩ = k_0 P_0(τ) + k_1 P_1(τ). The solution for ⟨y_1(τ)⟩ follows by direct integration, and the general solution for the protein mean value has the structure given above, with explicitly computable coefficients.
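These mean equations are straightforward to integrate numerically, using the closed-form telegraph occupancy for P_0(τ). A sketch with illustrative dimensionless parameters (time in units of 1/ρ):

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, b, a = 0.5, 10.0, 2.0, 0.2   # illustrative dimensionless rates
f1, h1 = 2.0, 1.0                    # switching rates in rescaled time
eps = f1 + h1
p0bar = h1 / eps

def P0(tau, p0_init=1.0):            # telegraph occupancy, closed form
    return p0bar + (p0_init - p0bar) * np.exp(-eps * tau)

def rhs(tau, u):                     # u = (<y1>, <x1>)
    kbar = k0 * P0(tau) + k1 * (1.0 - P0(tau))
    return [kbar - u[0], b * u[0] - a * u[1]]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], dense_output=True)
y_inf = k0 * p0bar + k1 * (1.0 - p0bar)   # stationary mean mRNA
print(sol.sol(30.0), "vs asymptotes", [y_inf, b * y_inf / a])
```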
Table 2: Constants c_i for computing the next-step waiting time in the two-gene models defined by (4.1).

Figure 9: Steady-state bivariate histograms of protein copy numbers from two interacting genes in circuits of different types and for four switching regimes of the promoters, obtained with the Monte-Carlo method. The individual gene parameters are those used in Figure 5; the f and h constants in the f x_i or h x_i terms are chosen such that the mean mRNA and protein are the same in regulated and constitutive genes. The probability-to-color-map relation is logarithmic.

Figure 10: The parameters of the simulations are those used in Figure 9. | 3,004.4 | 2018-03-07T00:00:00.000 | [
"Computer Science",
"Mathematics",
"Biology"
] |
Further results on Ulam stability for a system of first-order nonsingular delay differential equations
Abstract This paper is concerned with a system governed by nonsingular delay differential equations. We study the β-Ulam-type stability of the mentioned system. The investigations are carried out over compact and unbounded intervals. Before proceeding to the main results, we convert the system into an equivalent integral equation and then establish an existence theorem for the addressed system. To justify the application of the reported results, an example along with graphical representation is illustrated at the end of the paper.
Introduction
Delay systems are used to characterize the evolution processes in automatic engines, physiological systems and control theory. Shuklin and Khusainov [1] introduced a notion of delayed matrix exponential and used it to derive a representation of solutions to linear delay problems under the restriction of permutable matrices. Khusainov and Diblik [2] and Wang et al. [3] used the ideas of [1] to introduce a discrete matrix delayed exponential function and to consider a representation of solutions to linear delay systems.
Among the qualitative properties of differential systems, stability is an essential one. There are different types of stability, but recently researchers have focused on Ulam-Hyers-type stability. The idea of the aforesaid stability was initially introduced by Ulam [4], in 1940, when he addressed a mathematical colloquium. During his talk, he raised a problem regarding the stability of group homomorphisms. In the following year, Hyers [5] responded positively to this problem under the assumption that the groups are Banach spaces. Since then, this stability has been named Ulam-Hyers stability. Rassias [6], in 1978, extended the result of Hyers' theorem, where the bound on the norm of the Cauchy difference was presented in a more general form. This stability phenomenon is termed Hyers-Ulam-Rassias stability. For more information about the topic, we refer the reader to [7-14].
In 2019, You et al. [3] studied the exponential stability and relative controllability of nonsingular delay differential equations of the form (1.1), where A, M and N are constant permutable matrices of dimension n × n and A is nonsingular. Motivated by [3], and using the techniques of [15], we analyze the β-Hyers-Ulam-Rassias stability [16] of solutions for the nonsingular delay differential system (1.1). We carry out our investigations in two folds: stability results over a compact interval and stability results over an unbounded interval. Before proceeding to the main results, we convert system (1.1) into an equivalent integral equation and establish an existence theorem for its solutions. To justify the application of the reported results, an example along with a graphical illustration is presented at the end of the paper.
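A minimal numerical illustration of such a delay system, using the method of steps with forward Euler for z′(t) = A z(t) + M z(t − h); this is a simplified instance in which the neutral term involving N z′(t − h) is dropped, and the matrices and history function are illustrative placeholders:

```python
import numpy as np

A = np.array([[-1.0, 0.2], [0.0, -0.5]])
M = np.array([[0.1, 0.0], [0.0, 0.1]])
h, T, dt = 1.0, 5.0, 1e-3
steps = int(T / dt)
lag = int(h / dt)

z = np.zeros((steps + 1, 2))
history = lambda t: np.array([1.0, -1.0])   # constant pre-history on [-h, 0]
z[0] = history(0.0)

for n in range(steps):
    # Before t = h, the delayed argument falls in the pre-history interval.
    delayed = history(n * dt - h) if n < lag else z[n - lag]
    z[n + 1] = z[n] + dt * (A @ z[n] + M @ delayed)

print("z(T) ≈", z[-1])
```

Comparing such a numerical solution of the exact system with a solution of the perturbed integral inequality is the kind of graphical check illustrated at the end of the paper.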
Essential background
Here we present some basic concepts and definitions that are essential in proving the main results. Let ℝ represent the set of all real numbers, ℝ₊ the set of all nonnegative real numbers, and ℝⁿ the space of all n-tuples of real numbers. Let J = [0, θ] ⊆ ℝ, and let C(J, ℝⁿ) denote the Banach space of all continuous functions from J to ℝⁿ, equipped with the norm ‖ν‖ = sup_{t ∈ J} ‖ν(t)‖.
The nonsingular delay differential system (1.1) is said to be β-Hyers-Ulam-Rassias stable with respect to φ if there exists a constant c > 0, depending upon f, φ and β, such that for any ε > 0 and for any solution ν of the associated integral inequality there exists a solution μ of (1.1) satisfying ‖ν(t) − μ(t)‖ ≤ c ε φ(t) for all t ∈ J.
Let α, ϖ and U be real-valued functions defined on J. Assume that ϖ and U are continuous and that the negative part of α is integrable on every closed and bounded subinterval of J. (a) If ϖ is nonnegative and U satisfies the integral inequality U(t) ≤ α(t) + ∫₀ᵗ ϖ(s) U(s) ds, then U(t) ≤ α(t) + ∫₀ᵗ α(s) ϖ(s) exp(∫ₛᵗ ϖ(r) dr) ds for all t ∈ J.
Existence result
To discuss the existence result for the given system, we need some assumptions, among them that f satisfies the Carathéodory condition and that the relevant weight function of t is nonnegative. A5: Assume that the negative part of η φ(t) is integrable on every closed and bounded subinterval of J and that φ is nondecreasing.

Proof. The unique solution of the Cauchy problem can be represented by the equivalent integral equation; the claimed bound then follows from the assumptions together with the Gronwall-type lemma above.
β-Hyers-Ulam-Rassias stability on unbounded interval
Here, we study the β-Hyers-Ulam-Rassias stability on an unbounded interval. Consider some more assumptions. A0: The operator family {Z(t − s) : t ≥ s ≥ 0} is exponentially stable, i.e., its norm decays exponentially in t − s, and there exists a function in C(ℝ₊, ℝ₊) such that f satisfies the Carathéodory condition for every t ∈ ℝ₊ and ν, ν′ ∈ ℝⁿ; a further integrability assumption is also imposed. By considering inequality (2.2) and the aforementioned assumptions, we are in a position to state and prove our second result.
Proof. The unique solution of the semilinear nonautonomous differential system can again be represented by the equivalent integral equation; thus, for each t ∈ ℝ₊, we obtain the required estimate.
Conclusion
In the last few years, along with the explosion in the study of differential equations, the notion of stability has gained extensive interest among mathematicians. Following this trend, in this paper we discuss the β-Hyers-Ulam-Rassias stability of a nonsingular delay differential system over compact and unbounded intervals. Different types of conditions were established for the sake of proving the main results. An example with specific parameters and matrices, together with a graphical representation, demonstrates the consistency of our theoretical findings. | 1,204 | 2020-01-01T00:00:00.000 | [
"Mathematics"
] |
Limitations in a frataxin knockdown cell model for Friedreich ataxia in a high-throughput drug screen
Background Pharmacological high-throughput screening (HTS) represents a powerful strategy for drug discovery in genetic diseases, particularly when the full spectrum of pathological dysfunctions remains unclear, as in Friedreich ataxia (FRDA). FRDA, the most common recessive ataxia, results from a generalized deficiency of mitochondrial and cytosolic iron-sulfur cluster (ISC) protein activity, due to a partial loss of function of frataxin, a mitochondrial protein proposed to act as an iron chaperone for ISC biosynthesis. In the absence of a measurable catalytic function for frataxin, a cell-based assay is required for HTS. Methods Using a targeted ribozyme strategy in murine fibroblasts, we have developed a cellular model with strongly reduced levels of frataxin. We have used this model to screen the Prestwick Chemical Library, a collection of one thousand off-patent drugs, for potential molecules for FRDA. Results The frataxin-deficient cell lines exhibit a proliferation defect, associated with an ISC enzyme deficit. Using the growth defect as the end-point criterion, we screened the Prestwick Chemical Library. However, no molecule presented a significant and reproducible effect on the proliferation rate of frataxin-deficient cells. Moreover, over numerous passages, the antisense ribozyme fibroblast cell lines revealed an increase in the residual frataxin level, associated with normalization of the ISC enzyme activities. However, the ribozyme cell lines and FRDA patient cells presented an increase in the Mthfd2 transcript, encoding a mitochondrial enzyme previously shown to be upregulated at very early stages of pathogenesis in the cardiac mouse model. Conclusion Although no active hit has been identified, the present study demonstrates the feasibility of using a cell-based approach to HTS for FRDA. Furthermore, it highlights the difficulty of developing a stable frataxin-deficient cell model, an essential condition for productive HTS in the future.
Background
In the past 20 years, many genes involved in different rare genetic disorders have been identified. However, many causative genes encode proteins of unknown or partially known function, with no predictable catalytic sites enabling in vitro drug-target modeling. This makes drug design and identification a difficult task for these orphan diseases. However, new strategies based on automated techniques for high-throughput screening (HTS) have been developed, enabling cell-based assays to identify drug candidates.
Friedreich's ataxia (FRDA), the most common autosomal recessive ataxia, associating spinocerebellar ataxia and cardiomyopathy [1,2], is most often due to a (GAA)_n repeat expansion within the first intron of the gene encoding the mitochondrial protein frataxin [3,4]. This frequent mutation leads to a severely reduced level of frataxin as a consequence of transcriptional silencing, either through heterochromatin formation or through the formation of a triple helix [5-7]. Although much progress has been made in understanding the physiopathology of FRDA, the exact role of the frataxin protein is still unclear. Early studies showed iron deposits in cardiac tissue of FRDA patients [8] and in the yeast strain deleted for frataxin (ΔYFH1), thereby linking impaired iron homeostasis to the disease [9]. This led to the hypothesis that elevated levels of mitochondrial iron, as a consequence of frataxin deficiency, could generate cell-damaging superoxide and hydroxyl radicals through the Fenton reaction. In support of this, several studies have suggested increased levels of oxidative stress in patients [10-13], as well as an increased sensitivity to oxidative stress in FRDA patient cells or the ΔYFH1 yeast model [14-18]. However, more recent studies have found no evidence of increased oxidative damage in FRDA patients [19-21]. Moreover, experimental data from the conditional FRDA mouse models demonstrated that increased superoxide production could not by itself explain the FRDA pathology [22] and that the mitochondrial iron accumulation is a late event in the disease [23]. In different mouse and cellular models, the primary biochemical event is the impaired function of iron-sulfur cluster (ISC) proteins such as the aconitases and respiratory chain complexes I-III [23-26]. The role of frataxin as a mitochondrial iron chaperone for ISC biogenesis is now widely accepted. In addition to the severe alteration of mitochondrial and extramitochondrial ISC proteins in frataxin-deficient yeast, mice or human cells [23,27-30], reconstitutional and in vivo studies demonstrate that the yeast frataxin homolog Yfh1 is required, although not essential, for ISC biosynthesis [27,29]. Furthermore, frataxin has been demonstrated to interact with the ISC biosynthesis scaffold complex IscU/Nfs1/ISD11 [31-35].
Several therapeutic strategies for FRDA have been developed based on the potential implication of these different pathways in the pathogenesis. Some pharmacological compounds such as antioxidants ( [36] for review) or iron chelators [37][38][39] have shown promise in improving some of the symptoms of the disease. Recently, new therapeutic strategies have been developed to address the frataxin deficiency itself: pharmacological compounds increasing frataxin protein levels (such as recombinant human erythropoietin) or reversing frataxin gene silencing (such as histone deacetylase inhibitors) have been successfully tested in clinical or preclinical trials [40,41]. However there is currently no effective pharmacological treatment available that would slow down the neurological progression of the disease in affected FRDA patients.
The development of a cellular model that accurately reproduces the major aspects of the pathogenesis is a prerequisite for any pharmacological screen. To date, there is no appropriate mammalian cell model for FRDA. Indeed, no readily available patient tissue or cell line spontaneously expresses the generalized ISC enzyme deficiency, and exogenous stress conditions have to be used in order to reveal a differential phenotype relative to control fibroblasts [14,17,18,42]. Moreover, the reproducibility and relevance of the results obtained with such systems have been contested [10]. More recently, RNA interference strategies have been developed to reproduce frataxin deficiency in mammalian cell lines [16,25,43-47]. Both transient and stable frataxin silencing to undetectable levels in HeLa cells lead to a significant reduction of cell growth and of the activities of ISC proteins [45,47]. Although it seems evident that transient frataxin silencing is not suitable for HTS experiments, it is unclear whether the stable frataxin silencing clone in HeLa cells is an appropriate model for HTS experiments, as the clone is reported to have roundish and grainy cells that easily detach from the plate [47]. The RNAi models have been studied to unravel consequences of frataxin deficiency, but no HTS on an FRDA cell model has yet been published.
In this study, we have developed and characterized a cellular model with partial frataxin deficiency using a targeted ribozyme strategy. This model displayed a specific ISC deficit, faithfully and spontaneously reproducing a key feature of the human disease. The growth delay of the frataxin-deficient clone was used as a quantifiable parameter to screen the Prestwick Chemical Library for potential drug candidates. However, the absence of a confirmed active hit and the instability of the cellular model illustrate the difficulty of identifying drug candidates in a small compound library and of accurately replicating the FRDA pathogenesis in a long-term controlled cellular model.
Complementary oligonucleotides that encode the antisense sequence, including the 24 highly conserved nucleotides of hammerhead ribozymes (underlined in the sequences below) flanked by 24 and 15 nucleotides of murine frataxin sequence in exon 2 for R2, and by 19 and 20 nucleotides of murine frataxin sequence in 3'-UTR at the end of exon 5 for R5, were synthesized and annealed at 42°C. The sequences of the oligonucleotides were as follows: , which include four mismatches in the two frataxin complementary sequences of exon 2 (bold in the primer sequences). The resulting duplexes were ligated into the EcoRI and SpeI sites of pZeoU1EcoSpe to create pZeoFxnR2 and pZeoFxnR2 m . All ligation junctions were sequenced to verify the identity and orientation of the insert.
Stable transfection of Frda L2+/L- cells
Murine fibroblast cell lines derived from mice carrying the wild-type (Frda +/+), conditional (Frda +/L3) or compound heterozygous deleted and conditional (Frda L3/L-) frataxin alleles [23] were established using the primary-explant technique [50], and then immortalized by transfection with a Large Antigen T construct [51] using the Fugene 6 Transfection Reagent kit (Roche, Indianapolis, Indiana), according to the manufacturer's protocol. As the mice also expressed an inducible recombinase (Cre-ER T), the deletion of the neomycin resistance cassette was obtained by tamoxifen treatment. The resulting cell line will be noted Frda L2+/L- in the text.
Frda L2+/L- immortalized cells were transfected with linearized pZeoFxnR2 or pZeoFxnR2 m. Stably transfected fibroblasts were grown in DMEM medium (Sigma, Saint Louis, Missouri) with 10% fetal calf serum and 50 μg/ml gentamycin, supplemented with 250 μg/ml Zeocin (InvivoGen, San Diego, California). Zeocin selection was maintained for one month. Monoclonal cell lines were obtained by dilution cloning. Aliquots of the cell line were frozen for cryopreservation in liquid nitrogen, and a new aliquot was used for each screening experiment.
Quantitative RT-PCR (Q-RT-PCR)
Expression levels of murine genes (Fxn, Mthfd2, Hprt) were determined by Q-RT-PCR as previously described [22,23]. The following primers were used for amplification of human methylenetetrahydrofolate dehydrogenase 2 (MTHFD2) and 18S ribosome in the same conditions:
Measurement of cell proliferation rates
Fibroblasts were plated in 6-well plates at day 1. After cell detachment using trypsin, cell densities were determined daily by visual counting with a hemocytometer. Four counts were performed per well.
Biochemical analyses
Cells were harvested in PBS and the dry pellet was immediately frozen in liquid nitrogen. The activities of the respiratory chain enzyme complexes succinate cytochrome c reductase (SCCR) and cytochrome c oxidase (COX), the mitochondrial and cytosolic aconitases (Aco), and isocitrate dehydrogenase (IDH) as an internal standard were measured spectrophotometrically as previously described [23,53].
Miniaturization and screening
The Prestwick Chemical Library (Illkirch, France), which consists of 1120 drugs and bioactive natural compounds approved by the Food and Drug Administration (FDA), was screened on both the R2C1 and R2 m cell lines in 96-well format. Chemical compounds (~5 mM) dissolved in dimethyl sulfoxide (DMSO) were stored at -20°C in 96-well plates. Just before treatment, compounds were diluted with cell culture medium (DMEM medium with 10% fetal calf serum and 50 μg/ml gentamycin) in 96-well plates.
Fresh R2C1 and R2 m cell aliquots were thawed for each screening experiment. At the time of passage, R2C1 and R2 m cell suspensions were prepared in order to be seeded into 96-well plates (Greiner Bio-one, France, ref 655090) using a Biomek 2000 Laboratory Automation Workstation (Beckman Coulter inc) (100 μl/well, 10,000 cells/ ml).
After sedimentation for 30 minutes, test compounds were added to the wells (100 μl per well, one compound per well) to result in a final concentration of 25 μM. Vehicle control wells contained 0.5% DMSO alone (columns 1 and 12 of each plate). At this DMSO concentration we did not detect any side effect on cells.
Cell proliferation was assessed 72 hours after cell treatment using the commercial CellTiter-Glo Luminescent Cell Viability Assay (Promega, Madison, WI), which generates a luminescent signal directly proportional to the amount of ATP present in metabolically active cells. After incubation, 100 μl of medium was removed and replaced by 100 μl of Promega CellTiter-Glo reagent in each well using the Biomek 2000 workstation. Plates were shaken for 5 min on a plate shaker before reading luminescence on a Victor 3 plate reader (PerkinElmer, Norwalk, CT). Luminescence in each treated well was compared to the mean signal obtained in non-treated wells (columns 1 and 12). Positive hits were designated as any compound giving a luminescent signal more than three standard deviations above that of untreated control wells. The luminescence ratio between the R2 m and R2C1 cell lines was also calculated, to check the reproducibility of the proliferation delay in the frataxin-deficient clone during the experiments (when calculated for non-treated cells) and to check the specificity of drug action (when calculated for treated cells).
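The hit-calling rule reduces to a per-plate threshold on the control statistics. A sketch on one simulated 96-well plate; all numbers are placeholders, not screening data:

```python
import numpy as np

rng = np.random.default_rng(2)

# One 96-well plate of luminescence readings (arbitrary units, simulated).
plate = rng.normal(1000.0, 60.0, size=(8, 12))
plate[:, 5] += 400.0            # pretend column 6 held an active compound

controls = np.concatenate([plate[:, 0], plate[:, 11]])  # columns 1 and 12: DMSO
mu, sd = controls.mean(), controls.std(ddof=1)

# Hit rule from the screen: signal more than three SDs above untreated controls.
threshold = mu + 3.0 * sd
for r, c in np.argwhere(plate > threshold):
    if c not in (0, 11):        # skip the control columns themselves
        print(f"well {chr(65 + r)}{c + 1}: {plate[r, c]:.0f} "
              f"(threshold {threshold:.0f})")
```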
Partial loss of frataxin leads to a proliferation defect in frataxin ribozyme cell lines
To generate viable FRDA cellular models with reduced frataxin levels, we used a ribozyme antisense strategy in cells derived from mice carrying the conditional allele [23]. Two ribozymes, one targeted against exon 2 of the murine frataxin mRNA and one against the 3'-UTR (exon 5), were constructed in the U1snRNA-based ribozyme vector pZeoSV [49]. Immortalized compound heterozygous Frda L2+/L- cells were transfected with either a frataxin-specific ribozyme (pZeoFxnR2 or pZeoFxnR5) or a control construct bearing mutant frataxin sequences (pZeoFxnR2 m). The transfected cell lines were subsequently brought through the identical selection process to generate polyclonal and monoclonal colonies of stable transfectants. Two clones were obtained with pZeoFxnR2 (R2C1 and R2C2) and four with pZeoFxnR5 (R5C1 to R5C4). The amount of frataxin was assayed by Q-RT-PCR and western blot. A better frataxin knockdown was obtained with the pZeoFxnR2 clones compared to the R5 clones (Fig. 1). Clones transfected with the mutant pZeoFxnR2 m showed no significant difference compared to non-transfected Frda L2+/L- cells (data not shown). For subsequent experiments, we selected the two R2 clones (R2C1 and R2C2), which presented significant decreases in frataxin mRNA and protein expression levels (frataxin residual protein level: 9.4% ± 6.1 (p < 0.001) and 18.0% ± 8.4 (p < 0.001), respectively; frataxin mRNA level: 16.5% ± 4.7 (p < 0.001) and 24.9% ± 4.0 (p < 0.001), respectively) (Fig. 1B).
The frataxin-deficient clones R2C1 and R2C2 consisted of a homogeneous cell population with no gross morphological phenotype or cell death. Electron microscopy analyses confirmed the absence of a morphological phenotype, with normal mitochondria having clear thin cristae and no iron deposits (data not shown). However, growth analysis uncovered a clear proliferation defect in both frataxin-deficient clones (Fig. 2). Indeed, the two clones (R2C1 and R2C2) grew more slowly than Frda L2+/L- non-transfected cells, leading to 90% and 77% reductions of cell number, respectively, after 4 days in culture. In addition, both R2C1 and R2C2 grew more slowly than the R2 m clones (see below).

Figure 1. Depletion of frataxin in ribozyme clones. (A) Representative western blot analyses of protein extracts from different ribozyme clones revealed with a polyclonal frataxin antibody (five independent experiments were performed for the quantitative analysis in B). Normalization was done using β-tubulin as a loading control. (B) Frataxin mRNA (white) and protein (grey) levels in antisense ribozyme fibroblast clones R2C1, R2C2, R5C1 and R5C2, compared to the Frda L2+/L- non-transfected cell line. Three quantitative RT-PCR analyses were performed on whole cellular RNA extracts, comparing frataxin expression to the reference Hprt gene. *p ≤ 0.05; ***p ≤ 0.005.
Ribozyme cell models have a significant decrease in ISC protein activities
Frataxin deficiency in FRDA patients and in model organisms (mice, drosophila and yeast) leads to a specific deficit in ISC protein activities [23,30,54]. The enzymatic activities of the mitochondrial respiratory chain succinate cytochrome c reductase (SCCR) and cytochrome c oxidase (COX), the tricarboxylic acid cycle enzymes aconitases (Aco), and isocitrate dehydrogenase (IDH) were measured spectrophotometrically in the frataxin-deficient clones compared to Frda L2+/L- non-transfected cells. Interestingly, the two ISC enzymes (SCCR and Aco) showed decreased activity in the frataxin-deficient clones (Fig. 3), with a SCCR/COX ratio of 1.56 ± 0.23 and an Aco/IDH ratio of 1.94 ± 0.16 in the frataxin-deficient clones, while the SCCR/COX and Aco/IDH ratios were 1.97 ± 0.09 and 3.66 ± 0.34 in the control cells, respectively (Fig. 3). Moreover, these data parallel previous results from both FRDA patients and mouse models suggesting that the aconitases are more sensitive to frataxin deficiency than SCCR: we found a 45% decrease in aconitase activity with only a 19% decrease in SCCR activity in the ribozyme clones.
Medium-scale pharmacological screen of the ribozyme cell model
The proliferation delay of frataxin-deficient clones is an assay readout that is amenable to downscaling to 96-well plate format for easy automation. Set-up experiments were performed in order to select the best mutant cell line (R2C1 or R2C2), the seeding concentration (100 to 1,000 cells per well) and the time of culture before reading (3 or 4 days). Four different methods were tested to measure cell proliferation, based on nucleic acid detection (CyQUANT Cell Proliferation Assay, Molecular Probes), ATP production (CellTiter-Glo Luminescent Cell Viability Assay, Promega) or reduction of tetrazolium salt (MTT test and CellTiter 96 AQueous Cell Proliferation Assay, Promega). The CellTiter-Glo assay showed the best results, with a growth ratio (R2 m control cells/R2C1 frataxin-deficient cells) of 1.87 (corresponding to a 47% decrease in growth rate for the frataxin-deficient clone) and the smallest standard deviation after 3 days. No major decantation effect or edge effect was detected. Due to contact inhibition of the cells occurring when confluence is reached in 96-well plates, the optimal time for output readings was 72 hrs after seeding.
Using the established screening conditions, the Prestwick Chemical Library, a collection of off-patent drugs and alkaloids, was screened to find candidate pharmacological compounds that could rescue the proliferation defect of the frataxin-deficient cell line. The 1,120 molecules of the library were tested at a single concentration of 25 μM in 96-well format (Fig. 4A) using the CellTiter-Glo luminescent cell viability assay. We performed the screen on both frataxin-deficient (R2C1) and control (R2 m) cell lines. The Z'-factor classically used in the evaluation and validation of HTS assays [55] could not be evaluated in this screen, as positive controls on cell growth were not available. The statistical analysis of luminescence intensity for the frataxin-deficient (R2C1) and control (R2 m) untreated cell lines during the primary screening showed a good growth ratio with a small standard deviation (Fig. 4B). The coefficients of variation (CV) are within the acceptable range for a cell-based assay, i.e. below 10%. The control cell line proliferation rate was 2.10 ± 0.14 times higher when compared to the frataxin-deficient clone. Positive hits were designated as any compound with a cellular growth increment over three standard deviations (increase in luminescence intensity as compared to untreated control wells of the same cell line). Eighty-seven primary hits were identified in the primary screening: fifty molecules increased the proliferation rate in the frataxin-deficient cell line only, forty-six in the control cell line only, and nine molecules had a proliferative effect on both cell lines (Fig. 4C). All eighty-seven primary hits were tested for confirmation at two concentrations (2.5 and 25 μM) in duplicate, using the same hit selection criteria. Eighteen hits were retained with a proliferative though minor effect, and were followed up in a dose-response analysis. Each compound was tested at five concentrations (1, 3, 10, 30 and 100 μM) with four wells per concentration. However, at this stage, no molecule presented a significant and reproducible strong effect on the proliferation rate of the frataxin-deficient cell line (Fig. 4A).

Figure 2. Frataxin depletion impaired cell growth. Growth curves of the frataxin-deficient clones R2C1 and R2C2 compared to the Frda L2+/L- non-transfected cell line.

Figure 3. Frataxin depletion impaired ISC enzyme activities. ISC protein activities in frataxin-deficient R2 clones (R2C1 and R2C2) and the non-transfected control cell line. As the data were equivalent, data of both frataxin-deficient clones were pooled for representation. SCCR (succinate cytochrome c reductase, respiratory chain complex II) and aconitases (Aco) are ISC proteins, while COX (cytochrome c oxidase, respiratory chain complex IV) and IDH (isocitrate dehydrogenase) do not contain an ISC.
Instability of the frataxin deficiency in the ribozyme cell lines
It is important to note that the R2C1 ribozyme cell line underwent numerous passages over a period of two years during the entire model set-up and screening development. We were therefore interested in testing the stability of the frataxin deficiency on different aliquots throughout the screening period. Despite the presence of the ribozyme (both at the genomic and RNA level, data not shown), Q-RT-PCR demonstrated that the frataxin mRNA level had increased by 1.8-fold (with basal frataxin expression rising from 16 ± 5% to 29 ± 7% of the wild-type level). This "adjusted level" of frataxin deficiency was still associated with a deficit in the proliferation rate (by manual counting: a 1.97 ratio at day 3 and a 65% reduction of cell number at day 4), but the ISC protein activity (SCCR and aconitases) deficit was no longer present. The R2C1 cell line therefore became similar to FRDA patients' fibroblasts, in which no ISC enzyme deficiency can be found. However, Q-RT-PCR demonstrated that the "adjusted level" R2C1 cell line, in addition to the proliferation deficit, presented an increase of the Mthfd2 transcript (Fig. 5A), a transcriptional change associated with frataxin deficiency in mouse models [22] and FRDA fibroblasts (Fig. 5B).
Discussion
Cellular models are of great value for drug screening strategies and often represent an essential tool for both investigating molecular mechanisms of genetic diseases and identifying pharmacologically active compounds. We have used an antisense ribozyme strategy to establish a cell line with reduced frataxin level. Two clones presented a significant decrease in frataxin with a clear decrease in cell proliferation. Moreover, these frataxin deficient cells initially showed a deficit in the activity of two ISC enzymes (45% decrease in aconitase activity and 19% decrease in SCCR activity), a specific and characteristic feature of the human disease.
Our results on the ribozyme cell lines are in agreement with data observed in both transient and stable frataxin silencing models in HeLa cells obtained by RNAi strategies [45,47]. Upon frataxin depletion below ~20%, the three models display reduction of growth and decreased activity of the ISC proteins, specifically aconitases and succinate dehydrogenase. However, one notable difference is the altered morphology observed in the stable RNAi HeLa cell line, with roundish and grainy cells that easily detached from the plate [47]. This abnormal morphology is most likely due to the strongly reduced level of frataxin (almost undetectable) in the stable RNAi HeLa model, compared to the normal phenotype of cells presenting approximately 10 to 20% of residual frataxin ([45] and this manuscript). Indeed, no clone with a frataxin level below 9% was found in the present study. In agreement with these results, we have recently shown that complete absence of frataxin in murine fibroblasts inhibits cell division and leads to cell death (unpublished results). These observations suggest that a threshold level of frataxin is necessary for cell proliferation and survival, consistent with the early embryonic lethality of the classical knockout mouse model [56].

Figure 5. Mthfd2 expression level in ribozyme clones and in FRDA patients' fibroblasts.
While some candidate-based pharmacological compounds such as antioxidants ([36] for review) or iron chelators [37] have shown some promising results in clinical trials in providing protection against certain aspects of the disease, there is still no effective therapy for FRDA. Moreover, clinical drug trials are difficult to organize for a disease like FRDA, due to the slowly progressive and chronic nature of the disease, the very large individual variations (due in part to the unstable expansion mutation), the low frequency of the disease, and the ethical issues raised by double-blind testing. Cellular models of the human disease circumvent all these problems, and should allow faster and more reliable results to be obtained on the efficacy of new compounds in pre-clinical trials. We have therefore used our frataxin-deficient cell line to perform a pharmacological robotized screening. Indeed, HTS has proved to be a successful strategy for drug discovery, especially when the molecular cause of the pathological dysfunction is too unclear for rational design. To circumvent the limitations of screening large libraries, we chose to study a limited number of already available drug molecules, in accordance with the "selective optimization of side activities" (SOSA) approach [57]. Indeed, the Prestwick Chemical Library consists of 1,120 compounds that are structurally and therapeutically very diverse, with known safety and bioavailability in humans. Despite the identification of 87 primary hits, 18 of which were confirmed at the secondary step, no molecule of the library revealed a significant and reproducible positive effect on cell growth of the frataxin-deficient clone in a dose-response curve. This failure may be due to the rather nonspecific assay readout chosen to follow the efficiency of the molecules. Indeed, cell proliferation is under the control of multiple regulatory pathways, which could lead to a high rate of false positive and negative results. The activities of ISC proteins would probably be a more suitable end-point for FRDA screening, but these enzymatic measurements are not available for automated quantification. A second explanation for this unproductive screen could be the rather low number of tested molecules. Knowing that such screening strategies typically retrieve between 0.1 and 1% of hits, it is statistically not surprising not to find a hit in a 1,120-compound library. The screening of a larger compound library could allow active compounds to be found. In spite of the absence of a positive hit, this first cell-based automated screen on an FRDA cell model demonstrates the feasibility of the strategy.
The primary screen also identified one hundred and fifty-one molecules decreasing ATP production in the frataxin-deficient and/or control cell lines. Two molecules specifically reduced the proliferation rate of the frataxin-deficient clone. Although at first sight the mechanism of action of both molecules (one expectorant and one vasodilator) is hard to link to the current knowledge on frataxin, further investigation may uncover new pathways linked to frataxin.
Long-term maintenance of the R2C1 frataxin-deficient clone revealed a cellular adaptation to frataxin deficiency. Despite the persistence of the ribozyme construct, the frataxin mRNA level increased from 16 to 29% through an unknown compensatory mechanism, either an increase in transcription or stability of the endogenous Fxn transcript, or a reduction of the ribozyme efficiency. This slight increase in frataxin level was sufficient to restore normal ISC enzyme activities, whereas the proliferative deficit of the frataxin-deficient clone was still present. Furthermore, the Mthfd2 transcript, encoding a mitochondrial enzyme involved in the reduction of folate cofactors, was increased in the R2C1 clone, as previously observed at very early stages of pathogenesis in the cardiac mouse models [22] and also in FRDA fibroblasts (this manuscript). These data demonstrate the difficulty of obtaining a stable cell line deficient in frataxin and could explain the paucity of long-term stable models, which are necessary for pharmacological screening. As the long-term persistence of frataxin deficiency was not tested in the stable RNAi model [47], we do not know whether this adaptive mechanism is specific to our antisense strategy or is a more general feature of frataxin-deficient cell lines.
Conclusion
The present work demonstrates the feasibility of a cell-based high-throughput screening assay in Friedreich ataxia but highlights the necessity of developing a long-term robust cellular model that reproduces the physiopathology of the disease. Furthermore, the readout for the screen should be as specific as possible, within technical limitations. | 6,119.4 | 2009-08-24T00:00:00.000 | [
"Biology",
"Medicine"
] |
Diffractive rho + lepton pair production at an electron-ion collider
In high energy electron-ion colliders, a new way to probe nucleon structure becomes available through diffractive reactions, where the incident particle produces a very energetic almost forward particle. QCD describes these reactions as due to the exchange of a Pomeron which may be perturbatively described as a dressed two-gluon state, provided a hard scale allows the factorization of the amplitude in terms of two impact factors convoluted with a Pomeron propagator. We consider here a process where such a description allows to access hadronic structure in terms of the generalized parton distributions, namely the electroproduction of a forward $\rho$ meson and a timelike deeply virtual photon, separated by a large rapidity gap. We explore the dependence of the cross section on the kinematic variables and study the dependence on the non-perturbative inputs (generalized parton distributions, distribution amplitude). Our leading order studies show the cross section is mainly sensitive to the GPD model input, but the small size of the cross sections could prohibit straightforward analysis of this process at planned facilities.
I. INTRODUCTION
The advent of high-luminosity, high-energy electron-ion colliders [1-5] will allow us to open a new chapter in the quest for the understanding of quark and gluon confinement in hadrons, through a precise tomography of the nucleon enabled, among various tools, by the extraction of quark and gluon generalized parton distributions (GPDs) [6,7]. Besides the famous exclusive processes which have been studied in the last 20 years, namely deep electroproduction of a photon (deeply virtual Compton scattering, DVCS) or a meson (deeply virtual meson production, DVMP) and their timelike related processes, a new class of processes is worth studying which adds the merits of diffractive processes. Such diffractive processes have been shown to constitute a sizeable part of the total cross section at very high energy.
In a hard regime characterized by a hard scale Q, a diffractive reaction is seen as the scattering of a colorless dipole of small transverse size [$O(1/Q)$] on a nuclear target. This justifies the use of perturbative QCD methods for the description of the process. In the Regge-inspired $k_T$-factorization approach, which is known to be applicable at high energy (total invariant mass $W \gg Q \gg \Lambda_{\rm QCD}$), one writes the scattering amplitude in terms of two impact factors with, at leading order, the exchange of two "Reggeized" gluons in the t-channel. The Born amplitude may be calculated using a two-gluon exchange (see the diagram of Fig. 1), while higher-order QCD corrections are taken into account by applying Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution techniques [8][9][10].
In a previous paper [11], it was thus proposed to replace, in the timelike Compton scattering (TCS) reaction [12]

$$\gamma(r,\varepsilon) + N(p_1,\lambda_1) \to \gamma^*(q',\varepsilon') + N'(p_2,\lambda_2), \qquad (1)$$

the incoming photon by a Pomeron (P), yielding the subprocess (2). Here, $\varepsilon'$, $\varepsilon$ denote the polarization states of the photons, while $\lambda_1$, $\lambda_2$ denote those of the nucleons. Momentum labels follow those of Fig. 1. As Fig. 1 shows, this subprocess (2) may be extracted from the study of the diffractive process in the adequate kinematical domain (see details below), where the overall process is calculated in the $k_T$-factorization approach, and $r \equiv q - q_\rho$. Collinear QCD factorization is then applied to describe (i) the production of the Pomeron by an impact factor in which the ρ meson enters through its distribution amplitude (DA) and (ii) the Pomeron-nucleon interaction, with the hadronic response parametrized by nonperturbative GPDs, as depicted further down in Fig. 2.
In this paper, we develop the phenomenological study of this diffractive ρ + lepton pair production process within the kinematical conditions expected at the future electron-ion colliders. In Sec. II, we describe the kinematics in which we expect our framework to be valid. Section III collects the necessary ingredients for the calculation of the scattering amplitude and the resulting expressions. Section IV presents our results for the cross section calculations. Finally, Sec. V sums up our conclusions.
II. KINEMATICS
We study the process (3) at large squared photon-nucleon energy $s_{\gamma N} = (q + p_1)^2$, in the forward limit where the ρ meson flies in the same direction as the virtual initial photon, and in the kinematical regime of a large rapidity gap between the ρ⁰ and the final-state virtual photon. We define and decompose momenta on a Sudakov basis, with $p$ and $n$ the light-cone vectors, and write the particle momenta in terms of $M$ and $m_\rho$, the masses of the nucleon and of the ρ meson; the total squared center-of-mass energy of the $\gamma^* N$ system follows from this decomposition. Neglecting masses and $\Delta_\perp$, and working in the large-rapidity-gap regime, the squared subenergies $s_1$ and $s_2$ take simple forms. Momentum conservation along $p^\mu$, combined with the expressions for $s_1$ and $s_2$, fixes the skewness variable ξ in terms of $Q'^2$ and of the squared subenergy $s_2$ (we neglect here $Q^2$ compared to $s_{\gamma N}$), which can be compared to a similar expression for ξ in TCS [12].

FIG. 1. The diffractive ρ + lepton pair amplitude is written in the $k_T$-factorization approach as the convolution of two impact factors $\Phi_1$ and $\Phi_2$ and the Pomeron propagator, which at lowest order is a two-gluon exchange. Labels between brackets are the particle four-momenta.
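The explicit Sudakov expressions did not survive extraction above; as a point of reference, a generic light-cone (Sudakov) parametrization of the kind used in such $k_T$-factorization analyses is sketched below. The normalization $2\,p\cdot n = s$ and the labels $\alpha_k$, $\beta_k$ are illustrative conventions, not necessarily those of the original paper.

```latex
% Generic light-cone (Sudakov) basis: two null vectors p and n with 2 p.n = s.
% Any four-momentum k is decomposed on (p, n, transverse plane).
\begin{align}
  p^2 = n^2 = 0, &\qquad 2\, p \cdot n = s, \\
  k^\mu = \alpha_k\, p^\mu &+ \beta_k\, n^\mu + k_\perp^\mu,
  \qquad k_\perp \cdot p = k_\perp \cdot n = 0, \\
  k^2 &= \alpha_k\, \beta_k\, s - \vec{k}_\perp^{\,2}.
\end{align}
```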
III. THE SCATTERING AMPLITUDE
In the kinematical regime described above, it is legitimate to calculate the scattering amplitude in the following way [11]. Using the $k_T$-factorization procedure, we write the amplitude as a two-dimensional integral over the transverse components of the exchanged gluon momenta. Here the gluon propagator numerators are replaced by $-g^{\mu\nu} \to -\frac{2\,p^\mu n^\nu}{s}$, with μ (resp. ν) acting on the upper (resp. lower) impact factor. The impact factors $\Phi_1$ and $\Phi_2$ are calculated (see Fig. 2) within the collinear factorization framework. Impact factor $\Phi_1$ was calculated years ago at leading order [13,14] and is now known at next-to-leading order [15]. Its nonperturbative part is the longitudinal ρ meson distribution amplitude (DA) defined, at leading twist 2, by the matrix element [16] $\langle 0|\bar{u}(0)\gamma^\mu u(x)|\rho^0(p_\rho,\varepsilon_{\rho L})\rangle$ and by a similar expression with opposite sign for d quarks. $\phi(z)$ will be parametrized either by its asymptotic form or by a holographic shape [17]. Impact factor $\Phi_2$ was calculated in [11] as a convolution of the C-odd quark generalized parton distributions (GPDs) and a leading-order coefficient function which selects the so-called ERBL (Efremov-Radyushkin-Brodsky-Lepage) region $|x| \le |\xi|$ of the GPD domain of definition [6,7]. There is no contribution from the axial or transversity quark GPDs and none from the gluon GPDs [11].
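Since the explicit integral did not survive extraction above, the display below shows the generic Born-level form that such $k_T$-factorization amplitudes take; the overall normalization and the argument structure of the impact factors are illustrative assumptions rather than the exact expression of Ref. [11].

```latex
% Generic Born-level k_T-factorization form: the two impact factors are
% convoluted over the transverse momentum k of one exchanged gluon,
% with Delta_perp the transverse part of the t-channel momentum transfer.
\begin{equation}
  \mathcal{A} \;\propto\; i s \int
  \frac{d^2 \vec{k}}{\vec{k}^{\,2}\, (\vec{k}-\vec{\Delta}_\perp)^{2}}\;
  \Phi_1(\vec{k},\vec{\Delta}_\perp)\,
  \Phi_2(\vec{k},\vec{\Delta}_\perp).
\end{equation}
```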
IV. CROSS SECTIONS
Using the amplitude calculated in Ref. [11], we compute the scattering cross sections for the process (3). The unpolarized differential cross section for (virtual) photoproduction, Eq. (20), is obtained by summing and averaging over all polarizations, with $\Omega^*_l$ the solid angle of the final lepton in the lepton-pair center-of-mass frame. Expressions for the Compton form factors (CFFs) $H_d$, $E_d$ are found in Ref. [11]. Let us note that a distinctive property of our description of the process under study is the angular distribution of the final-state leptons. Since the lepton pair originates from a longitudinally polarized virtual photon, this angular distribution takes a characteristic form in the center-of-mass system of the lepton pair, which may help to distinguish our process from the production of a misidentified π⁺π⁻ meson pair. In the results shown here, we integrate over this lepton solid angle. In some panels, we show cross sections differential in ξ instead of $s_2$, using the corresponding Jacobian. Note that the $s_{\gamma N}$ dependence in the first equality of Eq. (20) cancels with an identical factor in the matrix element [11], which results in a photoproduction cross section that is energy independent; see the second equality of Eq. (20).
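For orientation, the decay of a longitudinally polarized virtual photon into a massless lepton pair follows the standard angular distribution below in the pair center-of-mass frame; this is quoted as the textbook form, not as the exact normalization used in Eq. (20).

```latex
% Decay of a longitudinally polarized virtual photon into a (massless)
% lepton pair, in the dilepton center-of-mass frame:
\begin{equation}
  \frac{dN}{d\cos\theta^*_\ell} \;\propto\; \sin^2\theta^*_\ell ,
\end{equation}
% to be contrasted with the 1 + cos^2(theta*) shape produced by a
% transversely polarized photon.
```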
In the figures below, we show our results for the photoproduction cross section of Eq. (20) at the minimal value of $-t_\rho$ and at some accessible (small) value of $-t_N$. Our collinear framework does not allow us to calculate the $t_\rho$-dependence of the cross section. Phenomenological studies of diffractive electroproduction at HERA have shown [18,19] that this dependence is very steep and may be parametrized by an exponential slope, with $K \approx 6~\mathrm{GeV}^{-2}$ for a ρN final state and $K \approx 2~\mathrm{GeV}^{-2}$ for diffractive dissociation, i.e., a ρX final state. Our case may be seen as intermediate between these two reactions, so one may take these two dependences as the range of an educated guess. Electroproduction cross sections in the approximations used here are obtained with only the longitudinal polarization of the incoming virtual photon contributing.

In Figs. 3 and 4, we show photoproduction cross sections comparing three nucleon GPD parametrizations, namely GK16 [20,21], VGG [22] and MMS [23]. The GPD parametrizations were interfaced through the PARTONS framework [24]. Unless mentioned otherwise, all calculations use the asymptotic DA for the ρ vertex. We show several values of $s_2$; the minimum $s_2$ value depends on $t_N$, with a larger $|t_N|$ allowing smaller values of $s_2$ (which yield larger cross sections when all other variables are kept fixed). We plot the differential photoproduction cross section of Eq. (20) as a function of $Q'^2$ or ξ; the range in the former maps onto a range in the latter when all other kinematic variables are fixed, see Eq. (15). We consider a minimum $Q'^2 = 2~\mathrm{GeV}^2$ to have a sufficient hard scale in the GPD diagram. The maximum $Q'^2$ value is determined by the maximum ξ value allowed by the choice of $t_N$. Figure 3 shows results for $t_N = -0.1~\mathrm{GeV}^2$ and Fig. 4 for $t_N = -0.2~\mathrm{GeV}^2$. Each figure contains panels for several values of $Q^2$, including values lower than the limit we expect is required to obtain a small-size color dipole in our formalism.

One immediately notices that the cross sections are tiny, even at the kinematics (small $Q^2$, $Q'^2$, small $s_2$, small $-t_N$, $-t_\rho$) that maximize their size. The cross section drops quite quickly with $Q'^2$, as expected from Eq. (20), with the steepest drop occurring at the lowest $Q'^2$ values. All three GPD parametrizations result in curves with similar features, with variations in magnitude up to a factor of ∼2, GK16 exhibiting the slowest drop with $Q'^2$. In Fig. 5, we compare free proton and neutron cross sections using the GK16 parametrization. Neutron cross sections are significantly smaller than their proton counterparts and drop off more steeply with increasing ξ or $Q'^2$. In Fig. 6, we plot the $Q^2$ dependence at fixed $Q'^2 = 3~\mathrm{GeV}^2$. We again extend the $Q^2$ range to values below those for which our formalism should be valid, and observe that the cross section drops by more than two decades over one decade of $Q^2$, with the steepest drop at the smallest $Q^2$ values. Figure 7 illustrates that the imaginary part of the Compton form factors entering the amplitude, which originates from the imaginary part of Eq. (18), dominates the cross section, contributing 80% or more of the total strength. Two choices of DA parametrization (asymptotic and holographic) are compared in Fig. 8; the difference between the two is 20% or smaller, far smaller than the difference between the GPD parametrizations. Electroproduction cross sections for the high-energy configuration of the planned U.S. electron-ion collider at Brookhaven National Laboratory are shown in Fig. 9 as a function of y and $Q^2$, and in Fig. 10 as a function of y and $Q'^2$. We observe that minimizing any of these variables maximizes the cross sections, but the rates are still small, showing limited promise at the moment for detailed studies of this process at an electron-ion collider. Note, however, the remarks regarding NLO corrections in the concluding section.
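Returning to the $t_\rho$ dependence quoted above, the steep HERA-measured behavior is conventionally written as an exponential slope; the form below is the standard parametrization and is given here as an assumption about the elided formula, with K the slope parameter cited in the text.

```latex
% Standard exponential t-slope used to describe diffractive rho production
% at HERA; K is the slope parameter quoted in the text.
\begin{equation}
  \frac{d\sigma}{dt_\rho} \;\propto\; e^{-K\, |t_\rho|},
  \qquad K \simeq 6~\mathrm{GeV}^{-2}\ (\rho N),
  \qquad K \simeq 2~\mathrm{GeV}^{-2}\ (\rho X).
\end{equation}
```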
V. CONCLUSION
Our studies show that the cross sections at leading order for the diffractive ρ + dilepton production process are quite small, which can make a straightforward analysis of the process at the luminosities of planned electron-ion collider facilities very difficult. In terms of nonperturbative inputs, the calculations show much greater sensitivity to the nucleon GPD input than to the ρ meson DA. This GPD model sensitivity is due to the quite unique fact (see however [25][26][27] for a similar dependence) that the amplitude depends only on the GPDs' behavior in the ERBL region, which is largely unconstrained by current analyses of DVCS data. The cross section is dominated by the imaginary part of the Compton form factors and is maximized at small values of the hard scales $Q^2$, $Q'^2$, where higher-order corrections to the formalism would be needed. The small magnitude of our cross sections deters us from studying in detail competing processes such as a quasi-Bethe-Heitler contribution in which the dilepton originates from a virtual photon radiated from the electron line. Our kinematics do not favor such a production process, since a lepton pair emitted from the electron line is likely to lead to a large value of $s_2 \ge s_{\gamma N}$, which is the opposite of the kinematical domain explored here. This quasi-Bethe-Heitler process is thus expected to be significantly suppressed compared to the QCD contribution, in contradistinction to the double DVCS process [28].
The rather small cross section must be blamed on the presence of two hard scales ($Q^2$ and $Q'^2$), each of which plays a crucial role in keeping part of the process controlled by small-size hadronic configurations. This is to be contrasted with the diffractive process [25,27] $eN \to e'\rho\pi N'$ with a large-transverse-momentum ρ meson, where this large transverse momentum was the single large scale controlling the perturbative treatment of both impact factors. The size of the diffractive ρ + dilepton production cross section is more reminiscent of the deep electroproduction of a large-invariant-mass dilepton [29,30] or diphoton [31]. The expected cross sections increase rather quickly at small $Q^2$, which calls for a better understanding of the diffractive ρ + dilepton reaction in the region where the Pomeron becomes soft, while the large mass of the final-state lepton pair still argues for a collinear factorization treatment of the lower impact factor, which probes the nucleon GPDs. We shall tackle this problem in future studies.
The study of NLO QCD effects, both for the Pomeron exchange propagator in the BFKL framework and for the impact factors in the collinear framework, remains to be done. This is not an easy task: although complete NLO studies for quite similar reactions already exist [14,15,32], they differ in the important respect of the timelike vs. spacelike nature of the virtual photon, for which analytic continuation effects are expected to be quite important [33,34]. Although one cannot accurately predict the order of magnitude of this correction without completing the calculation, it is fair to say that such NLO corrections should not exceed a 100% correction to the Born-order amplitude without demanding that a resummation procedure be carried out before a theoretical estimate can reliably be quoted. This would undoubtedly be a very interesting but quite intricate problem. Measuring an anomalously large experimental rate for our process would indeed be valuable information with which to question the validity of the theoretical approach developed here, namely the hybrid framework in which part of the amplitude is treated with $k_T$-factorization while the other part relies on collinear QCD factorization. | 3,826.2 | 2021-03-02T00:00:00.000 | [
"Physics"
] |
Anti-Müllerian Hormone Recruits BMPR-IA in Immature Granulosa Cells
Anti-Müllerian hormone (AMH) is a member of the TGF-β superfamily secreted by the gonads of both sexes. This hormone is primarily known for its role in the regression of the Müllerian ducts in male fetuses. In females, AMH is expressed in granulosa cells of developing follicles. Like other members of the TGF-β superfamily, AMH transduces its signal through two transmembrane serine/threonine kinase receptors, including a well-characterized type II receptor, AMHR-II. The complete signalling pathway of AMH, involving Smad proteins and the type I receptor, is well known in the Müllerian duct and in Sertoli and Leydig cells, but not in granulosa cells. In addition, few AMH target genes have been identified in these cells. Finally, while several co-receptors have been reported for members of the TGF-β superfamily, none have been described for AMH. Here, we have shown that the Repulsive Guidance Molecules (RGMs), co-receptors of the Bone Morphogenetic Proteins (BMPs), are not essential for AMH signalling. We also demonstrated that the main Smad proteins used by AMH in granulosa cells are Smad 1 and Smad 5. As in the other AMH target cells, the most important type I receptor for AMH in these cells is BMPR-IA. Finally, we have identified a new AMH target gene, Id3, which could be involved in the effects of AMH on the differentiation of granulosa cells and of its other target cells.
Introduction
Anti-Müllerian hormone (AMH), also called Müllerian inhibiting substance (MIS), is a member of the TGF-β superfamily. AMH is well known for its role in Müllerian duct regression in male fetuses [1]. In female fetuses, the lack of AMH expression allows the development of the Müllerian ducts into the oviduct, uterus, cervix and the upper part of the vagina. Postnatally, AMH is secreted by granulosa cells (GCs) of small growing follicles (preantral and small antral) [1]. In mice, AMH is a negative regulator of the primordial-to-primary follicle transition [2]. In addition, it decreases the FSH sensitivity of growing follicles [3]. In the clinic, serum AMH is now widely used in oncology and gynecology. Indeed, it is a very useful diagnostic and prognostic tool, as an early indicator of relapse of ovarian GC tumors [4] and as a reliable marker of the ovarian follicular status. The AMH assay is also an important tool to control ovarian hyperstimulation [5]. However, despite the increasing clinical interest in ovarian AMH, little is known about its mechanism of action on GCs.
Members of the TGF-β family signal through a type II transmembrane serine/threonine kinase receptor which forms a complex with a type I serine/threonine kinase receptor [6]. The type II receptor phosphorylates serine and threonine residues of the type I receptor. Once activated, the type I receptor phosphorylates the receptor-regulated Smads (R-Smads), which interact with the common partner Smad4. The Smad complex accumulates in the nucleus and regulates target gene expression [7]. TGF-β and activins activate the TβRI and ActR-IB type I receptors and R-Smads 2 and 3, whereas Bone Morphogenetic Proteins (BMPs) mediate their effects through the ActR-IA, BMPR-IA and BMPR-IB type I receptors and R-Smads 1, 5 or 8. This canonical signalling pathway is regulated at different levels, in particular by co-receptors which amplify or antagonize the action of TGF-β family members.
AMH has a single specific type II receptor (AMHRII, also known as MISRII) [8,9]. Indeed, Amh and Amhr2 inactivation causes the same phenotype in males [10], a complete retention of an ectopic female reproductive tract, indicating that AMHRII is the only type II receptor that transduces the AMH signal [11]. Moreover, in humans, mutations of AMH or AMHRII are involved in the persistence of Müllerian duct derivatives [12]. Regarding the other components of the AMH signalling pathway, it was first shown in different cell lines of gonadal origin that AMH phosphorylates R-Smad 1/5/8, meaning that it uses the ActR-IA, BMPR-IA or BMPR-IB type I receptors to transduce its effects [13]. The disruption of these type I receptors and R-Smads in mice then led to the conclusion that BMPR-IA is the primary type I receptor required for Müllerian duct regression, that ActR-IA is capable of transducing the AMH signal in the absence of BMPR-IA, and that R-Smads 1/5/8 function redundantly [14][15][16]. Similarly, in the immature Sertoli cell line SMAT-1, AMH mediates its signal through BMPR-IA and ActR-IA [17]. In contrast, the type I receptors and R-Smads involved in the effects of AMH on postnatal GCs remain unknown. In addition, to date, no co-receptor has been found for AMH. Because AMH shares its type I receptors and R-Smad proteins with BMPs, it could have the same co-receptors, such as the Repulsive Guidance Molecules (RGMs) [18], which were first shown to induce axonal guidance during neurogenesis [19] and have three isoforms: RGMa, RGMb (Dragon) and RGMc (Hemojuvelin).
To date, only a few AMH target genes have been identified. In males, AMH inhibits Sertoli and Leydig cell differentiation through the repression of several steroidogenic proteins such as P450scc (Cyp11a1), 3β-HSD (Hsd3b1) or P450C17 (Cyp17a1) [17,20,21]. Little is known about the genes involved in the inhibitory effect of AMH on GC differentiation, which hampers the study of the AMH mechanism of action on these cells. Aromatase (Cyp19a1) and the LH receptor (Lhcgr) are down-regulated by AMH in rat and porcine GCs [22]. In human GCs, the FSH receptor (Fshr) is also repressed by AMH [23], which could explain the inhibitory role of this hormone on follicle sensitivity to FSH. Because BMPs share their signalling pathway with AMH and regulate GC differentiation, BMP target genes involved in this process could also be modulated by AMH. This could be the case for the family of Inhibitor of differentiation/Deoxyribonucleic Acid-Binding (Id) genes, which are BMP2 and BMP4 target genes in mice [24]. These genes have recently been shown to be related to the status of GC differentiation [25]. They lack a domain required for DNA binding and act as dominant-negative antagonists of bHLH transcription factors and gene expression in mammals [26].
In this study, we used primary GCs isolated from immature mice to identify target genes and co-receptors potentially implicated in the effects of AMH on GC differentiation, and to define the involvement of the different type I receptors and Smad proteins in this process. We also took advantage of conditional mutant mice for different actors of the AMH signalling pathway to confirm our data. Our studies reveal that Id3 is a new AMH target gene in GCs. In addition, this work indicates that BMPR-IA and Smad 1/5 are the main components of the AMH signalling pathway in GCs.
Ethics Statement
Housing and care, the method of euthanasia and the experimental protocols were conducted in accordance with the recommendations of the French Accreditation of Laboratory Animal Care and in compliance with the NIH Guide for the Care and Use of Laboratory Animals. The animal facility is licensed by the French Ministry of Agriculture (agreement no. C92-023-01). All animal experiments were supervised by Dr. Soazik Jamin (agreement for animal experimentation no. 92-299, delivered by the French Ministry of Agriculture). Animals were sacrificed with CO2. All efforts were made to minimize animal suffering.
Reagents
Recombinant human AMH was produced from the culture medium of Chinese hamster ovary (CHO) cells stably transfected with a human AMH cDNA, purified and cleaved by plasmin (plasmin-cleaved AMH or PC-AMH) as previously described [27]. PC-AMH, subsequently called AMH, was used at a concentration of 8 or 40 nM. BMP2 was a kind gift of Prof. Walter Sebald (University of Würzburg, Germany) and was used at a concentration of 10 nM [28]. TGF-β was purchased from R&D Systems (Lille, France) and used at 1 nM. The Smad1-Gal4, Smad5-Gal4, Smad8-Gal4 and Gal-Luc plasmids were kindly provided by Dr. Azeddine Atfi (Inserm UMR S938, Paris).
Mouse genotyping
Genomic DNA was extracted from tail biopsies using a NucleoSpin Tissue kit (Macherey-Nagel, Hoerdt, France) according to the manufacturer's instructions. The primers used to detect the Acvr1 and Bmpr1a alleles are listed in Table S1. The amplification conditions were 95 °C for 5 min, followed by 35 cycles of 94 °C for 45 s, 58 °C for 45 s and 72 °C for 45 s, with a final extension at 72 °C for 10 min. The amplified PCR fragments were analysed on 2% agarose gels.
Primary cultures of mouse granulosa cells
The preparation of mouse GC primary cultures was adapted from a rat primary GC culture protocol [33]. Immature ovaries from 3-week-old mice were collected in RPMI medium (Invitrogen). They were then exposed to 6.8 mM EGTA, 0.2% BSA in Medium 199 (Invitrogen) for 15 min at 37 °C. After a 5 min centrifugation at 500 rpm, ovaries were placed in a hypertonic solution (0.5 M sucrose, 1.8 mM EGTA, 0.2% BSA) in Medium 199 for 5 min at 37 °C. Three volumes of Medium 199 were added to stop the reaction. After a 5 min centrifugation at 500 rpm, ovaries were placed in DMEM/F-12 medium with 1% Fetal Bovine Serum (FBS, Life Technologies) and GCs were dissociated with a blunt spatula. Cells were pelleted by a 10 min centrifugation at 1300 rpm. The supernatant was discarded and cells were resuspended in DMEM/F-12 without phenol red with 10% FBS. The collected cells were counted in the presence of Trypan blue (0.07%) and seeded in DMEM/F-12 without phenol red, 10% FBS and 1% penicillin/streptomycin (Eurobio, Courtaboeuf, France) at 37 °C in 5% CO2. 36 h after plating, GCs were exposed to AMH, BMP2 or TGF-β in DMEM/F-12 without phenol red, 1% FBS and 1% penicillin/streptomycin (Eurobio) for 24 h for RNA analyses. For protein analyses, GCs were starved in serum-free medium for 1 h prior to a 3 h exposure to AMH, BMP2 or TGF-β.
β-galactosidase and pmaxGFP transfection
Primary GCs seeded in 6-well plates (5×10⁵ cells/well) were transfected using Fugene6 reagent (Roche Diagnostics, Meylan, France) with 1 μg of the β-galactosidase reporter gene or the pmaxGFP vector when cells were 80% confluent. After 24 h, β-galactosidase activity was detected by X-gal staining and the GFP signal was visualized directly.
Immunohistochemistry
Ovaries from 3-week-old C57BL/6J females were collected and fixed in 4% PFA for 4 h at 4 °C. Samples were then washed with PBS (pH 7.4), dehydrated in graded ethanol, embedded in paraffin and cut into 5 μm thick sections. For antigen retrieval, samples were heated in 10 mM citrate buffer (pH 6.0) at 80 °C for 45 min. Thereafter, slides were washed with PBS and blocked with PBS-BSA 10% for 1 h before an overnight incubation with a primary anti-RGMb antibody (1:100 in DAKO buffer, Santa Cruz). Specimens were washed with PBS and incubated with a secondary biotinylated antibody (anti-rabbit, 1:500) for 1 h at room temperature. After washing, slides were incubated for 1 h in ABC reagent (ABC kit, Vectastain, Abcys) and stained with DAB reagent for 5 min.
siRNA gene knockdown
Small interfering RNAs (siRNAs) for Acvr1 (59932), Bmpr1a (160872) and Bmpr1b (60258), as well as a negative control siRNA #1, were obtained from Ambion (Life Technologies), and the Rgmb siRNA (J05553408) was obtained from Dharmacon (Fisher Scientific, Illkirch, France). siRNAs were used at a final concentration of 100 nM. Primary GCs seeded in 6-well plates (5×10⁵ cells/well) were transfected using Oligofectamine reagent (Life Technologies) with 100 nM siRNA when cells were 50% to 80% confluent. The medium was removed after 3 h and replaced by DMEM/F-12 with 1% FBS. The cells were then cultured for 3 h or 24 h in the presence of 8 nM AMH.
siRNA GAPDH-cy3 (4390849, Life Technologies) used to show transfection efficiency was transfected as described above and the fluorescence was analysed by flow cytometry (FACS Calibur, Becton Dickinson).
Luciferase Assay
Primary GCs seeded in 6-well plates (5×10⁵ cells/well) were cotransfected with a luciferase reporter (UAS-luc, 300 ng), an expression construct (Smad-Gal4, 200 ng) [34] and pRLTK (1 ng, Promega) as a control for transfection efficiency. Transfection was performed using FugeneHD according to the manufacturer's instructions (Roche). Cells were subsequently treated for 24 h with AMH (8 nM), washed with PBS and lysed in passive lysis buffer (Promega). All lysates were analysed for Firefly and Renilla luciferase activity according to the manufacturer's instructions (Dual Luciferase kit, Promega). Results are expressed as the percentage of stimulation of Firefly luciferase activity (after normalization to Renilla luciferase activity) in the presence of AMH compared to cells cultured in control medium.
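To make the normalization described above concrete, the short Python sketch below computes Firefly/Renilla ratios and expresses AMH-treated wells relative to control wells set to 100 arbitrary units (as in Fig. 7). The readings and the function name are illustrative assumptions, not data from this study.

```python
# Dual-luciferase normalization sketch: Firefly activity is divided by
# Renilla activity (transfection control), then treated wells are
# expressed relative to the mean of control wells set to 100 units.
def percent_stimulation(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Return normalized activity of treated wells, control mean = 100."""
    treated = [f / r for f, r in zip(firefly, renilla)]
    control = [f / r for f, r in zip(firefly_ctrl, renilla_ctrl)]
    control_mean = sum(control) / len(control)
    return [100.0 * t / control_mean for t in treated]

# Hypothetical triplicate luminometer readings (arbitrary units).
amh_firefly, amh_renilla = [5200, 4800, 5600], [950, 900, 1010]
ctl_firefly, ctl_renilla = [2400, 2600, 2500], [980, 940, 1000]

print(percent_stimulation(amh_firefly, amh_renilla, ctl_firefly, ctl_renilla))
# -> approximately [212.9, 207.4, 215.6], i.e. roughly a 110% stimulation.
```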
RNA isolation and Reverse Transcription
Total RNA was isolated using the RNeasy Mini Kit (QIAGEN) according to the manufacturer's instructions. Reverse transcription was performed in a total volume of 20 μl with the Omniscript Reverse Transcription Kit for RT-PCR (QIAGEN) using 500 ng or 1 μg of total RNA, Omniscript reverse transcriptase, oligo-dT primers (1 μM) and random hexamers (10 μM), as recommended by the manufacturer. The samples were incubated for 1 h at 37 °C.
PCR amplification
cDNA from primary GCs or total ovary was used to amplify Inha, Lhcgr and Fshr by PCR. The primer pairs used are shown in Table S2. PCR was performed using a PCR mix (Qiagen). The amplification conditions were 95 °C for 5 min, followed by 35 cycles of 94 °C for 45 s, 58 °C for 45 s and 72 °C for 45 s, with a final extension at 72 °C for 10 min. The amplified PCR fragments were analysed on 2% agarose gels.
Quantitative real-time PCR
Quantification of Amhr2, Acvr1, Bmpr1a, Bmpr1b, Cyp11a1, Fshr, Id3, Inha, Rgma, Rgmb, Rgmc, Smad1, Smad4, Smad5, Smad8, Star and Hprt mRNA content was performed by real-time PCR using the TaqMan method. The primers and the UPL probes (Roche Diagnostics, Mannheim, Germany) used to amplify these genes are indicated in Table S2. Real-time PCR was performed on one-fifth dilutions of the cDNAs using the LightCycler 480 Probes Master kit (Roche Diagnostics, Mannheim, Germany). Primers were used at a concentration of 5 μM and probes at 1 μM. The PCR protocol consisted of an initial denaturation step at 95 °C for 10 min followed by 45 cycles of 95 °C for 10 s, 58 °C for 30 s and 72 °C for 1 s. To generate standard curves, different concentrations of the purified and quantified PCR products were amplified. Relative gene expression was normalized to an endogenous control gene (Hprt).
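As a minimal illustration of the standard-curve quantification and Hprt normalization described above, the Python sketch below fits a standard curve (Ct versus log10 input) by least squares and uses it to interpolate sample quantities. The dilution series and Ct values are invented for illustration only.

```python
import numpy as np

def fit_standard_curve(log10_quantity, ct):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept."""
    slope, intercept = np.polyfit(log10_quantity, ct, 1)
    return slope, intercept

def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve to recover a quantity (arbitrary units)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series of a purified, quantified amplicon.
dilutions = np.log10([1e6, 1e5, 1e4, 1e3, 1e2])
ct_target_std = np.array([18.1, 21.4, 24.8, 28.2, 31.5])   # target gene (e.g. Id3)
ct_hprt_std   = np.array([17.9, 21.2, 24.6, 27.9, 31.3])   # reference gene (Hprt)

s_t, i_t = fit_standard_curve(dilutions, ct_target_std)
s_h, i_h = fit_standard_curve(dilutions, ct_hprt_std)

# One hypothetical sample: interpolate both genes, then normalize to Hprt.
target_q = quantity_from_ct(26.0, s_t, i_t)
hprt_q   = quantity_from_ct(22.5, s_h, i_h)
print(f"relative expression (target/Hprt) = {target_q / hprt_q:.3f}")
```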
Western blot
Mouse primary GCs were seeded into 6-well plates at a density of 5×10⁵ cells/well in 2 ml of culture medium. Cells were then harvested and lysed in 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1% protease inhibitor cocktail and 1% Triton. Insoluble material was removed by centrifugation at 12,000×g for 5 min at 4 °C. The supernatants were recovered and protein concentrations were measured using the BCA protein assay kit (Pierce). Equivalent amounts of protein lysate (8 to 20 μg) were subjected to 4-20% SDS-PAGE (Bio-Rad) and electrophoretically transferred onto nitrocellulose membranes. After blocking of non-specific binding sites for 1 h in Tris-buffered saline (25 mM Tris and 150 mM NaCl, pH 7.6) containing 5% non-fat milk and 0.1% Tween 20, the membranes were exposed to the primary antibodies (anti-phospho-Smad1/5/8 and anti-phospho-Smad2/3, Cell Signaling Technology, both at 1:500) overnight at 4 °C. Reactive proteins were detected with horseradish peroxidase-conjugated secondary antibodies (1:5000) for 1 h at room temperature and developed with West Pico Western blotting detection reagents (Pierce). The membranes were stripped with stripping buffer (Pierce), then reprobed with a mouse monoclonal antibody against β-Actin (Sigma-Aldrich) or a mouse monoclonal anti-α-Tubulin antibody (Sigma-Aldrich). Western blots were quantified using ImageJ software.
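The quantification step described above (band intensities measured in ImageJ, normalized to β-Actin or α-Tubulin) amounts to simple per-lane ratios; a Python sketch with made-up densitometry values is shown below for illustration.

```python
# Densitometric normalization sketch: each phospho-Smad1/5/8 band intensity
# (e.g. exported from ImageJ) is divided by the loading-control intensity
# from the same lane, then expressed relative to the untreated condition.
def normalize_bands(phospho, loading, reference_index=0):
    ratios = [p / l for p, l in zip(phospho, loading)]
    return [r / ratios[reference_index] for r in ratios]

# Hypothetical lanes: control, AMH, BMP2, TGF-beta (arbitrary ImageJ units).
phospho_smad158 = [1200, 5400, 6100, 1300]
beta_actin      = [9800, 9500, 10100, 9700]

print(normalize_bands(phospho_smad158, beta_actin))
# -> control set to 1.0; AMH and BMP2 lanes ~4-5x, TGF-beta lane ~1x.
```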
Statistical analysis
All experimental data are presented as means ± SEM. Data were analyzed using a t-test or one-way ANOVA followed by Tukey test for all-pair comparisons, to compare two or several means, respectively. A difference was considered statistically significant when the p-value was <0.05. * p<0.05, ** p<0.01, *** p<0.001. All calculations were made using GraphPad Prism 5.1 (GraphPad Software, La Jolla, CA).
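For completeness, the comparisons described above (two-group t-test, or one-way ANOVA followed by Tukey's all-pairs test) can be reproduced with standard scientific Python libraries, as sketched below with invented measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized expression values for three treatment groups.
control = np.array([1.00, 0.95, 1.08, 1.02])
amh     = np.array([1.55, 1.62, 1.48, 1.58])
bmp2    = np.array([1.70, 1.66, 1.75, 1.80])

# Two-group comparison: unpaired t-test.
t_stat, p_two = stats.ttest_ind(control, amh)
print(f"t-test control vs AMH: p = {p_two:.4f}")

# Several groups: one-way ANOVA, then Tukey HSD for all-pair comparisons.
f_stat, p_anova = stats.f_oneway(control, amh, bmp2)
print(f"one-way ANOVA: p = {p_anova:.4f}")

values = np.concatenate([control, amh, bmp2])
groups = ["control"] * 4 + ["AMH"] * 4 + ["BMP2"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```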
Primary cultures of immature mouse granulosa cells
To identify the different components of the AMH signalling pathway in GCs, we first isolated immature GCs from 3-week-old mice [33] and characterized them according to several criteria. Immature ovaries are mainly composed of small growing follicles whose GCs express AMH [35]. Therefore, we used an AMH antibody coupled to an Alexa Fluor 488 secondary antibody to evaluate the purity of the primary GCs by immunofluorescence (Fig. 1A and 1B). About 90% of the cells expressed AMH, which indicated an efficient enrichment of GCs (Fig. 1A and 1B). Since we had to transfect plasmid DNA into GCs, we then assayed transfection efficiency in these cells using a β-galactosidase reporter gene (Fig. 1C) or the pMax-GFP vector (Fig. 1D). After X-Gal staining, we found that about 20% of the cells stained positively for β-galactosidase activity, indicating that they had been properly transfected (Fig. 1C). The transfection capacity of the primary GC culture was confirmed with pMax-GFP transfection by visualizing GFP-expressing cells (Fig. 1D). We also checked siRNA transfection efficiency. GCs were transfected with a fluorescently labelled control siRNA (GAPDH-cy3 siRNA). The next day, the cells were trypsinized and the cell suspension was analysed by flow cytometry. We found that 97.5% of the GCs were transfected with the fluorescent siRNA, demonstrating a high siRNA transfection efficiency (Fig. 1E). We then checked for the presence of GC markers (Fig. 1F). As in the total ovary, GCs expressed the FSH receptor (Fshr) and α-inhibin (Inha) (Fig. 1F). On the other hand, the LH receptor (Lhcgr), a theca cell marker at this stage, was detected in the 3-week-old ovary, as expected, but was absent from the GCs. These results confirmed that the culture was at least 90% pure with little or no theca cell contamination.
Granulosa cells can respond to plasmin-cleaved AMH
We next studied the expression of the different components of the AMH signalling pathway in GCs. Using real-time PCR, we observed that all of the candidate serine/threonine kinase receptor genes (Acvr1, Bmpr1a, Bmpr1b and Amhr2) (Fig. 2A) and the three candidate Smads (Fig. 2B) were expressed in primary GCs. The results showed that Amhr2 was expressed at higher levels (between 3.5 and 37 times) than the type I receptors (Fig. 2A), Bmpr1a being the most represented (Fig. 2A). Among the Smads, Smad4 was the most expressed, the levels of Smad1 and Smad5 were similar, and Smad8 was poorly represented (Fig. 2B). We then tested whether primary GCs were responsive to AMH (plasmin-cleaved AMH). We analysed the response of GCs after a 3 h exposure to AMH by Western blot using a phospho-Smad 1/5/8 antibody (Fig. 2C). Both AMH and BMP2, a known inducer of this pathway, increased phospho-Smad 1/5/8 levels (Fig. 2C and 2D), while TGF-β had no effect on this pathway. We then checked whether AMH could also activate the alternative Smad2/3 pathway. TGF-β, which is a primary inducer of the Smad2/3 pathway, was used as a positive control in parallel with AMH (Fig. 2E). As expected, in the presence of TGF-β, phospho-Smad2/3 levels were increased in primary GCs (Fig. 2E and 2F). However, neither AMH nor BMP2 was able to activate the Smad2/3 pathway (Fig. 2E and 2F). Thus, the primary GCs displayed all the characteristics necessary for our study: they expressed all of the factors of the AMH signalling pathway, they were properly transfected and they were sensitive to AMH. These primary GCs were then used for the rest of the studies described in this paper.
AMH target genes in granulosa cells
To monitor AMH signalling, we then screened for downstream target genes. GCs were treated with AMH (8 nM) for 24 h, and the expression of candidate genes was analysed by real-time PCR. We first tested AMH target genes previously identified in the ovary. Aromatase (Cyp19a1) and the LH receptor (Lhcgr) were expressed at very low levels and we were not able to detect any variation of their expression after AMH exposure (data not shown). We then studied α-inhibin (Inha) (Fig. 3A) and the FSH receptor (Fshr) (Fig. 3B), whose expression was not modulated by AMH in GCs. Because AMH regulates genes encoding steroidogenic enzymes in Sertoli and Leydig cells, we tested whether their expression was also affected by AMH in GCs. 3β-HSD (Hsd3b1) expression was too low to detect any variation (data not shown). Star (Fig. 3C) and P450scc (Cyp11a1) (Fig. 3D) were not regulated by AMH. Finally, we tested the effect of AMH on the expression of the Id genes, which are known to be up-regulated by BMPs. Id1, 2 and 3 were up-regulated by BMP2 in GCs but not by TGF-β (data not shown). Similarly to BMP2, AMH increased the expression of all Id genes, but only the Id3 gene was significantly up-regulated (Fig. 3E). After 24 h in the presence of AMH, Id3 expression in GCs was increased by 50%. Therefore, the Id3 gene was used in this study to assay the siRNA knockdown experiments.
Involvement of the different serine/threonine kinase receptors in AMH signalling
To determine which type I serine/threonine kinase receptor(s) is important for the AMH signalling pathway, we used siRNAs for gene knockdown (Fig. 4). Primary GCs were isolated from immature ovaries and transfected, when 50 to 80% confluence was reached, with different siRNAs against Acvr1, Bmpr1a and Bmpr1b. After another 24 h of culture, the cells were exposed to AMH and total RNA was extracted 24 h later. We first checked the expression of each serine/threonine kinase type I receptor gene after siRNA transfection using real-time PCR (Fig. 4A-C). For each type I receptor gene, we tested three different siRNAs (data not shown) and selected the most efficient one for the rest of the study. siRNA transfection led to an 80% decrease of Acvr1 mRNA levels (Fig. 4A), a 70% decrease for Bmpr1a (Fig. 4B) and a 70% decrease for Bmpr1b (Fig. 4C). Since the down-regulation was significant for each of the genes, we then analysed the responsiveness of these knocked-down GCs to AMH by Western blot with a phospho-Smad1/5/8 antibody (Fig. 4D). In parallel, the GCs were transfected with a negative control siRNA which does not interfere with any of the targeted RNAs. As expected, the negative control siRNA had no effect, since the transfected cells could still respond to AMH through activation of the Smad1 pathway (Fig. 4D and 4E). Similarly, the Acvr1 and Bmpr1b knockdown cells were sensitive to AMH, although the effect was not significant for Acvr1 because of the large standard deviation. On the other hand, in the presence of siRNA against Bmpr1a, the effect of AMH on phospho-Smad1/5/8 levels was significantly reduced compared to GCs transfected with the control siRNA, indicating that BMPR-IA is important for AMH signalling in GCs (Fig. 4D and 4E). We then analysed the effect of AMH on Id3 expression in GCs transfected with the different siRNAs (Fig. 4F). Id3 expression was up-regulated by 83% after AMH exposure in GCs transfected with the control siRNA. Id3 was also up-regulated, by 158% in GCs transfected with siRNA against Acvr1 and by 76% in GCs transfected with the Bmpr1b siRNA. In contrast, in GCs transfected with siRNA against Bmpr1a, AMH was unable to up-regulate the expression of Id3. The siRNA approach thus allowed us to show that BMPR-IA is important to transduce the AMH signal in GCs (Fig. 4D, 4E and 4F).
Granulosa cells from Bmpr1a cKO mice no longer transduce AMH signalling
To confirm the siRNA results, we generated Acvr1 and Bmpr1a conditional knockout (cKO) mice, using the Amhr2-cre line to delete these genes in GCs. Amhr2+/cre; Acvr1+/− or Amhr2+/cre; Bmpr1a+/− males were bred to Acvr1fx/fx or Bmpr1afx/fx females to generate females that were conditionally null either for Acvr1 or Bmpr1a in GCs. Amhr2+/cre; Acvr1fx/− and Amhr2+/cre; Bmpr1afx/− mice are designated Acvr1 cKO and Bmpr1a cKO, respectively. GCs were isolated from the ovaries of these cKO mice and we tested their response to AMH by Western blot using a phospho-Smad 1/5/8 antibody. GCs from Acvr1 cKO mice were as sensitive to AMH as GCs from control mice (Fig. 5A and 5B). In contrast, GCs from Bmpr1a cKO mice had lost their capacity to respond to AMH (Fig. 5C and 5D). The effect of AMH on phospho-Smad1/5/8 levels was significantly reduced in GCs from Bmpr1a cKO mice compared to WT GCs (Fig. 5D). These results were consistent with the siRNA experiments and indicated that BMPR-IA is essential for AMH signalling in GCs.
RGMb is not required for AMH signalling in granulosa cells
We then tested whether the BMP co-receptors RGMs could also act as AMH co-receptors. We analysed the expression of Rgma, Rgmb and Rgmc in the ovary and in primary GCs by RT-PCR (Fig. 6A) and q-PCR (Fig. 6B). Rgma and Rgmc expression was detected in the total ovary but not in GCs (Fig. 6A and 6B). In GCs, we only detected Rgmb (Dragon) (Fig. 6A and 6B). In parallel, we confirmed the localization of RGMb in the ovary by immunohistochemistry, which showed that it is mainly present in GCs (Fig. 6C). To determine the potential role of RGMb in these cells, we transfected them with a siRNA directed against this co-receptor. We first checked by q-PCR that this gene was down-regulated (Fig. 6D) and then analysed the sensitivity of these cells to AMH by Western blot using a phospho-Smad 1/5/8 antibody (Fig. 6E). siRNA transfection led to a 90% decrease of Rgmb mRNA (Fig. 6D). The Rgmb knockdown GCs were as sensitive to AMH as control GCs (Fig. 6E and 6F). We then analysed the effect of AMH on Id3 expression in GCs transfected with siRNA against Rgmb (Fig. 6G). AMH stimulation led to a 60% increase in Id3 expression in GCs either transfected with the control siRNA or not transfected. Id3 also remained up-regulated by AMH at the same level in GCs transfected with siRNA against Rgmb. Altogether, these results showed that AMH can transduce its signal in the absence of RGMb, indicating that this co-receptor is not essential for the AMH signalling pathway in GCs.

Figure 1. Characterization of granulosa cells in primary culture. Granulosa cells (GCs) were collected from 3-week-old C57BL/6 mouse ovaries and seeded at a density of 1×10⁵ cells/well. 24 h later, GCs were incubated without primary antibody (IgG) for the control condition (A) or with an anti-AMH antibody (B). The secondary antibody was coupled to FITC and DAPI was used to visualize the nuclei. AMH expression was detected in the cytoplasm, as expected. To assay transfection efficiency, primary GCs were transfected either with a β-galactosidase vector (C, 1 μg), the pMax-GFP vector (D, 1 μg) or a GAPDH-cy3 siRNA (E). After 24 h, GCs were fixed, stained with X-Gal and counterstained with nuclear fast red (C), or fluorescence was visualized directly (D). Alternatively, siRNA transfection efficiency was assayed with the GAPDH-cy3 siRNA on isolated GCs using flow cytometry (E). (F) Markers of immature GCs were analysed by RT-PCR to confirm their status. doi:10.1371/journal.pone.0081551.g001

Figure 2. AMH activates the Smad1 pathway in granulosa cells. GCs were isolated from 3-week-old mouse ovaries and seeded at 5×10⁵ cells/well in 6-well plates. RNA was extracted to test the expression of AMH signalling pathway actors. The main known actors were analysed by real-time PCR, including the AMH type II and type I receptors (A, n = 6) and Smad proteins (B, n = 6). Data were analyzed using one-way ANOVA followed by Tukey test for all-pair comparisons. * p<0.05, ** p<0.01, *** p<0.001. GCs were exposed or not to 8 nM AMH, 10 nM BMP2 or 1 nM TGF-β. Proteins were extracted and analysed by Western blot using a phospho-Smad1/5/8 antibody (C, n = 4) or a phospho-Smad2/3 antibody (E, n = 4). Western blots were quantified and normalized to actin levels (D, F, n = 4). AMH could only activate the Smad1/5/8 pathway in GCs (C, D) and not the Smad2/3 pathway (E, F). As controls, BMP2 only phosphorylated the Smad1/5/8 proteins (C, D) while TGF-β activated exclusively the Smad2/3 pathway (E, F). Data were analyzed using one-way ANOVA followed by Tukey test for all-pair comparisons. * p<0.05, ** p<0.01, *** p<0.001. doi:10.1371/journal.pone.0081551.g002
Smad1 and 5 are the main Smads used by AMH in GCs
To investigate which Smad is important for AMH signalling in GCs, we first used siRNAs for gene knockdown, but we were not able to decrease Smad expression by more than 50% (data not shown). We therefore used a reporter gene assay. 24 h after their preparation, GCs were transfected with two plasmids: an expression plasmid encoding a fusion protein (Smad1-Gal4-DBD, Smad5-Gal4-DBD or Smad8-Gal4-DBD) and a reporter plasmid carrying a luciferase gene placed under the control of a promoter containing UAS sequences (UAS-luc), which are known to specifically bind Gal4 [36]. 24 h after transfection, the cells were treated for another 24 h with AMH (8 nM) and luciferase activity was measured (Fig. 7). As a control, we transfected Gal4-DBD with the reporter plasmid. As shown in Figure 7, AMH significantly increased Smad1 and Smad5 activity (by 110% and 80%, respectively) while it had no effect on Smad8. These results indicated that Smad1 and Smad5 were equally important for AMH signalling in GCs.
Discussion
The aim of this study was to define the different actors of the AMH signalling pathway, including new target genes, in immature GCs. We report that AMH up-regulates Id3 through BMPR-IA and that only Smad 1/5 are activated by AMH in these cells. In this study we used primary cultures of mouse GCs and, when available, we checked our results on GCs from conditional knockout mice for the type I receptors.

Figure 4. Involvement of serine/threonine kinase type I receptors. siRNA transfection for each type I receptor gene was performed when cells were 50% to 80% confluent. 24 h later, GCs were exposed or not to 8 nM AMH for another 24 h. The effect of siRNA on target gene expression was determined by real-time PCR (A-C, n = 4). Data were analyzed using ANOVA followed by Tukey test for all-pair comparisons. The effect of Acvr1, Bmpr1a and Bmpr1b knockdown on AMH sensitivity was analysed by Western blot using a phospho-Smad1/5/8 antibody (D, n = 4) and was quantified and normalized (E). The effect of Acvr1, Bmpr1a and Bmpr1b knockdown on Id3 expression was analyzed by real-time PCR (F, n = 3). Data were analyzed using paired t-test. * p<0.05, ** p<0.01, *** p<0.001. Only GCs transfected with siRNA against Bmpr1a present a significant decrease of the AMH response. doi:10.1371/journal.pone.0081551.g004
GCs were prepared from the immature ovaries of 3-week-old mice. At this stage, ovaries are mainly composed of growing follicles expressing AMH and contain very few theca cells [22,33]. As expected, immunocytochemistry showed that more than 90% of the cells expressed AMH, indicating that the culture predominantly contained GCs. Furthermore, these cultures expressed Fshr and Inha but not Lhcgr, a theca cell marker at this stage. The expression of Fshr is compatible with the presence of antral follicles at 3 weeks and with the fact that Fshr is detectable before the follicles become sensitive to FSH. These GCs were also properly transfected by siRNAs or plasmids, allowing quantitative studies of the AMH signalling pathway. Finally, they expressed Amhr2, the AMH-specific type II receptor, the three type I receptors and the R-Smads (Smad1 pathway) necessary to respond to AMH [37]. In keeping with this result, GCs in primary culture maintained their ability to respond to AMH by activating the Smad1 pathway.
We also addressed whether the RGMs (RGMa, RGMb/DRAGON and RGMc/Hemojuvelin), a new family of BMP co-receptors [18], were expressed by GCs and could act as AMH co-receptors. This hypothesis was based on the fact that BMPs and AMH share their signalling pathways and that both RGMs and AMH are involved in anti-proliferative effects [38][39][40][41][42][43]. Indeed, Rgmb knockdown results in enhanced proliferation, adhesion and migration in breast cancer cells [42] and in increased migration and invasion in PC-3 cells [43]. On the other hand, AMH has been described as a tumor suppressor gene in the mouse testis [41]. Moreover, in vitro studies have shown that AMH inhibits the growth of ovarian, endometrial and breast cancer cell lines [38][39][40]. We report that RGMb is the only RGM expressed in GCs but that decreasing its expression using siRNA does not alter AMH responsiveness, indicating that RGMb is not essential for AMH signalling in GCs. Unfortunately, Rgmb knockout mice die within 2 to 3 weeks after birth from immune and inflammatory disorders, precluding the study of the female reproductive tract [44].
We had previously shown in the adult mouse GC line AT29C-U493 that AMH activates the Smad1/5/8 signalling pathway [45]. Here we sought to identify which of these Smads is preferentially phosphorylated and activated by AMH in GCs. We demonstrated, using reporter genes specific for each Smad, that only Smad1 and 5 are activated by AMH. Our results are in agreement with the phenotypes of the transgenic models for Smad1/5/8. Indeed, Smad8 knockout mice are viable and fertile [46][47][48] and Smad8 is the least expressed Smad in GCs, indicating that this Smad is not important for ovarian physiology. In contrast, Smad1 and Smad5 function together in the ovary to suppress ovarian tumorigenesis [49]. Regarding the involvement of Smad proteins in AMH signalling, partial retention of the Müllerian ducts is observed when Smad5 is inactivated with or without another Smad, suggesting that Smad5 is the most important Smad required for Müllerian duct regression [16]. However, complete Müllerian duct retention in males occurs only when the three genes Smad1/Smad5/Smad8 are conditionally inactivated [16]. Consistently, using siRNA against the different Smads, we were not able to decrease Smad expression by more than 50% (data not shown).

Figure 5. Granulosa cells from Bmpr1a cKO mice do not respond to AMH. Amhr2-Cre; Acvr1+/− or Amhr2-Cre; Bmpr1a+/− males were bred to Acvr1fx/fx or Bmpr1afx/fx females to generate females that were conditionally null for Acvr1 (A, n = 3) or Bmpr1a (B, n = 7) in GCs. GCs were exposed or not to 8 nM AMH. The AMH response was tested in GCs from these cKO mice by Western blot using a phospho-Smad1/5/8 antibody (A, C). Western blots were quantified and normalized to actin levels (B, n = 3; D, n = 7). AMH induced the phosphorylation of Smad1/5/8 in GCs from Acvr1 cKO mice (A, B) but not in those from Bmpr1a cKO mice (C, D). Data were analyzed using paired t-test. * p<0.05, a: p = 0.058. Only Bmpr1a conditional mutant GCs present a significant decrease of the AMH response. doi:10.1371/journal.pone.0081551.g005
We then studied which type I receptor is important in the AMH signalling pathway by testing the ability of AMH to phosphorylate R-Smad 1/5/8 and to stimulate Id3 expression in GCs either transfected with siRNA against Acvr1, Bmpr1a and Bmpr1b, or isolated from conditional KO mice for Acvr1 or Bmpr1a. We did not consider Acvrl1, another type I receptor involved in the R-Smad 1/5/8 pathway whose functional ligands are BMP9 and BMP10 [50]; indeed, previous studies on AMH signalling in different cell types did not show the involvement of this receptor [13][14][15]17,[51][52][53]. We showed that transfection of GCs with siRNA against Bmpr1a prevents AMH from inducing Smad1/5/8 phosphorylation and from regulating Id3 expression. The use of the corresponding conditional knockout mice supports these results: GCs isolated from the ovaries of Bmpr1a cKO mice do not transduce the AMH signal. These results indicate that BMPR-IA is the main AMH type I receptor in GCs. This is also the case for other AMH target cells. Indeed, BMPR-IA is necessary for AMH to mediate Müllerian duct regression, since only Bmpr1a disruption induces Müllerian duct retention in mice [15]. Similarly, BMPR-IA is essential for AMH to activate Smad1 in the Sertoli cell line SMAT-1 [17] and to induce Leydig cell differentiation in mice [53]. In keeping with a role of BMPR-IA in folliculogenesis, the majority of Bmpr1a cKO female mice are infertile due to a decrease in spontaneous ovulations and an inhibition of follicular development [24]. Concomitantly, 9-month-old Bmpr1a cKO females exhibit increased follicular atresia [24]. Interestingly, the Amhr2 KO mice also display follicular atresia [54].
However, there is redundancy among the type I receptors in transducing AMH effects. Indeed, Müllerian duct regression is blocked in about 50% of the conditional mutant males for Bmpr1a and occurs normally in 100% of the conditional mutant males for Acvr1, but 100% of the males completely retained Müllerian duct derivatives only when both Acvr1 and Bmpr1a were conditionally inactivated [16]. These findings indicate that BMPR-IA is the primary type I receptor required for Müllerian duct regression but that ActR-IA is capable of transducing the AMH signal in the absence of BMPR-IA [16]. Similarly, ActR-IA can compensate for the lack of BMPR-IA in SMAT-1 Sertoli cells [17]. Here we show that GCs either transfected with siRNA against Acvr1 or isolated from the ovaries of Acvr1 cKO mice are able to respond to AMH. Therefore, in contrast to the Müllerian duct and to Sertoli and Leydig cells, ActR-IA does not act as a secondary type I receptor for AMH in GCs. In addition, BMPR-IB does not have any compensatory effect in the absence of BMPR-IA. Interestingly, Bmpr1b KO mice are viable but the females are infertile [55]; these females develop severe defects in cumulus expansion and insufficient uterine endometrial gland development. To complete our siRNA results, it would be interesting to study GCs isolated from Bmpr1b KO mice.
We needed some AMH target genes to validate the siRNA knockdown experiments. Cyp19a1 and Lhcgr have been described as AMH target genes in a previous study [22]. However, we were unable to use them in the current study because their expression levels were too low in our GC cultures (data not shown), which is consistent with their stage of differentiation. Because AMH regulates genes encoding steroidogenic enzymes in Leydig and Sertoli cells, namely Hsd3b1 and Cyp11a1 [20,56], we assayed the expression level of these genes after AMH exposure and could not detect any changes. Finally, since BMPs share some of their signalling pathway components with AMH, we tested some proven BMP target genes. Id genes are BMP2 target genes in osteoblastic cells [57] and in the breast cancer cell line MCF-7 [58]. These proteins function as dominant-negative basic helix-loop-helix (bHLH) transcription factors. They regulate many genes required for growth and differentiation, through binding to E-box sequences located in the promoters of target genes [59]. Id genes are selectively up- or down-regulated depending on cell conditions [60][61][62]. In ovine and hen GCs, exposure to BMP6 or BMP2, respectively, leads to an increase in the expression of all Id genes [60,61]. Conversely, activin A has an inhibitory effect on Id gene expression [61]. Similarly, in porcine GCs, the expression levels of Id2 and Id3 are regulated by FSH and by the cumulus-oocyte complex in an opposite way [62]. A recent study has demonstrated a functional relationship between the expression of all Id isoforms and the status of GC differentiation [25]. In our study, Id3 gene expression is up-regulated by AMH in immature GCs. Interestingly, Lhcgr expression is inversely correlated with the Id3 transcript level in hen undifferentiated GCs [25]. Because knockout studies have led to the conclusion that AMH plays an inhibitory role on follicle maturation [3], Id3 might be one of the genes involved in this complex process.

Figure 6. RGMb is not essential for AMH signalling in granulosa cells. Rgma, Rgmb and Rgmc expression was analysed by RT-PCR (A) or real-time PCR (B, n = 6). (A) RT-PCR showed that all three Rgm genes were expressed in the mouse immature ovary (left lane), while Rgmb seemed predominantly expressed in GCs (right lane). Real-time PCR confirmed that Rgmb was more expressed than Rgma and Rgmc in GCs (B). Data were analyzed using one-way ANOVA followed by Tukey test for all-pair comparisons. Mouse immature ovary was subjected to immunohistochemistry using an RGMb antibody (C). The left panel is the control without the primary antibody. The right panel shows that RGMb is expressed in the cytoplasm of granulosa cells (scale bar = 50 μm; insert, scale bar = 20 μm). siRNA transfection targeting Rgmb was performed when cells were 50% to 80% confluent (D-G). GCs were exposed or not to 8 nM AMH. Real-time PCR was used to quantify the decrease in Rgmb expression. Rgmb expression dropped by about 75% in GCs transfected with the siRNA targeting Rgmb when compared to a control siRNA (D, n = 6). Western blot with a phospho-Smad1/5/8 antibody (E, n = 4) and real-time PCR on the Id3 gene (G, n = 6) showed that the knockdown of Rgmb does not affect the AMH signalling pathway. Western blots were quantified and normalized from 4 experiments (F, n = 4). Data were analyzed using paired t-test. * p<0.05, ** p<0.01, *** p<0.001, a: p = 0.074. The AMH response was not significantly different between control siRNA and Rgmb siRNA transfected GCs. doi:10.1371/journal.pone.0081551.g006

Figure 7. Smad1 and 5 are the main Smads activated by AMH in granulosa cells. GCs were co-transfected with a luciferase reporter construct (UAS-luc) and different expression constructs (Smad-Gal4). Four different expression constructs were transfected in combination with the reporter construct: Smad1-Gal4, Smad5-Gal4, Smad8-Gal4 and Gal4 as a control. Cells were stimulated or not with 8 nM AMH. In the absence of AMH, the fusion protein Smad-Gal4 remained in the cytoplasm, which is reflected by a basal level of luciferase expression. After 24 h of treatment with AMH, the Smad-Gal4 protein was phosphorylated, translocated into the nucleus and increased luciferase expression. Firefly luciferase activity measured in control medium was set to 100 arbitrary units. The results are expressed as a percentage of stimulation of Firefly luciferase activity measured in the presence of AMH (n = 3). Data were analyzed using paired t-test. ** p<0.01. doi:10.1371/journal.pone.0081551.g007
In conclusion, using siRNAs and transgenic mice for the different components of the AMH signalling pathway, we have shown that, as in the other AMH target cells, the most important type I receptor for AMH in GCs is BMPR-IA. Moreover, the main Smad proteins used by AMH in these cells are Smad 1 and Smad 5. Finally, we have identified a new AMH target gene in these cells, Id3, which could be involved in the effects of AMH on the differentiation of GCs and of its other target cells. | 9,770.6 | 2013-11-28T00:00:00.000 | [
"Biology"
] |
A comparison of stem cell-related gene expression in the progenitor-rich limbal epithelium and the differentiating central corneal epithelium.
PURPOSE
Corneal epithelium is maintained by a population of stem cells (SCs) that have not been identified by specific molecular markers. The objective of this study was to find new putative markers for these SCs and to identify associated molecular pathways.
METHODS
Real time PCR (rt-PCR) was performed in 24 human limbal and central corneal epithelial samples to evaluate the gene expression profile of known corneal epithelial SC-associated markers. A pool of those samples was further analyzed by a rt-PCR array (RT²-PCR-A) for 84 genes related to the identification, growth, maintenance, and differentiation of SCs.
RESULTS
Cells from the corneal epithelium SC niche showed significant expression of ATP-binding cassette sub-family G member 2 (ABCG2) and cytokeratin (KRT)15, KRT14, and KRT5 genes. RT²-PCR-A results indicated an increased or decreased expression in 21 and 24 genes, respectively, in cells from the corneal SC niche compared to cells from the central corneal epithelium. Functional analysis by proprietary software found 4 different associated pathways and a novel network with the highest upregulated genes in the corneal SC niche. This led to the identification of specific molecules, chemokine (C-X-C motif) ligand 12 (CXCL12), islet-1 transcription factor LIM/homeodomain (ISL1), collagen-type II alpha 1 (COL2A), neural cell adhesion molecule 1 (NCAM1), aggrecan (ACAN), forkhead box A2 (FOXA2), Gap junction protein beta 1/connexin 32 (GJB1/Cnx32), and Msh homeobox 1 (MSX1), that could be used to recognize putative corneal epithelial SCs grown in culture and intended for transplantation. Other molecules, NCAM1 and GJB1/Cnx32, potentially could be used to positively purify them, and Par-6 partitioning defective 6 homolog alpha (PARD6A) to negatively purify them.
CONCLUSIONS
Knowledge of these gene and molecular pathways has provided a better understanding of the signaling molecular pathways associated with progenitor-rich limbal epithelium. This knowledge potentially could give support to the design and development of innovative therapies with the potential to reverse corneal blindness arising from ocular surface failure.
Corneal epithelial SCs are thought to reside in the basal layer of the corneoscleral limbus. The basal epithelial cells of the limbal region are not homogeneous, but rather consist of diverse populations of SCs, transient amplifying cells, and terminally differentiated cells for which the total number and distribution are unknown [1][2][3][4]. Limbal SC deficiency (LSCD) syndrome occurs if limbal epithelial SCs (LESCs) are critically reduced and/or dysfunctional due to a multitude of conditions including genetic disorders (i.e., aniridia), cicatrizing autoimmune pathologies (i.e., Stevens-Johnson syndrome, mucous membrane pemphigoid), severe infections, or external factors such as chemical or thermal burns, ultraviolet and ionizing radiation, contact lens wear, and multiple surgeries. The consequence of LSCD is a chronic painful inflammatory syndrome and loss of vision, greatly affecting quality of life and productivity [5].
Current treatment of LSCD relies on the inhibition of inflammation, protection, and provision of LESCs for reconstruction of the damaged corneas [5][6][7]. Strategies based on transplantation of ex vivo expanded LESCs are becoming widely accepted today. The most frequently chosen technique includes harvesting autologous or allogenic limbal tissue that is then cultivated on amniotic membranes or fibrin matrices. Transplantation of these cultured cells has shown promising results [8][9][10][11][12]. However, it is usually not known what percentage of the transplanted cells is actually composed of SCs. It is likely that the success of each transplantation depends upon the number of SCs included. For example, enrichment of transplants with LESCs expressing the marker p63 increases the success rate [10]. It is therefore essential to improve the purity of the LESCs being transplanted to ensure good long-term transplantation results.
Identifying LESCs is crucial for enrichment and characterization. Unfortunately, to date, no direct methods have been established because no single specific LESC marker is known. A variety of SC markers has been proposed to identify the LESC population. In addition, a diversity of differentiation markers has also been proposed to distinguish LESCs from terminally differentiated corneal epithelial cells [13][14][15][16]. Until now, the combination of positive and negative SC markers seems to be the most trustworthy way to characterize the putative SCs in the limbal epithelium. Typically, the major positive markers used are the transcription factor p63, the drug-resistance transporter ATP-binding cassette sub-family G member 2 (ABCG2), and some cytokeratins (KRTs) like KRT15 and KRT14. Among the most used negative markers are KRT3 and KRT12, and the gap junction protein connexin 43, which are all typical of terminally differentiated cells [10,13,15,16].
Recently, great efforts have been made toward the identification of new molecular markers that may better distinguish LESCs from transient amplifying cells and terminally differentiated cells [16,17]. However, the variety of putative LESC markers and their role in identifying the LESC population remain controversial [15,18]. The finding of new molecules that specifically identify LESCs would significantly enhance the purity of LESCs grown in culture and intended for transplantation. In addition, a better understanding of the molecular signaling pathways associated with the stemness of the limbal epithelium could facilitate a better diagnosis of LSCD and could also give support to the design and development of new and promising treatments. Therefore, to discover new putative LESC markers, we analyzed the expression of 84 genes related to the identification, growth, maintenance, and differentiation of human SCs. Using a real-time reverse transcription polymerase chain reaction array (RT²-PCR-A) with human corneal and limbal samples, we found increased and decreased expression of selected genes operating in 4 different pathways constituting signaling networks in the cells from the limbal stem cell niche.
Epithelial cell collection:
Human tissue was used in accordance with the Declaration of Helsinki. Normal human corneoscleral tissues (n=24) were obtained 3 to 5 days postmortem from the Barraquer Eye Bank (Barcelona, Spain). Limbal and central cornea epithelial cells were obtained using a modification of a previously described method [19][20][21][22][23]. In brief, a 7.5 mm trephine was used to isolate the cornea from the limbus, and the epithelium in the central button of the cornea was scraped to harvest differentiating epithelial cells for analysis of gene expression. Later, each corneoscleral rim was trimmed, and the endothelial layer and iris remnants were removed. The limbal rim was incubated with dispase II (5 mg/ml; STEMCELL Technologies, Grenoble, France) at 37 °C for 2 h. The limbal epithelial sheets were then collected and treated with 0.25% trypsin with 0.03% EDTA (Invitrogen-Gibco, Inchinnan, UK) at 37 °C for 10 min to isolate single cells. There were, therefore, 24 samples of 2 different types of epithelial cells: differentiating corneal epithelial cells and the stem cell-containing population of limbal epithelial cells derived from the corneal epithelial stem cell niche. RNA isolation and reverse transcription: Total RNA was extracted with the Qiagen RNeasy Mini Kit (QIAGEN Inc., Valencia, CA) under standard conditions and treated with RNase-free DNase following our previously described method [24][25][26]. Briefly, samples were collected in RNA lysis buffer (1:100 β-mercaptoethanol-buffer RLT), purified in QIAshredder columns, and treated with the RNase-Free DNase I Set (QIAGEN Inc.) following the manufacturer's instructions. Agarose gel electrophoresis and ethidium bromide staining were used to check the integrity and size distribution of the purified RNA. The first strand of cDNA was synthesized with random hexamers using M-MuLV Reverse Transcriptase (Amersham Pharmacia Biotech Europe GmbH, Barcelona, Spain) [24][25][26]. Real time polymerase chain reaction (rt-PCR): The cDNA from the limbal and corneal epithelial cells was mixed with Taqman assay primers and minor groove binder probes specific for glyceraldehyde 3-phosphate dehydrogenase (GAPDH), KRT3, KRT5, KRT7, KRT12, KRT14, KRT15, KRT19, p63 and ABCG2 (Table 1) and with Taqman Universal PCR Master Mix AmpErase UNG (Applied Biosystems, Foster City, CA) in a 7500 Real Time PCR System (Applied Biosystems) according to the previously described method [27][28][29][30][31]. An aliquot of 2 μl containing 20 ng of cDNA was used for PCR in a total volume of 20 μl containing 7 μl double-distilled water, 1 μl of 20× target primers and probe, and 10 μl of 2× Taqman Universal PCR Master Mix. PCR parameters consisted of uracil N-glycosylase activation at 50 °C for 2 min, pre-denaturation at 95 °C for 2 min, followed by 40 cycles of denaturation at 95 °C for 15 s, and annealing and extension at 60 °C for 1 min.
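To make the per-reaction volumes stated above (2 μl cDNA, 7 μl water, 1 μl 20× primer/probe mix, 10 μl 2× master mix; 20 μl total) easier to scale, the following sketch assembles them for an arbitrary number of wells. The 10% pipetting overage is an assumption for illustration and is not part of the original protocol.

```python
# Per-reaction volumes (ul) as described in the text; 20 ul total per well.
PER_REACTION_UL = {
    "cDNA (20 ng)": 2.0,
    "double-distilled water": 7.0,
    "20x target primers and probe": 1.0,
    "2x Taqman Universal PCR Master Mix": 10.0,
}

def master_mix(n_wells: int, overage: float = 0.10) -> dict:
    """Return the volume (ul) of each component needed for n_wells reactions."""
    factor = n_wells * (1.0 + overage)  # overage is an assumed 10% excess
    return {name: round(vol * factor, 1) for name, vol in PER_REACTION_UL.items()}

if __name__ == "__main__":
    for name, vol in master_mix(24).items():
        print(f"{name}: {vol} ul")
    # Sanity check: the stated component volumes sum to the 20 ul reaction volume.
    assert sum(PER_REACTION_UL.values()) == 20.0
```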
Assays were performed in triplicate. A nontemplate control and total RNA without retrotranscription were included in all experiments to evaluate PCR and DNA contamination of the reagents. GAPDH was used as an endogenous reference for each reaction to correct for differences in the amount of total RNA added. To verify the validity of using GAPDH as an internal standard control, the efficiencies of the genes and GAPDH amplifications were compared.
The comparative cycle threshold (Ct) method, where the target fold change = 2^(-ΔΔCt), was used for analyzing the results (Applied Biosystems User Bulletin, No. 2, P/N 4303859) [27][28][29][30][31]. Corneal mRNA served as the calibrator control. The results were reported as a fold upregulation when the fold-change for limbal cells was greater than one compared to corneal cells. If the fold-change was less than one, the negative inverse of the result was reported as a fold down-regulation. Significant differences (p<0.05) were evaluated by Student's t-test. Real time PCR array: The samples were pooled, creating 4 groups of 6 each, and used for further study. Analysis using a real-time PCR (rt-PCR) array was performed according to the manufacturer's recommendations using the Human Stem Cell RT² Profiler™ (SuperArray Bioscience, Izasa, S.A., Barcelona, Spain) with SYBR® Green I dye detection.
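As a concrete illustration of the comparative Ct calculation and the fold-change reporting convention described above, the sketch below computes 2^(-ΔΔCt) with corneal samples as the calibrator. The gene name and ΔCt values are hypothetical, not taken from the study.

```python
def fold_change(delta_ct_limbal: float, delta_ct_corneal: float) -> float:
    """Comparative Ct method: fold = 2**(-ddCt), with corneal mRNA as calibrator."""
    ddct = delta_ct_limbal - delta_ct_corneal
    return 2.0 ** (-ddct)

def report(fold: float) -> float:
    """Report folds < 1 as a negative inverse (fold down-regulation)."""
    return fold if fold >= 1.0 else -1.0 / fold

# Hypothetical dCt values (gene Ct minus GAPDH Ct) for one gene in each tissue.
limbal_dct, corneal_dct = 4.2, 9.5
fc = fold_change(limbal_dct, corneal_dct)
print(report(fc))  # positive value -> fold up-regulation in limbal cells
```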
We studied the expression of 5 housekeeping genes, 3 RNA and PCR quality controls, and 84 human genes related to:
1. SC-specific markers (cell cycle regulators, chromosome and chromatin modulators, genes regulating symmetric/asymmetric cell division, self-renewal, cytokines and growth factors, genes regulating cell-cell communication, cell adhesion molecules, and metabolism);
3. Signaling pathways important for SC maintenance (Notch and Wnt pathways).
The following components were mixed in a 5-ml tube: 1,275 μl of the 2× SuperArray PCR Master Mix, 102 μl (100 ng) of the diluted first strand cDNA synthesis reaction, and 1,173 μl double-distilled H2O. Aliquots (25 μl) of this template cocktail were added to each well of the PCR array. Real-time PCR (7500 Real Time PCR System) was then performed as follows: 10 min at 95 °C, then 40 cycles of 15 s at 95 °C and 1 min at 60 °C. Assays were performed in duplicate. A melting curve program was run and a dissociation curve was generated for each well in the entire plate to verify the identity of each gene amplification product.
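A quick arithmetic check of the cocktail volumes stated above; the derived well count is our calculation, not a figure given in the source.

```python
# Volumes stated for the RT2 Profiler cocktail (ul).
master_mix_2x = 1275.0
cdna = 102.0          # 100 ng first-strand cDNA
water = 1173.0
per_well = 25.0

total = master_mix_2x + cdna + water
print(total)             # 2550.0 ul of cocktail
print(total / per_well)  # 102.0 wells' worth at 25 ul per well,
                         # enough for a 96-well array plate plus pipetting excess
```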
For data analysis, the Ct method was performed using an Excel-based PCR Array Data Analysis template downloaded from the SuperArray website. This program automatically performed the following calculations and interpretation of gene expression based upon the threshold cycle data from the real-time instrument:
1. Changed to 35 all Ct values greater than 35 or not detected; any Ct value equal to 35 was then considered a negative call.
2. Examined the threshold cycle values of the genomic DNA control, reverse transcription control, and positive PCR control wells.
3. Calculated the ΔCt for each gene in each plate.
We used the average of the five housekeeping gene Ct values as a normalization factor. The results are reported as a fold up- or down-regulation in the same way as previously explained for real-time PCR (above); a minimal computational sketch of these steps is given below.
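The sketch below illustrates the analysis steps listed above (Ct clipping at 35, normalization to the mean of the five housekeeping genes, and 2^(-ΔΔCt) fold changes with the negative-inverse reporting convention). Gene names and Ct values are hypothetical, and this is not the SuperArray Excel template itself.

```python
HK_GENES = ["HK1", "HK2", "HK3", "HK4", "HK5"]  # placeholders for the 5 housekeeping genes

def clip_ct(ct):
    """Step 1: undetected or Ct > 35 becomes 35 (treated as a negative call)."""
    return 35.0 if ct is None or ct > 35.0 else ct

def delta_ct(plate: dict) -> dict:
    """Step 3: dCt = gene Ct minus the mean housekeeping Ct on the same plate."""
    cts = {g: clip_ct(c) for g, c in plate.items()}
    hk_mean = sum(cts[g] for g in HK_GENES) / len(HK_GENES)
    return {g: c - hk_mean for g, c in cts.items() if g not in HK_GENES}

def fold(limbal: dict, corneal: dict) -> dict:
    """2**(-ddCt); folds < 1 are reported as a negative inverse (down-regulation)."""
    out = {}
    for g in limbal:
        fc = 2.0 ** (-(limbal[g] - corneal[g]))
        out[g] = fc if fc >= 1.0 else -1.0 / fc
    return out

# Hypothetical plates: gene -> Ct
limbal_plate = {"HK1": 18, "HK2": 19, "HK3": 18.5, "HK4": 19.2, "HK5": 18.8,
                "GENE_A": 22.0, "GENE_B": 30.0}
corneal_plate = {"HK1": 18.1, "HK2": 19.1, "HK3": 18.4, "HK4": 19.0, "HK5": 18.9,
                 "GENE_A": 25.5, "GENE_B": 27.0}
print(fold(delta_ct(limbal_plate), delta_ct(corneal_plate)))
```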
Pathway analysis: Excel spreadsheets containing gene identifier lists together with the corresponding expression values were uploaded into Ingenuity Pathways Analysis (IPA; Ingenuity® Systems, Redwood City, CA) to identify relationships among the genes of interest. The basis of the IPA program consisted of the Ingenuity Pathways Knowledge Base (IPKB) that was derived from known functions and interactions of genes published in the literature. Thus, the IPA tool allowed the identification of biologic networks, global functions, and functional pathway(s) of a particular data set. Each gene identifier was mapped to its corresponding gene object in the IPKB. Networks of the genes were then algorithmically generated based on their connectivity. Each gene product was assigned to functional and subfunctional categories. IPA software then used the associated library of canonical pathways to identify the most significant ones in the data set. Benjamini-Hochberg multiple testing correction was used to calculate a p-value to determine the probability that each biologic function or canonical pathway assigned to the data set was due to chance alone. In addition, significance of the association between the data set and the canonical pathway was calculated as a ratio of the number of genes from the data set that mapped to the pathway divided by the total number of genes that map to the canonical pathway. The 'Pathway Designer' tool of the IPA software was used for the graphical representation of the molecular relationships between gene products. Gene products were represented as nodes, and the biologic relationship between two nodes was represented as an edge (line). All edges were supported by at least one reference from the literature, from a textbook, or from canonical information stored in the IPKB.
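IPA's statistics are proprietary; the sketch below only illustrates, in general terms, the Benjamini-Hochberg adjustment and the gene-ratio calculation mentioned above, using made-up inputs.

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (FDR) in the original order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices sorted by p-value
    adjusted = [0.0] * n
    running_min = 1.0
    for rank, i in enumerate(reversed(order)):
        k = n - rank                       # 1-based rank of this p-value
        running_min = min(running_min, pvals[i] * n / k)
        adjusted[i] = running_min
    return adjusted

def pathway_ratio(genes_in_dataset_and_pathway: int, genes_in_pathway: int) -> float:
    """Ratio used to gauge association between a data set and a canonical pathway."""
    return genes_in_dataset_and_pathway / genes_in_pathway

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))  # [0.004, 0.04, 0.0533..., 0.30]
print(pathway_ratio(12, 96))                           # hypothetical pathway ratio
```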
Real time PCR analysis for corneal and limbal epithelial cell markers:
To select the purest population of corneal and limbal epithelial cells, we performed rt-PCR assays to evaluate the expression of markers considered to be abundant in the limbal stem cell niche. These markers included KRT14, KRT15, ABCG2, and the transcription factor p63 [13,20,[32][33][34][35]. For terminally differentiated corneal epithelial cells, we looked for the expression of KRT3, KRT7, and KRT12 [3,36], as well as for other cytokeratins like KRT5 and KRT19 [15]. In the 24 samples analyzed, all of the studied KRT genes were expressed (Figure 1). In the limbal epithelial cells, expression was significantly reduced for most cytokeratin genes that are normally expressed in large amounts in terminally differentiated epithelial cells [15,35]. The reductions for KRT3, KRT7, KRT12, and KRT19, which varied between 2.03 and 3.54 fold, were all significant except for KRT12 (p<0.05 for KRT3 and KRT7, p>0.05 for KRT12, and p<0.00001 for KRT19, Figure 1). In contrast, KRT5, KRT14, and KRT15 were more highly expressed in the limbal than in the corneal epithelial samples, with increases ranging from 2.29 to 29.46 fold (p<0.05, <0.001, and <0.00001, respectively, Figure 1).
Gene expression of associated LESC niche markers ABCG2 and p63 were found in all of the samples analyzed. Expression levels of ABCG2 were 39.1 fold greater in the limbal epithelial cell population than in the corneal epithelial one (p<0.00001, Figure 1). However, expression of transcription factor p63 was the same in both cell populations.
Summarizing our results so far, the purest SC-containing population of limbal epithelial cells had significantly higher expression of ABCG2 (39 fold), KRT15 (29.5 fold), KRT14 (5.6 fold), and KRT5 (2.3 fold) than did the corneal epithelial cell population. Furthermore, the limbal cells had significantly lower expression of KRT3, KRT7, and KRT19. Neither KRT12 nor p63 were useful as gene markers to differentiate between the two cell populations. Real time PCR array: The 24 samples previously analyzed by real time PCR were pooled to perform the PCR array. The dissociation curve was analyzed for the 84 genes studied, and no DNA contamination was detected. The results indicated increased expression of 21 genes and decreased expression of 24 genes for limbal cells compared to corneal cells. Eleven genes had a greater than ninefold increased expression and 10 genes had a greater than fourfold decreased expression ( Table 2).
The 11 most upregulated genes are listed in Table 2. Signaling pathways-Seventy canonical signaling pathways were significantly affected across the entire data set identified by IPA (Table 3, Figure 2). The most highly upregulated gene, in the Wnt/β-catenin signaling pathway, was SOX2 (9.2 fold, Figure 2), also known as SRY (sex determining region Y)-box 2 (Entrez Gene 6736). The most downregulated gene was GJA1 (6.9 fold), also known as gap junction protein, alpha 1 (Entrez Gene 2697).
Predicted functional effects-The IPA program determined whether groups of genes with significantly changed expression levels were associated with altered biologic functions and diseases (Table 4). Here IPA identified 71 functional categories that were significantly affected. The most prominent cellular and molecular functions implicated were cellular development, cell death, gene expression, cellular assembly and organization, and cellular growth and proliferation. The most frequent significant physiologic system developments were tissue, organismal, embryonic, nervous system, and organ development.
Table legend: Genes with higher (+) and lesser (-) expression in limbal epithelial cells compared to terminally differentiated corneal epithelial cells. Fold change was calculated by PCR array using the comparative Ct method. *Indicates cellular location where protein is expressed.
Gene networks-The IPA program constructed 4 gene networks that were significantly interconnected. The first network (Figure 3A) was related to auditory and vestibular system development and function, organ development, and cancer. The second network (Figure 3B) contained 17 genes associated with cancer, connective tissue development and function, and skeletal and muscular system development and function. Upregulated genes included ACAN, BMP2, CD8B, COL2A1, and CXCL12. Down-regulated genes included ACTC1, BMP1, CCNA2, CCNE1, CD3D, CD8A, CDC42, FZD1, GJA1, GJB2, PARD6A, and S100B.
Customized gene network-Using the IPKB, we explored possible functional relationships among the six most highly upregulated genes in the limbal epithelium progenitor-rich cells, of which the sixth was KRT15. We obtained a network with 29 genes. The protein products of 14 genes were active in the nucleus, one in the cytoplasm, six in the plasma membrane, and seven in the extracellular space (Figure 5). CXCL12, also called stromal cell-derived factor 1 (SDF1), encodes small cytokines that belong to the intercrine family (Entrez Gene 6387). We chose it as the central gene in the network because in humans it directly or indirectly interacts with the other genes that we added. Among the 6 most upregulated genes, only ISL1, which encodes a member of the LIM/homeodomain family of transcription factors and may play an important role in regulating insulin gene expression (Entrez Gene 3670), did not have any connections with other genes in this network.
DISCUSSION
Isolation and characterization of tissue specific SCs to study their functional properties is one of the main research aspirations for regenerative medicine. In the context of ocular surface therapy, the ability to identify, purify, and characterize LESCs is an essential goal. However, the lack of LESC specific markers has been an obstacle for their isolation and subsequent biologic and functional characterization. Using the RT²-PCR-A approach, our goal was to provide new information on molecules that are predominantly expressed in the stem cell-containing population of human limbal epithelial cells. Knowledge regarding these potential LESC markers could be used to enhance isolation of the cells and develop a better understanding of their biologic functions.
Figure 2. Wnt/β-catenin signaling pathway generated by Ingenuity Pathway Analysis (IPA). The IPA depicted the genes involved, their interactions, and the cellular and metabolic reactions that constituted the pathway. Colored molecules represent genes that appeared in the data set studied. Red and green molecules were up- and down-regulated, respectively, in limbal epithelial cells. Gray molecules did not meet the user-defined cutoff of 2.
To know the gene expression pattern of the isolated cell samples, we first performed a PCR analysis for corneal and limbal epithelial markers. The limbal epithelial cell population expressed high levels of ABCG2, KRT5, KRT14 and KRT15 and low levels of KRT3, KRT7, and KRT19. Unexpectedly, we did not find significant differences between limbal and corneal epithelial cells for transcription factor p63 expression. In 2001, Pellegrini et al. [33] proposed p63 as the first positive marker of LESCs. This has generated a certain level of controversy because several groups have since found that p63 is also expressed by most of the terminally differentiated basal epithelial cells throughout the cornea [18,37,38]. Our findings are consistent with the idea that p63 is not specific enough to be a definitive marker for LESCs, although perhaps it could be helpful for identifying incompletely differentiated corneal epithelial cells [18]. It is worth noting that the α isoform of ΔNp63 has been proposed to be a rather more specific and useful marker for LESCs than the other isoforms of this transcription factor [10,39].
Several microarray studies have attempted to identify markers and signaling pathways associated with different ocular surface cell phenotypes [32,[40][41][42][43][44][45][46][47][48][49]. We chose the RT 2 -PCR-A system because it utilizes real-time PCR in combination with microarray analysis to detect the simultaneous expression of many genes. We used IPA to analyze our results from the PCR array, creating three different analysis types that responded to three different questions: (1) What well characterized cell signaling and metabolic canonical pathways are most relevant to our data set? (2) What regulatory networks exist among the genes and proteins of our data set? (3) What previously unknown, unique customized networks that can serve as biologic models are present in our data set?
Among the 84 genes we studied, 11 were highly upregulated and 10 were highly down-regulated; however, less highly regulated genes may also be important in relation to SC properties. The gene most highly expressed in the limbal epithelium progenitor-rich cells compared to central corneal epithelial cells was the chemokine CXCL12. To explore molecular signatures of progenitor cells, we further analyzed six highly expressed genes, starting with the chemokine CXCL12, to create our customized gene network with a total of 29 molecules. Chemokines are 8- to 10-kDa proteins that are potent activators and chemoattractants for different leukocyte subpopulations and some non-hematopoietic cells such as epithelial cells, fibroblasts, and endothelial cells [50]. CXCL12 and its receptor CXCR4 are expressed in cultured human corneal fibroblasts [51]. They may play a key role in angiogenesis and be involved in ocular neovascularization as well as in the recruitment of inflammatory or vascular endothelial cells to sites of corneal injury. In a recent microarray analysis of pig limbal side population cells, CXCR4 had the greatest overexpression ratio [42]. CXCR4 is also upregulated in pig and human conjunctiva side population cells [41,42]. Based on all of these findings, the CXCL12/CXCR4 pair could serve as a suitable marker to identify ocular surface SCs in a species-independent way. CXCL12/CXCR4 signaling is also critical for the mobilization and recruitment of mesenchymal SCs (MSCs) to infarcted hearts and fracture sites in bones [52,53]. Additionally, Ye et al. [54] recently reported that systemically transplanted bone marrow MSCs can engraft to injured cornea and promote wound healing by differentiation, proliferation, and synergizing with hematopoietic SCs. Thus we hypothesize that corneal homing of MSCs after ocular surface wounding could be mediated by release of CXCL12 from limbal epithelial cells and corneal fibroblasts. Potentially, CXCL12 topical administration could be used to enhance MSC homing to injured corneal and limbal areas, facilitating the regenerative processes.
In addition to locating the SCs of the epithelium, the ideal SC marker should also allow for isolation and enrichment of viable SCs from a heterogeneous epithelial cell population. For that reason, cell surface proteins such as cell-cell and cell-matrix adhesion molecules, as well as cell surface receptors, may be the best candidates for new positive and negative putative LESC markers. Based on our results and others [15,20], the plasma membrane transporter ABCG2 appears to be the most useful cell surface marker for the identification and isolation of LESCs.
An example of a negative potential marker, one that indicates the absence of SC properties, could be PARD6A. This gene is a member of the PAR6 family and encodes a cell membrane protein involved in the control of epithelial cell polarity and tight junction assembly [55,56] and in the epithelial-to-mesenchymal transition [57]. Expression of PARD6A in cells from the limbal stem cell niche was reduced fivefold compared to the corneal epithelial cells.
Another such negative marker is the gap junction protein connexin 43 (GJA1), which is abundantly expressed in the corneal but not in the limbal epithelium [19,58,59]. Membrane channel connexins (Cxs) form gap junctions that have been implicated in the homeostatic regulation of multicellular systems [60]. It is assumed that SCs of the limbal epithelium lack connexins and metabolite transfer capacity due to apparent self-sufficiency and the absence of a need for direct cell-to-cell communication [58]. However, our results showed upregulated expression of a related gene, Cx32 (GJB1), in limbal cells, which was reported to be absent in human corneal epithelial cells [46]. Furthermore, Figueira et al. [32] recently described the expression of Cx32 in human fetal limbus and in cultured adult primary limbal explant epithelium. Similarly, hematopoietic cells were assumed not to express Cxs; however, hematopoietic SCs express Cx32 in response to chemical insult and also while maintaining the quiescent, noncycling state of primitive hematopoietic progenitor cells [61,62]. Although further investigations are required to confirm the role of Cx32 in LESCs, we propose this cell surface protein as a new putative positive marker for the identification and isolation of human LESCs.
Figure 3. Networks generated by IPA related to the development and function of the auditory, vestibular, skeletal and muscular systems and to cancer development. Auditory and vestibular system development and function, organ development, and cancer network (A), and cancer, connective tissue development and function, skeletal and muscular system development and function network (B) generated by IPA. The networks contained nodes composed of genes/gene products and edges that indicated a relationship between the nodes in the cellular and subcellular locations indicated. Classes of nodes were indicated by shape to represent different functionalities. Colored molecules represent genes that appeared in the data set studied. Red and green molecules were up- and down-regulated, respectively, in the limbal epithelial cells. Gray molecules did not meet the user-defined cutoff of 2. White indicates the molecule was added from the IPKB.
Expression of the neural cell adhesion molecule 1 (NCAM1) was highly upregulated in the limbal epithelial cells. NCAM is broadly expressed during development and plays an essential role in cell division, migration, and differentiation [63]. A decrease in NCAM expression during the development of the ocular lens has been associated with lens epithelial cell differentiation [64]. However, NCAM is also expressed in cells of many fully developed tissues and organs including the cornea and lens epithelium [65]. For that reason, we believe it is not specific enough to serve as a single potential LESC marker.
The limbal epithelium may contain a higher proportion of immune-related cells such as macrophages, lymphocytes, and antigen presenting cells than does the central corneal epithelium [66,67]. Thus the presence of significant portions of marker transcripts derived from these kinds of cells is not surprising. The best example of this is CD8, a plasma membrane specific marker of T cells [68], that was overexpressed in the limbal-derived cells. This confirms the greater presence of immune-related cells in the limbal epithelium than in the corneal epithelium [66,67].
Analysis of our RT²-PCR-A data with IPA software recognized that the most significantly affected canonical pathway was Wnt/β-catenin signaling, consistent with the recent findings of Bian et al. [43]. Wnt signaling is involved in practically every aspect of embryonic development and also controls homeostatic self-renewal in several adult tissues [69]. Among the studied molecules that belong to this pathway, SOX2 and Wnt were the most highly upregulated genes, 9.2 and 6.8 fold, respectively. The SOX2 gene encodes a member of the SRY-related HMG-box (SOX) family of transcription factors implicated in the regulation of embryonic development and in the determination of cell fate [70]. Wnt signaling is required for the establishment of hair follicles, playing a key role in the activation of bulge SCs to progress toward hair formation [69,71]. Zhou et al. [48] prepared a transcriptional profile of mouse limbal and corneal epithelial basal cells. Consistent with our results, they found elevated expression of certain genes that were also upregulated in the hair follicular bulge SCs, suggesting the existence of a common cluster of epithelial SC genes. As we found, they also detected elevated expression of the Sry gene in mouse limbal basal cells, associating it with increased proliferation. They proposed that it is involved in SC activation, maintaining the proliferative capacity needed for expansion of precursor cell populations and for wound healing [48]. Similarly, Figueira et al. [32], in a microarray analysis to identify phenotypic markers of human limbal SCs in fetal and adult corneas, detected that Wnt-4 was differentially overexpressed in the fetal limbus compared with the central cornea. Its expression was restricted to the basal and immediate parabasal limbal epithelium of both the adult and fetal corneas. They suggested that, since Wnt-4 functions in diverse developmental phases involved in common morphogenic events, it was not surprising that this gene was expressed by the basal limbal epithelium, which plays a crucial role in differentiation [32]. Wnt-4 overexpression, together with high levels of KRT15, KRT14, and P-cadherin in limbal basal epithelial cells, was in concordance with the molecular expression profile of stratified epithelial tissues. These data are in complete agreement with our RT²-PCR-A results, which confirm upregulated expression of both Wnt and KRT15 in limbal-derived epithelial cells.
Figure 4. Networks generated by IPA related to drug metabolism, small molecule biochemistry and cell morphology, and to cancer, cell cycle, and skeletal and muscular disorders. Drug metabolism, small molecule biochemistry and cell morphology network (A), and cancer, cell cycle, skeletal and muscular disorders network (B) generated by IPA. The networks contained nodes (gene/gene product) and edges (indicating a relationship between the nodes) showing the cellular/subcellular location as indicated. Function classes of nodes were indicated by shape. Colored molecules represent genes that appeared in the data set studied. Red and green molecules were up- and down-regulated, respectively, in limbal epithelial cells. Gray molecules did not meet the user-defined cutoff of 2. White indicates the molecule was added from the IPKB.
Figure 5. Customized gene network based upon the six most highly upregulated limbal epithelial cell genes. We explored possible functional relationships between the six most highly upregulated limbal epithelial cell genes (in red) using the IPKB. The customized pathway contained nodes composed of genes/gene products and edges that indicated a relationship between the nodes in the cellular and subcellular locations indicated. White indicates that the molecule was added from the IPKB.
Analysis of our RT 2 -PCR-A data with IPA software also constructed 4 networks that are distinct from canonical pathways because they were generated de novo based on our input data. The resulting networks require further studies to find the most useful genes for defining a potential LESC profile.
Conclusions-In conclusion, our study has led to the identification of novel molecules, CXCL12, ISL1, COL2A, NCAM1, ACAN, FOXA2, GJB1/Cnx32, and MSX1, that potentially could serve to recognize LESCs. Other markers, NCAM1 and GJB1/Cnx32 positively and PARD6A negatively, could be used to separate the stem cell-containing population of limbal epithelial cells derived from limbal niche cells grown in culture and intended for transplantation. Furthermore, the functional analysis of our results has provided a better understanding of the signaling molecular pathways associated with the progenitor-rich limbal epithelium. This knowledge potentially could give support to the design and development of innovative therapies with the potential to reverse corneal blindness arising from ocular surface failure due to LSCD.
ACKNOWLEDGMENTS
We thank the Barraquer Eye Bank of Barcelona for their support in providing human corneoscleral tissues. The authors thank Victoria Sáez for her technical assistance and B. Bromberg (Certified Editor in Life Science of Xenofile Editing, www.xenofileediting.com) for his assistance in the final editing and preparation of this manuscript. This work was supported by Instituto de Salud Carlos III (CIBER-BBN CB06/01/003) and Junta de Castilla y León, Spain (Centro en Red de Medicina Regenerativa y Terapia Celular de Castilla y León; SAN673/VA/28/08). M. L-P. and S. G. were supported by fellowships from the Junta de Castilla y León.
| 7,289.8 | 2011-08-10T00:00:00.000 | ["Biology", "Medicine"] |
Effects of Wolbachia on ovarian apoptosis in Culex quinquefasciatus (Say, 1823) during the previtellogenic and vitellogenic periods
Background
Apoptosis is programmed cell death that ordinarily occurs in ovarian follicular cells in various organisms. In the best-studied holometabolous insect, Drosophila, this kind of cell death occurs in all three cell types found in the follicles, sometimes leading to follicular atresia and egg degeneration. On the other hand, egg development, quantity and viability in the mosquito Culex quinquefasciatus are disturbed by infection with the endosymbiont Wolbachia. Considering that Wolbachia alters reproductive traits, we hypothesised that such infection would also alter apoptosis in the ovarian cells of this mosquito. The goal of this study was to comparatively describe the occurrence of apoptosis in Wolbachia-infected and uninfected ovaries of Cx. quinquefasciatus during oogenesis and vitellogenesis. For this, we recorded under confocal microscopy the occurrence of apoptosis in all three cell types of the ovarian follicle. In the first five days of adult life we observed oogenesis and, after a blood meal, the initiation step of vitellogenesis.
Results
Apoptoses in follicular cells were found at all observation times during both oogenesis and vitellogenesis, and less commonly in nurse cells and the oocyte, as well as in atretic follicles. Our results suggested that apoptosis in follicular cells occurred in greater numbers in infected mosquitoes than in uninfected ones during the second and third days of adult life and at the initiation step of vitellogenesis.
Conclusions
The presence of Wolbachia leads to an increase in the occurrence of apoptosis in the ovaries of Cx. quinquefasciatus. Future studies should investigate whether this augmented apoptosis frequency is the cause of the reduction in the number of eggs laid by Wolbachia-infected females. Follicular atresia is reported here for the first time in the previtellogenic period of oogenesis. Our findings may have implications for the use of Wolbachia as a mosquito and pathogen control strategy. Electronic supplementary material: The online version of this article (doi:10.1186/s13071-017-2332-0) contains supplementary material, which is available to authorized users.
Background
Culex quinquefasciatus is a common house mosquito and, because it pierces the host's skin to consume blood, it is a competent vector of neurotropic viruses, such as West Nile virus and human and veterinary encephalitis viruses. It can also transmit filarial worms and cause substantial nocturnal discomfort and allergic reactions [1]. Despite its medical and veterinary importance, few factors regarding the reproduction and fitness of this insect have been explored; one of these factors is the death of ovarian cells, which has an important role in the number of eggs laid [2,3].
We believe that the death of ovarian cells might be influenced by the presence of Wolbachia. Approximately half of all insect species are infected by Wolbachia [4][5][6], and several Brazilian populations of Culex quinquefasciatus are naturally infected by these bacteria [7]. Wolbachia is maternally transmitted, and it induces incompatible crossings (cytoplasmic incompatibility) and reproductive disturbance in most of its arthropod hosts. Because of this, researchers have proposed the use of this bacterium as a new strategy to control insects and thereby the diseases carried by mosquitoes (see [8]); so far, two main strategies can be used. The first is using cytoplasmic incompatibility for population substitution and control [9,10], and the second is related to the influence of this bacterium on the replication of pathogens that infect the vector concomitantly, leading to non-transmission of the pathogen [8].
Apoptosis is genetically programmed cell death (PCD), which commonly occurs because of physiological necessity or because cells fail to develop [11]. It is advantageous compared to necrosis, because extravasation of cellular material does not occur, preventing the inflammatory response, and nutrients can be absorbed and utilised by the surrounding tissue [12]. Programmed cell death has been described in ovaries from different organisms [13,14], including in the insect holometabolous model, Drosophila [2,15,16]. PCD in reproductive tissue of Drosophila is well described and is known to occur at different times of oogenesis and in different ways [17], but this kind of study is scarce in mosquitoes.
It is known that one of the factors that can induce cell death is the presence of iron, which causes oxidative stress in cells, leading to apoptosis [18]. In some mosquitoes, the presence of Wolbachia has been described as increasing the transcripts related to the response against oxidative stress [19,20].
The occurrence of natural follicular death (atresia) has been described in some mosquitoes, such as Aedes aegypti, which seems to occur primarily between 26 and 30 h after a blood meal [3]. In Culex pipiens pallens, it occurs in the first stages of the vitellogenic period and between the second and third days after a blood meal [21][22][23]. In these studies, however, observations were made by indirect or non-specific methods, and information was not obtained for the previtellogenic period.
Mature ovaries of Cx. quinquefasciatus consist of ovarioles containing primary and secondary follicles [24]. Each primary follicle (which develops after a blood meal) has three types of cells. Follicular cells are responsible for the transport of nutrients and the production of the chorion. Seven nurse cells are responsible for the synthesis of ribosomes and mRNA for the oocyte. Finally, the single oocyte is responsible for the accumulation of nutrients to be used by the embryo [25][26][27]. The secondary follicle is composed of undifferentiated cells and will turn into a primary follicle after oviposition [28].
The development of ovaries in adult mosquitoes is divided into two stages, the previtellogenic period (PVP; before the blood meal) and vitellogenic period (VP; after the blood meal). During the previtellogenic period, the action of the juvenile hormone triggers preparation of the ovarian follicle cells for absorption and processing of vitellogenin [29]. Next, the 20-hydroxyecdysone hormone inhibits the development of follicle cells, maintaining them in a resting stage until the blood meal [30]. The vitellogenic period begins with the blood meal, and the follicular cells begin the production of chorionic proteins. The fat body begins rapid production of vitellogenin and rapid intake through the oocyte by the vitellogenin receptor and by patency [31][32][33]. Patency is the development of intercellular channels between follicular cells that permit the transport of yolk from the hemolymph directly into the oocyte [34].
The time required for ovary development depends on the mosquito species. In Cx. quinquefasciatus, the previtellogenic phase is completed by approximately the fourth or fifth day of adult life, and the vitellogenic period is completed between the third and fourth day after a blood meal, when the female is ready to lay eggs [32]. In previous work, we found that Wolbachia in Cx. quinquefasciatus caused cytoplasmic incompatibility, a reduction in the number of eggs (over 4 consecutive gonotrophic cycles) and a reduction in egg viability (principally in the second gonotrophic cycle) [35].
Considering that natural infection with Wolbachia significantly alters egg-related traits of Cx. quinquefasciatus, we hypothesised that this endosymbiotic bacterium also alters apoptosis in the ovarian cells. The aim of this study was to comparatively describe the occurrence of apoptosis in Wolbachia-infected and uninfected ovaries of Cx. quinquefasciatus during oogenesis, in both the previtellogenic and vitellogenic periods.
Animals
Founder specimens of Cx. quinquefasciatus were initially collected near the banks of the Pinheiros River, São Paulo City, Brazil (23°35′S, 46°41′W). The mosquitoes have been reared in a local insectary since 1995, and were raised at 26-28 °C and 70-80% relative humidity under a photoperiod of 12 h dark:12 h light. Larvae were fed with powdered fish food (Sera® Vipan, Heinsberg, Germany) and adults were fed a 10% sucrose solution ad libitum. When necessary, adult females were fed on BALB/c mice anaesthetized with 80 mg kg-1 of ketamine and 10 mg kg-1 of xylazine hydrochloride. In 2005, the presence of Wolbachia was detected by electron microscopy and confirmed by PCR and sequencing [see 35].
Detection of Wolbachia
Rapid molecular detection of the wsp gene was performed using wsp primers 183F and 691R, according to the methods of Zhou et al. [36]. Amplifications were checked by agarose electrophoresis. The offspring of 20 females that were PCR-positive for the presence of Wolbachia were selected to originate the Wolbachia-infected mosquito group (wPip+).
Tetracycline treatment to obtain uninfected mosquitoes (wPip-)
Approximately 600 adult mosquitoes were fed with sucrose solution containing tetracycline hydrochloride (pH 7.0) at a final concentration of 1 mg ml-1 for seven days. Five days after treatment completion, female mosquitoes (parental generation) were allowed to feed on mice to initiate egg laying. Adults from the next generation (F1) were submitted to the same treatment and five days later were blood-fed (see details in [35]). After individual oviposition, the treated F1 females were subjected to PCR, and the offspring (F2) of females with no infection were separated. When they became adults, they were fed on mice, generating new offspring. After the fourth consecutive generation with no Wolbachia detection (20 females tested per generation), we began preparation of the infected (wPip+) and uninfected (wPip-) material for microscopy. From this point on, PCRs were performed every two months in both colonies to confirm Wolbachia status.
Ovary morphology
Mosquito ovaries were dissected in 4% paraformaldehyde in phosphate buffered saline (1× PBS) at 12-24 h (1 day); 36-48 h (2 days); 60-72 h (3 days); 84-96 h (4 days); and 108-120 h (5 days) after adult emergence (PVP). On the sixth day of adult life, the mosquitoes were fed blood, and their ovaries were dissected at 6 h, 12 h, 24 h, 36 h, 48 h and 72 h after the blood meal (VP). Unlike the VP periods, the PVP periods were not precise because the emergence of mosquitoes did not occur synchronously; however, the mice were offered to mosquitoes for 1 h, after which they were removed from the cages.
The ovaries were maintained in the fixative solution for 30 to 60 min, depending on the stage of development: 30 min for ovaries on the first and second day of the PVP; 40 min for ovaries on days 3-5 of the PVP and at 6 to 12 h of the VP; and 60 min for ovaries at 24 h or more of the VP. The samples were processed using the terminal deoxynucleotidyl transferase dUTP nick end labelling (TUNEL) kit to detect apoptosis and atresia.
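For bookkeeping across the many dissection time points, the fixation times stated above can be encoded in a small lookup; the stage labels below are our shorthand, not terminology from the source.

```python
# Fixation (and, as described later, permeabilization) times in minutes, as stated above.
FIXATION_MIN = {
    "PVP day 1-2": 30,
    "PVP day 3-5": 40,
    "VP 6-12 h": 40,
    "VP >= 24 h": 60,
}

def fixation_time(stage: str) -> int:
    """Look up the fixation time for a developmental stage label."""
    return FIXATION_MIN[stage]

print(fixation_time("PVP day 3-5"))  # 40
```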
We used the TUNEL kit Click-iT® Alexa Fluor® 488 Imaging Assay (Invitrogen, Carlsbad, USA) following the manufacturer's instructions with modifications. The ovaries were permeabilized with 1% Triton X-100 in 1× PBS for 30-60 min (using the same pattern as for fixation), then incubated with fluorescent nucleotides (fluorescein-12-dUTP) and the enzyme terminal deoxynucleotidyl transferase (TdT) for 60 min at 37 °C for the synthesis of the nucleotide tail. After 3 washes of 5 min each with 1× PBS/3% BSA (bovine serum albumin in phosphate buffered saline), the samples were transferred to the nuclear marker TO-PRO®-3 647 stain for 30-60 min (using the same pattern as for fixation) at a dilution of 1:100 in 1× PBS/3% BSA. Following fixation and labelling, the material was mounted on a slide with a coverslip, or between two coverslips, with antifade Vectashield Mounting Medium (Vector Labs, Burlingame, USA) and observed under a confocal laser scanning microscope (LSM 510 META, Zeiss). Images with TUNEL marking were located with the LSM Image Browser program (Zeiss, Oberkochen, Germany).
Statistical analyses
Twenty (20) ovaries from different females at each sampling period (first to fifth day during the PVP, and 6 h, 12 h and 24 h during the VP) per group (wPip+ and wPip-) were analysed to compare the number of apoptotic events. Statistical analyses were performed to compare infected and uninfected mosquitoes separately at each time point.
The parametric Student's t-test was used to compare samples with normal distributions (Shapiro-Wilk test), and the nonparametric Mann-Whitney test was used for those with non-normal or heteroscedastic distributions. The rejection level (alpha) was set at 5%.
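A minimal sketch of this test-selection logic using SciPy is shown below; treating Shapiro-Wilk p > 0.05 as "normal" is our assumption about how normality was judged, and the counts are hypothetical.

```python
from scipy import stats

def compare_groups(infected, uninfected, alpha=0.05):
    """Student's t-test when both samples look normal (Shapiro-Wilk), else Mann-Whitney."""
    _, p_norm_a = stats.shapiro(infected)
    _, p_norm_b = stats.shapiro(uninfected)
    if p_norm_a > alpha and p_norm_b > alpha:
        name, (stat, p) = "Student t-test", stats.ttest_ind(infected, uninfected)
    else:
        name, (stat, p) = "Mann-Whitney", stats.mannwhitneyu(
            infected, uninfected, alternative="two-sided")
    return name, stat, p, p < alpha

# Hypothetical counts of apoptotic events per ovary at one time point.
wpip_pos = [5, 7, 6, 9, 4, 8, 7, 6, 5, 10]
wpip_neg = [2, 3, 1, 4, 2, 3, 2, 5, 1, 3]
print(compare_groups(wpip_pos, wpip_neg))
```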
Tetracycline treatment
The treatment with tetracycline was successful, allowing us to establish an uninfected group. The loss of infection was stable during the whole period of the experiments, as confirmed by the regular PCR checks. After the treatment, no Wolbachia-positive individuals were detected (Additional file 1: Figure S1).
General morphology of the ovaries
Using confocal microscopy, we described the developmental patterns of the ovaries at several time points of oogenesis and vitellogenesis. No ovarian morphological differences were observed between wPip+ and wPip- mosquitoes.
On the first day of the PVP it was not possible to identify follicles in the ovarian tissue; on the second day, follicles were noticeable, but it was not possible to distinguish between primary and secondary follicles (Additional file 2: Figure S2). On the other hand, between the third and fifth days of the PVP, primary and secondary follicles had distinguishable sizes, and ovarian development was apparently complete. Furthermore, at that point, the presence of nurse cells and/or the oocyte in primary follicles was pronounced. At 24 h of the VP, the primary follicle exhibited an increased size, and the difference between nurse cells and the oocyte was already apparent, with the oocyte occupying most of the ovarian follicle (Additional file 3: Figure S3).
At 24 h of the VP, it was necessary to dissect and separate the ovarioles from the ovary and assemble them between coverslips to permit penetration of the confocal microscope laser through the primary follicles. Even so, later stages of the VP could not be analysed because of the broad-spectrum autofluorescence and thickness of the chorion, which prevented laser penetration (Additional file 4: Figure S4). In general, despite the technical constraints, using the confocal approach we could identify the main developmental phases of mosquito eggs.
Apoptosis of follicular cells
To quantify the apoptotic events in follicular cells and to compare them between infected and uninfected females, we observed under confocal microscopy the ovaries labelled by TUNEL and TO-PRO at several times of the previtellogenic and vitellogenic periods. Using the Z-stack of the confocal microscope, only nuclei of cells that were simultaneously labelled with the two fluorophores (TO-PRO and TUNEL) were regarded as apoptotic cells.
The sum of apoptotic events in the ovary follicular cells of infected females was 1087 across all 160 analysed ovaries, whereas, in 160 ovaries from uninfected individuals, only 562 apoptotic events were detected. Regarding the total observations, the difference between wPip+ and wPip- was statistically significant (Mann-Whitney test, U = 9240, n1 = n2 = 160, P < 0.0001). The greatest contribution to this difference between the two insect groups came from the second and third day of the PVP and 6 h and 12 h of the VP (Table 1; Fig. 1; Additional file 5: Table S1; Additional file 6: Figure S5). The difference between wPip+ and wPip- occurred mainly because of the many apoptoses in secondary follicles (544 secondary follicles from infected females had at least one apoptotic cell, compared with 323 secondary follicles from uninfected females; Mann-Whitney test, U = 5378.5, n1 = n2 = 120, P = 0.0007). Primary follicles contributed little to this difference, as only 98 infected and 90 uninfected follicles had at least one apoptotic cell (Mann-Whitney test, U = 7182.5, n1 = n2 = 120, P = 0.9744). Inferential statistics quantitatively compared the presence of apoptotic follicular cells between infected and uninfected females at the respective time points during the previtellogenic and vitellogenic periods. Because it was impossible to distinguish follicles on the first and second day of the PVP, only the number of apoptotic follicular cells per ovary was counted on the first day, whereas on the second day the apoptotic events were all counted as belonging to secondary follicles (Table 1; Additional file 4: Figure S4; Additional file 5: Table S1).
Apoptosis of nurse/oocyte cells
In contrast to the follicular cells, very few oocytes or nurse cells were labelled with TUNEL. Labelling was seen only in infected individuals, on the second (n = 7) and third (n = 1) days of the PVP (Fig. 2).
Atresia
When analysing the microscope images, we considered follicular atresia to have occurred when the whole follicle was labelled by TUNEL. Few atretic follicles were found at the observed times: second day (n = 3; Fig. 3a), fourth day (n = 3; Fig. 3b) and fifth day of the PVP (n = 1; Fig. 3c), and 12 h of the VP (n = 1; Fig. 3d). The atresia on the second and third day of the PVP apparently occurred in secondary follicles, whereas on the fifth day of the PVP and at 12 h of the VP it was observed in primary follicles. The atresia events observed during the PVP occurred mostly in wPip+ mosquitoes, and the only one observed in the VP occurred in a wPip- mosquito.
Discussion
During the five days of the PVP, most apoptoses of follicular cells occurred early (days 1-3). This observation is compatible with the fact that this tissue was under proliferation and differentiation, as previously reported by other researchers (see [37]). Coherently, we also observed a lower occurrence of apoptosis on the last days of the PVP (days 4 and 5), when the ovary was reaching the resting period. Comparing the number of apoptotic cells in wPip+ and wPip-, we observed that this phenomenon occurs mostly in the ovaries of infected females, especially on the second and third day of the PVP, and at 6 h, 12 h and 24 h of the VP. These apoptotic cells were observed mainly in the secondary follicles, which would become eggs in the second gonotrophic cycle. Coincidentally, the second gonotrophic cycle (corresponding to secondary follicles) is when infected females laid fewer eggs [35], which led us to pose a new hypothesis: the reduction in eggs is caused by Wolbachia-induced apoptosis. A future study should investigate this new hypothesis.
Although there were significantly more apoptotic cells in the primary follicles on the third and fifth day of the PVP in wPip- mosquitoes, the total number of apoptoses per ovary at those times was low (< 2) and, considering our hypothesis, does not imply a significant change in the total number of eggs laid (approximately 200 eggs in the first gonotrophic cycle) as reported by our group [35].
It seems that apoptosis of follicular cells does not, in most cases, lead the follicle to death, because many follicles eventually present apoptosis but only a few undergo atresia. A possible explanation for follicle survival is the existence of tissue repair systems, which have been described in response to apoptosis in the Anopheles gut, via an actin cone zipper [38], and via gut regeneration in the same mosquito species used in this work [39]. Another possibility is that the microtubules of adjacent cells, responsible for the formation of patency (intercellular channels), are involved in this epithelial restructuring. On the other hand, the apoptosis of a few follicular cells could increase patency formation and, consequently, increase the absorption of yolk by the oocyte through the follicular cells, which would be beneficial for oocyte maturation. This could result in the faster egg maturation previously observed in infected females [35].
Other studies relating the occurrence of apoptosis in germinal tissue to the absence of Wolbachia were performed in Brugia malayi [40], Drosophila mauritiana [41], and Asobara tabida [42], but in those cases the relationship of Wolbachia with its hosts is considered mutualistic, unlike that with Culex. However, the symbiotic relationship was considered parasitic, with the induction of apoptosis in the germ cells of Drosophila melanogaster infected with a virulent strain (wMelPop) of Wolbachia [43]. The same strain of the bacterium was transfected into Aedes albopictus, which resulted in fewer eggs laid, and follicular atresia was a possible cause of this reduction [44].
Pan et al. [19] found that transfection of wMelPop into Aedes aegypti caused an increase in the transcription of genes related to the immune system of the mosquito, and upregulated transcription of antioxidant genes and of the ferritin gene, a protein responsible for the storage of free iron. It is known that these factors are directly related to apoptosis induction [45]. In our view, the induction of apoptosis described in this work could, at least in part, be related to the results obtained by Pan et al. [19], in that the presence of Wolbachia would cause oxidative stress in the ovary cells.
In this work, we expected to find whole follicles in apoptosis (follicular atresia), but this was rarely observed with our method. In Ae. aegypti, atresia was detected between 26 h and 30 h after the blood meal [3], and in Culex pipiens pallens it occurred between two and three days after the blood meal [22,23]. Our observations of ovaries during the vitellogenic period showed that at 12 h of the VP, atresia occurred in a single ovarian follicle; according to the descriptions noted above, atresia usually occurs later. Unfortunately, the formation of the chorion prevented the observation of follicles after 24 h of the VP, so we could not observe this phenomenon with the method utilised in this study. It is also notable that the TUNEL reagent is commonly used for cells rather than tissues. Therefore, the TUNEL reagent may not have been an efficient method to detect apoptosis in tissues or very large cells because of its low penetration.
According to Clements [24], the occurrence of atresia begins during the resting period (days 4-5 of PVP), but in our study we found six atretic ovarian follicles on days 2-4 of PVP. This is the first study to document this in mosquitoes. Chao & Nagoshi [46] and Timmons et al. [15] concluded that follicular cells in apoptosis might induce apoptosis in nurse cells or oocytes, leading to follicular atresia. In addition to the apoptosis in follicular cells, we observed eight nurse cells (or oocytes) in apoptosis and eight follicles in atresia among the 320 ovaries analysed in this work; although these numbers are small, they confirm that this type of cell death occurs in Culex quinquefasciatus ovaries.
We cannot rule out that some of the ten types of programmed cell death described in Drosophila, which proceed through pathways other than apoptosis [15], also occur here and were missed by our detection method. In addition, the confocal method was partly limited by the formation of the chorion, which prevented the passage of the laser; unfortunately, some apoptosis and atresia seem to occur precisely in this period [21,24,33,47].
In this study, we detected for the first time the occurrence of atresia during PVP. In addition, we observed follicular cells in apoptosis in the previtellogenic and early vitellogenic periods, whereas this phenomenon was rare in nurse cells and oocytes. Although apoptosis of follicular cells is a common occurrence in many organisms, it had not previously been described in mosquitoes, and this work documents its occurrence quantitatively and temporally. Future approaches to investigating how Wolbachia influences mosquito reproduction include labelling Wolbachia to determine its location in mosquito tissues and searching for apoptosis-indicator genes or molecules that act differently in infected and uninfected mosquitoes.
Conclusions
To the best of our knowledge, this is the first report of follicular atresia during the previtellogenic period in Cx. quinquefasciatus. The occurrence of apoptosis in ovarian follicular cells is more common than previously believed and is increased in the presence of Wolbachia. Because more apoptosis correlates with fewer eggs in the second gonotrophic cycle, it is worth investigating the causal relationship between the occurrence of apoptosis and the reduction in eggs in Wolbachia-infected mosquitoes. The findings of this work may have implications for mosquito control and, principally, for the use of Wolbachia as a strategy for controlling mosquitoes and the pathogens they transmit.
"Biology",
"Medicine"
] |
Quasiadditivity of variational capacity
We study the quasiadditivity property (a version of superadditivity with a multiplicative constant) of variational capacity in metric spaces with respect to Whitney type covers. We characterize this property in terms of a Mazya type capacity condition, and also explore the close relation between quasiadditivity and Hardy's inequality.
Introduction
Given an open set Ω ⊂ R^n and a subset E ⊂ Ω, the relative p-capacity cap_p(E, Ω) measures the minimal energy needed by a Sobolev function that vanishes on ∂Ω to take a value of at least 1 on E. In potential theory this quantity can also be seen as a measure of the amount of rectifiable curves connecting E to ∂Ω. Hence, the greater the amount of ∂Ω that is close to E, the larger the relative p-capacity.
It can be seen that E ↦ cap_p(E, Ω) is an outer measure on subsets of Ω. In particular, the capacity is countably subadditive: if E_k ⊂ Ω for k ∈ I ⊂ N, then
$$ \operatorname{cap}_p\Bigl(\bigcup_{k\in I} E_k,\ \Omega\Bigr) \;\le\; \sum_{k\in I}\operatorname{cap}_p(E_k,\Omega). \tag{1} $$
Unlike for Borel regular measures, equality in (1) does not (usually) hold even for nice, well-separated sets. Indeed, the only sets that are measurable with respect to the cap_p-outer measure are sets of zero capacity and their complements, see for example [30, Theorem 4.8] or [10, Theorem 2]. Nevertheless, in some cases a converse to (1), with a multiplicative constant, can be shown to hold for certain unions of sets; this is called the quasiadditivity property of capacity. More precisely, we say that the p-capacity relative to an open set Ω is quasiadditive with respect to a given cover (or a decomposition) W of Ω if there is a constant A > 0 such that
$$ \sum_{B\in W} \operatorname{cap}_p(E\cap B,\ \Omega) \;\le\; A\, \operatorname{cap}_p(E,\Omega) $$
for all E ⊂ Ω. The quasiadditivity property (for the linear case p = 2) was first considered by Landkof [22, Lemma 5.5] (without the name) and by Adams [1] for Riesz (and Bessel) capacities with respect to annular decompositions of R^n \ {0}. Aikawa generalized these results in [2], where he showed that if the complement R^n \ Ω has a sufficiently small dimension (formulated in terms of a local version of a packing condition), then the Riesz capacity of R^n is quasiadditive with respect to Whitney decompositions of Ω. On the other hand, in [3] Aikawa considered the Green capacity (obtained via the Green energy) and demonstrated that if R^n \ Ω is uniformly regular (or, equivalently, uniformly 2-fat), then the Green capacity is quasiadditive with respect to Whitney decompositions of Ω. Note that in this case, conversely to the result of [2], the complement R^n \ Ω has a large dimension. A good survey of these topics in the Euclidean setting can be found in [ ]. See also [5, Section 16 of Part I] and [4] for related results in the nonlinear (p ≠ 2) setting, for which decompositions other than the Whitney decomposition are used.
The aim of this note is to study the quasiadditivity problem for the relative p-capacity with respect to Whitney type covers in the setting of complete metric measure spaces satisfying the 'standard' structural assumptions (see Section 2.1). Nevertheless, most of our results are new for p ≠ 2 (and for p = 2 they are obtained via new methods) even in Euclidean spaces. Part of our motivation stems from a need to clarify the relation between quasiadditivity and Hardy's inequality (a Sobolev-type inequality weighted with a power of the distance-to-boundary function; see Section 3.2). Glimpses of a connection between these concepts (and the related dimension bound of Aikawa from [2]) have appeared e.g. in [3,5,21,23,25,32], but now our main result, Theorem 3.3, a characterization of quasiadditivity in terms of a Mazya type capacity estimate, reveals a simple equivalence between quasiadditivity and Hardy's inequalities (Corollary 3.5). These results also link quasiadditivity to the geometry of the boundary (or the complement) of the open set Ω.
The organization of this note is as follows. In Section 2 we recall some of the necessary background material: the basic assumptions, the notions of (co)dimension for metric spaces, Sobolev type spaces and the related capacities, and Whitney covers (substitutes for the classical Whitney decompositions in our more general spaces). Since our proofs are largely based on potential-theoretic (rather than PDE-based) tools, an overview of these is given at the end of Section 2; of particular importance for us is the weak Harnack inequality for superminimizers. Section 3 contains our main characterization of quasiadditivity and the aforementioned connection with Hardy's inequalities. A concrete outcome of these considerations is that the uniform p-fatness of X \ Ω guarantees the quasiadditivity of the relative p-capacity in Ω for 1 < p < ∞.
Aikawa's dimension bound dim A (R n \ Ω) < n − p from [2] translates to more general metric spaces as co-dim A (X \ Ω) > p. We show in Section 4 that this bound, together with an additional discrete John type condition, is sufficient for the relative p-capacity to be quasiadditive with respect to Whitney covers of Ω. Finally, in Section 5 we explain how the results involving a large complement (uniform p-fatness) or a small complement (Aikawa's condition) can be combined, allowing us to deal with more general open sets whose complements consist of parts of different sizes.
For the notation we remark that C will denote positive constants whose value is not necessarily the same at each occurrence. If there exist constants c_1, c_2 > 0 such that c_1 F ≤ G ≤ c_2 F, we sometimes write F ≈ G and say that F and G are comparable.
Preliminaries
2.1. Doubling metric spaces. We assume throughout this note that X = (X, d, µ) is a complete metric measure space, where µ is a Borel measure supported on X, with 0 < µ(B) < ∞ whenever B = B(x, r) is an open ball in X, and that µ is doubling, that is, there is a constant C > 0 such that whenever x ∈ X and r > 0, we have µ(B(x, 2r)) ≤ C µ(B(x, r)).
We make the tacit assumption that each ball B ⊂ X has a fixed center x B and radius rad(B), and thus notation such as λB = B(x B , λ rad(B)) is well-defined for all λ > 0.
If µ is a doubling measure, then by iterating the doubling condition we find constants Q > 0 and C > 0 such that
$$ \frac{\mu(B(y,r))}{\mu(B(x,R))} \;\ge\; C\Bigl(\frac{r}{R}\Bigr)^{Q} $$
whenever 0 < r ≤ R < diam X and y ∈ B(x, R). Furthermore, if X is connected (this is guaranteed in our setting by the below-mentioned Poincaré inequalities), then there exists a constant Q_u > 0 such that
$$ \frac{\mu(B(y,r))}{\mu(B(x,R))} \;\le\; C\Bigl(\frac{r}{R}\Bigr)^{Q_u} \tag{2} $$
for all 0 < r < R < diam X and y ∈ B(x, R). In general, 1 ≤ Q_u ≤ Q. However, if we have uniform upper and lower bounds for the measures of balls, i.e. c^{-1} r^Q ≤ µ(B(x, r)) ≤ c r^Q for every x ∈ X and all 0 < r < diam(X), we say that the measure µ is (Ahlfors) Q-regular.
When working with a (non-regular) doubling measure µ, it is often convenient to describe the sizes of sets in terms of codimensions (instead of dimensions). For instance, the Hausdorff codimension of E ⊂ X (with respect to µ), denoted co-dim_H(E), is defined in terms of Hausdorff contents of codimension q. Another notion of codimension that will be useful for us is the Aikawa codimension: for E ⊂ X we define co-dim_A(E) as the supremum of all q ≥ 0 for which there exists a constant C_q such that
$$ \int_{B(x,r)} d(y,E)^{-q}\, d\mu(y) \;\le\; C_q\, r^{-q}\, \mu(B(x,r)) $$
for every x ∈ E and all 0 < r < diam(E). Here we interpret the integral to be +∞ if q > 0 and E has positive measure. It is not hard to see that co-dim_A(E) ≤ co-dim_H(E) for all E ⊂ X (cf. [25]). If µ is Ahlfors Q-regular, then we could define the Aikawa dimension of a set E ⊂ X to be the number dim_A(E) = Q − co-dim_A(E). Nevertheless, it was shown in [25] that for subsets of Ahlfors regular metric measure spaces this concept actually equals the Assouad dimension of the subset; see [26] for the basic properties of the Assouad dimension.
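As a concrete illustration of the Aikawa condition (a routine computation with the definition above; this example is not taken from the original text), consider a single point in Euclidean space. For E = {0} ⊂ R^n, µ the Lebesgue measure, and 0 < q < n,
$$ \int_{B(0,r)} d(y,E)^{-q}\,dy \;=\; \int_{B(0,r)} |y|^{-q}\,dy \;=\; \frac{n\,\omega_n}{n-q}\, r^{\,n-q} \;=\; \frac{n}{n-q}\, r^{-q}\,\mu(B(0,r)), $$
where ω_n is the volume of the unit ball. Hence the condition holds for every q < n (with C_q = n/(n − q)), while the integral diverges for q ≥ n, so co-dim_A({0}) = n, in accordance with the intuition that a single point has full codimension.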
2.2. Sobolev-type function spaces in the metric setting. There are many analogs of Sobolev-type function spaces in the metric setting. The one considered in this note is based on the notion of upper gradients, generalizing the fundamental theorem of calculus. Given a measurable function f : X → [−∞, ∞], we say that a Borel measurable non-negative function g on X is an upper gradient of f if
$$ |f(x) - f(y)| \;\le\; \int_{\gamma} g\, ds $$
whenever γ is a compact rectifiable curve in X. Here x, y denote the two endpoints of γ, and the above condition should be interpreted as requiring that ∫_γ g ds = ∞ whenever at least one of |f(x)|, |f(y)| is infinite. See [12] and [6] for a good discussion of the notion of upper gradients. Using upper gradients as a substitute for the modulus of the weak derivative, we define the norm
$$ \|f\|_{N^{1,p}(X)} \;=\; \Bigl( \int_X |f|^p \, d\mu \;+\; \inf_g \int_X g^p \, d\mu \Bigr)^{1/p}, $$
where the infimum is taken over all upper gradients g of f. The Newtonian space N^{1,p}(X) is the space of equivalence classes of functions f with finite norm, where the equivalence ∼ is given by u ∼ v if and only if ‖u − v‖_{N^{1,p}(X)} = 0 (see [28] or [6] for more on this function space).
In addition to the doubling property, we will also assume throughout that the space X supports a (1, p)-Poincaré inequality, that is, there exist constants C > 0 and λ ≥ 1 such that whenever B = B(x, r) ⊂ X and g is an upper gradient of a measurable function f, we have
$$ \frac{1}{\mu(B)} \int_{B} |f - f_B| \, d\mu \;\le\; C\, r \Bigl( \frac{1}{\mu(\lambda B)} \int_{\lambda B} g^{p} \, d\mu \Bigr)^{1/p}, $$
where f_B denotes the integral average of f over B. Different notions of capacity are of fundamental importance in many questions related to the behavior of functions belonging to a certain class. Given a set E ⊂ X, the total p-capacity of E, denoted Cap_p(E), is the infimum of ‖u‖^p_{N^{1,p}(X)} over all functions u such that u ≥ 1 on E. Just as sets of measure zero play the role of indeterminacy in the theory of Lebesgue spaces L^p(X), sets of total p-capacity zero play the corresponding role in the theory of Sobolev type spaces; see [6] or [28] for details. We say that a property holds (p-)quasieverywhere (p-q.e.) if the exceptional set has zero total capacity.
When the considerations take place in an open set Ω ⊂ X, a more appropriate version of capacity is the relative p-capacity. For a measurable set E ⊂ Ω this is defined as the number
$$ \operatorname{cap}_p(E,\Omega) \;=\; \inf \int_{X} g_u^{\,p} \, d\mu, $$
where the infimum is taken over all u ∈ N^{1,p}(X) with u = 0 on X \ Ω, u = 1 on E, 0 ≤ u ≤ 1, and over all upper gradients g_u of u. A function u satisfying the above conditions is called a capacity test function for E. Should no such function u exist, we set cap_p(E, Ω) = ∞. When the variational capacity is taken with respect to Ω = X, it may be the case that cap_p(E, X) = 0 for all bounded E ⊂ X; this is certainly true if X is bounded. If X is unbounded and still cap_p(E, X) = 0 for all bounded E ⊂ X, then X is called p-parabolic, but if cap_p(E, X) > 0 for some bounded E ⊂ X, then X is p-hyperbolic. These notions will be relevant in the considerations of Section 4. See [14] and [15] for more on parabolic and hyperbolic spaces. Notice in particular that if X is bounded or p-parabolic and Ω ⊂ X is such that Cap_p(X \ Ω) = 0, then cap_p(E, Ω) = 0 for every E ⊂ Ω.
Besides measuring small (exceptional) sets, the relative capacity can also be used to quantify the largeness of sets. For instance, a closed set E ⊂ X is said to be uniformly p-fat if there exists a constant C > 0 such that
$$ \operatorname{cap}_p\bigl(E \cap \overline{B},\ 2B\bigr) \;\ge\; C\, \operatorname{cap}_p\bigl(\overline{B},\ 2B\bigr) $$
for all balls B centered at points of E with radius less than diam(X)/8. See [6, Chapter 6] for this and other basic properties of the total and the variational capacity on metric spaces. We remark that uniform p-fatness can also be characterized using uniform density conditions for Hausdorff contents; see e.g. [19].
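To make the notion concrete, here is a standard Euclidean illustration (not from the source; it uses the Hausdorff-content characterization just mentioned): if a closed set E ⊂ R^n satisfies a uniform density condition for the λ-dimensional Hausdorff content,
$$ \mathcal{H}^{\lambda}_{\infty}\bigl(E \cap B(x,r)\bigr) \;\ge\; c\, r^{\lambda} \qquad \text{for all } x \in E,\ 0 < r < \operatorname{diam}(E), $$
then E is uniformly p-fat whenever p > n − λ. In particular, a hyperplane (λ = n − 1) is uniformly p-fat for every p > 1, and the complement of a ball or of a half-space (λ = n) is uniformly p-fat for every 1 < p < ∞.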
Recall that we say that the variational p-capacity cap_p(·, Ω) is quasiadditive with respect to a decomposition or a cover W of Ω if there exists a constant A > 0 such that
$$ \sum_{B\in W} \operatorname{cap}_p(E\cap B,\ \Omega) \;\le\; A\, \operatorname{cap}_p(E,\Omega) $$
for every E ⊂ Ω. In the next subsection we discuss the particular family of covers we are concerned with.
2.3. Whitney covers.
Let Ω ⊂ X be an open set. We often write d_Ω(x) = dist(x, X \ Ω) for x ∈ Ω. When 0 < c ≤ 1/2, we fix a Whitney type cover W_c(Ω) = {B_i}_{i∈N} of Ω consisting of balls of the form B_i = B(x_i, c d_Ω(x_i)), x_i ∈ Ω. Such a cover can always be constructed by considering maximal packings (or, alternatively, '5r'-covers) of the sets {x ∈ Ω : 2^j ≤ d_Ω(x) < 2^{j+1}}, j ∈ Z, with balls of the above type. In pathological situations we allow B_i = ∅ for some i, if necessary.
In our proofs, we need to be able to dilate the Whitney balls without having too much overlap; the existence of such a cover is established in the next (elementary) lemma (Lemma 2.1); for a proof, see e.g. [ ]. In the proof of our main characterization of quasiadditivity, we will need for Whitney balls B ∈ W_c(Ω) the estimate
$$ c_1\, \operatorname{rad}(B)^{-p}\, \mu(B) \;\le\; \operatorname{cap}_p(B,\Omega) \;\le\; c_2\, \operatorname{rad}(B)^{-p}\, \mu(B), \tag{3} $$
where c_1, c_2 may depend on W_c(Ω) but not on the particular B ∈ W_c(Ω). The upper bound in (3) is always true in our setting, and can be proved almost immediately using only the doubling condition and a suitable capacity test function. The lower bound is a bit more involved, and can in fact fail in some spaces satisfying our basic assumptions. Thus, in the cases where we need the lower bound, we will need extra assumptions on Ω or X, e.g. those given in Lemma 2.2 below. However, we stress that the lower bound in (3) is only needed in the proof of Lemma 2.3, which in turn is only used to prove the Main Theorem 3.3 and once again in the considerations of Section 5, so these are the only instances where such assumptions are needed. Another important case when the lower bound is valid is when X \ Ω is uniformly p-fat; the bound then follows easily (e.g.) from the p-Hardy inequality (see Section 3.2) applied to capacity test functions of B ∈ W_c(Ω). In this case we only need the standing assumptions that µ is doubling and X supports a (1, p)-Poincaré inequality.
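For a quick sanity check of estimate (3), not taken from the original text: in R^n with the Lebesgue measure one has rad(B)^{-p} µ(B) ≈ r^{n−p} for a ball B of radius r, which matches the classical two-sided bound for the variational capacity of a ball inside a comparable annulus,
$$ \operatorname{cap}_p\bigl(B(x,r),\ B(x,2r)\bigr) \;\approx\; r^{\,n-p}, \qquad 1 < p < \infty. $$
Since a Whitney ball satisfies dist(B, X \ Ω) ≈ rad(B), one expects cap_p(B, Ω) to behave like this annular capacity; as explained above, the upper bound always holds, while the lower bound genuinely requires extra assumptions on Ω or X.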
If we want to weaken the assumption on X \ Ω, we need to assume more on µ and p. In the following lemma we chose the condition that X is unbounded (in which case we have µ(X) = ∞ by (2)). However, if X happens to be bounded, then we could impose a further condition on Ω instead, such as diam(Ω) < 2 diam(X), in which case the constants depend on γ = µ(Ω)/µ(X) < 1. The proof in this case is similar to the one below. Recall here that Q u is the exponent from the 'upper mass bound' (2).
Lemma 2.2. Assume that X is unbounded and that 1 < p < Q_u, and let W_c(Ω) be a Whitney cover of an open set Ω ⊊ X. Then there exists a constant C > 0 such that
$$ \operatorname{cap}_p(B, \Omega) \;\ge\; C\, \operatorname{rad}(B)^{-p}\, \mu(B) $$
for all Whitney balls B ∈ W_c(Ω).
Proof. Let u ∈ N^{1,p}(Ω) be a capacity test function for B = B(x, r) ∈ W_c(Ω), and let g ∈ L^p(Ω) be an upper gradient of u. For positive integers j let B_j = 2^j B and r_j = 2^j r. As u ∈ L^p(X) and µ(X) = ∞ by the unboundedness of X (here we use (2)), there exists K ∈ N (depending on u) such that u_{B_K} < 1/2. On the other hand, because u ≥ 1 on B we know that u_B = 1. Now a standard telescoping argument using the (1, p)-Poincaré inequality yields estimate (4). It follows that there exist a constant C_0 > 0 and some 1 ≤ j_0 ≤ K for which the corresponding term in the telescoping sum is bounded from below (otherwise (4) would lead to a contradiction when compared to a geometric series). Thus we obtain, using also (2) and the assumption 1 < p < Q_u, the desired lower bound cap_p(B, Ω) ≥ C r^{-p} µ(B).
Let us record the following easy consequence of estimate (3) for unions of Whitney balls.
Let Ω ⊂ X be an open set and let W c (Ω) = {B i } i∈N be a Whitney cover of Ω for which the lower bound in (3) holds. Furthermore, let U ⊂ Ω be a union of Whitney balls, i.e., U = i∈I B i for some I ⊂ N. Then Proof. This follows from the fact that
2.4. Existence and properties of p-potentials. In computing the relative p-capacity cap_p(E, Ω), should this capacity be finite, we can find a minimizing sequence of capacity test functions u_k ∈ N^{1,p}(X), i.e., lim_{k→∞} inf_{g_{u_k}} ∫_X g_{u_k}^p dµ = cap_p(E, Ω).
We will assume throughout that 1 < p < ∞; hence L^p(X) is reflexive, and so a standard variational argument using Mazur's lemma on L^p(X) (as in Lemma 2.4 below) tells us that if Ω ⊂ X is bounded and Cap_p(X \ Ω) > 0, then there is a p-potential u ∈ N^{1,p}(X) such that 0 ≤ u ≤ 1 on X, u = 1 on E, u = 0 on X \ Ω, and ∫_X g_u^p dµ = cap_p(E, Ω). Such a p-potential is unique because X supports a (1, p)-Poincaré inequality; see [29] for more details. Nevertheless, the following more general lemma tells us that a p-potential u ∈ N^{1,p}_loc(X) exists in more general cases (e.g. if Cap_p(X \ Ω) = 0) as well, although if X is p-parabolic, then u is constant. In addition, the proof below shows that the reflexivity of N^{1,p}(X) is actually not needed for the existence of p-potentials.
If Ω is bounded, then the following proof can be easily modified, or the results of [29] can be used to obtain the desired conclusion. Hence here we will only give the proof for the case that Ω, and hence X, is unbounded. For each u ∈ N 1,p (X) there is a minimal (p-weak) upper gradient g u ∈ L p (X); see for example [6]. Hence, from now on, we let g u denote this minimal upper gradient of u.
Let {u k } k∈N be a sequence of functions in N 1,p (X) that satisfy 0 ≤ u k ≤ 1 on X, u k = 0 on X \ Ω p-q.e., u k = 1 p-q.e. on E, and lim k→∞ Ω g p u k dµ = cap p (E, Ω).
Fix x_0 ∈ Ω and for each positive integer n let B_n = B(x_0, n). Given that 0 ≤ u_k ≤ 1, the sequences {u_k}_k and {g_{u_k}}_k are bounded in L^p(B_n) for each positive integer n, and hence, because 1 < p < ∞, the uniform convexity of L^p(B_n) together with a Cantor diagonalization argument tells us that {u_k}_k converges weakly to a function û in L^p_loc(X) and that {g_{u_k}}_k converges weakly to a function g in L^p_loc(X). Finally, an application of [16, Lemma 3.1] to {u_k}_k and {g_{u_k}}_k in B_n allows us to modify û on a set of measure zero to obtain a function u ∈ L^p_loc(X) that has g as a p-weak upper gradient, and furthermore, from [16, Lemma 3.1] and [28, Proof of Theorem 3.7], we can conclude that u = 1 p-q.e. on E, u = 0 p-q.e. on X \ Ω, and that
$$ \int_{X} g^{\,p}\, d\mu \;\le\; \liminf_{k\to\infty} \int_{X} g_{u_k}^{\,p}\, d\mu \;=\; \operatorname{cap}_p(E,\Omega). $$
This, together with the definition of cap_p(E, Ω), now completes the proof.
The results from [18] show that such u, if non-constant, satisfies u > 0 on Ω with u < 1 on X \ E. Of course, should cap p (E, Ω) be infinite, then no such u exists.
It is clear that the p-potential u corresponding to E ⊂ X has the property that u is a p-superminimizer in Ω and a p-subminimizer in X \ E; in particular, u is a p-minimizer in Ω \ E. Here, we say that a function v ∈ N^{1,p}_loc(X) is a p-superminimizer in an open set U ⊂ X if, whenever ϕ ∈ N^{1,p}(X) is a non-negative function such that ϕ = 0 on X \ U, we have
$$ \int_{\operatorname{supt}(\varphi)} g_v^{\,p}\, d\mu \;\le\; \int_{\operatorname{supt}(\varphi)} g_{v+\varphi}^{\,p}\, d\mu, $$
and v is a p-subminimizer in U if −v is a p-superminimizer in U. We refer the interested reader to [17] for information on minimizers; see also [6]. In particular, it is known that if v is a p-superminimizer in U and w ∈ N^{1,p}_loc(X) is a p-minimizer in U such that w ≤ v holds p-q.e. on X \ U, then w ≤ v in U as well. This is the so-called comparison principle. Notice also that if Cap_p(X \ Ω) = 0 and v ∈ N^{1,p}_loc(X) is a minimizer in Ω, then v is a minimizer in X; moreover, in this case, if u is a p-potential for cap_p(E, Ω) then it is a p-potential for cap_p(E, X).
In the proofs of our results the following weak Harnack inequality for p-superminimizers is of fundamental importance. See [18] for a proof of this lemma.
Characterizations of quasiadditivity
In this section we prove the main result of this note, Theorem 3.3, and provide a connection between quasiadditivity and p-Hardy inequalities. Recall that we always assume that 1 < p < ∞.
3.1. The main characterization. We begin by showing that the quasiadditivity property for unions of balls is in fact sufficient for quasiadditivity for general sets. Below C_A is the constant from the weak Harnack inequality and λ is the dilation constant from the p-Poincaré inequality.

Proposition 3.1. Let Ω ⊂ X be an open set with Ω ≠ X and let W_c(Ω) = {B_i}_{i∈N} be a Whitney cover of Ω with c < min{(C_A)^{-1}, (30λ)^{-1}}. Assume that the quasiadditivity condition holds for unions of Whitney balls, i.e., if U = ∪_{i∈I} B_i for some I ⊂ N, B_i ∈ W_c(Ω), then
$$ \sum_{i\in I} \operatorname{cap}_p(B_i,\Omega) \;\le\; C_1\, \operatorname{cap}_p(U,\Omega). $$
Then the capacity cap_p(·, Ω) is quasiadditive with respect to W_c(Ω), i.e., there exists a constant C > 0 such that
$$ \sum_{i\in\mathbb{N}} \operatorname{cap}_p(E\cap B_i,\ \Omega) \;\le\; C\, \operatorname{cap}_p(E,\Omega) $$
for every E ⊂ Ω.
Proof. The structure of the proof is based on the idea of Aikawa [2], but given the nonlinear nature of our setting, the tools we employ are completely different. Let E ⊂ Ω. If the relative capacity cap_p(E, Ω) is infinite, then the desired inequality follows trivially. Therefore we assume that cap_p(E, Ω) < ∞, and let u ∈ N^{1,p}_loc(Ω) be the p-potential corresponding to this capacity. If cap_p(E, Ω) = 0, then by the monotonicity of cap_p(·, Ω), each term in the sum on the left-hand side of the desired inequality is also zero, and the claim follows. Therefore we will assume that cap_p(E, Ω) > 0, and so the p-potential u is non-constant.
where q and A are the constants from the weak Harnack inequality (Lemma 2.5). We divide the union E = ∪_{i∈N} E_i, where E_i = E ∩ B_i, into the following two parts: if u(x) ≥ C_0 for q.e. x ∈ B_i, then i ∈ I_1, and otherwise i ∈ I_2. Note also that the indices i for which cap_p(E_i, Ω) = 0 do not contribute to the sum on the left-hand side of the desired quasiadditivity inequality. Hence in the following argument we only consider the indices i for which cap_p(E_i, Ω) > 0.
It is immediate that u C 0 is an admissible test function for Thus, using the assumption that quasiadditivity holds for unions of Whitney balls, we obtain for all upper gradients g u of u that On the other hand, if i ∈ I 2 , then by the weak Harnack inequality Since 0 ≤ u q ≤ 1, it follows that for the set whence v ∈ N 1,p loc (X) and the class of upper gradients of u is precisely the class of upper gradients of v as well. Also, This gives a positive lower bound c 1 for the mean-value of v p in 2B i ; We can now use the well-known Mazya's version of the (Sobolev-)Poincaré inequality (see e.g. [27,Chapter 10], and [8,Proposition 3.2] for the metric space version): where g v is an arbitrary upper gradient of v (and thus of u as well). Since E i = B i ∩{v = 0} by the comparison principle, it follows from (7) that Using this and the fact that the balls 10λB i do not overlap too much, guaranteed by our choice of the parameter c (with L ≥ 10λ) and Lemma 2.1, we conclude that The claim now follows by taking the infima over all upper gradients of u in (5) and (8) whenever U ⊂ Ω is a union of Whitney balls. Then there exists a constant C > 0 such that Proof. Let E ⊂ Ω. If cap p (E, Ω) = ∞ the claim follows, and thus we may again assume that cap p (E, Ω) < ∞. Let u be the p-potential of E with respect to Ω, and let g u be an upper gradient of u. We denote E i = E ∩ B i and split the union E = i E i into two parts: In the first case i ∈ I 1 we have |u − u 2B i | ≥ 1/2 in E i , and so, using the (p, p)-Poincaré inequality (a consequence of the (1, p)-Poincaré inequality by [11,Theorem 5.1]) and the bounded overlap of the balls 10λB i , we obtain In the second case i ∈ I 2 it follows from u 2B i ≥ 1/2 and 0 ≤ u ≤ 1 that for from which we obtain Thus, by the weak Harnack inequality for superminimizers, we obtain that for each i ∈ I 2 . Hence the function u/C 1 is an admissible test function for Using the bounded overlap of the balls B i and the assumption (9), we conclude that The lemma follows from estimates (10) and (11) by taking the infimum over all upper gradients of u.
Combining the conditions from Propositions 3.1 and 3.2, we arrive at the main result of this section:

Theorem 3.3. Let Ω ⊂ X be an open set, and let W_c(Ω) = {B_i}_{i∈N} be a Whitney cover of Ω with c < min{(C_A)^{-1}, (30λ)^{-1}}. Then the following conditions are (quantitatively) equivalent: (a) There exists C > 0 such that for all E ⊂ Ω, and the capacity estimate (3) holds.
whenever U = i∈I B i for B i ∈ W c (Ω) and I ⊂ N, and the capacity estimate (3) holds.
Proof. The implications (a) =⇒ (b) and (c) =⇒ (d) are trivial and the implications converse to these are Proposition 3.2 and Proposition 3.1, respectively. As a link between these two equivalences we have (b) ⇐⇒ (d) from Lemma 2.3, and here the lower bound of (3) is needed to pass from (d) to (b). Hence we assume the validity of (3) in parts (c) and (d). Recall that the validity of (3) is guaranteed by Lemma 2.2 when the hypotheses of this lemma are satisfied, or by the uniform p-fatness of X \ Ω.
3.2. The Hardy connection.
We say that an open set Ω ⊂ X admits a p-Hardy inequality if there exists a constant C > 0 such that the inequality
$$ \int_{\Omega} \frac{|u(x)|^{p}}{d_\Omega(x)^{p}}\, d\mu(x) \;\le\; C \int_{\Omega} g_u(x)^{p}\, d\mu(x) $$
holds for all u ∈ N^{1,p}(Ω) with u = 0 on X \ Ω and for all upper gradients g_u of u. Let us record the following Mazya-type characterization for Hardy inequalities.
Lemma 3.4. An open set Ω ⊂ X admits a p-Hardy inequality if and only if there exists a constant C > 0 such that
$$ \int_{E} d_\Omega(x)^{-p}\, d\mu(x) \;\le\; C\, \operatorname{cap}_p(E,\Omega) \tag{12} $$
for all E ⊂ Ω.
Proof. For compact sets K ⊂ Ω, the above characterization is proven in the metric space setting in [20, Theorem 4.1] (see also [27, §2.3] in the Euclidean setting). Thus it suffices to show that if Ω admits a p-Hardy inequality and E ⊂ Ω is an arbitrary subset, then estimate (12) holds. If cap_p(E, Ω) = ∞, then there is nothing to prove, and on the other hand if cap_p(E, Ω) < ∞, then the p-Hardy inequality, used for capacity test functions u_k with lim_{k→∞} ∫_Ω g_{u_k}^p dµ = cap_p(E, Ω), yields the desired estimate (12).
In other words, an open set Ω ⊂ X admits a p-Hardy inequality if and only if assertion (a) of Theorem 3.3 holds. This leads immediately to Corollary 3.5, the aforementioned equivalence between quasiadditivity and p-Hardy inequalities. Since uniform p-fatness of the complement X \ Ω is a sufficient condition for p-Hardy inequalities in our setting (see [9, Corollary 6.1] and [19]), we obtain a concrete sufficient condition for the quasiadditivity of the p-capacity:

Corollary 3.6. Let Ω ⊂ X be an open set and let W_c(Ω) be a Whitney cover of Ω with a suitably small parameter 0 < c ≤ 1/2. Assume further that the complement X \ Ω is uniformly p-fat. Then the capacity cap_p(·, Ω) is quasiadditive with respect to W_c(Ω).
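A classical Euclidean instance of Corollary 3.6 (a standard example, not from the source): for the upper half-space Ω = R^n_+ = {x ∈ R^n : x_n > 0}, the complement is uniformly p-fat for every 1 < p < ∞, so the p-capacity is quasiadditive with respect to Whitney covers and Ω admits a p-Hardy inequality. In this case d_Ω(x) = x_n and the Hardy inequality reduces to the classical one,
$$ \int_{\mathbb{R}^n_+} \frac{|u(x)|^{p}}{x_n^{\,p}}\, dx \;\le\; \Bigl(\frac{p}{p-1}\Bigr)^{p} \int_{\mathbb{R}^n_+} |\nabla u(x)|^{p}\, dx, \qquad u \in C_c^{\infty}(\mathbb{R}^n_+). $$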
Quasiadditivity and the Aikawa dimension
In this section we focus on open sets Ω ⊂ X that satisfy co-dim A (X \ Ω) > p (recall the definition from Section 2.1). In this case we also have that co-dim H (X \ Ω) > p, and hence it follows from [24,Proposition 4.1] that Cap p (X \ Ω) = 0. Therefore, as was remarked in Section 2.4, we know that cap p (E, Ω) = cap p (E, X) for every E ⊂ Ω. Recall from Section 2.2 that if X is p-parabolic, then actually cap p (E, X) = 0 for all bounded E ⊂ X. Thus, if X is p-parabolic and Ω ⊂ X is such that Cap p (X \ Ω) = 0, then Ω satisfies the quasiadditivity property trivially; the same is also true if X is bounded and Cap p (X \ Ω) = 0. Hence in this section we assume that X is unbounded and p-hyperbolic.
We say that an open set Ω = X \ E and a related Whitney cover W = W_c(Ω) satisfy a discrete John condition if there exist L > 1, a > 1, and C > 0 such that for each B ∈ W we can find a chain C(B) = {B_m}_{m=0}^∞ of Whitney balls B_m ∈ W(Ω), with B_0 = B, such that B_m ∩ B_{m+1} ≠ ∅, B ⊂ LB_m, and rad(B_m) ≥ C a^m rad(B) for each m ∈ N. This condition is satisfied, for instance, if Ω is an unbounded John domain (see [31]); similar chain conditions have been used e.g. in [11,12]. Notice in particular that since our open sets are unbounded, there cannot exist a 'John center' as in the usual John condition for bounded domains; essentially the 'point at infinity' acts as a John center. On the other hand, the domain Ω = {(x, y) ∈ R^2 : 0 < y < |x| + 1} satisfies the discrete John condition, but is not an unbounded John domain (in the sense of [31]).

Proposition 4.1. Let Ω ⊂ X be an open set with co-dim_A(X \ Ω) > p. Assume furthermore that Ω satisfies the above discrete John condition for a Whitney cover W_c(Ω) with c ≤ (6λ)^{-1}. Then cap_p(·, Ω) is quasiadditive with respect to W_c(Ω) and Ω admits a p-Hardy inequality.
Proof. By Theorem 3.3 and Corollary 3.5, it suffices to show that there is a constant Fix such a set U , and write r i = rad(B i ), i ∈ I. We may clearly assume that cap p (U, Ω) < ∞. Let u be a capacity test-function for U . Then u B i = 1 for each i ∈ I (and thus u 2B i ≥ α for some α > 0 since 0 ≤ u ≤ 1). On the other hand, since u ∈ L p (Ω), we find, using the discrete John condition, for each i ∈ I a chain of Whitney balls B i,m , m = 0, 1, . . . , and thus, by Hölder's inequality, allowing us to choose the index M i as above.
By a standard chaining argument using the (1, p)-Poincaré inequality (see e.g. [11] or [12]), we have that (14) α Comparing the sum on the right-hand side of (14) with the convergent geometric series ∞ m=0 a −mδ , we infer that if δ > 0, then there must exist a constant C 1 > 0, independent of u and B i , and at least one index m i ∈ N such that (15) rad Let us now fix q such that co-dim A (X \ Ω) > q > p and set δ = (q − p)/p > 0. We thus obtain from (15) for each B i a ball B * i = B i,m i with radius r * i satisfying Using estimate (16), and changing the summation to be over all Whitney balls, we calculate If B = B * i , that is, B ∈ W satisfies (16) for the ball B i , then B i ⊂ LB by the chain condition. Since r −q i ≈ d Ω (x) for all x ∈ B i , it follows from the bounded overlap of the Whitney balls B i ⊂ LB and the assumption co-dim A (X \ Ω) > q, that By the assumption c ≤ (6λ) −1 the overlap of the balls 2λB, where B ∈ W c (Ω), is uniformly bounded (Lemma 2.1), and so we conclude from (17) and (18) The claim (13) follows by taking the infimum over all capacity test functions for U (and their upper gradients).
It has been shown in [25, Section 6] (following the considerations of [21]) that if Ω ⊂ X admits a p-Hardy inequality, then either co-dim_H(X \ Ω) < p − δ or co-dim_A(X \ Ω) > p + δ for some δ > 0 depending only on the data associated with the space X and the Hardy inequality. Moreover, there is also a local version of such a dichotomy for the dimension [25, Theorem 6.2]. These results, together with Proposition 4.1 above (see also the following Section 5), show clearly that the condition co-dim_A(X \ Ω) > p is very natural in the context of Hardy inequalities and thus also for quasiadditivity. On the other hand, the case co-dim_H(X \ Ω) < p − δ includes open sets with uniformly p-fat complements; cf. Corollary 3.6.
The main open question here is whether the discrete John condition is really necessary in Proposition 4.1; we know of no counterexamples. In the Euclidean space R^n this extra condition is certainly not needed. Indeed, as commented at the end of [21], the dimension bound dim_A(R^n \ Ω) < n − p implies, by [2, Theorem 2], an estimate relating the sum of the capacities of the Whitney pieces of E, the Riesz capacity R_{1,p}(E), and cap_p(E, Ω), valid for all (measurable) E ⊂ Ω; here R_{1,p}(E) is a Riesz capacity of E (cf. [2] or [5] for the definition) and the second inequality is a well-known fact. Quasiadditivity for cap_p(·, Ω) follows by Theorem 3.3.
Nevertheless, Proposition 4.1 still gives a partial answer to the question of Koskela and Zhong [21, Remark 2.8], i.e., a q-Hardy inequality holds in their setting provided that Ω satisfies the discrete John condition (and the Minkowski dimension in [21, Remark 2.8] is replaced by the correct Aikawa (co)dimension).
Combining fat and small parts of the complement
The results of Section 3 gave us a criterion, uniform p-fatness of X \ Ω, under which Ω supports quasiadditivity of the p-capacity for Whitney decompositions of Ω; this condition requires X \ Ω to be 'large'. Conversely, in Section 4 we gave a criterion, largeness of the Aikawa codimension of X \ Ω (or smallness of the Assouad dimension, and hence 'smallness' of X \ Ω), under which Ω supports quasiadditivity for Whitney decompositions of Ω. Nevertheless, requiring the whole complement to be either large or small rules out many interesting cases. For instance, sometimes the complement of a domain can be decomposed into two closed subsets such that one of them is 'large' and one is 'small'; an easy example is the punctured ball B(0, 1) \ {0} ⊂ R^n. The aim of this final section is to explain how the results of the previous Sections 3 and 4 can be combined to address such more complicated sets. In the Euclidean case, some results in this direction can also be found in [23]. A full geometric characterization of domains supporting quasiadditivity of the p-capacity for Whitney decompositions still seems to be beyond our reach. However, in the next lemma we demonstrate a technique which applies to a broad class of sets.
Lemma 5.1. Assume that X is unbounded and that 1 < p < Q u . Suppose that Ω 0 ⊂ X is an open set such that X \ Ω 0 is uniformly p-fat. Suppose also that F ⊂ Ω 0 is a closed set with co-dim A (F ) > p, and that X \ F satisfies the discrete John condition of Section 4.
Then Ω = Ω 0 \ F satisfies a quasiadditivity property with respect to Whitney covers W c (Ω) with suitably small c > 0.
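Before the proof, a quick check of the hypotheses on the model example mentioned at the beginning of this section (our own verification, not part of the original text): in X = R^n with 1 < p < n = Q_u, take
$$ \Omega_0 = B(0,1), \qquad F = \{0\}, \qquad \Omega = \Omega_0 \setminus F = B(0,1)\setminus\{0\}. $$
The complement R^n \ B(0,1) is uniformly p-fat for every p, the point F satisfies co-dim_A(F) = n > p, and R^n \ {0} satisfies a discrete John condition (chains of Whitney balls can be taken to move radially outward, with radii growing geometrically). Hence Lemma 5.1 applies, and the p-capacity is quasiadditive with respect to Whitney covers of the punctured ball.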
Proof. Let W c (Ω) be a Whitney decomposition of Ω, where 0 < c < min{(C A ) −1 , (30λ) −1 }. Set W 1 to be the collection of all balls B(x, r) ∈ W satisfying dist(x, X \ Ω) = dist(x, F ) and, similarly, let W 2 be the collection of all balls B(x, r) ∈ W satisfying dist(x, X \ Ω) = dist(x, X \ Ω 0 ). It is clear that we can extend the collection W 1 to a Whitney cover W 1 * of X \ F =: Ω 1 and the collection W 2 to a Whitney cover W 2 * of Ω 0 , both with the same constant c but possibly with larger overlap constants.
As before, to prove the quasiadditivity property, it suffices to consider unions of Whitney balls. Thus, let U = ∪_{i∈I} B_i, where B_i ∈ W_c(Ω) and I ⊂ N; we may also assume that cap_p(U, Ω) < ∞. Set U_1 = ∪_{B_i∈W_1} B_i and U_2 = ∪_{B_i∈W_2} B_i. Since co-dim_A(X \ Ω_1) = co-dim_A(F) > p and the discrete John condition holds for Ω_1, we know, by Proposition 4.1, that
$$ \sum_{\{i\in I :\, B_i\in W_1\}} \operatorname{cap}_p(B_i, \Omega_1) \;\le\; C_1\, \operatorname{cap}_p(U_1, \Omega_1) \;\le\; C_1\, \operatorname{cap}_p(U, \Omega); \tag{19} $$
here we use the facts U_1 ⊂ U and Ω ⊂ Ω_1. On the other hand, an application of the results of Section 3 yields
$$ \sum_{\{i\in I :\, B_i\in W_2\}} \operatorname{cap}_p(B_i, \Omega_0) \;\le\; C_2\, \operatorname{cap}_p(U_2, \Omega_0) \;\le\; C_2\, \operatorname{cap}_p(U, \Omega). \tag{20} $$
Here the last inequality follows since U_2 ⊂ U and Ω_0 \ Ω ⊂ F is of zero p-capacity. For the same reason we have in (20) that cap_p(B_i, Ω_0) = cap_p(B_i, Ω) for each B_i ∈ W_2. To estimate the corresponding capacities on the left-hand side of (19), we use the capacity bounds from (3) (with respect to Ω and Ω_1; note that the assumptions of Lemma 2.2 are valid in the latter case) to obtain for all B_i ∈ W_1 that cap_p(B_i, Ω) ≤ C rad(B_i)^{-p} µ(B_i) ≤ C_3 cap_p(B_i, Ω_1).
"Mathematics"
] |
Two Antimicrobial Peptides Derived from Bacillus and Their Properties
Growth promotion and disease prevention are important strategies in the modern husbandry industry, and for this reason, antibiotics are widely used as animal feed additives. However, the overuse of antibiotics has led to the serious problem of increasing resistance of pathogenic microorganisms, posing a major threat to the environment and human health. “Limiting antibiotics” and “Banning antibiotics” have become the inevitable trends in the development of the livestock feed industry, so the search for alternative antimicrobial agents has become a top priority. Antimicrobial peptides (AMPs) produced by Bacillus spp. have emerged as a promising alternative to antibiotics, due to their broad-spectrum antimicrobial activity against resistant pathogens. In this study, two strains of Bacillus velezensis 9-1 and B. inaquosorum 76-1 with good antibacterial activity were isolated from commercial feed additives, and the antimicrobial peptides produced by them were purified by ammonium sulfate precipitation, anion exchange chromatography, gel chromatography, and RP-HPLC. Finally, two small molecule peptides, named peptide-I and peptide-II, were obtained from strain 9-1 and 76-1, respectively. The molecular weight and sequences of the peptides were analyzed and identified by LC–MS/MS, which were 988.5706 Da and VFLENVLR, and 1286.6255 Da and FSGSGSGTAFTLR, respectively. The results of an antibacterial activity and stability study showed that the two peptides had good antibacterial activity against Staphylococcus aureus, B. cereus, and Salmonella enterica, and the minimum inhibitory concentrations were 64 μg/mL and 16 μg/mL, 32 μg/mL and 64 μg/mL, and 8 μg/mL and 8 μg/mL, respectively. All of them have good heat, acid, and alkali resistance and protease stability, and can be further developed as feed antibiotic substitutes.
Introduction
Since the 1950s, when the United States Food and Drug Administration first approved the addition of antibiotics to feed, antibiotics used as feed additives have played a huge role in improving animal growth rates, improving the quality of livestock and poultry products, reducing animal morbidity and mortality, improving feed utilization, and reducing costs [1][2][3]. However, with the long-term use of antibiotics in livestock and poultry breeding, problems such as intestinal flora disorders, reduced disease resistance, increased resistance of pathogenic bacteria, and excessive drug residues in meat and egg products have become more prominent and have drawn wide concern from countries around the world [4][5][6]. Following the ban of all food-animal growth-promoting antibiotics by Sweden in 1986 [7], the European Union banned the use of antibiotic growth promoters in animal food production in 2006, and the U.S. Food and Drug Administration imposed restrictions on antibiotic use in animal production in December 2016 [8]. The Ministry of Agriculture and Rural Affairs of China issued Notice No. 194 in 2019, announcing that from 1 January 2020 the addition of antibiotics to feed would be completely banned; this announcement also marks the real beginning of "banning antibiotics" in China. "Banning antibiotics" in feed and "reducing and limiting antibiotics" in the breeding process have become inevitable trends in the development of the livestock feed industry worldwide [9]. However, this will certainly have a huge impact on the global livestock industry, and may lead to a decline in the production of meat and egg products and a significant increase in breeding costs. Therefore, the development of antibiotic replacement products and reasonable alternative strategies has become a key issue for the healthy, sustainable, and green development of animal husbandry.
Previous studies have found that antimicrobial peptides (AMPs) have characteristics similar to antibiotics, such as a broad antibacterial spectrum, a low tendency to induce microbial resistance, enhancement of host immunity, and high safety. They have therefore gained widespread attention as one of the most promising alternatives to antibiotics [10,11]. AMPs produced by Bacillus spp. have emerged as a particularly promising alternative [12]. In this study, antibacterially active Bacillus spp. strains isolated from commercial feed additives were first screened, and the fermentation times of the active strains were optimized. Then, the antimicrobial peptides produced by the active strains were isolated, purified, and identified using an ammonium sulfate precipitation method and various chromatographic and mass spectrometry techniques. Finally, the stability and minimum inhibitory concentrations of the antimicrobial peptides were determined. This provides a theoretical basis for the application of these antimicrobial peptides in animal husbandry.
Screening of Bacteriostatic Bacillus spp. Strains
Staphylococcus aureus ACCC 10499, Bacillus cereus ACCC 04315, and Salmonella enterica ACCC 01996 were used as indicator bacteria in this study. The 57 strains of Bacillus spp. isolated from commercial feed additives were screened by the standardized agar diffusion method [13], and 33 strains with antibacterial activity were obtained. Strains 76-1 and 9-1 showed good inhibitory effects on Staphylococcus aureus ACCC 10499 (Figure S1a), Bacillus cereus ACCC 04315 (Figure S1b), and Salmonella enterica ACCC 01996 (Figure S1c). A 16S rRNA gene sequence analysis showed that strains 76-1 and 9-1 were B. inaquosorum and B. velezensis, respectively, and these strains were subsequently used to isolate antimicrobial peptides.
Determination of the Optimal Fermentation Time
In order to determine the time at which antibacterial substances accumulate in the active strains, the optimal fermentation times of strains 9-1 and 76-1 were determined. The antibacterial activity of strain 9-1 gradually increased with time between 12 and 120 h, and decreased after 120 h (Figure 1a). For strain 76-1, the antibacterial activity gradually increased between 12 h and 36 h, reached its highest level at 36 h, and decreased thereafter (Figure 1b). Therefore, 120 h and 36 h were selected as the best fermentation times for strains 9-1 and 76-1, respectively.
Ammonium sulfate precipitation is one of the most commonly used salting-out methods for antimicrobial peptide purification and has the advantages of safety, mild conditions, preservation of the biological activity of the target protein, low cost, and simple operation [14]. By investigating the effect of ammonium sulfate concentration on the precipitation of the antimicrobial peptides, it was found that the precipitates obtained from the fermentation supernatant of strain 9-1 at 20% to 100% ammonium sulfate saturation all had antibacterial activity, and the inhibition zone of the product was largest at 40% saturation (Figure 2a,b), which was therefore selected as the best ammonium sulfate concentration for purifying the antimicrobial peptide of strain 9-1. For strain 76-1, only the products precipitated at 80% and 100% ammonium sulfate saturation showed antibacterial activity (Figure 2c,d). After a comprehensive comparison of factors such as desalting time, purification effect, and cost, 80% was selected as the best ammonium sulfate concentration for precipitating the antimicrobial peptide of strain 76-1.
Isolation and Purification of the Antimicrobial Peptides
After precipitation with ammonium sulfate and desalting by dialysis, the samples were separated and purified by DEAE Cellulose-52 ion exchange chromatography, using 0.02 mol/L, pH 8.0 Tris-HCl buffer containing 0-0.5 mol/L NaCl as the gradient eluent. The absorbance of the eluted samples at 280 nm was monitored. The samples from both strains 9-1 and 76-1 produced two peaks, at NaCl concentrations of 0 and 0.1 mol/L, namely peaks 1 and 2 and peaks A and B, respectively, and the peak shapes were essentially the same (Figure 3a,b). The activity test showed that the products of peaks 1 and 2, as well as peaks A and B, produced obvious inhibition zones (Figure 3c), indicating that the samples contained antimicrobial peptides. Sephadex G50 gel chromatography was then used to further purify the antimicrobial peptides. Two 280 nm UV absorption peaks, 1-1 and 1-2, were obtained by gel chromatography of DEAE Cellulose-52 ion exchange chromatography peak 1 of strain 9-1 (Figure 4a), and one 280 nm UV absorption peak, 2-1, was obtained from peak 2 (Figure 4b); all of these showed antibacterial activity (Figure 4c), but the A280 value of peak 2-1 was significantly lower than those of peaks 1-1 and 1-2. After gel chromatography, peak A and peak B of strain 76-1 obtained from DEAE Cellulose-52 ion exchange chromatography each gave two A280 peaks (Figure 4d,e), and only peaks A-2 and B-2 were found to have antibacterial activity (Figure 4f). Finally, the antimicrobial peptides were further purified by reverse-phase high-performance liquid chromatography. The peptide from strain 9-1 showed one peak at around 35 min (Figure S2a), and the peptide from strain 76-1 also showed a main peak at around 35 min (Figure S2b). After purified antimicrobial peptide samples were prepared using the same elution conditions, the substances corresponding to both of these peaks were found to have antibacterial activity.
Identification and Property Analysis of Antimicrobial Peptides
LC-MS/MS was used to determine and analyze the molecular weights and amino acid sequences of the antimicrobial peptides. The molecular weight of the antimicrobial peptide purified from strain 9-1 is 988.5706 Da and the mass-to-charge ratio is 495.2932 m/z. The secondary mass spectrum is shown in Figure 5a; the peptide contains 8 amino acids and its sequence is VFLENVLR. This antimicrobial peptide was named peptide-I. The antimicrobial peptide purified from strain 76-1 has a molecular weight of 1286.6255 Da and a mass-to-charge ratio of 644.3203 m/z. The secondary mass spectrum is shown in Figure 5b; it contains 13 amino acids, its sequence is FSGSGSGTAFTLR, and it was named peptide-II. The amino acid sequences of peptide-I and peptide-II were compared with those of the antimicrobial peptides reported in the APD database (https://aps.unmc.edu/database, accessed on 22 November 2023), and the highest homologies were only 44.44% and 43.75%, respectively. Knowledge of the physicochemical properties of antimicrobial peptides can provide a theoretical basis for their practical production and application. Therefore, we determined the resistance of the antimicrobial peptides to temperature, pH, and proteases, as well as their minimum inhibitory concentrations.
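As a cross-check of the reported masses and charge states, the monoisotopic mass of an unmodified linear peptide can be computed directly from its sequence. The sketch below is our own illustration, not part of the analysis pipeline used in the study; the residue masses are the standard monoisotopic values, and the observed m/z values in the text correspond to the doubly protonated ions.

```python
# Minimal sketch (our own illustration, not the software used in the study):
# monoisotopic peptide mass and [M + nH]^n+ m/z computed from the sequence.
MONO = {  # standard monoisotopic residue masses in Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565   # H2O added for the free N- and C-termini
PROTON = 1.007276   # mass of a proton

def monoisotopic_mass(sequence: str) -> float:
    """Neutral monoisotopic mass of an unmodified linear peptide."""
    return sum(MONO[aa] for aa in sequence) + WATER

def precursor_mz(sequence: str, charge: int) -> float:
    """m/z of the [M + charge*H]^(charge+) ion."""
    return (monoisotopic_mass(sequence) + charge * PROTON) / charge

for peptide in ("VFLENVLR", "FSGSGSGTAFTLR"):
    print(peptide, round(monoisotopic_mass(peptide), 4), round(precursor_mz(peptide, 2), 4))
# Gives roughly 988.57 Da / m/z 495.29 for peptide-I and 1286.63 Da / m/z 644.32
# for peptide-II, in good agreement with the values reported above.
```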
In terms of temperature tolerance, the activity of antimicrobial peptide-I did not change significantly compared with that of untreated samples (Figure 6I(a,b)); that is, the antibacterial activity did not decrease significantly after high-temperature treatment, indicating that this peptide has good thermal stability. However, the activity of antimicrobial peptide-II decreased slightly at 40-80 °C, and there was no antibacterial activity after treatment at 100 °C and 121 °C (Figure 6I(c,d)), indicating that this peptide cannot tolerate high temperatures. In terms of pH stability, peptide-I maintained its activity at pH 2.0-10.0, with no significant difference in activity at different pH values (Figure 6II(a,b)). Peptide-II is also active from pH 2.0 to pH 10.0, but it is most active at pH 7.0 (Figure 6II(c,d)). Regarding protease stability, antimicrobial peptide-I was insensitive to proteinase K, trypsin, and pepsin (Figure 6III(a,b)), whereas the activity of antimicrobial peptide-II was significantly reduced after treatment with proteinase K, although trypsin and pepsin had no significant effect on its activity (Figure 6III(c,d)). Lastly, the minimum inhibitory concentrations (MIC) of the antimicrobial peptides showed that the MIC of peptide-I against St. aureus ACCC 10499, B. cereus ACCC 04315, and Sa. enterica ACCC 01996 were 64 µg/mL, 32 µg/mL, and 8 µg/mL, respectively, and the MIC of peptide-II against the same three strains were 16 µg/mL, 64 µg/mL, and 16 µg/mL, respectively (Table 1). It can be seen that peptide-I has the lowest MIC, and thus the best antibacterial activity, against Sa. enterica ACCC 01996, while peptide-II has the same antibacterial activity against St. aureus ACCC 10499 and Sa. enterica ACCC 01996, with an MIC of 16 µg/mL.
Screening of the Antimicrobial Active Bacillus sp.
A total of 57 strains of Bacillus spp. isolated from commercially available feed additives and stored in our laboratory were inoculated into NB liquid medium, shaken at 220 rpm and 30 °C for 20 h, and then inoculated into fermentation medium (glucose 10.0 g, peptone 10.0 g, Na2HPO4, distilled water 1000 mL, pH 7.0-7.2, autoclaved at 121 °C for 15 min) at a 1% inoculation rate and cultivated for a further 24 h. The culture was centrifuged at 4 °C and 10,000 rpm for 10 min, and the supernatant was stored at 4 °C.
The antibacterial activities of the strains were determined by the agar diffusion method [13]. Staphylococcus aureus ACCC 10499, Bacillus cereus ACCC 04315, and Salmonella enterica ACCC 01996 were each inoculated into NB medium and shaken for 16-20 h at 220 rpm and 37 °C. After the concentration of the bacterial suspension was adjusted to an OD600 of 0.6-0.8, an appropriate amount of the suspension was mixed with NA medium at about 50 °C at a ratio of 1:50 and poured into sterile plates. Wells were drilled in the solidified plates, 50 µL of Bacillus spp. fermentation supernatant was added to each well, and the plates were cultured for 12-14 h at 37 °C to observe whether inhibition zones formed. Three parallel experiments were set up for each group. The strains with larger inhibition zones and a wider antibacterial spectrum were selected as the active strains for isolating antimicrobial peptides.
Determination of the Optimal Fermentation Time
Strains 9-1 and 76-1, which showed good activity, were selected for fermentation. In order to compare the effect of fermentation time on the production of antimicrobial peptides, fermentation broths were sampled at 12 h intervals and their antibacterial activities against St. aureus ACCC 10499 were measured. Three parallel experiments were set up in each group.
Identification of the Optimal Ammonium Sulfate Saturation
The fermentation supernatant of each strain was divided into five 50 mL portions, to which ammonium sulfate was added to saturations of 20%, 40%, 60%, 80%, and 100%, respectively, and the mixtures were kept in a refrigerator at 4 °C overnight. The solutions at the different saturations were centrifuged at 4 °C and 10,000 rpm for 30 min, the supernatant was discarded, and the precipitate was retained as the crude antimicrobial peptide.
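For readers reproducing the salting-out step, the amount of solid ammonium sulfate needed to move a solution between saturation levels can be estimated with a commonly quoted room-temperature formula. The sketch below is our own convenience calculation, not part of the original protocol; the constants are the usual textbook values and should be checked against a salting-out table for the actual working temperature.

```python
# Rough estimate (at ~20 degrees C) of solid ammonium sulfate to add per litre of
# solution to raise its saturation from s1 to s2 (both given as percentages).
# The 533 and 0.3 constants are the commonly tabulated values; verify against a
# nomogram or table before use, since they vary slightly with temperature.
def ammonium_sulfate_grams_per_litre(s1: float, s2: float) -> float:
    if not (0 <= s1 < s2 <= 100):
        raise ValueError("saturations must satisfy 0 <= s1 < s2 <= 100")
    return 533.0 * (s2 - s1) / (100.0 - 0.3 * s2)

# Example: bringing 1 L of supernatant from 0% to the 40% saturation used for
# strain 9-1, and from 0% to the 80% saturation used for strain 76-1.
print(round(ammonium_sulfate_grams_per_litre(0, 40), 1))   # roughly 242 g
print(round(ammonium_sulfate_grams_per_litre(0, 80), 1))   # roughly 561 g
```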
Dialysis Desalting
Ammonium sulfate in the crude antimicrobial peptide preparation is generally removed by dialysis or molecular exclusion [15]. Dialysis was selected for desalting in this study. The precipitates obtained at the ammonium sulfate saturations above were dissolved in sterile redistilled water and loaded into dialysis bags with a molecular weight cut-off of 3.5 kDa. Dialysis was then conducted against sterile redistilled water in a refrigerator at 4 °C for 48 h, and the dialyzed liquid was vacuum freeze-dried to obtain a lyophilized powder. The powder was dissolved in 0.02 mol/L, pH 8.0 Tris-HCl buffer. St. aureus ACCC 10499 was used as the indicator bacterium and the antibacterial activity was detected by the agar diffusion method [13]. Finally, the optimal ammonium sulfate saturation was determined by comparing the sizes of the inhibition zones.
DEAE Cellulose-52 ion-Exchange Chromatography
Ion exchange chromatography separates proteins based on differences in their charge. In this study, DEAE Cellulose-52 was selected as the packing material and equilibrated with 0.02 mol/L, pH 8.0 Tris-HCl buffer at a flow rate of 1 mL/min. The freeze-dried samples of the desalted dialysate were dissolved in 0.02 mol/L, pH 8.0 Tris-HCl buffer to a concentration of 20 mg/mL. The solution was centrifuged at 10,000 rpm for 10 min, and the supernatant was used for loading. The loading amount was 2 mL, and the loading time was recorded. After all of the sample had flowed into the column, 0.02 mol/L, pH 8.0 Tris-HCl buffer containing 0-0.5 mol/L NaCl was used for gradient elution and the eluent was collected. The elution curve was drawn according to the change in absorbance at 280 nm. Finally, the antibacterial activity of each peak of the elution curve was tested, and the appropriate eluent concentration was selected according to the antibacterial activity.
Sephadex G50 Gel Chromatography
Sephadex G50 gel chromatography separates antimicrobial peptides on the basis of molecular size and can also be used to desalt samples after ion exchange chromatography [15,16]. Sephadex G50 was used as the packing material and equilibrated with 0.02 mol/L, pH 8.0 Tris-HCl buffer; the flow rate was adjusted to 0.5 mL/min during equilibration. The eluted peaks with antibacterial activity from the previous step were vacuum freeze-dried, and the samples were dissolved as described in Section 3.2.2. The loading amount was 5% of the column volume, and the loading time was recorded. After the samples had completely flowed into the column, 0.02 mol/L, pH 8.0 Tris-HCl buffer containing 0-0.5 mol/L NaCl was added for gradient elution at an elution rate of 0.5 mL/min, and the absorbance at 280 nm was measured. The eluent was collected every 5 min, an elution curve was drawn according to the change in A280 values, and the antibacterial activity of the eluent at each peak was measured.
Reverse-Phase High-Performance Liquid Chromatography (RP-HPLC) Purification and Preparation
RP-HPLC can be used to isolate, purify, and prepare antimicrobial peptides based on their physicochemical properties and different retention times [17,18]. The active components purified by Sephadex G50 gel chromatography were vacuum freeze-dried, and the dried samples were dissolved in methanol and filtered. A C18 column (4.6 mm × 250 mm) was used for HPLC analysis. The loading volume was 10 µL, acetonitrile and water were used as mobile phases for gradient elution in different proportions, the flow rate was 1 mL/min, the column temperature was 30 °C, and the detection wavelength was 254 nm. The HPLC gradient elution programs are shown in Table 2. By observing the times at which the peaks appeared, a suitable gradient elution program was selected and the antimicrobial peptides were prepared by preparative high-performance liquid chromatography. The activities of the antimicrobial peptides were determined by the filter paper disk agar diffusion method [19,20]. In order to determine the molecular weights and sequences of the antimicrobial peptides, the prepared antimicrobial peptide samples were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for mass spectrometry identification and analysis by LC-MS/MS. The detailed process was as follows: the polypeptide samples were dissolved in a washing buffer (0.1% formic acid, 2% acetonitrile) and transferred into a 10 kDa ultrafiltration centrifuge tube for centrifugation at 12,000× g for 10 min. The solution after ultrafiltration was desalted using a C18 peptide desalting column (Acclaim PepMap RSLC, 75 µm × 25 cm, C18, 2 µm, 100 Å). The sample was then eluted with an elution buffer (0.1% formic acid, 60% acetonitrile) and the eluate was transferred to a new Eppendorf tube. The eluted samples were concentrated by centrifugal evaporation and dried, then redissolved in 100 µL of Nano-LC mobile phase A (0.1% formic acid in ultrapure water), and online LC-MS analysis was performed. The dissolved samples were loaded onto a nanoViper C18 column (3 µm, 100 Å) in a volume of 2 µL and then rinsed with 20 µL of mobile phase A (0.1% formic acid in ultrapure water) for desalting using an Easy nLC 1200 liquid phase system (Thermo Fisher, Waltham, MA, USA). The samples retained on the nanoViper C18 column after desalting were then separated on a C18 reverse-phase analytical column (Acclaim PepMap RSLC, 75 µm × 25 cm, C18, 2 µm, 100 Å). The gradient used was an increase in mobile phase B (80% acetonitrile, 0.1% formic acid) from 5% to 38% within 30 min.
Mass spectrometry was performed on a Thermo Fisher Q Exactive system (Thermo Fisher, USA) combined with a Nanospray Flex™ ion source (Thermo Fisher, USA). The spray voltage was 1.9 kV, and the heating temperature of the ion transport tube was 275 °C. The mass spectrometer was operated in data-dependent analysis (DDA) mode; the MS1 scanning resolution was 70,000, the scanning range was 100-1500 m/z, and the maximum injection time was 100 ms. A maximum of 20 MS2 spectra with charges of 1+ to 3+ were collected in each DDA cycle, and the maximum injection time for MS2 ions was 50 ms. The collision energy (higher-energy collisional dissociation, HCD) was set to 28 eV for all precursor ions, and the dynamic exclusion was set to 6 s.
The raw files collected by mass spectrometry were processed and analyzed using PEAKS Studio 8.5 software (Bioinformatics Solutions Inc., Waterloo, ON, Canada). The database was the Bacillus subtilis protein database downloaded from UniProt (https://www.uniprot.org/, accessed on 10 May 2023). The search parameters were set as follows: the precursor (MS1) mass tolerance was 10 ppm, and the fragment (MS2) mass tolerance was 0.05 Da. Oxidation (M), acetylation (protein N-term), deamidation (NQ), pyro-Glu from E, and pyro-Glu from Q were selected as variable modifications.
Figure 1. Bacteriostatic activity of strains with different fermentation times. (a) Curve of antibacterial activity of strain 9-1 with time; (b) curve of antibacterial activity of strain 76-1 with time.
Figure 4. Sephadex G50 gel chromatography purification chromatograms and bacteriostatic activities of crude antimicrobial peptides. (a,b) Gel chromatograms of ion-exchange chromatographic peak 1 and peak 2 of strain 9-1, respectively; (c) antibacterial activities of different gel chromatographic peak components of strain 9-1; (d,e) gel chromatograms of ion-exchange chromatographic peak A and peak B of strain 76-1, respectively; (f) antibacterial activities of different gel chromatographic peak components of strain 76-1.
Figure 5. Secondary mass spectra of antimicrobial peptides. (a) Mass spectrum of peptide-I obtained from strain 9-1; (b) mass spectrum of peptide-II obtained from strain 76-1.
Figure 6. Resistance of the peptides to temperature (I), pH (II), and proteases (III). I: Temperature sensitivity of antimicrobial peptides. (a,c) Bar charts of the temperature sensitivity of peptide-I and peptide-II, respectively; (b,d) inhibition zones of peptide-I and peptide-II against St. aureus ACCC 10499 after treatment at different temperatures, respectively. II: pH stability of antimicrobial peptides. (a,c) Bar charts of the pH stability of peptide-I and peptide-II, respectively; (b,d) inhibition zones of peptide-I and peptide-II against St. aureus ACCC 10499 after treatment at different pH values. III: Protease stability of antimicrobial peptides. (a,c) Bar charts of the protease stability of peptide-I and peptide-II, respectively; (b,d) inhibition zones of peptide-I and peptide-II against St. aureus ACCC 10499 after treatment with different proteases, respectively.
3.4.2. Determination of Antimicrobial Peptide Stability
Appropriate amounts of purified antimicrobial peptide samples were placed in a water bath at 40 °C, 60 °C, 80 °C, 90 °C, and 100 °C for 30 min and in a high-temperature autoclave
Table 1. The minimum inhibitory concentration (MIC) of antimicrobial peptides for each indicator bacterium.
"Biology",
"Environmental Science",
"Medicine"
] |
Enhanced Stability of Skyrmions in Two-Dimensional Chiral Magnets with Rashba Spin-Orbit Coupling
Recent developments have led to an explosion of activity on skyrmions in three-dimensional (3D) chiral magnets. Experiments have directly probed these topological spin textures, revealed their nontrivial properties, and led to suggestions for novel applications. However, in 3D the skyrmion crystal phase is observed only in a narrow region of the temperature-field phase diagram. We show here, using a general analysis based on symmetry, that skyrmions are much more readily stabilized in two-dimensional (2D) systems with Rashba spin-orbit coupling. This enhanced stability arises from the competition between field and easy-plane magnetic anisotropy and results in a nontrivial structure in the topological charge density in the core of the skyrmions. We further show that, in a variety of microscopic models for magnetic exchange, the required easy-plane anisotropy naturally arises from the same spin-orbit coupling that is responsible for the chiral Dzyaloshinskii-Moriya interactions. Our results are of particular interest for 2D materials like thin films, surfaces, and oxide interfaces, where broken surface-inversion symmetry and Rashba spin-orbit coupling naturally lead to chiral exchange and easy-plane compass anisotropy. Our theory gives a clear direction for experimental studies of 2D magnetic materials to stabilize skyrmions over a large range of magnetic fields down to T=0.
Skyrmions are topological objects that first arose in the study of hadrons in high-energy physics, but in recent years they have been discussed in connection with a wide range of condensed matter systems including the quantum Hall effect, spinor Bose condensates, and especially chiral magnets [1][2][3]. There has been tremendous progress in establishing exotic skyrmion crystal (SkX) phases in a variety of magnetic materials that lack inversion symmetry, ranging from metallic helimagnets like MnSi1,4 to insulating multiferroics5, using both neutrons4 and Lorentz transmission electron microscopy6. Skyrmions also lead to unusual transport properties in metals, like the topological Hall effect [7][8][9], and may be related to the observed non-Fermi-liquid behavior [10][11][12].
Spin-orbit coupling (SOC) in magnetic systems without inversion symmetry gives rise to the chiral Dzyaloshinskii-Moriya (DM)13,14 interaction D·(S_i × S_j). This competes with the usual S_i·S_j exchange to produce spatially modulated states like spirals and the SkX.
The 2D case is particularly interesting. Even in materials that break bulk inversion, thin films show enhanced stability15,16 of skyrmion phases, persisting down to lower temperatures. Inversion is necessarily broken in 2D systems on a substrate or at an interface, and this too may lead to textures arising from DM interactions. Spin-polarized STM17 has observed such textures on magnetic monolayers deposited on non-magnetic metals with large SOC. Very recently, we have proposed18 chiral magnetism at oxide interfaces. In these systems the 2D electron gas at the interface between two insulating oxides has a large gate-tunable Rashba SOC20 that leads to a tunable DM exchange, producing spiral18 and SkX19 phases at finite temperatures.
Motivated by this, we investigate 2D chiral magnets with a free energy dictated by general symmetry considerations, given SOC and broken inversion in the z-direction. Our results are summarized in the T = 0 phase diagram in Fig. 1 as a function of perpendicular magnetic field H and anisotropy A. The easy-axis anisotropy (A < 0) has been studied before2,16, but the easy-plane regime (A > 0) has not. It is precisely here that we find an unexpectedly large SkX phase. Skyrmions not only gain DM energy, but are also an excellent compromise between the field and the anisotropy A > 0. Moreover, we show that the skyrmion has a nontrivial structure in the spatial variation of its topological charge density (see Fig. 2) in this easy-plane region.
Can such easy-plane anisotropy of the required strength arise naturally in real materials? We present a microscopic analysis of three exchange mechanisms, superexchange in Mott insulators, and double exchange and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction in metals, and show that the same SOC that gives rise to the DM interaction D also leads to an easy-plane compass anisotropy A_c. The compass term is usually ignored since it is higher order in SOC than DM. We show, however, that its contribution to the energy is comparable to that of DM, with A_c|J|/D² ≃ 1/2 for all three mechanisms, where J is the exchange coupling. This striking fact seems not to have been clearly recognized earlier, possibly because these microscopic mechanisms have been discussed in widely different contexts using different notation and normalizations. We also discuss how additional single-ion anisotropies enter the analysis. Our microscopic considerations should serve as a guide for material parameters of 2D chiral magnets such that a large SkX region can be probed experimentally.
Ginzburg-Landau Theory: The continuum free-energy functional F[m] = ∫d²r F(m) for the local magnetization m(r) of a 2D system is given by eq. (1). The isotropic term F_iso = F_0(m) + (J/2)Σ_α(∇m_α)² consists of F_0, which determines the magnitude of m, and a stiffness J that controls its gradient (α = x, y, z). For our T = 0 calculation we replace F_0 with the constraint m²(r) = 1. Broken z-inversion and SOC lead to the DM term, which can be rewritten as D m·(∇×m) after a π/2 rotation of m about the z axis. SOC also leads to anisotropy terms: the A_c > 0 "compass" term gives rise to easy-plane anisotropy, while the single-ion A_s term can be either easy-axis (A_s < 0) or easy-plane (A_s > 0). We define length in units of the lattice spacing a so that J, D, A_c and A_s all have dimensions of energy. While the form of the free energy (1) follows entirely from symmetry, the microscopic analysis, presented in the second half of this paper, gives insight into the relative strengths of the various terms. The origin of both the DM and compass terms lies in Rashba SOC, whose strength λ ≪ t, the hopping, in the materials of interest. Thus we obtain a hierarchy of scales with the exchange J ≫ D ∼ J(λ/t) ≫ A_c ∼ J(λ/t)². Naively one might expect the compass term to be unimportant; however, its contribution to the energy, O(A_c), is comparable to that of the DM term, O(D²/J). Note that while the DM term is linear in the wave-vector q of a spin configuration, its energy must be O(q²). Thus compass anisotropy, usually ignored in the literature, must be taken into account whenever the DM term is important.
We will show below that, for a wide variety of exchange mechanisms, independent of whether the system is a metal or an insulator, the ratio A_cJ/D² ≃ 1/2. We will also discuss the origin and strength of the single-ion A_s term. Here we only note that the effective anisotropy in model (1) is governed by A = A_c + A_s, which is easy-axis for A < 0 and easy-plane for A > 0.
Phase Diagram: We begin by examining the T = 0 phase diagram of (1) for a fixed D ≪ J as a function of magnetic field H = Hẑ and the dimensionless anisotropy parameter AJ/D², which we explore by varying A_s with A_cJ/D² = 1/2 held fixed.
We extend the simple spiral above to incorporate a more general 1D modulation described by m_spiral(r) = sin[θ(Q̂_0·r)]Q̂_0 + cos[θ(Q̂_0·r)]ẑ, where θ varies only along Q̂_0, chosen to be x̂ without loss of generality. In contrast to the linear variation in the simplest ansatz, here θ(x) is an arbitrary function with m(x + R) = m(x), where R is the period. We numerically minimize (1) with the variational parameters θ(x) and R (see Supplement). This more general 1D periodic modulation extends the stability of the spiral relative to the FM beyond |A|J/D² = 1, up to 1.25, at H = 0; see Fig. 1. We next turn to H ≠ 0. For the A > 0 FM state, the easy-plane anisotropy competes with the field along ẑ, so that the magnetization points at a tilt angle θ_tilt = cos⁻¹[H/(2A)] for H < 2A.
Skyrmions: At finite magnetic field, we also need to consider the SkX phase2 in addition to FM and spiral. A skyrmion1 is a "hedgehog"-like spin texture with a quantized topological charge or chirality q = (4π)⁻¹∫d²r m·(∂_x m × ∂_y m), which is restricted to be an integer. For example, the q = −1 skyrmion in Fig. 1(a) is a smooth spin configuration with the topological constraint that the central spin points down while all the spins at the boundary point up.
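A quick numerical check of this quantization (a sketch with a hypothetical linear θ(r) profile, not the authors' code), evaluating q on a grid for the Néel-type texture m = sinθ r̂ + cosθ ẑ:

```python
import numpy as np

R = 10.0
x = y = np.linspace(-R, R, 801)
X, Y = np.meshgrid(x, y, indexing="ij")
r = np.hypot(X, Y)
theta = np.pi * np.clip(1.0 - r / R, 0.0, 1.0)   # theta(0) = pi, theta(>= R) = 0
phi = np.arctan2(Y, X)

# Neel-type texture: m = sin(theta) rhat + cos(theta) zhat
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

dx = x[1] - x[0]
dmx = np.gradient(m, dx, axis=1)                 # d m / d x
dmy = np.gradient(m, dx, axis=2)                 # d m / d y
chi = np.einsum("kij,kij->ij", m, np.cross(dmx, dmy, axis=0)) / (4.0 * np.pi)
print("q =", chi.sum() * dx * dx)                # -> approximately -1
```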
The SkX state is a periodic array of skyrmions. It is often described by multiple-Q spiral condensation 23,24 .
Here we use the 'single-cell approximation' of Bogdanov2, where we impose the topological constraint on the centre and boundary spins within a unit cell. We then find the optimal skyrmion configuration within a single unit cell, whose size R is also determined variationally. In the text we discuss the results from a 'circular-cell' approximation2, which leads to an effectively 1D problem (along the radial direction) that is computationally much simpler than the full 2D numerical minimization of the energy (1) using the conjugate-gradient method described in the Supplementary Information. We find the results of the two methods are very similar. In the circular-cell approximation, we replace the unit cell of the 2D crystal by a circular cell of radius R and take a skyrmion configuration m_skyrmion(r) = sin θ(r) r̂ + cos θ(r) ẑ with the topological constraints θ(0) = π and θ(R) = 0. We minimize the energy (1) with θ(r) and the cell radius R as variational parameters. To construct the crystal, we make a hexagonal packing of the optimal circular cells and recalculate the energy by filling the space between the circles with up spins.
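A minimal sketch of such a circular-cell minimization (an illustration of the variational scheme, not the authors' code; the parameter values and the sign convention of the Néel-type DM energy density are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

J, D, A, H = 1.0, 0.2, 0.02, 0.01  # hypothetical couplings, in units where J = 1

def cell_energy(params):
    """Energy of Eq. (1) for m = sin(theta) rhat + cos(theta) zhat on a circular cell."""
    R = params[-1]
    theta = np.concatenate(([np.pi], params[:-1], [0.0]))  # theta(0) = pi, theta(R) = 0
    r = np.linspace(1e-4, R, theta.size)                   # avoid the r = 0 singularity
    dth = np.gradient(theta, r)
    f = (J / 2) * (dth**2 + np.sin(theta)**2 / r**2) \
        - D * (dth + np.sin(theta) * np.cos(theta) / r) \
        + A * np.cos(theta)**2 - H * np.cos(theta)
    return np.trapz(2 * np.pi * r * f, r)                  # integrate over the cell

n = 60
theta0 = np.linspace(np.pi, 0.0, n + 2)[1:-1]              # linear initial profile
x0 = np.concatenate((theta0, [np.pi * J / D]))             # start near R ~ pi J / D
bounds = [(0.0, np.pi)] * n + [(1.0, 100.0)]
res = minimize(cell_energy, x0, method="L-BFGS-B", bounds=bounds)
print("optimal cell radius R ≈", res.x[-1])
```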
To get a preliminary idea of the stability of the SkX, we simplify further. The linear ansatz2 θ(r) = π(1 − r/R), with the single parameter R (the skyrmion size), has the great virtue of being essentially analytically tractable. It leads to the dashed lines in the anisotropy-field phase diagram of Fig. 1(b) and gives us the first glimpse of the large SkX phase for easy-plane anisotropy.
We obtain the SkX-spiral and SkX-FM phase boundaries shown in Fig. 1(b) by numerical energy minimization using the more general form of eq. (2) and discretizing θ(r) on a 1D grid. We see that this confirms the qualitative observations from the linear approximation and yields an even larger SkX phase on the easy-plane side. Our 2D square cell calculations essentially reproduce the same phase diagram (see Supplementary Information).
Easy-plane vs. easy-axis anisotropy: Our results for the phase diagram in the easy-axis region (A < 0) agree well with previous studies 2, 16 . One might have thought that the perpendicular field H and easy-axis anisotropy would both be favorable for a skyrmion, all of whose spins are pointing up far from the center, but then the FM state is even more favorable.
The remarkable result in Fig. 1(b) is that the SkX phase is much more robust for easy-plane anisotropy (A > 0). We can physically understand this as follows. The skyrmion obviously gains energy from the DM term due to the twist in the spin configuration, but it is also the best compromise between the field along ẑ and the easy-plane anisotropy. This is why the large SkX region in the phase diagram is more or less oriented around H = 2A, the dashed line in Fig. 1. [Figure caption fragment: Here the core has a large 'transition' region (yellow-orange) from down (centre) to up (boundary) in m, leading to an unusual two-peak structure for |2πrχ|.]
The internal structure of the skyrmion within a unit cell gives us further insight into the stability of the SkX phase. In Fig. 2 we plot m_z(r) and the (angular-averaged) topological charge density |2πrχ(r)|, where χ(r) = [m·(∂_x m × ∂_y m)]/4π. It is conventional to define the 'core radius' of a skyrmion from the maximum of |dm_z/dr|, which for the rotationally symmetric ansatz (2) is given by |dm_z/dr| = |2πrχ(r)|.
For the easy-axis case the skyrmion core shows a conventional structure with a single peak in |2πrχ| in Fig. 2(a,c). In contrast, easy-plane anisotropy can lead to a non-trivial core with a double peak in |2πrχ(r)|; see Fig. 2(b,d). As the spins twist from down at the center (θ(0) = π) to all up (θ(R) = 0) at the boundary, it is energetically favorable to have an extended region where θ(r) ≈ θ_tilt (defined above), the best compromise between the field and the easy-plane anisotropy. As a result, |2πrχ| shows a two-peak structure with the topological charge split into two spatially separated parts.
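The compromise angle invoked here follows in two lines; a short sketch, assuming only the anisotropy and Zeeman pieces of Eq. (1) for a uniform state:

```latex
\[
  F(\theta) \;=\; A\cos^{2}\theta \;-\; H\cos\theta ,
  \qquad
  \frac{dF}{d\theta} \;=\; \sin\theta\,\bigl(H - 2A\cos\theta\bigr) \;=\; 0 ,
\]
\[
  \Longrightarrow\quad
  \cos\theta_{\mathrm{tilt}} \;=\; \frac{H}{2A}
  \quad\text{for } H < 2A ,
  \qquad
  \theta_{\mathrm{tilt}} = 0 \ \text{for } H \ge 2A ,
\]
```

consistent with the SkX region being organized around the line H = 2A.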
Phase transitions: We next describe the nature of the various phase transitions in Fig. 1(b) within our variational framework. All the phase boundaries between the spiral state and either FM or SkX are first-order transitions with a crossing of energy levels. On the other hand, the SkX to easy-axis FM transition in the regime H > 2A is second order, with the optimal SkX unit-cell size diverging at the transition (see Supplementary Information). The SkX to tilted-FM transition for H < 2A is first order, as the SkX cell size remains finite across it. There is a tricritical point where the SkX phase boundary intersects the line H = 2A. Another interesting feature of Fig. 1(b) is the reentrant transitions FM → SkX → FM for AJ/D² ≳ 1.
Microscopic Analysis: Our results for the enhanced stability of skyrmions for easy-plane anisotropy followed from the phenomenological free energy (1) for 2D chiral magnets. We next present a detailed microscopic derivation of eq. (1), which shows that the parameter regime of interest arises naturally for three very different exchange mechanisms in the presence of SOC.
Moriya's original paper14 considered superexchange with SOC, and this was further elaborated in a way relevant to our analysis in ref. 25. The RKKY interaction with SOC was first discussed for spin glasses26, and the relation between DM and anisotropy was analyzed27 in the context of quantum dots. Double-exchange ferromagnets with SOC were analyzed in our recent work18. In all these cases, it was found by explicit calculation that A_c|J|/D² = 1/2 (in the notation of this paper).
We sketch here a "unified" way of thinking about these very different problems. Consider the microscopic Hamiltonian H = H_0 + H_int, with the kinetic term H_0 = −Σ_⟨ij⟩,αβ c†_iα [t δ_αβ + iλ(σ·d̂_ij)_αβ] c_jβ + h.c. (consistent with the two-site form used below). Here t is the nearest-neighbor hopping on a 2D square lattice with sites r_i, λ is the SOC, σ are Pauli matrices, and d̂_ij = ẑ × r_ij/|r_ij| with r_ij = r_i − r_j. Our analysis can be easily generalized to further-neighbor hopping and arbitrary lattices.
The interaction H_int can be chosen to model several different situations. (i) Hubbard repulsion H_int = U Σ_i n_i↑ n_i↓ with U ≫ t at half-filling gives rise to antiferromagnetic (AF) superexchange with SOC. (ii) Coupling of conduction electrons to a lattice of localized spins S_i via H_int = −J_H Σ_i s_i·S_i leads to Zener double exchange with SOC, where s_i = (1/2)Σ_αβ c†_iα σ_αβ c_iβ and the Hund's coupling J_H ≫ t. (iii) The H_int of (ii) with a Kondo coupling |J_K| ≪ t leads to an RKKY interaction between moments mediated by electrons with SOC.
The common feature of (i, ii, iii) is that, in each case, the effective Hamiltonian can be derived by considering pairwise interactions between spins. We discuss (i) and (ii) below; the RKKY case (iii) is discussed in the Supplementary Information. To focus on the low-energy sector, we consider a two-site problem with nearest-neighbor sites i and j and rewrite H_0 as H_0 = −t̃ Σ_αβ (c†_iα [e^{iϑσ·d̂_ij}]_αβ c_jβ + h.c.) with t̃ = √(t² + λ²) and tan ϑ = λ/t. Next we gauge away the SOC with SU(2) rotations on the fermionic operators at the two sites, via a_iα = [e^{−i(ϑ/2)σ·d̂_ij}]_αβ c_iβ and a_jα = [e^{i(ϑ/2)σ·d̂_ij}]_αβ c_jβ.
Expressed in terms of the transformed fermions, H_0 looks like the kinetic energy without SOC, while the form of the local H_int is left unchanged, provided the spins are also suitably transformed. The usual analysis of superexchange for case (i) then leads25 to the result H_SE = J_AF Σ_⟨ij⟩ s_i · R(2ϑd̂_ij) s_j, where J_AF = 4t̃²/U and R(2ϑd̂) is the orthogonal matrix corresponding to a rotation by angle 2ϑ about d̂. In case (ii), the same R enters the double-exchange result for classical spins, with J_F = κt̃, where κ is a constant that depends on the density of itinerant electrons.
At low temperatures, the effective spin model for both cases (i) and (ii) can be written in a common form (after expanding the square root in case (ii) and a sublattice rotation in case (i)): H = −J̃ Σ_{i,μ̂} S_i · R(2ϑd̂_{i,i+μ̂}) S_{i+μ̂}, eq. (4), where μ̂ = x̂, ŷ. Here J = J̃ cos 2ϑ, with J̃ = J_AF for superexchange and J̃ = J_F for double exchange. The SOC-induced terms are the DM term with D = J̃ sin 2ϑ and the compass anisotropy A_c = J̃(1 − cos 2ϑ). Since tan ϑ = λ/t ≪ 1, we get the microscopic result A_c J/D² ≃ 1/2.
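The decomposition of a rotated bond into Heisenberg, DM, and compass pieces follows from a Rodrigues-rotation identity; a small SymPy check (a sketch; the bond direction and sign conventions are assumptions) that also verifies the quoted ratio:

```python
import sympy as sp

t = sp.symbols("vartheta", positive=True)
Si = sp.Matrix(sp.symbols("Six Siy Siz"))
Sj = sp.Matrix(sp.symbols("Sjx Sjy Sjz"))
d = sp.Matrix([0, 1, 0])                 # dhat = zhat x xhat for a bond along x

# Rodrigues rotation by angle 2*vartheta about dhat
K = sp.Matrix([[0, -d[2], d[1]], [d[2], 0, -d[0]], [-d[1], d[0], 0]])
R = sp.eye(3) + sp.sin(2 * t) * K + (1 - sp.cos(2 * t)) * K * K

bond = sp.expand((Si.T * R * Sj)[0])
decomp = sp.expand(sp.cos(2 * t) * Si.dot(Sj)
                   - sp.sin(2 * t) * d.dot(Si.cross(Sj))
                   + (1 - sp.cos(2 * t)) * Si.dot(d) * Sj.dot(d))
print(sp.simplify(bond - decomp))        # 0: Heisenberg + DM + compass pieces

# A_c * J / D^2 with J = Jt*cos(2v), D = Jt*sin(2v), A_c = Jt*(1 - cos(2v))
ratio = sp.cos(2 * t) * (1 - sp.cos(2 * t)) / sp.sin(2 * t)**2
print(sp.limit(ratio, t, 0))             # 1/2 in the small-SOC limit
```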
It is straightforward to derive the continuum free energy (1) from the lattice model (4). The only term in (1) that does not come from (4) is the phenomenological anisotropy A_s(m_z)², arising from single-ion or dipolar shape anisotropy21. For moments with S < 2, the single-ion anisotropy vanishes22. In some cases, a simple estimate of the dipolar anisotropy is much smaller than the compass term18. For larger-S systems, the single-ion anisotropy is non-zero and can even be varied using strain16. We cannot, however, ignore compass anisotropy, since its contribution to the energy is comparable to the DM term, as already emphasized.
Conclusions: We have shown the enhanced stability of skyrmions when the effective anisotropy parameter (A_c + A_s) > 0 is easy-plane. The compass term A_c is intrinsically easy-plane, and thus our results suggest that experiments should look for systems with suitable single-ion anisotropies A_s, or for ways to tune them, e.g., using strain, so as to enhance the SkX region. Theoretically, it would be interesting to study in the future the finite-temperature phase diagram for easy-plane anisotropy, as well as electronic properties, such as the anomalous Hall effect and possible non-Fermi-liquid behavior of itinerant electrons coupled to the SkX spin texture in this regime.
I. SUPPLEMENTARY INFORMATION
Here we give details of the calculations reported in the main text. We discuss (1) the variational calculation of the phase diagram, (2) results for skyrmion length scales, and (3) the microscopic analysis of RKKY interactions with SOC.
1) Variational calculation of the phase diagram:
We consider the FM, spiral, and SkX phases in turn. We use A = A_c + A_s as the effective anisotropy, and omit additive constants in the energy, which are common to all phases.
Spiral:
For the spiral solution we take the general 1D periodic modulation m(x) = sin θ(x) x̂ + cos θ(x) ẑ and minimize the energy with respect to θ(x), where ∂_xθ = ∂θ/∂x. We use conjugate-gradient minimization with respect to the size R and the function θ(x), which is discretized on a 1D grid. We use the periodic boundary condition θ(R) = θ(0) + 2πn, where n is an integer. This form allows for a spiral solution with a net magnetization m_z in the presence of a perpendicular magnetic field. For an analytical calculation in zero field, one can take the more restrictive (linear) variational ansatz θ(x) = 2π(x/R). In this case the energy of the spiral can be easily evaluated by minimizing with respect to R. This gives the spiral pitch R = R_sp = 2πJ/D and the energy F_sp = −D²/2J + A/2.
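For completeness, a sketch of the linear-ansatz algebra behind these expressions (the sign of the DM gain is a convention):

```latex
\[
  \theta(x) = qx,\quad q = \frac{2\pi}{R}:
  \qquad
  F_{\rm sp}(q) \;=\; \frac{J}{2}\,q^{2} \;-\; D\,q \;+\; \frac{A}{2},
\]
% using \langle \cos^2\theta \rangle = 1/2 over one period; minimizing over q,
\[
  q^{*} = \frac{D}{J}
  \quad\Longrightarrow\quad
  R_{\rm sp} = \frac{2\pi J}{D},
  \qquad
  F_{\rm sp} = -\frac{D^{2}}{2J} + \frac{A}{2}.
\]
```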
Skyrmion crystal:
We have discussed in the main paper the method used to construct a hexagonal SkX solution using the circular-cell approximation1 with the rotationally symmetric form of eq. (2).
As discussed in the main paper, to qualitatively understand the stability of the SkX relative to the FM and spiral states, one can use the simple linear ansatz θ(r) = π(1 − r/R) and minimize the energy by choosing an optimal R. This leads to the solution R_sk ≈ πJ/D for the optimal skyrmion cell size, with the energy F_sk given by an expression involving the cosine integral defined below. [Caption of Fig. 3: Phase diagram from the 2D square-cell calculation. The symbols and parameters used are exactly the same as described in the caption of Fig. 1(b). Note that the two calculations, although quite different in their computational complexity, nevertheless lead to essentially identical results for the overall phase diagram.]
Here Ci(x) = −∫_x^∞ dt cos t/t is the cosine integral and γ is the Euler constant. The result for F_sk makes it clear that the SkX gains energy from both the DM and Zeeman terms.
For the more general θ(r) variation within the circular-cell approximation, we numerically minimize the energy (1), finding the optimal cell size R and the optimal values of θ(r), which we discretize on a 1D grid in the radial direction. We have carried out the 1D conjugate-gradient minimization using Mathematica on a laptop, using grids of up to 250 points. We have also carried out a full 2D minimization by discretizing the GL functional (1) over a square grid. For the 2D calculation, we used up to 100 × 100 grids with the polar and azimuthal angles (θ(r), φ(r)) of m(r) at each grid point as variational parameters. The 2D conjugate-gradient calculations are done using a Numerical Recipes2 subroutine in C on a local cluster of computers. This 2D minimization is much more computationally intensive than the 1D calculation for the circular-cell approximation.
The 2D square-cell result shown in Fig. 3 for the phase diagram is essentially the same as that obtained from the circular-cell calculation; see Fig. 1(b) in the main text. We show in Fig. 4 the internal structure of the skyrmion as calculated from the full 2D square-cell minimization. This figure should be compared with the results from the circular-cell calculation in Fig. 2 of the main paper. Note that the parameters used here are slightly different from those used in Fig. 2; however, the nontrivial structure of the skyrmion core in the easy-plane case, i.e., the two-peak structure in the topological charge density |2πrχ(r)|, is qualitatively similar to that in the circular-cell calculations.
FIG. 5: Skyrmion length scales: Plots of the H-dependence of the skyrmion cell radius R_sk (denoted by R for simplicity in the main text) and the core radii defined by the locations of the maxima of |dm_z/dr|. For the ansatz of eq. (2), |dm_z/dr| = |2πrχ(r)|. (a) In the easy-axis region, both R_sk and the (inner) core radius R_in are finite at the first-order spiral-to-SkX phase boundary, but R_sk diverges while R_in remains finite at the second-order SkX-to-FM transition. (b) In the easy-plane region, there are two core radii corresponding to the two maxima in |2πrχ(r)|. These inner and outer core radii R_in and R_out, and the cell radius R_sk, all remain finite at the two first-order phase transitions out of the SkX phase.
In Fig. 5 we show the skyrmion cell size R_sk and core radii as a function of field for (a) easy-axis anisotropy with AJ/D² = −0.5 and (b) easy-plane anisotropy with AJ/D² = 1.35. As described in the main paper, and shown in Fig. 2, there is only one length scale associated with the skyrmion core size in the easy-axis case, whereas two length scales appear on the easy-plane side, near the re-entrant region of the SkX phase diagram. We also show the skyrmion cell radius in this plot, and we see that this is the divergent length scale at the second-order phase boundary between the SkX and the out-of-plane FM in the easy-axis case in Fig. 5(a).
3) RKKY interaction in the presence of SOC:
Here we discuss the case of the RKKY3 interaction between local moments embedded in a metallic host described by eq. (3). In this case, the magnetic exchanges27, namely the isotropic, DM, and compass couplings between two moments at r_1 and r_2, turn out to be J_12 = J̃(r_12) cos 2ϑ_12, D_12 = J̃(r_12) sin 2ϑ_12, and A_12 = J̃(r_12)(1 − cos 2ϑ_12). Here r_12 = r_2 − r_1, ϑ_12 = k_R r_12 with k_R ≡ λ/(ta), and J̃(r) ≃ −(J_K² a²/4π² t) sin(2k_F r)/r². This result is obtained27 for k_F r_12 ≫ 1, where k_F is the Fermi wavevector, and by approximating the 2D tight-binding energy dispersion by a parabolic band, as appropriate for a low density of conduction electrons. Evidently, for λ ≪ t and k_F⁻¹ ≪ r_12 ≪ k_R⁻¹, the ratio A_12 J_12/D_12² ≃ 1/2 is maintained. We consider a set of moments regularly distributed on a square lattice with a spacing a such that the ratio AJ/D² ≃ 1/2 for nearest-neighbor exchanges. If we neglect the longer-range part of the RKKY interaction, then we obtain the effective spin Hamiltonian (4) of the main paper.
"Physics"
] |
The use of magnetic marks in steel wire ropes
Various methods of marking wire ropes have certain disadvantages. In this paper, the authors consider the theoretical possibility of using integrated magnetic marks. Mathematical modelling (GMSH + GetDP) was used to assess the degree to which the rope wires shield the magnetic field of a mark. Through the simulation, the authors have determined that a cylindrical mark magnetized along the rope axis has qualitative advantages over other mark shapes and over transverse magnetization directions in terms of its detection by an external detecting device. This type of mark can be easily embedded in the polymeric core of the rope.
Introduction
With the development of cable transport, a need arises for marking different portions of the rope. For example, it becomes necessary to analyse rope slacking, register its speed, or measure the distance travelled by a section of the rope or by a structure rigidly connected to it (a cabin or a column).
Currently, there are several ways of marking mining ropes and armored cables. The marking of a mining rope or cable can be accomplished by coating ferromagnetic powders onto the outer surface of the armor or strands, with their subsequent magnetization. The resulting marks have a short service life due to mechanical abrasion caused by friction and to remagnetization by external magnetic fields. Another method of marking is through magnetized portions of the mining rope armor [1]. Such magnetic marks demagnetize rather quickly, because the armor of mining cables is made of soft magnetic steel. Furthermore, applying such marks to the armor of a moving rope or cable, for example during the manufacture of the product, leads to longitudinal deformation of the mark, caused by the displacement of the rope past the magnetizing device during the time needed to magnetize the rope section.
There is also a well-known method of making magnetic marks by creating capsules with a ferromagnetic fluid enclosed in a flexible magnetic material in the rope core [2]. In this case, the material fills the space (gap) between the strands, and the length of the capsule is equal to one strand pitch. RFID tags [3] have also been applied as identifier marks, but their use may be complicated by interference in reading data from the mark caused by the operation of cableway devices or other equipment. In paper [4], it was proposed to implement magnetic marks based on permanent magnets in the core of the rope, which would greatly increase their service life in comparison with external marks (ferromagnetic powders or magnetized portions of the rope or armor). The major problem is that in most cases mild unalloyed steel is used for rope manufacturing, which largely shields the magnetic field. In order to investigate the possibility of using magnetic marks integrated into the core of the rope, a numerical simulation of the electromagnetic field distribution in a section of rope with a permanent magnet fitted in the core is performed. This design can be implemented in ropes with a polymeric core, which can be replaced by rubber ferrite.
The model and simulation
For the analysis, the authors used the finite element method as implemented in the software package GMSH + GetDP [5,6]. The computational domain is a section of rope with six strands, modelled as homogeneous regions (Figure 1); the geometric parameters correspond to those listed in State Standard 3062-80 [7]. The length of the rope section is set to 70 mm. The magnetic properties of the rope model correspond to those of steel AISI 1010 (St 10). The rope model [8] takes into account the lateral gap between the strands, whose presence corresponds to the state of an unworn rope [9].
The magnetic mark is installed in the core, made of a nonmagnetic material, with a minimum gap of 0.5 mm from the inner wires of the rope. The rope core used in the model simulates airspace. The magnetic mark is represented by characteristics corresponding to the hard-magnetic material ANI-8 (residual induction 0.53 T, coercive force 790 kA/m). For magnetization along the Y and Z axes, the magnetic mark takes the form of a rectangular parallelepiped (dimensions ranging from 8×8×4 to 8×3×2 mm; Figures 2a, 2b). For magnetization along the X axis, a cylindrical mark was used (mark length 4 mm, diameter varied from 2 to 6 mm; Figure 2c). In the study, the rope diameter and the strand diameter ranged within 12-30 mm and 4-10 mm, respectively.
Figure 2. The magnetic mark magnetized: (a) along axis Y; (b) along axis Z; (c) along axis X.
The picture of the magnetic field was obtained using the finite element method. The computational domain (a three-dimensional model of the rope with the magnetic mark and the magnetic concentrator) is divided into a finite element mesh with the built-in GMSH package [10]. The simulation has shown that the strands of the rope largely shield the magnetic field generated by the magnetic mark: the magnetic field induction on the surface of the rope along the line of the magnetic mark arrangement is evenly distributed (Figure 3). To obtain a pronounced inhomogeneity of the magnetic field, a horseshoe-shaped ferromagnetic concentrator (Figure 4), partially covering the rope, is placed at the location of the magnetic mark. The gap between the ferromagnetic concentrator and the rope is 1 mm; the thickness of the concentrator is constant along its length and equal to 3 mm. The radius of curvature varies depending on the diameter of the rope to keep the air gap between the rope and the concentrator equal to 1 mm. The width of the magnetic concentrator located directly above the magnetic mark is equal to the length of the permanent magnet. The magnetic properties of the concentrator material correspond to structural steel AISI 1008 (St 08). In the case of a cylindrical mark, the magnetic concentrator has a U-shape and is located on the lateral surface of the rope; the length of the concentrator is 12 mm (Figure 5). When the magnetization of the magnetic mark is directed along the Y or Z axis (across the rope), the magnetic induction in the gap of the concentrator depends on the angle of rotation of the mark (the rope with the mark) around the X axis. The modulus of the magnetic induction is maximal when the magnetic induction vector of the mark is directed from one pole of the magnetic concentrator to the other, i.e. along the Y axis. As the rotation angle of the mark increases, the magnetic induction in the gap of the concentrator decreases and reaches a minimum at a rotation angle of 90°, which corresponds to magnetization along the Z axis. There, the magnitude of the magnetic induction is 20-25% of the initial value (at a rotation angle of 0°). The dependence of the magnetic induction modulus on the rotation angle is shown in Figure 6.
Thus, the authors concluded that magnetization of the mark along the Y or Z axis (transverse to the rope) has a drawback: the magnetic induction in the gap of the magnetic concentrator varies with the angle of rotation of the mark. In contrast, when the magnetic mark is magnetized along the X axis, the magnetic induction in the gap of the concentrator does not change during rotation of the rope, since the direction of the magnetic induction vector in this case does not change.
The resolution
To determine the resolution when reading the signal, the dependence of the magnetic induction in the concentrator gap on the displacement of the magnetic mark relative to the concentrator along the rope axis was studied. The geometrical dimensions of the model were the same as those mentioned above. Zero displacement corresponds to the location of the concentrator directly above the magnetic mark; the magnetic induction in this case reaches its maximum. The simulation was carried out for both rectangular and cylindrical marks. The air gap between the concentrator and the rope was 0.5 and 2 mm.
When a cylindrical mark is used, the magnetic induction in the gap of the concentrator decreases sharply with increasing displacement. A five-fold reduction of the magnetic induction is reached at a displacement of the magnetic mark of 16 mm (four mark lengths) for a cylindrical mark, and of 28 mm (seven mark lengths) for a rectangular mark (Figure 7). The resolution is determined by the number of sign changes of the normal component, which corresponds to the number of marks. To estimate the resolution of mark detection by this method, a numerical simulation of a 200-mm-long rope section with two cylindrical magnetic marks of 5 mm length was performed, with the distance between the adjacent ends of the marks varied from 1 to 40 mm. Simulation results are presented in Figure 9. The values of the normal component correspond to the magnetic induction along a line parallel to the axis of the rope at a distance of 1 mm from its outer surface. A small distance between the magnetic marks prevents the two marks from being identified. A double change in the sign of the normal component of the magnetic induction vector was recorded only at distances between the magnetic marks of about 15 mm and more, which exceeds the rope diameter (11.5 mm). At shorter distances, identification of two marks by this method is not possible.
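A much-simplified point-dipole sketch (not the FEM model; the moment, offset, and spacings below are hypothetical) of why counting sign changes of the normal component resolves marks only when they are sufficiently far apart:

```python
import numpy as np

MU0 = 4e-7 * np.pi
m = 1e-3                   # dipole moment of one axially magnetized mark, A*m^2
rho = 11.5e-3 / 2 + 1e-3   # sensing-line offset: rope radius + 1 mm, m

def b_normal(x, centers):
    """Radial (normal) component of the summed dipole fields along the rope axis."""
    b = np.zeros_like(x)
    for c in centers:
        dx = x - c
        r = np.hypot(dx, rho)
        b += MU0 / (4 * np.pi) * 3.0 * m * dx * rho / r**5
    return b

x = np.linspace(-0.1, 0.1, 4000)              # grid chosen to avoid x = 0 exactly
for gap in (5e-3, 20e-3):                     # distance between the two marks
    bz = b_normal(x, centers=(-gap / 2, gap / 2))
    crossings = np.count_nonzero(np.diff(np.sign(bz)))
    print(f"spacing {gap * 1e3:.0f} mm: {crossings} zero crossing(s)")
# Closely spaced marks merge into a single-mark-like signature; well-separated
# marks produce additional, individually attributable zero crossings.
```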
Conclusion
Analysis of research results leads to the following conclusions.
• The results confirm the feasibility of mathematical modelling and of the efficient use of permanent magnets for marking a rope with six strands.
• It was determined that, to obtain the maximum variation of the magnetic field in the zone of the mark at the reading device, it is advisable to install a concentrator made of ferromagnetic material.
• Analysis of the magnetic field generated by the permanent magnet forming the magnetic mark confirms the effectiveness of registering the normal component of the magnetic field induction vector.
• Using a permanent magnet with the magnetization direction along the rope reduces the influence of the angle of rotation about the rope axis, relative to the concentrator, on the value of the induction in the concentrator measuring gap.
• Increasing the length of the permanent magnet used as a magnetic mark increases the amplitude of the normal component of the magnetic field induction vector over the relevant section of the rope, which improves the quality of the read signal of the magnetic mark.
• The distance between individual adjacent marks should be greater than the rope diameter for reliable identification.
"Engineering",
"Physics",
"Materials Science"
] |
A facility location problem for palm-oil-based biodiesel plant: case study in North Sumatera Province
As a country that supports the development of renewable energy, Indonesia has policies directing the use of biodiesel as a fuel energy source. In 2020, the Government of Indonesia (GoI) implemented a biodiesel mandate based on palm oil, one of Indonesia's best plantation products. The increase of the mandate to B30 requires national biodiesel plants to operate at up to 80% of capacity, so that new plants are needed to meet biodiesel demand for national use and export. Taking North Sumatra province as the study location, this research aims to determine the location of the biodiesel plant to be built to meet the petrodiesel fuel needs of the North Sumatra area while minimizing transportation costs at each stage. Mathematical modelling is conducted to reach this objective. The model determines the most appropriate location and size of the biodiesel plant to be built, taking into account the distances between transportation points, the availability of feedstocks, demand, and supply chain costs. Mixed-integer linear programming (MILP) is employed to solve the model. The results suggest that two regions are suitable locations for establishing biodiesel plants to meet the needs of the people of North Sumatra, namely Langkat Regency and Serdang Bedagai Regency. The results can serve as recommendations for policymakers to minimize the biodiesel supply chain cost in North Sumatra.
Introduction
The use of fossil energy is an issue of concern in many countries because of depleting fossil energy reserves and the impact on environmental emissions. Biofuel is one of the types of renewable energy being developed to address that concern. Indonesia is also trying to increase biofuel usage through mandated biofuel blends [1]. Specifically for biodiesel, through the Minister of Energy and Mineral Resources Regulation No. 12 of 2015 on mixing diesel fuel with biodiesel, the Indonesian government seeks to increase biodiesel consumption with palm oil as the raw material, one of the mainstay products of Indonesian plantations. Indonesia is the largest palm oil producer in the world. As a perennial crop with a permanent leaf canopy, oil palm can grow all year round, which is one of the main reasons why its productivity is so high compared to other vegetable oil crops [2]. With sufficient raw materials from within the country, the government's mandate to use biodiesel will be easier to achieve. In addition to meeting energy needs through renewable energy, biodiesel has also improved the welfare of oil palm farmers [3]. Apart from fuel oil, biodiesel also has great potential in the electric power sector [4].
North Sumatra is one of the provinces with high oil palm plantation potential and accounts for 14% of Indonesia's total oil palm [5]. The increase in the government's mandate from B20 to B30 in 2020 [1] creates a need for more biodiesel plants to process palm oil into biodiesel. As of 2019, the total capacity of the biodiesel industry in Indonesia was 11.3 million kiloliters [6], with capacity utilization reaching 70% under the B20 mandate. If the GoI wants to reach B30, the biodiesel needed is about 9 million kiloliters, or about 80% of the existing capacity, assuming no increase in demand. Operating capacity is around 85-90% of installed capacity because of machine servicing and other factors [7]. This means the industry would be unable to export biodiesel as it usually does, so new biodiesel plants need to be developed to meet domestic and export needs. In North Sumatra, there are already four biodiesel plants with a total capacity of 1,728,000 KL, all located in Medan City [8]. However, those plants have been allocated to meet national needs because of their location close to the port. Another biodiesel plant needs to be built in North Sumatra to meet the province's biodiesel needs, since the amount of raw material available in North Sumatra is quite sufficient, at around 6 million tons per year [5]. One thing that needs to be considered in constructing a biodiesel plant is a location suitable with respect to both raw materials and demand. For this reason, a recommendation for a biodiesel plant location is needed to meet the needs of North Sumatra province.
To save biodiesel supply chain costs, this research determines the location of the biodiesel plant such that the transportation cost is minimized. The issues discussed include determining the biodiesel plant's location to minimize the biodiesel supply chain cost in the North Sumatra region in accordance with the government's mandate. This location determination is expected to produce policy recommendations for saving biodiesel supply chain costs. To the best of our knowledge, there is a lack of emphasis on biodiesel supply chain optimization for the Indonesian case. Optimization of the biodiesel plant capacity, the location of the fuel oil terminals, and the transportation routes is carried out. A mixed-integer linear programming (MILP) model is developed for this purpose.
Methodology
This section describes the mathematical model that was built. The distribution process is as follows. First, palm oil is processed into CPO in the feedstock area and then sent to the biodiesel plant to be processed into B100 (FAME). Following the government's mandate, the B100 is then sent to Pertamina's fuel oil terminal to be mixed with petrodiesel (B0) to become B30. After blending, B30 is distributed to the demand areas for consumption by residents.
System description
Oil palm plantations in North Sumatra are spread over 22 regencies. In this model, it is assumed that fresh fruit bunches are harvested in each area and then sent to the local palm oil mill to be processed into CPO for further processing into B100 (FAME) at the biodiesel plant. The indices used in this model are shown in Table 1. Before biodiesel is distributed, B100 (FAME) must be mixed with pure diesel (B0) to produce B30. The mixing is carried out at Pertamina's TBBM (Pertamina fuel oil terminal). Currently, five Pertamina TBBMs are operating in North Sumatra, namely TBBM Medan, TBBM Sibolga, TBBM Pematangsiantar, TBBM Kisaran, and TBBM Gunung Sitoli [9].

Table 1. Sets used in the model.
Set | Total | Description
Biomass location index | 22 | Based on the oil palm potential calculated for each region in North Sumatera.
Biodiesel plant location (B100) index | 33 | Based on the number of regencies in North Sumatera.
TBBM location index (blending location index) | 5 | The number of Pertamina's TBBMs in North Sumatera.
Demand location index | 33 | Based on the number of regencies in North Sumatera.
Demand clusters
In this section, demand is calculated based on the population of each region [14]. The division of the territory is based on the division of regencies in Indonesia. Demand is therefore estimated from each region's share of the population multiplied by the total annual demand for diesel in Indonesia.
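A one-line sketch of this population-share estimate (all numbers below are hypothetical placeholders, not the study's data):

```python
# demand_r = (population_r / national_population) * national_diesel_demand
national_diesel_demand_kl = 30_000_000            # hypothetical annual figure, KL
national_population = 270_000_000                 # hypothetical
population = {"Medan": 2_435_000, "Langkat": 1_030_000}   # hypothetical regencies

demand_kl = {r: p / national_population * national_diesel_demand_kl
             for r, p in population.items()}
print(demand_kl)
```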
Production and transportation cost
The means of transportation used in this study are tank trucks and ships. Tank trucks are used to transport CPO from palm oil mills to biodiesel plants, B100 from biodiesel plants to fuel oil terminals, and B30 from fuel oil terminals to demand locations. Ships are used to transport fuel in areas that do not have land access. The cost of transportation using a tank truck, whether carrying CPO from the palm oil mill (PKS) to the biodiesel plant, from the biodiesel plant to the mixing point, or onward to the demand location, is 0.14 USD/t-CPO/km ([15], [11]).
The biodiesel production cost in this study follows the cost of biodiesel production by the conventional method calculated by Fumi et al., which is USD 9.34 million per 28,409 KL of biodiesel produced [11]. To determine the operational costs of biodiesel plants and fuel oil terminals, we carry out calculations that account for plant capacity using the scaling effect with exponent α = 0.7.
Using the scaling effect in Eq. (1), it is possible to calculate costs for the different processing steps of biofuel production plants of different sizes: Cost_2 = Cost_1 × (Capacity_2/Capacity_1)^α. (1)
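A minimal sketch of Eq. (1) in code, anchored to the reference cost from [11] (the target capacity below is only an example):

```python
def scaled_cost(base_cost_usd, base_capacity_kl, capacity_kl, alpha=0.7):
    """Scale a reference plant cost to another capacity (economies of scale)."""
    return base_cost_usd * (capacity_kl / base_capacity_kl) ** alpha

# Reference: USD 9.34 million per 28,409 KL of biodiesel (conventional process [11])
example = scaled_cost(9.34e6, 28_409, 385_386)
print(f"estimated cost for a 385,386 KL plant: USD {example / 1e6:.1f} million")
```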
Cost matrix: distance description
Transportation is an essential consideration in fuel oil distribution because of the costs involved. Transport mode and the distance between two points are determining factors for the number, capacity, and location of biodiesel plants. The distance between two points in this study is determined from real data through Google Maps. For routes between regions that require sea transportation, the distance between ports is calculated using the Netpas software.
Mathematical modeling
The model aims to determine the most appropriate location and size of the biodiesel plant to be built, taking into account the availability of feedstocks, demand, and supply chain costs. The model also accommodates scenarios with additional demand or changes to the biodiesel plants. The model does not minimize the cost of one sector in isolation but minimizes the overall cost of biofuel supply for the welfare of the region.
The total cost of the system and its components are shown in Eqs. (5)-(11). Cost 1 is the cost of transporting raw materials from the raw material sources to the biodiesel plant locations. Cost 2 is the cost of bringing B100 to the fuel oil terminal locations to be mixed with petrodiesel into B30. Cost 3 is the cost of transporting B30 from the blending points to all demand locations. Cost 4 is the plant setup/capital cost incurred if a fuel oil terminal is opened. Cost 5 is the setup cost incurred if a plant is opened. The main constraints are as follows: (i) the amount of biomass sent to the biodiesel plants does not exceed the total amount of biomass available in each area; (ii) the amount of biomass sent to biodiesel plant j does not exceed the maximum raw material capacity that the plant can process; (iii) the amount of B100 sent from plant j to blending point k does not exceed the maximum blending capacity for mixing 30% B100 with 70% petrodiesel; and (iv) the biodiesel and petrodiesel used together equal the demand according to the government mandate. A sketch of this MILP structure is given below.
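A compact sketch of this MILP structure in PuLP (an illustration only: the authors solved their model in CPLEX, all data below are hypothetical placeholders, and a simple 1:1 CPO-to-B100 conversion is assumed):

```python
import pulp

I = ["f1", "f2"]          # feedstock (CPO) regions
J = ["p1", "p2", "p3"]    # candidate biodiesel plant sites
K = ["t1"]                # TBBM blending terminals
L = ["d1", "d2"]          # demand regions

supply = {"f1": 500.0, "f2": 300.0}              # available CPO
cap = {j: 400.0 for j in J}                      # plant capacity
blend_cap = {"t1": 900.0}                        # TBBM B30 capacity
demand = {"d1": 250.0, "d2": 150.0}              # B30 demand
setup = {j: 1000.0 for j in J}                   # plant setup cost
c1 = {(i, j): 1.0 for i in I for j in J}         # CPO transport cost per unit
c2 = {(j, k): 1.5 for j in J for k in K}         # B100 transport cost per unit
c3 = {(k, l): 0.8 for k in K for l in L}         # B30 transport cost per unit

prob = pulp.LpProblem("biodiesel_location", pulp.LpMinimize)
x = pulp.LpVariable.dicts("cpo", (I, J), lowBound=0)       # CPO flows
y = pulp.LpVariable.dicts("b100", (J, K), lowBound=0)      # B100 flows
z = pulp.LpVariable.dicts("b30", (K, L), lowBound=0)       # B30 flows
open_ = pulp.LpVariable.dicts("open", J, cat=pulp.LpBinary)

prob += (pulp.lpSum(c1[i, j] * x[i][j] for i in I for j in J)
         + pulp.lpSum(c2[j, k] * y[j][k] for j in J for k in K)
         + pulp.lpSum(c3[k, l] * z[k][l] for k in K for l in L)
         + pulp.lpSum(setup[j] * open_[j] for j in J))

for i in I:   # cannot ship more CPO than a region produces
    prob += pulp.lpSum(x[i][j] for j in J) <= supply[i]
for j in J:   # plant capacity only if opened; B100 output limited by CPO input
    prob += pulp.lpSum(x[i][j] for i in I) <= cap[j] * open_[j]
    prob += pulp.lpSum(y[j][k] for k in K) <= pulp.lpSum(x[i][j] for i in I)
for k in K:   # B30 is 30% B100 + 70% petrodiesel, limited by blending capacity
    prob += pulp.lpSum(z[k][l] for l in L) <= blend_cap[k]
    prob += 0.3 * pulp.lpSum(z[k][l] for l in L) == pulp.lpSum(y[j][k] for j in J)
for l in L:   # meet all demand
    prob += pulp.lpSum(z[k][l] for k in K) == demand[l]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in J if open_[j].value() > 0.5], pulp.value(prob.objective))
```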
Results and discussions
To determine the optimal location, the model described above is solved using the optimization software CPLEX IDE 12.10.0 on a computer with an Intel Core i7 2.9 GHz processor, 8 GB RAM, and a 64-bit system. The model produces an optimal solution with 1094 variables and 1291 constraints.
The analysis found that, to meet the biodiesel needs of North Sumatra, the most suitable biodiesel plant locations are Langkat Regency and Serdang Bedagai Regency, with the respective capacities, feedstock locations, and TBBM points shown in Table 3. These results are obtained by considering the feedstock locations, the Pertamina TBBM sites, and the demand locations. For the Langkat area, the biodiesel plant to be built has a capacity of 385,386 KL, with raw materials coming from the Langkat area itself. After the B100 is produced in Langkat, it is sent to TBBM Medan and TBBM Kisaran to be mixed with petrodiesel into B30. For the Serdang Bedagai area, the biodiesel plant to be built has a capacity of 163,996 KL, with raw materials coming from Serdang Bedagai Regency and Labuhan Batu Selatan Regency. After the B100 is produced in Serdang Bedagai, it is sent to TBBM Sibolga, TBBM Gunung Sitoli, and TBBM Pematangsiantar to be mixed with petrodiesel into B30. The demand locations served by each Pertamina fuel oil terminal are presented in Table 4, which shows the demand areas that must be served by each TBBM. The sensitivity of the parameters that affect the final cost was analyzed (Figure 1). The parameters considered are land transportation cost, sea transportation cost, TBBM capacity, and demand. The effect of each parameter was investigated with changes of ±30%. Sea transportation costs have little impact on the final cost (below 3%), but land transportation costs change it significantly, by up to 20%. Differences in TBBM capacity affect the final cost, but not significantly, meaning that the current TBBM capacity is already near-optimal. The most influential parameter is demand. Changing the amount of available raw material has no effect at all, meaning that the raw material supply is sufficient to meet demand.
Conclusions
The government supports the use of biofuels in Indonesia by mandating the use of biodiesel. In 2020 the mandate requires a blend of 30% biodiesel with petrodiesel (B30). For B30, the biodiesel requirement is about 9 million kiloliters, or about 80% of the existing national capacity. Thus, new biodiesel plants need to be built to meet national and export demand, given that the palm oil raw material is still sufficient to meet these needs. This study locates the biodiesel plant in North Sumatra to meet the demand of North Sumatra itself, so that the distribution costs are minimized. The distribution costs include the biodiesel plant cost and the shipping costs from the feedstock sites to the biodiesel plant sites, from the biodiesel plant sites to the Pertamina TBBM sites, and from the Pertamina TBBM sites to the areas where the fuel is needed.
From the study results, the optimal locations are two areas among the 33 regencies in North Sumatra, namely Langkat and Serdang Bedagai. The total transportation cost in this calculation is USD 22.44 per KL of B30. This cost does not include the biodiesel production cost, the petrodiesel production cost, or the petrodiesel transportation costs from the refinery to each TBBM. The minimum cost resulting from the model is not a cost minimization for the industrial sector alone but the overall minimum cost for regional welfare. The results can be expected to serve as policy recommendations for determining a biodiesel plant location in North Sumatera province.
"Environmental Science",
"Engineering",
"Business"
] |
Laboratory and In-flight Evaluation of a Cloud Droplet Probe (CDP)
Please quantify the seven droplet sizes generated in the lab; otherwise it is unclear what you mean by "For the smallest diameters" (lines 5-6 of abstract) and by "For all larger diameters" (line 7). In the revised abstract we are explicit about the drop sizes tested. The revised abstract also provides quantification of the specific tests for different droplet sizes.
Introduction
In-situ cloud studies often utilize measurements from forward scattering optical particle counters (OPCs) to provide size and concentration information about cloud hydrometeors up to a few tens of microns in diameter. The Particle Measuring Systems' Forward Scattering Spectrometer Probe (FSSP) and the Droplet Measurement Technologies' (DMT) Cloud Droplet Probe (CDP) are forward scattering OPCs used to measure hydrometeors of 1-50 µm in diameter. These instruments use an open-path laser, measure the intensity of light scattered by transiting particles, and relate that to particle size utilizing Mie-Lorenz theory along with assumptions about the particles (typically that they are liquid and spherical). The instruments output measurements as cumulative binned counts of droplet diameter. Some instruments, including certain versions of the CDP, the Fast CDP (FCDP; SPEC, Inc.), and the FAST-FSSP, are also capable of providing the sizes and interarrival times of individual particles.
Several sources may contribute to OPC sizing and counting errors that in turn propagate through to higher moments (Dye and Baumgardner, 1984; Baumgardner et al., 1985; Cooper, 1988; Baumgardner and Spowart, 1990; Brenguier et al., 1998; McFarquhar et al., 2017; Wendisch and Brenguier, 2013). Sizing error can also result in artificial broadening of hydrometeor size distributions, which can mistakenly be attributed to distribution-modifying cloud processes (Baumgardner et al., 1990).
The non-linear relationship between droplet diameter and the intensity of light scattered by a droplet limits the resolution of size bins (Pinnick et al., 1981). The CDP has a default bin width of 2 µm for diameters larger than 14 µm, which can result in as much as 15% uncertainty in diameter (Nagel et al., 2007). Mie resonance, which is more pronounced for the CDP's unimodal laser, also introduces sizing uncertainty for droplet diameters smaller than 14 µm (Knollenberg, 1976; Nagel et al., 2007).
Laser intensity and droplet scattering angles vary based on droplet transit location (Dye and Baumgardner, 1985; Brenguier et al., 1998). Therefore, droplets are only counted and sized if they pass through the qualified sample area: an elliptical region within the depth of field where laser intensity and droplet scattering angles are relatively homogenous. Nonetheless, laser intensity and scattering angles are somewhat variable even within the qualified sample area, resulting in counting and sizing error that is dependent on droplet transit location (Brenguier et al., 1998; Wendisch et al., 1996). Instrument-specific misalignment of optical components can increase spatial variability in counting and sizing accuracy (Lance et al., 2010). The cross-sectional area of the depth of field (DOF) is included in calculations of sample volume, such that errors in DOF will propagate as a scaling bias in concentration (Wendisch et al., 1996).
Coincidence error is a concentration-dependent phenomenon that occurs when multiple droplets are simultaneously within the sensitive area of an OPC's laser. Coincidence can affect sizing and counting accuracy, but errors can be difficult to characterize because they depend on many factors including particle concentration, particle size, the location at which particles transit the laser, and instrument optical design (Baumgardner et al., 1985; Cooper, 1988; Brenguier, 1988).
FSSP electronic limitations require an 'electronic delay sequence' for a period after particle detection. Particles passing through the qualified sample area during the delay sequence are not detected, resulting in undercounting, or 'dead time losses', which requires algorithmic corrections to FSSP-measured concentration (Baumgardner et al., 1985; Baumgardner and Spowart, 1990; Brenguier et al., 1998). Newer OPCs including the FAST-FSSP, CDP, and FCDP feature faster electronics that negate dead time loss errors (Brenguier et al., 1998). Forward scattering OPC measurements require that several assumptions be made about cloud particles. OPC techniques assume that measured particles are primarily composed of water and therefore have refractive indices equal to that of pure water (Pinnick et al., 1981). Particles must also be small enough (less than ~50 µm diameter) to follow Mie-Lorenz scattering theory for the laser wavelength of an instrument. Particle shape affects scattering behaviour, so it is assumed that liquid hydrometeors are spherical (Nagel et al., 2007). Several researchers have used the FSSP to study ice hydrometeors, but such measurements are subject to uncertainty imposed by the variability in ice particle shape (Gardiner and Hallett, 1985; Field et al., 2003).
In mixed and ice phase cloud, ice particles are prone to shattering on contact with OPC structures. If passed through the sample area, ice fragments can be erroneously identified as natural particles, leading to errors in counting and sizing and an artificial bimodality in hydrometeor distributions (Gardiner and Hallett, 1985; Korolev and Isaac, 2005). FSSP measurements can be greatly affected by shattering artifacts because the probe's laser is housed in a cylindrical shroud (Heymsfield, 2007; McFarquhar et al., 2007). The CDP features an open-path laser that is passed between two arms that are often outfitted with anti-shattering tips. As a result, particle shattering introduces negligible uncertainty in CDP measurements, as demonstrated in work by Lance et al. (2010) and Khanal et al. (2018). The FSSP and CDP are often calibrated by passing glass microbeads or polystyrene spheres through the sample area. These methods have crude control of calibration media placement and concentration such that they are only capable of testing OPC sizing response. Because these methods have limited control of particle concentration, coincidence can compromise calibrations (Wendisch et al., 1996). Furthermore, the refractive indices for glass and polystyrene differ from that of water, requiring a correction be applied to calibration measurements (Nagel et al., 2007). Wendisch et al. (1996), Korolev et al. (1991), Nagel et al. (2007), and Lance et al. (2010, 2012) developed droplet generating calibration systems that can produce and precisely place a mono-disperse stream of droplets of a known size/frequency at discrete locations within an instrument's sample area. These systems can test locationally-dependent sizing/counting accuracy at specific locations throughout an instrument's sample area and in turn provide measurements of sample area dimensions. This work uses a water droplet generating system to quantify CDP errors in counting and sizing resulting from variations in droplet scattering angles and laser intensity and misalignment of optical components. Seven droplet generator experiments with droplets of 9 - 46 µm in diameter provide data for detailed evaluations of CDP performance at locations throughout the sampling area of the probe. This work is similar to the earlier work reported by Lance et al. (2010), but utilizes a wider range of droplet sizes over the entire qualified sample area and a much higher resolution of measurements across that area.
Estimates of how errors in sizing and counting affect higher order moments are provided. Comparisons of in-situ CDP-derived liquid water content (LWC) and bulk LWC measurements from a hotwire device provide an additional means of evaluating probe performance.
CDP Operating Principles
The CDP features two forward-protruding arms; one houses a 568 nm laser diode and the other contains a series of collecting optics and photodetectors. As the probe is flown through cloud, some droplets transit the laser and scatter energy. The collecting optics capture energy scattered by droplets in an ~12° arc, remove photons in the innermost ~4°, and focus the remaining energy onto a beam splitter. The beam splitter divides the laser energy and passes it to a sizer photodetector that is covered by an 800 µm diameter pinhole mask and a qualifier photodetector that is masked by a rectangular slit. Responses from the two photodetectors are converted to digital counts ranging from 1 - 4095 counts. Sizer responses are used to estimate droplet diameter through Mie-Lorenz theory. The qualifier's rectangular mask is designed to reduce the collection angles of the detector so that responses are maximized when droplets pass through the qualified sample area. A droplet is considered to be within the qualified sample area (or a "qualified droplet") if the signal from the qualifier is greater than one-half of the signal from the sizer. The CDP employs a dynamic sizer signal threshold in order to minimize false counting events resulting from impinging solar radiation or other sources of noise. This is accomplished by considering all sizer responses within a 10 Hz period that result in less than 512 digital counts. A noise band is defined as the region that contains at least 75% of responses with less than 512 counts. Sizing/counting events are rejected if sizer response is less than the determined noise band (Lance et al., 2010; Droplet Measurement Technologies, 2014).
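The qualification and noise-rejection logic described above can be summarized in a short sketch. The function below is a hypothetical illustration (the names, array shapes, and the exact noise-band computation are assumptions, not the DMT firmware); it only mirrors the two rules quoted in the text: accept an event when the qualifier response exceeds half the sizer response, and reject events whose sizer response falls inside a dynamically estimated noise band.

```python
import numpy as np

def qualify_and_size(sizer_counts, qualifier_counts, bin_thresholds):
    """Sketch of the CDP acceptance logic described in the text.

    sizer_counts, qualifier_counts: per-event A/D responses (1-4095).
    bin_thresholds: ascending sizer-count thresholds delimiting the size bins.
    Returns indices of accepted (qualified) events and their bin numbers.
    """
    sizer = np.asarray(sizer_counts, dtype=float)
    qual = np.asarray(qualifier_counts, dtype=float)

    # Dynamic noise band (assumed form): take the responses below 512 counts in
    # the 10 Hz window and use the level that contains 75% of them as a ceiling.
    low = np.sort(sizer[sizer < 512])
    noise_ceiling = low[int(0.75 * (len(low) - 1))] if len(low) else 0.0

    # A droplet is "qualified" when the qualifier signal exceeds one-half of the
    # sizer signal; events inside the noise band are rejected.
    accepted = (qual > 0.5 * sizer) & (sizer > noise_ceiling)

    # Sizer response -> size bin via the (Mie-derived) threshold table.
    bins = np.searchsorted(bin_thresholds, sizer[accepted])
    return np.flatnonzero(accepted), bins
```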
Standard coincidence occurs when multiple droplets are simultaneously within the qualified sample area of a CDP. OPCs are designed to count/measure a single particle at a time, so standard coincidence results in undercounting and can also lead to oversizing due to the additional light scattered by coincident hydrometeors (Baumgardner et al., 1985; Cooper, 1988). Because the qualified sample area of the CDP laser is relatively small (on the order of 0.3 mm²), standard coincidence is only expected to affect measurements in regions of high droplet concentration (Lance et al., 2010).
Originally, the sizer detector was unmasked, meaning that it was sensitive to light scattered by droplets transiting through a region surrounding the qualified sample area, called the extended sample area. Droplets passing through the extended sample area cause insignificant qualifier detector responses, so they are not counted or sized. A specialized form of coincidence, called extended coincidence, occurs when droplets are simultaneously within the qualified and extended sample areas (Lance et al., 2010 and 2012). Coincident droplets within the extended sample area scatter additional light that can in turn result in oversizing of qualified droplets. Extended coincidence can also lead to undercounting if sizer response exceeds a threshold value. Lance et al. (2010) used a droplet generating calibration system to measure the qualified and extended sample areas (SA_Q and SA_E) using droplets with 12 and 22 µm diameters. The researchers found that SA_E can be much larger than SA_Q (20.1 mm² vs. 0.3 mm²), resulting in errors from extended coincidence of up to 60% oversizing and 50% undercounting in concentrations as low as 400 cm⁻³ (Lance et al., 2010). Results from Lance et al.'s 2010 study motivated the addition of an 800 µm diameter sizer pinhole mask that decreases the size of the extended sample area to ~2.7 mm², thus reducing the occurrence of extended coincidence (Lance et al., 2012). It was concluded that extended coincidence introduces negligible uncertainty in droplet concentrations less than 650 cm⁻³ for CDPs featuring the sizer mask modification. Lance et al.'s (2010) droplet generator work also tested CDP sizing and counting accuracy throughout the qualified sample area (at a spatial resolution of 200 x 20 µm) using 12 and 22 µm droplets. Ten additional tests investigated sizing accuracy at the centre of the qualified sample area using droplet diameters of 8 - 35 µm. It was shown that droplets are systematically oversized by 2 µm at the centre of the qualified sample area and that sizing accuracy for 12 and 22 µm droplets is dependent upon where droplets transit the qualified sample area. Droplets were undersized by as much as 74% in certain sample locations and oversized by as much as 12% in others (Lance et al., 2010). It was found that on average, 12 and 22 µm droplets were counted to within 95% accuracy. Counting error is more severe at the edges of the qualified sample area as a result of photodetector signal noise (Lance et al., 2010).
University of Wyoming droplet generating system
The University of Wyoming (UW) Atmospheric Science Department developed a droplet generating calibration system very similar to the system built by Lance et al. (2010, 2012), which is based on work by Korolev et al. (1991), Wendisch et al. (1996), and Nagel et al. (2007). A detailed explanation of the design and operation of a droplet generating system can be found in Lance et al. (2010). The systems built by the UW team and Lance et al. (2010, 2012) use a piezoelectric print head to produce a mono-disperse stream of water droplets inside a glass flow tube. The flow tube contains a sheath flow that is accelerated in a tapered exit region, which by extension accelerates and focuses suspended droplets into a precise stream. The accelerated droplet stream is then passed through a CDP's sample area at discrete locations. Two-axis positioning stages are used to control the point of droplet injection and provide the coordinates of injection locations. The UW droplet generator incorporates computerized stages, instead of the manual positioners used by Lance et al. (2010), in order to automate and expedite the testing procedure.
A high speed metrology camera outfitted with a 10X microscope objective provides an independent measurement of droplet diameter using the glare technique as described by Korolev et al. (1991), Wendisch et al. (1996), Nagel et al. (2007), and Lance et al. (2010). As droplets pass through the laser of the CDP, the left and right sides of the droplet are illuminated as a result of reflection and refraction. The metrology camera images these illuminated regions (glares), which appear as two parallel lines when using an exposure time of 1/1000 sec. Estimates of droplet diameter are obtained by considering the pixel separation of glares, a pixel to distance conversion, and a formula that accounts for the angle of the camera objective relative to the laser (Wendisch et al., 1996; Korolev et al., 1991). Using this technique, the UW system is capable of determining individual droplet diameters to within ±0.355 µm.
The metrology camera can also be used to estimate droplet velocity by capturing images with exposures on the order of 1/150,000 - 1/300,000 second. Shorter exposure times produce glare images with well-defined start and end points. Droplet velocity can be estimated by considering glare length, a pixel to distance conversion, and exposure time. The longitudinal position of glares can also be used to evaluate droplet placement precision.
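Both glare-based estimates can be expressed compactly. The snippet below is a rough sketch of the conversions described above; the pixel pitch, magnification, and angular correction are illustrative placeholders (the instrument-specific formula from Korolev et al. (1991) and Wendisch et al. (1996) is not reproduced here).

```python
import numpy as np

PIXEL_PITCH_UM = 5.5      # assumed physical pixel size of the metrology camera
MAGNIFICATION = 10.0      # 10X microscope objective
CAMERA_ANGLE_DEG = 124.9  # objective angle relative to the CDP laser

def glare_diameter(pixel_separation):
    """Droplet diameter (µm) from the pixel separation of the two glare lines:
    pixels -> distance at the object plane, then a simple geometric correction
    for the viewing angle (a placeholder for the published formula)."""
    separation_um = pixel_separation * PIXEL_PITCH_UM / MAGNIFICATION
    return separation_um / abs(np.sin(np.radians(CAMERA_ANGLE_DEG)))

def glare_velocity(glare_length_pixels, exposure_s):
    """Droplet speed (m s^-1) from glare streak length and exposure time."""
    length_m = glare_length_pixels * PIXEL_PITCH_UM / MAGNIFICATION * 1e-6
    return length_m / exposure_s
```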
A number of validation tests were performed to ensure that the UW droplet generator can produce droplets of consistent diameter for the amount of time required to conduct a test (~4 hours), precisely place droplets at discrete locations within the sample area, and eject droplets at suitable velocities. Seven droplet generator tests that produced droplet diameters of 9 - 46 µm are used to evaluate accuracy and consistency in droplet diameter. During the course of each test, glare images were captured once every second and a random sample of 80 images were analysed to provide distributions of true droplet diameter (D_true). Table 1 shows that the standard deviation of D_true is less than 0.7 µm for all seven tests. It also shows that all but one test produce a 95th - 5th percentile range of D_true less than the 2 µm bin width of the CDP (for droplets larger than 14 µm). Two tests were conducted that used the deviation of glare position to validate droplet placement precision. To confirm that placement precision is similar along orthogonal axes, glare images of 32 µm droplets were captured for 1 hour with the metrology camera placed at 124.9º incident to the CDP laser and an additional hour at 214.9º incident. Glare positions for a random sample of 50 images from each camera angle show that droplet deviation is similar along orthogonal axes. The absolute deviation of glares is 5.7 µm along both axes and standard deviations are 1.5 and 1.7 µm for the 124.9º and 214.9º camera angles, respectively. A separate experiment tested long term placement precision by analysing 80 random glare images captured over the course of a four-hour test. Droplet position for the sample has an absolute range of 11.4 µm and a 95th - 5th percentile range of 9.3 µm. Approximately 8% of droplets were placed beyond 10 µm.
Droplet ejection velocity is validated by capturing images using exposure times of 1/150,000 - 1/300,000 sec. It was found that when droplets were created and accelerated in a 13 l min⁻¹ sheath flow, 40 µm diameter droplets cross the CDP laser at ~32 m s⁻¹. This velocity is only about 30% of typical University of Wyoming King Air research airspeeds but is greater than the minimum operational airspeed of the CDP (10 m s⁻¹). The time required to complete a full test was in some cases as long as 5 hours (see Table 1). Stability of the droplet generator system over this time depends, in part, on the size of droplets being produced. Smaller droplets tend to result in reduced system stability and therefore required shorter test periods. For the five tests using droplets 24 µm and larger, the dwell time at each sample location was 2 seconds. This resulted in 500 droplets passing through the CDP sample area at each location.
For these same tests, a 10 µm by 10 µm grid of sample locations covered the entire test area, corresponding to 2700 discrete sample locations across the approximately 0.27 mm² qualified sample area of the CDP. For the test using 17 µm droplets, the dwell time at each location was reduced by a factor of 2 and the grid resolution remained the same. The system was less stable when producing 9 µm droplets and required test times of less than 2 hours to ensure consistent droplet sizes and placement throughout the experiment. For this test, dwell time was further reduced such that 200 drops were placed at each location and the resolution of the grid was reduced to 30 µm by 20 µm, resulting in roughly 450 discrete locations across the qualified CDP sample area.
CDP Sizing
CDP measurements for all droplets detected during a given test were used to produce a distribution of droplet diameters for that test. Droplet distributions were computed using number counts from each of the CDP's 30 pre-determined size bins. Bin widths are 1 µm for diameters less than 14 µm, and 2 µm for diameters greater than 14 µm. For each bin, we considered the geometric mean diameter, hereafter referred to as D_CDP. Also, for each test, 80 randomly selected droplet glares were analysed to determine a distribution of actual droplet diameters, D_true. These droplets, when binned according to CDP size bins, resulted in a distribution of droplets, D_true*. Figure 1 shows distributions of normalized frequency for D_CDP and D_true* for each of the seven tests. In general, the mode diameter of the distribution based on sizing from the CDP (D_CDP) was within one to two size bins (1 to 4 µm) of the D_true* mode. For the test using 9 µm droplets, more than 50% of the droplets detected by the CDP were placed in the 7.5 µm bin and another 30% were placed in the 8.5 µm bin. Nearly 90% of the randomly selected droplets were determined to have actual diameters between 8 and 10 µm, suggesting that the CDP undersized droplets in this range by about 1 to 2 µm. Table 2 shows that the absolute difference between mean D_CDP and mean D_true* was 1.3 µm.
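A small helper makes the D_true* construction concrete. The bin edges below are assumptions chosen only to reproduce the stated widths (1 µm below 14 µm, 2 µm above); the real CDP bin boundaries come from the probe's threshold table.

```python
import numpy as np

# Assumed CDP-like bin edges: 1 µm wide below 14 µm, 2 µm wide above.
EDGES = np.concatenate([np.arange(2.0, 14.0, 1.0), np.arange(14.0, 52.0, 2.0)])

def bin_true_diameters(d_true):
    """Round glare-derived diameters to the geometric mean of the bin they fall
    in, producing the D_true* distribution used for comparison with D_CDP."""
    d_true = np.asarray(d_true, dtype=float)
    idx = np.clip(np.searchsorted(EDGES, d_true) - 1, 0, len(EDGES) - 2)
    return np.sqrt(EDGES[idx] * EDGES[idx + 1])  # geometric mean of bin edges

# Example: bin the 80 randomly sampled glares from one test, e.g.
# d_star = bin_true_diameters(np.random.normal(29.0, 0.5, 80))
```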
Tests using 17 and 24 µm diameter droplets resulted in a better match between D_CDP and D_true*. For each test, the medians and modes of D_CDP and D_true* were in the same bin and more than 95% of the droplets were contained in the same two bins.
However, the breadth of the distribution measured by the CDP was slightly larger than the actual distribution for the 17 µm test. And, for the 24 µm test, the distribution measured by the CDP was skewed to smaller sizes. For both tests, absolute differences between the means of the distributions were less than 1 µm (Table 2). For the 29, 34, 38, and 46 µm diameter tests, a steady trend of oversizing with increasing droplet diameter is apparent when comparing the normalized histograms of D_CDP and D_true* (Fig. 1). In all cases, the mode diameter from CDP measurements was one bin larger than the true diameter mode. For the largest droplet test, 46 µm, 55% of the droplet diameters from the CDP fell in the 48 to 50 µm bin and another 10% fell in each of the 44 to 46 and 46 to 48 µm bins. More than 95% of the actual diameters were split roughly equally between the 44 to 46 and 46 to 48 µm bins. Skewing of the CDP-measured distribution to smaller sizes occurred for all tests using droplets 24 µm and larger. Further, the breadth of the CDP-measured distribution increased with increasing droplet diameter. This is perhaps more apparent from the data in the last column in Table 2. Here we compare the difference between the 95th and 5th percentile range for D_true* and D_CDP. The difference increased significantly for larger diameter tests. Interestingly, even though the differences in both the mode and median diameters of the CDP-measured distributions (compared to D_true*) were larger for these larger diameters, the absolute differences between the means of the distributions were quite small, roughly 0.1 to 0.2 µm. This is because, with the measured distributions skewed to smaller diameters, comparisons of mean diameters appeared to compare more favourably.
By matching the measured response of the CDP to the expected Mie scattering curve, it is possible to investigate whether the errors in sizing observed from the droplet generator tests may be accounted for by limitations due to Mie resonances or by uncertainty in scattering angle collection. The CDP's nominal collection angles are 4° to 12°. However, optical misalignment along with the physical dimensions of the probe's depth of field may lead to uncertainties up to 1° (Baumgardner et al., 2017). Figure 2 shows that the Mie response curve matches reasonably well with the CDP threshold counts that are used to sort droplets into discrete size bins. Two ranges of scattering angles are considered and both show similar behaviour. In fact, regardless of which range of angles is considered, the error in sizing is expected to be, on average, nearly the same. Errors in drop sizing for individual drops, however, will vary depending on collection angles. The shaded region in Figure 2 illustrates the range of threshold counts that the CDP uses to determine the size bin for an individual drop. Regions where the Mie curve(s) lie within the shaded regions are locations where a drop will be sized 'correctly'. If the Mie curve is above the shaded region, the drop will be oversized; below the shaded region, it will be undersized. The amplitude of the Mie resonances and the locations of the peaks and valleys depend on droplet diameter and vary with collection angles. Generally, the amplitude of the Mie resonances increases with increasing drop size; however, so does the 'steepness' of the curve. Therefore, larger droplets, 40 to 50 µm in diameter, should not be undersized or oversized by more than about 2 µm. However, smaller droplets less than about 20 µm in diameter may easily be mis-sized by more than 2 µm, accounting for as much as ±20% error in sizing (Baumgardner et al., 2017).
Results from the droplet generator tests overlaid on Figure 2 provide additional insight into CDP response. The mean and 5th to 95th percentile range of D_true illustrates that the droplets being produced nearly all fell within one size bin of the CDP for any given test. The corresponding Mie resonance curves over those same size ranges generally fluctuate over a range of A/D counts that correspond to threshold values of up to 2 to 3 size bins. This can be seen by examining the Mie response (4° - 12°) for the test producing 29 µm drops. Over the range of droplet sizes produced, some locations of the Mie curve fall just below the threshold box (for the 28-30 µm bin), while others fall slightly above and still others fall inside the box. The skewing of CDP-measured distributions to smaller sizes is also apparent by examining the CDP response compared to the threshold curves. For each test, the mean value of A/D counts (Figure 2) lies either within or very near the appropriate threshold box for that droplet diameter. However, the median value of A/D counts exceeds the threshold box for that droplet diameter for all tests using droplet diameters 29 µm and greater. This suggests that the calibration of the CDP is based upon the mean diameter of drops rather than the median or mode diameter. While this may be appropriate, because of the unnatural skewing to smaller sizes, it does have implications on calculations of higher order moments. The severe undersizing of a small sample of drops for these same tests cannot be explained based on Mie resonance or collection angle considerations. Figure 3 illustrates sizing from the CDP across the entire qualified sample area. For each sample location, the mean of D_CDP is compared to the mean of D_true*. A positive difference (warm colours) indicates oversizing by the CDP; negative values (cool colours) indicate undersizing. For the 9 µm diameter tests, undersizing of droplets by 1 to 2 µm was found throughout most of the CDP sample area. Droplets passing through the centre of the beam and laterally towards the top experienced somewhat less undersizing than in other regions. No regions indicated oversizing of droplets. For the 17 µm tests, droplets throughout much of the sample area were sized correctly. Only in a small region, laterally towards the top of the beam and towards the detector, were droplets oversized, on average by about 1 µm. For the five tests using droplets 24 µm and larger, the magnitude of the sizing difference laterally across the beam increased with increasing droplet size. For the 24 µm test, the sizing difference was only about 2 µm across the beam, but for the 46 µm test, the sizing difference was nearly 6 µm across the beam. Also, for each of these five tests, a region near the detector showed significant undersizing of droplets that also increased in magnitude with increasing droplet size. For the 46 µm test, droplets were undersized by as much as 30 µm. This region accounts for the skewing to smaller sizes of the distributions discussed earlier in this section. Columns three and four in Table 2 provide information about how sizing differences for each test impact higher moments of the droplet size distribution. For the 9 µm test, the volume-weighted mean diameter (VMD) measured by the CDP was 1.1 µm smaller than that computed from the D_true* distribution, resulting in a 36.7% underestimate in LWC. For the 17 and 24 µm tests, the absolute difference between the actual and measured VMD was less than 0.25 µm and resulted in roughly an 8% overestimate and a 2% underestimate in LWC for these droplets, respectively. For tests using droplets larger than 24 µm, the CDP oversized VMD by 1 to 1.5 µm, resulting in overestimates of LWC of 2.4 to 11%. Readers should note that errors in sizing by a given amount will have a much more significant impact on LWC for smaller droplets. However, for real measurements in cloud, it is often the larger droplets that carry the majority of the liquid mass. Therefore, these middle and larger sizes from 20 µm and greater are expected to have the greatest impact on LWC estimates from the CDP.
Counting accuracy and qualified sample area measurements
Counting accuracy is evaluated by comparing CDP-recorded counts to the actual number of droplets based on print head ejection frequency and dwell time at each sample location. For all tests, droplets are counted to within 98% accuracy in ~95% of the sample locations. Experiments indicate that all sizes of droplets are undercounted around the perimeter of the qualified sample area, presumably as a result of sizer and qualifier signal noise (Lance et al., 2010). Figure 4 shows locationally-dependent counting accuracy for 46 µm droplets, where purple areas correspond to locations where the CDP recorded 10 - 50% of actual counts, blue shows locations where 50 - 90% of actual counts were recorded, and green denotes where at least 90% of actual counts were recorded. Only 46 µm droplets were overcounted, specifically in two isolated regions where droplets were overcounted by as much as 100%. The regions are located just left of the area where 46 µm droplets were significantly undersized (see Fig. 3g and discussion earlier). Overcounting in these regions contributes to less than 1% overall count error because they occupy less than 1% of total SA_Q.
Figure 5 shows SA_Q calculated by summing the individual areas of sample locations that received a certain percentage of actual counts. SA_Q is calculated three times for each test by constraining which sample locations are considered to those that received at least 10, 50, and 90% of actual counts (SA_Q_10%, SA_Q_50%, SA_Q_90%). Evaluating SA_Q using this count threshold method provides uncertainty ranges of SA_Q and accounts for the fact that ~8% of droplets were placed beyond sample area bounds. The mean value of SA_Q_50% considering all tests is 0.269 mm², compared to a value of 0.30 mm² provided by the manufacturer. SA_Q_50% varies 0.03 mm² across the range of droplet diameters tested. It is smallest for 9 and 17 µm droplets, reaches a maximum of 0.28 mm² for 24 µm droplets, and then decreases to 0.27 mm² for 46 µm droplets. The range of SA_Q_10% to SA_Q_90% is smallest for the largest droplets, most likely because detector noise is less of a consideration for larger droplets that scatter relatively more light and hence provide a greater detector response. The test using 9 µm droplets shows the greatest difference between SA_Q_10% and SA_Q_90%, but it should be noted that SA_Q variability is likely exaggerated by the coarse spatial resolution used for that experiment.
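The count-threshold estimate of SA_Q, and its use in a concentration calculation, can be sketched as follows. Function and variable names are hypothetical; the sample-volume expression (SA_Q × airspeed × integration time) is the simple form implied by the text.

```python
import numpy as np

def sample_area_from_counts(counted, expected, cell_area_mm2, threshold=0.5):
    """Sum the areas of grid locations that recorded at least `threshold`
    (e.g. 0.1, 0.5, 0.9) of the expected droplet count: the SA_Q_x% estimate.
    `counted` and `expected` are per-location arrays of droplet counts."""
    frac = np.asarray(counted, dtype=float) / np.asarray(expected, dtype=float)
    return np.sum(frac >= threshold) * cell_area_mm2

def concentration(counts, sa_q_mm2, airspeed_m_s, dt_s):
    """Number concentration (cm^-3) assuming sample volume = SA_Q * speed * dt."""
    volume_cm3 = (sa_q_mm2 * 1e-2) * (airspeed_m_s * 1e2) * dt_s  # mm^2->cm^2, m->cm
    return counts / volume_cm3
```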
For calculations of number concentration and higher moments, SA_Q can be provided either by using a fixed value equal to the mean for all droplet sizes (solid red line in Figure 5) or by using a variable value based on a second degree polynomial fit (blue curve in Figure 5). To explore the impact of employing a fixed vs. variable SA_Q, three Poissonian droplet distributions with means of 10, 25, and 35 µm are prescribed. The concentration of each distribution equals 100 cm⁻³ when calculated with a fixed SA_Q_50% of 0.27 mm². Table 3 illustrates how using a fixed vs. variable SA_Q affects concentration and LWC. It shows that the choice of SA_Q type most affects concentration and LWC for the distribution with a 10 µm mean diameter. Using a variable SA_Q results in 6% greater concentration and ~4% greater LWC. For distributions with greater mean diameters, the choice of using a fixed or variable SA_Q results in less than 3% difference in concentration and LWC.
It seems best to calculate higher moments using a fixed SA_Q of 0.27 mm², given that the choice of SA_Q type has relatively little impact on concentration or LWC. Furthermore, the second-degree polynomial fit used to model variable SA_Q does not completely capture variations in SA_Q for droplets with diameters between 20 to 30 µm and requires extrapolation of SA_Q for droplets with mean diameter less than 9 µm or greater than 46 µm.
Comparisons of liquid water content from the CDP and Nevzorov probes
In-situ data collected by aircraft are used to further investigate uncertainty in CDP measurements by comparing LWC derived from CDP measurements to bulk LWC measured by a Nevzorov hotwire probe. Comparisons of in-situ LWC measurements provide an independent evaluation of CDP performance and an indication of how error in real-world CDP measurements compares to laboratory droplet generator results.
The University of Wyoming King Air
The University of Wyoming King Air (UWKA) is a Beechcraft Super King Air modified to carry a variety of atmospheric in-situ and remote sensors capable of collecting information about atmospheric thermodynamics, dynamics, and cloud particle properties (Wang et al., 2012). In the following, we utilize measurements from two field campaigns conducted in late 2016 and early 2017. The Precipitation and Cloud Measurements for Instrument Characterization and Evaluation (PACMICE) campaign began in August 2016 and lasted until May 2017, with flights over eastern Wyoming and western Nebraska, USA. It focused on collecting cloud and precipitation measurements in precipitating stratiform and convective systems primarily in the shoulder seasons. The Seeded and Natural Orographic Wintertime clouds - the Idaho Experiment (SNOWIE) occurred during January - March 2017, and focused on wintertime orographic clouds in southwestern Idaho, USA (French et al., 2018). The majority of clouds sampled in both PACMICE and SNOWIE were mixed phase.
Constant temperature hotwire probes
The UWKA carries both a DMT LWC-100 and a deep-cone Nevzorov constant temperature hotwire probe. Both provide estimates of bulk cloud water content utilizing changes in current supplied to heated elements that are exposed to impacts of cloud particles (King et al., 1978; Baumgardner et al., 2017). Element temperature is maintained near 100 °C such that impinging particles will vaporize, transferring energy from the element through the effects of sensible and latent heating. Control circuitry maintains element temperature by altering the power supplied, using element resistance as a proxy for temperature. Measurements of water content are obtained by relating the power required to maintain element temperature as particles are vaporized to the sensible and latent heat capacities of water, and element surface area (King et al., 1978; Korolev et al., 1998).
Convective losses due to moist airflow over the sensor also transfer energy from collector elements and can be quite large at aircraft flight speeds (King et al., 1978; McFarquhar et al., 2017). The Nevzorov probe features reference elements that are positioned on the device's trailing edge such that they are aerodynamically shielded from particle impact (Korolev et al., 1998; Strapp et al., 2003). Energy losses from the reference elements are then assumed to arise solely from convective considerations, and thus the total power delivered to the reference elements can be used to estimate the convective heat losses from the sensing (collector) elements. The relationship between collector and reference element convective losses depends on airspeed and density (Korolev et al., 1998; Abel et al., 2014). Data collected during clear air calibration manoeuvres are used to compute the ratio of collector to reference power and determine how the ratio varies with airspeed and density. The manoeuvres are typically flown at several flight levels over a range of airspeeds. Any inaccuracy in the estimate of convective heat losses in the collector sensor based on power delivered to the reference sensor results in baseline drift of the Nevzorov-derived LWC (LWC_NEV) measurement (Abel et al., 2014). For the data used herein, the effectiveness of the Nevzorov data processing method was evaluated using ~60,000 out-of-cloud points. The LWC_NEV residual (i.e., departure from zero when not in-cloud) was used to determine uncertainty in baseline LWC. LWC_NEV baseline uncertainty is estimated to be no greater than 0.05 g m⁻³ (the 95th - 5th percentile range of residual LWC) and the minimum detectable LWC_NEV is +0.02 g m⁻³ (95th percentile residual LWC). The Nevzorov is capable of measuring both LWC and total condensed water content (TWC) using two collector elements with different geometrical designs (Korolev et al., 1998). Estimates of ice water content (IWC) can then be obtained by differencing the two measurements. The LWC element is in the shape of a thin rod designed to only evaporate liquid particles. Ice particles shatter on impact with the sensor and are swept away before significant melting or evaporation can occur (Korolev et al., 1998). The TWC collector has a 'deep inverted cone' shape designed to capture both liquid and ice particles (Korolev et al., 2013). Korolev et al. (1998) showed that in mixed phase conditions, interactions between the LWC collector and ice particles can result in LWC overestimation on the order of 12% of IWC. In some conditions, collection efficiency may be significantly less than unity, resulting in underestimation of LWC_NEV.
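As a rough illustration of the hotwire retrieval described above, the sketch below applies a simplified power balance: the power spent evaporating cloud water is the collector power minus the convective losses inferred from the reference element, divided by the energy needed to warm and vaporize the impinging water. Constants and the clear-air ratio k_dry are placeholders; the operational processing (including the airspeed- and density-dependent ratio and collection-efficiency corrections) is more involved.

```python
def hotwire_water_content(p_collector_w, p_reference_w, k_dry, airspeed_m_s,
                          element_area_m2, t_element_c=100.0, t_air_c=-5.0):
    """Simplified constant-temperature hotwire LWC retrieval (g m^-3).

    k_dry is the clear-air ratio of collector to reference power obtained from
    calibration manoeuvres; temperatures are illustrative defaults.
    """
    L_V = 2.5e6   # latent heat of vaporization, J kg^-1 (approximate)
    C_W = 4187.0  # specific heat of liquid water, J kg^-1 K^-1
    # Power spent evaporating cloud water = total power minus convective losses
    # estimated from the (particle-shielded) reference element.
    p_wet = p_collector_w - k_dry * p_reference_w
    # Energy per kg of impinging water: warm it to the element temperature,
    # then evaporate it.
    energy_per_kg = L_V + C_W * (t_element_c - t_air_c)
    mass_flux = p_wet / energy_per_kg                       # kg s^-1
    lwc_kg_m3 = mass_flux / (airspeed_m_s * element_area_m2)
    return lwc_kg_m3 * 1e3                                  # g m^-3
```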
Because airflow diverges in the vicinity of the LWC collector, LWC_NEV may be underestimated by as much as 30% in droplet populations with VMD less than 8 µm, since particles with insignificant mass are unable to cross the divergent streamlines and impact collector elements (Korolev et al., 1998). Collection efficiency also departs from unity for droplets with VMD greater than 30 µm because larger droplets tend to splatter on impact, leading to incomplete evaporation (Schwarzenboeck et al., 2009).
Dataset overview
Data were used from 29 research flights from both PACMICE and SNOWIE. Measurements from both probes were filtered to 1 Hz. Here we select only those data points in which both LWC_NEV and LWC_CDP exceeded a threshold value of 0.05 g m⁻³. To minimize uncertainty due to the presence of ice hydrometeors in CDP and Nevzorov LWC measurements, the IWC from the Nevzorov was used to select periods of liquid-phase-only penetrations from PACMICE and SNOWIE missions. However, IWC estimates from the Nevzorov may be affected by as-of-yet uncharacterized sources of uncertainty, such that one cannot conclude the dataset used here is completely devoid of mixed-phase penetrations. Nonetheless, uncertainty in LWC resulting from the presence of ice is expected to minimally impact results. LWC_NEV is subject to overestimation of less than 12% of IWC, which is often small compared to LWC in mixed phase cloud. It has also been demonstrated that the CDP is minimally affected by ice shattering artifacts (Lance et al., 2010; Khanal et al., 2018).
The resultant data subset used in the comparison contains 17,917 1 Hz in-cloud points. Droplet concentrations encountered during SNOWIE were uncharacteristically low for continental clouds. Mean droplet concentration for the dataset is 113.6 cm⁻³, with 50% of data points having concentration less than 50 cm⁻³. Consequently, droplets were relatively large, with an average VMD of 22.2 µm and 1st and 3rd quartiles of 16.7 and 27.7 µm. Nearly all measurements were taken in supercooled conditions; the environmental temperature range for the 5th and 95th percentile is -18.7 and -1.3 °C.
In-situ results
For each 1 Hz data point, measured spectra from the CDP were used to compute the total droplet concentration and the VMD of the spectra. The data were first subdivided based on droplet concentration and then further divided based on VMD. Figure 6 shows the percent difference in LWC (positive values indicate LWC_CDP is greater than LWC_NEV) for four different ranges of droplet concentration. The points represent mean percent LWC difference calculated using a best fit line for all points in 5 µm wide bins (5-10 µm, 10-15 µm, 15-20 µm, etc.); error bars are root mean squared error. Green hatched areas are estimates of expected percent LWC difference using droplet generator results computed for errors in CDP sizing and counting for 10% and 90% count thresholds and errors considering best estimates of Nevzorov collection efficiency as a function of VMD (from Korolev et al., 1998; Strapp et al., 2003; Schwarzenboeck et al., 2009). Plots are shown for 4 ranges of total droplet concentration. The mean percent LWC difference and the number of data points (n) are shown in the bottom-right corner of each plot.
For all VMDs larger than 10 µm, LWC_CDP exceeded LWC_NEV by as much as 40%. For VMDs less than 10 µm, LWC_CDP was less than LWC_NEV by 5 - 10% for those cases in which total droplet concentrations were less than about 400 cm⁻³. The general trend, for all droplet concentrations, suggests increasing LWC_CDP (compared to LWC_NEV) for increasing VMD.
However, the mean difference in LWC, across all VMDs, does not indicate any specific trend when considering different ranges of total droplet concentration.
Estimates of percent LWC difference expected based on results from droplet generator tests and Nevzorov collection efficiency estimates predict that LWC_CDP should be at most 11% greater than LWC_NEV. However, when all of the data in this study are considered, the mean percent difference is 19.6%. Two striking features of the data show that the percent difference for large VMD, greater than about 25 - 30 µm, is considerably larger than expected. And, for droplet concentrations greater than 400 cm⁻³, the percent difference is significantly larger than expected for all VMDs. The larger than predicted difference between LWC_CDP and LWC_NEV is unlikely to be a result of coincidence error. The UWKA CDP features a sizer pinhole mask modification such that it is expected to be relatively unaffected by coincidence in concentrations less than 600 cm⁻³ (Lance et al., 2012). Figure 6d shows that mean percent LWC difference for data with concentration of 400 - 1600 cm⁻³ is not significantly different than mean values for much smaller total droplet concentrations. On the other hand, for this concentration range, percent LWC difference is significantly larger for smaller VMDs when compared to similar VMDs for lesser droplet concentrations, suggesting that those CDP measurements may indeed be impacted by coincidence for these higher concentrations. Regardless, these data account for less than 4% of all points and suggest coincidence is unlikely to account for differences across all ranges of concentration.
The droplet generator tests used a near constant droplet velocity that was ~30% of typical UWKA airspeeds. They provide no information about how CDP sizing/counting accuracy and SA_Q may vary with airspeed. Some of the discrepancy between estimated and actual percent LWC difference could be a result of a change in CDP performance at typical aircraft flight speeds, which could result from limitations in photodetector response (Dye and Baumgardner, 1984). However, in order for airspeed-dependent errors in sizing and/or SA_Q to account for the discrepancies shown, increased flight speeds would need to result in an increase in sizing (and hence photodetector output for the sizer signal) and/or an increase in SA_Q, both of which are unlikely outcomes; one might expect the opposite behaviour. On the other hand, overcounting could increase with increasing particle velocity if photodetector response limitations result in more significant signal noise. But it seems unlikely that such considerations could cause overcounting on the order of 5 - 20% given that only 46 µm droplets were overcounted (by less than 1%) during droplet generator tests.
It is possible that the discrepancy between estimated and actual percent LWC difference could be a result of a change in counting/sizing behaviour for droplets passing through the qualified sample area region where droplets are severely undersized (blue areas in the rightmost 10% of the beam maps shown in Fig. 3). Sizer responses are characteristically within the noise band range (less than 512 digital counts) for droplets transiting these regions; thus, severely undersized droplets could be rejected during 'real-world' operation. If LWC_CDP error estimates (as described in section 3.2) are recalculated excluding these regions where droplets are severely undersized, then the resultant oversizing throughout the rest of the sensitive sample area could result in as much as 17% overestimation of LWC_CDP (effectively shifting upward the hatched green areas in Figure 6 for large VMD).
Error in Nevzorov measurements could also contribute to the discrepancy between LWC_CDP and LWC_NEV. Instrument icing was a common issue during SNOWIE. The 0.05 g m⁻³ threshold applied to LWC_CDP and LWC_NEV was used to exclude measurements taken when one (or both) of the instruments was (were) completely unresponsive. In the case of ice accumulation on the Nevzorov sensing element, build-up of rime ice near (or over) the LWC element often results in significant baseline drift along with an accompanying reduction in sensitivity to liquid water (due to changes in airflow and shielding of the sensing element). Such situations would result in an underestimation of LWC by the Nevzorov and could explain some of the differences shown in Figure 6. However, examination of baselines prior to and after exiting clouds suggests this is not a large problem for the cases examined. Regardless, nearly all measurements were obtained in supercooled conditions, so the data used in this study could not be further subdivided to investigate differences in regions where temperatures greater than 0 °C would exclude the possibility of icing.
Summary
A droplet generating calibration system was used to test the sizing and counting performance and provide measurements of the qualified sample area of the UWKA CDP using seven discrete droplet diameters ranging from 9 - 46 µm. Experiments reveal that droplet sizing accuracy varies depending on where droplets transit the sample area and the size of the droplets.
Errors in sizing for the majority of droplets across the size ranges tested can be accounted for by the amplitude of Mie resonances on the response curve. The Mie resonances often result in an artificial broadening of the distribution by 1 - 2 bins. How much broadening occurs depends on droplet size and the actual range of collection angles for the probe. This finding confirms results of earlier studies (Rosenberg et al., 2012; Baumgardner et al., 2017). Droplets with nominal diameter of 9 µm are undersized by 1 µm or less for roughly 33% of the droplets sampled and are undersized by 1 - 4 µm for the remaining 66% of droplets. Errors in droplet sizing for 9 µm droplets do not depend strongly on where droplets transited the sample area. The errors in sizing for these smallest droplets are likely related to the amplitude of Mie resonances compared to the relatively shallow slope of the Mie function. Droplets with diameters of 17 and 24 µm are sized to within 2 µm of the true droplet diameter for nearly all droplets sampled (>90%), but there appears to be a small lateral dependence within the sample area on errors in sizing, such that droplets passing through the top half of the sample area are sized larger than those transiting through the bottom half. Tests for droplets with diameters 29, 34, 38, and 46 µm reveal more significant oversizing, by as much as 2 - 4 µm, with an even stronger lateral dependence on sizing error. Droplet generator experiments performed by Lance et al. (2010) using 12 and 22 µm droplets reveal a similar lateral gradient in sizing accuracy. The researchers attributed this behaviour to a misalignment of the qualifier detector mask; however, this consistent behaviour across different probes might indicate a problem with the optical design. Similar tests on two other CDPs using the University of Wyoming droplet generator system revealed similar lateral dependencies. The tests also reveal that for droplets 24 µm and larger, nearly all droplets passing through 10% of the qualified sample area (that portion closest to the detector) are undersized, by as much as 30 µm, depending on the droplet diameter. However, droplets passing through much of the rest of the sample area are oversized. The locationally-dependent nature of sizing accuracy results in artificial spectral broadening of droplet size distributions, which is most pronounced for droplets with diameters 34 µm and larger. Although droplets are oversized by 2 - 4 µm in most locations within the qualified sample area, the resulting errors in higher order moments such as mean diameter, VMD, and LWC are mostly offset by undersizing of droplets throughout the rest of the sample area. This has implications for how sizing should be calibrated for the CDP. For example, matching distribution modes when performing calibrations will result in an underestimation of higher order moments because distributions are artificially skewed. Conversely, calibrations that match mean droplet diameter will result in an overestimation of the diameter of the droplet distribution mode in real clouds.
Droplets were counted to within 98% accuracy over roughly 95% of sample locations. Only the largest droplets tested, 46 µm, indicated any significant overcounting. This occurred in two regions bordering the area where these same droplets were significantly undersized. However, these regions account for less than 1% of the total qualified sample area, so they introduce less than 1% overall count error. All sizes of droplets are undercounted around the perimeter of the qualified sample area, and this must be considered when defining SA_Q for higher moment calculations. SA_Q_50% varies only 0.03 mm² depending on droplet diameter and thus the use of a mean SA_Q_50% for all droplet sizes (0.27 mm²) is warranted. SA_Q for the CDP used in this work and the CDPs examined by Lance et al. (2010, 2012) agree to within 10%.
Comparisons of in-situ LWC measurements from the CDP and a Nevzorov hotwire probe provide another means of evaluating CDP performance. In-situ comparisons show that, on average, LWC_CDP is greater by about 20% whereas droplet
Fixed SA_Q_50% concentration is not shown because it equals 100 cm⁻³ for all distributions. Uncertainty is equal to 1/2 the range of each parameter when calculated with SA_Q_10% and SA_Q_90%.
4 Results of Droplet Generator Tests on the CDP
4.1 Experimental design
To quantify uncertainty in CDP measurements of droplet counting and sizing, seven tests using nominal droplet diameters of 9, 17, 24, 29, 34, 38, and 46 µm provided measurements over most of the size range detectable by the CDP. For each test, droplets were injected at fixed locations through the qualified sample area of the CDP. Droplets were injected at a frequency of 200 Hz for 9 µm droplets and 250 Hz for all other sizes. Following a dwell time at a given location, the position of the droplet injector relative to the CDP sample area was moved a small distance. The tests proceeded in this fashion, injecting droplets throughout the entire qualified sample area of the CDP. The start and end times at each location were recorded. Post-test, 5 Hz data from the CDP were synchronized to match droplet location and CDP measurements.
Table 1. Droplet generator test characteristics including the number of droplets injected at each sample location, longitudinal and latitudinal resolution, test duration, mean droplet diameter from glares (mean D_true), and the 95th to 5th percentile range of D_true. D_true statistics are from 80 randomly-selected glare images.
Table 2. Differences in several distribution parameters when calculated using CDP-recorded droplet diameter (D_CDP) vs. diameter from glares rounded to the geometric mean of CDP size bins (D_true*). A positive difference (or positive percent difference) indicates that calculations using D_CDP result in a larger value than D_true*. Percent LWC difference is calculated by comparing the integrated 3rd moment of normalized D_CDP distributions vs. normalized D_true* distributions.
Fig. 2. Mie response scaled to CDP A/D counts computed for 4°-12° collection angles (red) and 5°-13° collection angles overlaid on the CDP A/D threshold (shaded blue) that is used to bin individual drops. Black dots show mean droplet diameter (D_true) for the 7 tests. | 11,909.8 | 2018-01-30T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Remote sensing image and multi-type image joint encryption based on NCCS
In this paper, an encryption algorithm for remote sensing images based on a novel Chebyshev chaotic system (NCCS), together with a combined encryption algorithm for remote sensing images, gray images, and color images, is proposed. To address the large data volume of remote sensing images, this paper proposes the NCCS, which effectively reduces the time complexity of the algorithm; the generated pseudo-random sequence is more uniform and its performance is better. On this basis, for remote sensing image encryption, each band of the remote sensing image is first placed in a different channel to obtain a three-dimensional matrix; a three-dimensional spiral curve is then used to read each section of the three-dimensional matrix, yielding a two-dimensional matrix composed of several one-dimensional sequences. This method introduces some coupling between channels and reduces the dimension of the matrix, thus effectively improving the scrambling effect. A chaotic map then scrambles the resulting one-dimensional sequence, which is diffused by a cyclic left shift based on addition modulo operations. Because this method is suitable for multi-channel image encryption, it can be used not only for remote sensing image encryption, but also for the joint encryption of remote sensing images, gray images, and color images. Simulation results and performance analysis show that the method has good security. Compared with some existing encryption schemes, this method has a wider application range.
Introduction
In recent years, remote sensing images have developed very rapidly and play an important role in many fields. As an important type of spatio-temporal data, remote sensing images have the characteristics of multi-precision, multi-tense, multi-semantic, and multi-band [1][2][3][4][5]. In terms of information acquisition, they also show unique advantages [6]. At the same time, the security of remote sensing images has gradually attracted the attention of experts and scholars [7][8][9][10][11]. However, encryption schemes for remote sensing images are very few and their scope of application is narrow; after an extensive literature search, only the following work was found: the mixed domain remote sensing image encryption technology proposed by Zhang [12], and a new remote sensing image fragmentation chaos encryption scheme proposed by Guo et al., which is based on fragmenting remote sensing images, scrambling the blocks, and combining them with the Lorenz chaotic system [13]. However, these schemes are only for grayscale remote sensing images and color remote sensing images and cannot be applied to remote sensing images with more than three bands; the scheme proposed in this paper solves this problem very well.
At present, ordinary image encryption algorithms are proposed constantly, and a series of image encryption algorithms based on chaos theory proposed by Wang et al. have been highly recognized by the community [14]. These also include the application of fractal sorting theory proposed by Xian et al. in image encryption [15][16][17], image encryption schemes based on DNA coding theory [18][19][20], and encryption schemes based on matrix semi-tensor products and Boolean networks [21]. Common image and multi-image encryption schemes are growing, but joint encryption schemes for multi-type images are currently absent [22][23][24].
In this regard, the encryption scheme based on chaos theory proposed in this paper can effectively solve the above problems. When encrypting remote sensing images, there is no limit on the size or number of bands of the remote sensing image, and when multi-type images are jointly encrypted, there is no limit on the number of grayscale images, color images, and remote sensing images. The entire encryption process consists of two parts: scrambling and diffusion. The scrambling process is divided into three stages, each accompanied by a reduction of the matrix dimension: the first stage uses the spiral curve to pre-scramble the three-dimensional matrix and convert it into a two-dimensional matrix; the second stage uses the chaotic sequence generated by NCCS to index the two-dimensional matrix and convert it into a one-dimensional matrix; the third stage scrambles the one-dimensional matrix with a one-dimensional Arnold map to obtain the scrambled image. The diffusion phase uses the classic method of addition modulo combined with a cyclic left shift to obtain the final ciphertext image (a toy sketch of this step is given below). This paper is organized into seven parts: the first part is a brief introduction; the second part introduces the relevant background knowledge; the third part presents the newly proposed chaotic system and its performance analysis, and the results show that the proposed chaotic system performs better; the fourth part gives the detailed process and steps of scrambling and diffusion in the encryption algorithm; the fifth part gives the detailed process and steps of the decryption algorithm; the sixth part presents the simulation results and performance analysis for remote sensing images, demonstrating the effectiveness of the proposed algorithm; the seventh part presents the simulation results and performance analysis of the joint encryption of grayscale and color images, showing that the proposed algorithm is also suitable for the joint encryption of multiple types of images with good results.
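To make the diffusion stage concrete, here is a toy sketch of a cyclic-left-shift-plus-modular-addition pass over the scrambled one-dimensional sequence. It is an assumption-laden illustration of the general technique named above, not the exact formulation used in this paper.

```python
import numpy as np

def diffuse(sequence, key_stream):
    """Toy diffusion pass: addition modulo 256 chained with the previous output,
    followed by a key-dependent cyclic left shift of the 8-bit value.

    sequence: scrambled 1-D pixel vector (values 0-255);
    key_stream: chaotic sequence of the same length, scaled to 0-255 integers.
    """
    seq = np.asarray(sequence, dtype=np.int64)
    key = np.asarray(key_stream, dtype=np.int64)
    out = np.empty_like(seq)
    prev = 0
    for i in range(len(seq)):
        v = (seq[i] + key[i] + prev) % 256          # addition modulo 256
        shift = int(key[i]) % 8
        out[i] = ((v << shift) | (v >> (8 - shift))) & 0xFF  # cyclic left shift
        prev = out[i]
    return out
```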
Traditional Chebyshev chaotic system
The Chebyshev map is a map whose order is the parameter. Its cosine form is defined as Eq. (1):

x_{n+1} = cos(l · arccos(x_n)),    (1)

where x_n ∈ (−1, 1) and |l| ∈ (2, +∞); when l ∈ (−2, 2), the system does not produce chaotic behavior.
Storage format
Having multiple bands is one of the more significant features of remote sensing images. The Tif format supports multiple channels, so remote sensing images are mostly stored in Tif format, with each band occupying one channel. One band of a remote sensing image corresponds to a two-dimensional matrix, and n bands correspond to n two-dimensional matrices, so a remote sensing image corresponds to a three-dimensional matrix. This article uses remote sensing images in Tif format for encryption.
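As a concrete illustration of this storage convention, the following Python sketch (assuming the third-party tifffile package and a hypothetical file name) loads a multi-band Tif image into a three-dimensional array with one two-dimensional matrix per band.

```python
import numpy as np
import tifffile

# Hypothetical file name; any multi-band Tif remote sensing image is handled the same way.
img = np.asarray(tifffile.imread("landsat_fragment.tif"))

# Some writers store the band axis first; move it last so the shape is (M, N, K),
# i.e. one M x N matrix per band.
if img.ndim == 3 and img.shape[0] < min(img.shape[1], img.shape[2]):
    img = np.moveaxis(img, 0, -1)

M, N, K = img.shape
print(f"rows={M}, cols={N}, bands={K}")
```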
Display mode
There are three ways to display remote sensing images, namely grayscale image display, pseudocolor image display, and true color image display.
Grayscale display saves one band of the remote sensing image as a grayscale image; true color display places the red, green, and blue bands of the remote sensing image into the R, G, and B channels, respectively; false color display places three arbitrarily selected bands of the remote sensing image into the R, G, and B channels [25][26][27]. The encryption in this paper operates on all bands, but at most three bands can be displayed at a time, so this article uses false color images to display the simulation results of remote sensing images.
One-dimensional Arnold mapping
Arnold mapping, also known as the cat map, is a chaotic map that repeatedly folds and stretches within a finite area and is widely used in the scrambling stage of image encryption. For an N × N matrix, the two-dimensional Arnold transformation is given by Eq. (2), where (x_n, y_n) are the coordinates before the transformation, (x_{n+1}, y_{n+1}) are the coordinates after the transformation, a and b are parameters, and n is the number of transformations. This method is only suitable for matrices whose length and width are equal and therefore has certain limitations [28][29][30][31][32]. This leads to the one-dimensional Arnold map, which is suitable for matrices of unequal length and width and has better time performance. An M × N matrix is first converted into a one-dimensional matrix of size 1 × MN; the coordinate of any point is then i (i = 1, 2, 3, …, MN), and the Arnold transformation produces a new coordinate.
From Eq. (3), Eq. (4) is obtained:

x_{n+1} = 1 + b·y_n,
y_{n+1} = a + (a·b + 1)·y_n.    (4)

Taking a = 1, so that the horizontal axis is not transformed, and treating a·b + 1 as a new pseudo-random number, Eq. (4) reduces to Eq. (5), and the one-dimensional Arnold transformation can finally be expressed as Eq. (6), where y_n is the coordinate before the one-dimensional vector transformation, y_{n+1} is the coordinate after the transformation, and a, b are parameters.
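Since Eq. (6) itself is not reproduced above, the following Python sketch assumes the usual modular form of the reduced map, y_{n+1} = ((a·b + 1)·y_n + a) mod L, acting on the index set of a flattened vector; it is meant only to illustrate how such a one-dimensional Arnold scramble permutes a vector of arbitrary length.

```python
import numpy as np

def arnold_1d(vec, a, b, rounds=1):
    """Scramble a 1-D vector with a one-dimensional Arnold-style map.

    Sketch only: the assumed map y_{n+1} = ((a*b + 1)*y_n + a) mod L acts on the
    index set {0, ..., L-1}; it is a permutation when gcd(a*b + 1, L) == 1.
    """
    L = vec.size
    c = (a * b + 1) % L
    if np.gcd(c, L) != 1:
        raise ValueError("a*b + 1 must be coprime with the vector length")
    idx = np.arange(L)
    for _ in range(rounds):
        idx = (c * idx + a) % L
    out = np.empty_like(vec)
    out[idx] = vec            # the element at position i moves to position idx[i]
    return out, idx

# Example: scramble a vector of length 12 with a = 1, b = 4 (so a*b + 1 = 5).
scrambled, perm = arnold_1d(np.arange(12), a=1, b=4)
print(scrambled)
```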
The new Chebyshev chaotic system
This paper proposes an NCCS based on the cosine form of the Chebyshev map; it has a wider parameter range, a more uniform distribution of the generated pseudo-random sequences, and better chaotic behavior. NCCS is a map of order l, defined as Eq. (7):

x_{n+1} = cos(l · arccos(x_n)) × 10^6 − floor(cos(l · arccos(x_n)) × 10^6),    (7)

where x_n ∈ (0, 1) and l ≠ 0, ±2, ±4, ±6. When the initial value x_0 and the parameter l satisfy these ranges, the chaotic behavior of NCCS is good.
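The following Python sketch iterates the NCCS map exactly as written in Eq. (7); the chosen initial value and order are merely example values inside the stated ranges.

```python
import numpy as np

def nccs_sequence(x0, mu, length):
    """Generate a pseudo-random sequence with the NCCS map of Eq. (7):
    x_{n+1} = cos(mu*arccos(x_n))*1e6 - floor(cos(mu*arccos(x_n))*1e6).

    x0 must lie in (0, 1) and mu must avoid 0, +-2, +-4, +-6.
    """
    seq = np.empty(length)
    x = x0
    for n in range(length):
        y = np.cos(mu * np.arccos(x)) * 1e6
        x = y - np.floor(y)          # keep the fractional part, so x stays in [0, 1)
        if x <= 0.0 or x >= 1.0:     # keep the iterate inside the stated domain (0, 1)
            x = 0.123456789
        seq[n] = x
    return seq

# Example: 10 values with x0 = 0.37 and mu = 3.7 (both within the stated ranges).
print(nccs_sequence(0.37, 3.7, 10))
```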
Comparative analysis of Bifurcation diagram
According to dynamical systems theory, the uniformity of the pseudo-random sequences generated by a chaotic system within its constraint range is an important criterion for evaluating its quality [33][34][35][36]. The bifurcation diagrams of the Chebyshev chaotic system and of NCCS are given in Fig. 1. It can be seen that, when the initial value x_0 and the parameter l satisfy the ranges defined above, the pseudo-random sequences generated by NCCS are distributed much more uniformly than those generated by the Chebyshev chaotic system. Therefore, the chaotic performance of NCCS is judged to be very good (Fig. 1).
Comparative analysis of the Lyapunov Index
The Lyapunov exponent (LE) is an important indicator of the dynamic stability of chaotic systems and accurately determines whether a system is in a chaotic state [37][38][39][40][41]. LE is calculated by Eq. (8), where f(x_i) is the chaotic map. When LE is negative, the system is in a contracting state; when LE is positive, the system is in a chaotic state. It can be seen from Fig. 2 that the LE of NCCS is larger than those of the traditional Chebyshev, Logistic, and Sine maps, i.e., the dynamic stability of NCCS is better.
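Because Eq. (8) is not reproduced above, the following Python sketch uses the standard definition of the largest Lyapunov exponent of a one-dimensional map, LE = (1/n)·Σ ln|f'(x_i)|, with a finite-difference derivative; the logistic map is used only as a check, since its exponent ln 2 is known.

```python
import numpy as np

def lyapunov_exponent(f, x0, n=5000, burn=500, eps=1e-8):
    """Numerically estimate the largest Lyapunov exponent of a 1-D map x_{n+1} = f(x_n).

    Uses LE = (1/n) * sum(ln |f'(x_i)|) with f' approximated by a central finite
    difference; this is the usual definition rather than the authors' exact Eq. (8).
    """
    x = x0
    for _ in range(burn):                       # discard transients
        x = f(x)
    total = 0.0
    for _ in range(n):
        deriv = (f(x + eps) - f(x - eps)) / (2 * eps)
        total += np.log(abs(deriv) + 1e-300)    # avoid log(0)
        x = f(x)
    return total / n

# Example: the logistic map with parameter 4 has LE = ln 2 ~= 0.693.
logistic = lambda x: 4.0 * x * (1.0 - x)
print(lyapunov_exponent(logistic, 0.3))
```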
Comparative analysis of Shannon entropy
Shannon entropy (SE) reflects the degree of disorder of a pseudo-random sequence: the larger the SE, the higher the degree of disorder and the better the chaotic performance [42][43][44]. The comparison of Shannon entropy between NCCS and the Chebyshev, Logistic, and Sine systems is shown in Fig. 3, which shows that the chaotic performance of NCCS is better than that of the other chaotic systems.
NIST test
The National Institute of Standards and Technology (NIST) test suite is a method for evaluating the performance of chaotic systems [45]. It describes the randomness of the sequences generated by a chaotic system by means of probability theory and statistics. The NIST suite consists of 15 sub-tests, each of which produces a P value; a test is considered passed only when its P value falls within the interval [0.01, 1]. The NIST test results are shown in Table 1, where all P values fall into this interval. It is therefore concluded that the pseudo-random sequences generated by the proposed chaotic system have good randomness.
0-1 Test
In addition, we also use the 0-1 test to evaluate the performance of chaotic systems [46], which has become a popular method in recent years. As shown in Fig. 4, NCCS has better chaotic performance than the existing Logistic, Sine, and Chebyshev maps.
Encryption algorithm
This algorithm performs encryption of remote sensing images, and joint encryption of remote sensing images with color and grayscale images, based on NCCS, one-dimensional Arnold scrambling, and an addition-modulo cyclic-left-shift diffusion method. A remote sensing image, or a union of multiple types of images, can be used as the input of the algorithm. The algorithm has four parts: the first part generates the key through the SHA-512 algorithm; the second part substitutes the processed key into NCCS to generate two pseudo-random sequences, used for scrambling and diffusion; the third part first uses a spiral curve to reduce the dimensionality of the three-dimensional matrix, and then performs index scrambling and one-dimensional Arnold scrambling; the fourth part uses the classical addition-modulo cyclic-left-shift method for diffusion. The result is the ciphertext image. The encryption flowchart is shown in Fig. 5.
Key processing
Step 1: The SHA-512 algorithm is used to generate the key. The image P of size M × N × K is fed into SHA-512 to obtain a hexadecimal key of 128 characters.
Step 2: Convert the hexadecimal key into the binary string key_1; since one hexadecimal character equals four binary digits, the string length becomes 512. Step 3: XOR each pair of adjacent bits of key_1 to obtain a string key_2 with a length of 256 bits.
Step 4: Split the key key_2 into four parts of equal length as described in Eq. (9), obtaining four values K_1, K_2, K_3, K_4.
Step 5: According to Eq. (10) and Eq. (11), generate the parameters and initial values required for the two NCCS instances.
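The following Python sketch follows Steps 1-4 literally (SHA-512 digest, binary expansion, pairwise XOR, split into K_1...K_4); since Eqs. (9)-(11) are not reproduced above, the final mapping of K_1...K_4 to NCCS parameters and initial values is only an assumed placeholder.

```python
import hashlib
import numpy as np

def derive_key_material(image):
    """Key-processing sketch following Steps 1-4.

    The SHA-512 digest of the image gives 128 hexadecimal characters (512 bits);
    adjacent bits are XORed pairwise to obtain a 256-bit string, which is split
    into four equal 64-bit integers K1..K4.  The conversion of K1..K4 into the
    NCCS parameters and initial values (Eqs. (10)-(11)) is not reproduced in the
    text, so the normalisation below is only an assumed placeholder.
    """
    digest = hashlib.sha512(np.ascontiguousarray(image).tobytes()).hexdigest()   # 128 hex chars
    bits = "".join(f"{int(c, 16):04b}" for c in digest)                          # 512 bits
    xored = "".join(str(int(bits[2 * i]) ^ int(bits[2 * i + 1])) for i in range(256))
    K = [int(xored[i * 64:(i + 1) * 64], 2) for i in range(4)]                   # K1..K4
    # Assumed placeholder for Eqs. (10)-(11): map the integers into valid NCCS ranges.
    x1 = (K[0] % (2 ** 52)) / 2 ** 52 or 0.5
    x2 = (K[1] % (2 ** 52)) / 2 ** 52 or 0.5
    mu1 = 3.0 + (K[2] % 1000) / 1000.0
    mu2 = 5.0 + (K[3] % 1000) / 1000.0
    return (x1, mu1), (x2, mu2)

# Toy usage on a small 2 x 2 x 3 image.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
print(derive_key_material(img))
```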
Chaotic sequence generation
Step 1: The parameter l = l_1 and initial value x_0 = x_1 generated by the above formulas are substituted into NCCS to generate a sequence A of length M + 3MNK, which is divided into four parts: A_1, of length M, used for row index scrambling; A_2, of length MNK, used for index scrambling of the flattened matrix; and A_3 and A_4, each of length MNK, used to generate the parameters of the one-dimensional Arnold scrambling.
Step 2: The parameter l = l_2 and initial value x_0 = x_2 generated by the above formulas are substituted into NCCS to generate a sequence B of length 2MNK, which is divided into two parts B_1 and B_2, each of length MNK, used in the diffusion stage.
Scramble algorithm
This method is suitable for a multi-channel image P of size M × N × K, where there is no limit on the range of M, N, and K.
Considering the time complexity of the algorithm, the three-dimensional matrix is reduced in dimension twice, and each dimensionality reduction is accompanied by scrambling. The procedure is divided into three steps: the first step uses a spiral curve to scan the side slices of the three-dimensional matrix one by one, each slice becoming a row of a two-dimensional matrix, as shown in the figure; the second step scrambles the two-dimensional matrix and converts it into a one-dimensional matrix; the third step scrambles the one-dimensional matrix.
Step 1: The pre-scrambling process is shown in Fig. 6. For the three-dimensional matrix P, pixel values are obtained one by one along a spiral curve starting from the far left side, each pass forming a row of the two-dimensional matrix P_1; that is, the three-dimensional matrix is reduced to a two-dimensional matrix of size M × (NK).
Step 2: In the index scrambling process, the two-dimensional matrix P_1 is first scrambled according to the pseudo-random sequence A_1 and then converted into a one-dimensional matrix P_2 of size 1 × (MNK). An index scramble is then performed according to sequence A_2 to obtain the one-dimensional matrix P_3.
Step 3: One-dimensional Arnold scrambling is applied to the one-dimensional matrix P_3 to obtain C_0.
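The exact indexing rule used in Step 2 is not spelled out above, so the following Python sketch uses the common argsort-based variant of chaos-driven index scrambling, together with its inverse needed for decryption.

```python
import numpy as np

def index_scramble(arr, chaotic_seq):
    """Index scrambling (Step 2): reorder the elements of `arr` according to the
    ranking of an equally long chaotic sequence.  Sorting positions (argsort) is
    used here as the usual choice for chaos-driven index scrambling."""
    order = np.argsort(chaotic_seq)      # permutation induced by the chaotic values
    return arr[order], order

def index_unscramble(scrambled, order):
    out = np.empty_like(scrambled)
    out[order] = scrambled               # invert the permutation for decryption
    return out

# Example: scramble a flattened 2 x 3 block with a 6-element chaotic sequence.
block = np.arange(6)
seq = np.array([0.91, 0.12, 0.55, 0.08, 0.77, 0.33])
scr, order = index_scramble(block, seq)
assert np.array_equal(index_unscramble(scr, order), block)
```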
Diffusion algorithm
Step 1: The pseudo-random sequences B_1 and B_2 are decimals in (0, 1) of length MNK; they are mapped to (0, 255) by the following formula to obtain the pseudo-random sequences S_1 and S_2.
Step 2: Convert the original matrix P into a one-dimensional matrix P_0 of size 1 × MNK. The scrambled matrix C_0 and the pseudo-random sequences S_1 and S_2 are likewise all of size 1 × MNK.
Step 3: Using the original matrix P_0 and the pseudo-random sequence S_1, the matrix C_0 is forward-diffused according to Eq. (12) to obtain the matrix C_1.
Step 4: Using the original matrix P_0 and the pseudo-random sequence S_2, the matrix C_1 is reverse-diffused according to Eq. (13) to obtain the matrix C.
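Eqs. (12) and (13) are not reproduced above, so the following Python sketch shows only one plausible realization of the "addition modulo plus cyclic left shift" forward diffusion; the roles of P_0, C_0, and S_1 follow the steps above, while the precise combination rule is an assumption.

```python
import numpy as np

def rotl8(v, r):
    """Cyclic left shift of an 8-bit value by r bits."""
    r %= 8
    return ((v << r) | (v >> (8 - r))) & 0xFF

def forward_diffuse(p, c0, s):
    """Forward diffusion sketch (Eq. (12) is not reproduced in the text).

    Assumed rule: each scrambled pixel is added modulo 256 to the plaintext pixel,
    the key-stream value and the previous cipher pixel, then cyclically left-shifted
    by a key-stream-dependent number of bits.
    """
    c = np.zeros_like(c0)
    prev = 0
    for i in range(c0.size):
        mixed = (int(c0[i]) + int(p[i]) + int(s[i]) + prev) % 256
        c[i] = rotl8(mixed, int(s[i]) % 8)
        prev = int(c[i])
    return c

# Toy example on a few 8-bit values.
p  = np.array([10, 20, 30, 40], dtype=np.uint8)     # original pixels P_0
c0 = np.array([30, 10, 40, 20], dtype=np.uint8)     # scrambled pixels C_0
s  = np.array([17, 201, 5, 96], dtype=np.uint8)     # key stream S_1
print(forward_diffuse(p, c0, s))
```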
Decryption algorithm
The decryption algorithm is the inverse of the encryption algorithm; its flowchart is shown in Fig. 7. It consists of two stages: the ciphertext image is first inverse-diffused and then inverse-scrambled to recover the original image. The decryption process requires several parameter values, such as the initial values of NCCS; their detailed derivation is given in Sections 4.1 and 4.2.
The specific decryption steps are as follows.
The reverse process of diffusion
Step 1: According to the diffusion formula, the inverse diffusion process of ciphertext image C is carried out, and the pre-diffusion image C 1 is obtained.
Step 2: The image C_0 is transformed from a one-dimensional matrix into a two-dimensional matrix, and C_1 is obtained.
The reverse process of scrambling
Step 1: The inverse of the one-dimensional Arnold scrambling is applied to the image C_1; the chaotic sequences generated by the NCCS system are needed in this process.
Step 2: Finally, the original image is obtained by the inverse of the image pre-scrambling process.
Simulation results and performance analysis of remote sensing images
This section selects a 512 × 512 × 6 Landsat4-5 remote sensing image fragment for encryption. The remote sensing image used in this chapter and the resulting ciphertext image are both six-channel Tif images and cannot be viewed directly, so the false-color display method is used to present the experimental results, i.e., three bands are selected as the R, G, and B channels of a color image.
Simulation results of remote sensing images
The false-color rendering of the original image of this simulation experiment is shown in Fig. 9a, and the image of each band of the original image is shown in Fig. 8. The encrypted ciphertext image is in Tif format, and its false-color rendering is shown in Fig. 9b; the decrypted image is still in Tif format, and its false-color rendering is shown in Fig. 9c.
Keyspace analysis
The keyspace is the total number of different keys that can be used in a cryptographic system and is an important measure of resistance to brute-force attacks. Theoretically, the larger the keyspace, the stronger the algorithm's ability to resist such attacks. The key of the proposed algorithm is derived from a 512-bit binary string, so its keyspace size is 2^512. This keyspace is large enough to effectively resist brute-force attacks and enhance the security of the encryption.
Key sensitivity analysis
Key sensitivity is also an important indicator of a cryptographic algorithm: a small change in the key must prevent correct decryption of the image, ensuring the security of image encryption. Decryption with the correct key key_3 yields the image shown in Fig. 8a; changing the 18th character of key_3 from '9' to 'e' gives the key key_4, and decrypting with key_4 yields the image shown in Fig. 10b. Clearly, changing one character of the 128-character hexadecimal key key_3 produces a decryption result far different from the original image, which shows that the algorithm has good key sensitivity and can ensure the security of the encrypted image. In addition, the correlation of two ciphertext images generated with two different encryption keys is tested [47]. As shown in Fig. 10c, d, e and f, the diagonal correlation coefficients of the two ciphertext images are 0.001922 and 0.000902. These values further confirm the sensitivity of the key.
Histogram analysis
The histogram of an image reflects the distribution of its pixel values. The histogram of an original image is usually uneven, and attackers often use statistical analysis to extract important information from it. The histogram of the Landsat4-5 remote sensing image is shown in Fig. 12a, the histogram of each band in Fig. 11, and the histogram of the ciphertext image in Fig. 12b. It can be seen that the histograms of the original image and of its bands are uneven, while the histogram of the ciphertext image is very uniform, making it difficult for an attacker to obtain important information by statistical analysis and thereby ensuring the security of the image to a certain extent.
Correlation analysis between adjacent pixels
In plaintext images, the correlation between adjacent pixels tends to be strong, and one purpose of encryption is to break this correlation. The size of the correlation coefficient is an important indicator of the algorithm's ability to resist attacks: the closer the correlation coefficient is to the ideal value of 0, the better the encryption algorithm. Therefore, the encryption algorithm should make the correlation coefficient between adjacent pixels as close as possible to the ideal value. In this paper, pixel pairs are randomly selected from the bands of the plaintext image, from the plaintext image itself, and from the ciphertext image to perform correlation analysis of adjacent pixels.
The test results for the correlation of adjacent pixels in the diagonal direction of each band of the Landsat4-5 remote sensing image are shown in Fig. 13. The test results for the correlation of adjacent pixels in the horizontal, vertical, and diagonal directions of the Landsat4-5 remote sensing image are shown in Fig. 14a, b and c, respectively, and those of the ciphertext image in Fig. 14d, e and f, respectively. It can be observed that the adjacent-pixel distributions of the plaintext image and its bands are relatively concentrated, while the adjacent-pixel distribution of the ciphertext image is relatively uniform. To quantify the correlation between pixels in different directions, the correlation coefficients are calculated using Eqs. (14)-(17); the results are listed in Table 2 and compared with other schemes in Table 3. The correlation coefficients of the ciphertext image are close to the ideal value of 0, indicating that the correlation between adjacent pixels is greatly reduced, which ensures the security of the image to a certain extent.
χ² test
The χ² test is used to describe whether the distribution of image pixels is uniform: the more uniform the pixel distribution, the better the performance of the encryption algorithm and the less useful information the ciphertext contains, so the χ² value should be as small as possible for the algorithm to be secure. The χ² test formulas are given in Eqs. (18) and (19), where v_i represents the frequency with which pixel value i appears in the image, M, N, and K represent the length, width, and height of the three-dimensional matrix, and v̄ is the average frequency. Table 4 lists the χ² values of the Landsat4-5 remote sensing image, of each of its bands, and of the ciphertext image. By comparison, the χ² value of the ciphertext encrypted by this algorithm is much lower than that of the plaintext image. Therefore, the pixel distribution of the ciphertext image obtained by the proposed scheme is relatively uniform, the encryption algorithm performs well, and the security of the image can be better guaranteed.
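The following Python sketch computes the two statistics discussed above, the adjacent-pixel correlation coefficient (Eqs. (14)-(17)) and the χ² statistic of the pixel histogram (Eqs. (18)-(19)), for a single band; the random test band merely stands in for a ciphertext channel.

```python
import numpy as np

def adjacent_correlation(img, n_pairs=3000, direction="horizontal", seed=None):
    """Correlation coefficient of randomly selected adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    dx, dy = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    r = rng.integers(0, h - dx, n_pairs)
    c = rng.integers(0, w - dy, n_pairs)
    x = img[r, c].astype(float)
    y = img[r + dx, c + dy].astype(float)
    return np.corrcoef(x, y)[0, 1]

def chi_square(img, levels=256):
    """Chi-square statistic of the pixel histogram against a uniform distribution."""
    counts = np.bincount(img.ravel(), minlength=levels)[:levels]
    expected = img.size / levels
    return float(np.sum((counts - expected) ** 2 / expected))

# Example on a uniformly random 8-bit "ciphertext-like" band.
band = np.random.default_rng(0).integers(0, 256, size=(512, 512), dtype=np.uint8)
print(adjacent_correlation(band, direction="diagonal"), chi_square(band))
```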
Information entropy analysis
Information entropy describes the degree of disorder of the pixel information in an image; the closer the information entropy is to the ideal value of 8, the higher the randomness of the pixel values. It is an important indicator of the quality of an image encryption algorithm. The information entropy is given by Eq. (20), where L is the bit length of a pixel and p(s_i) represents the probability of pixel value s_i appearing. When the information entropy is close to 8, the randomness of the ciphertext is better, i.e., the security is better. The second column of Table 5 lists the information entropy of the original remote sensing image compared with that of the ciphertext image; columns 3-8 compare the information entropy of the original bands with that of the encrypted bands. The test results show that the values obtained by the algorithm are very close to the ideal value of 8, so the algorithm achieves good randomness and hence good security.
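A minimal Python sketch of the information entropy of Eq. (20) for an 8-bit band is given below; for a well-encrypted band the value should be very close to the ideal value of 8.

```python
import numpy as np

def information_entropy(img, levels=256):
    """Shannon information entropy of the pixel values; the ideal value is 8 for 8-bit data."""
    counts = np.bincount(img.ravel(), minlength=levels)[:levels]
    p = counts / counts.sum()
    p = p[p > 0]                        # ignore empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

# A uniformly random 512 x 512 band gives a value very close to 8.
band = np.random.default_rng(1).integers(0, 256, size=(512, 512), dtype=np.uint8)
print(information_entropy(band))
```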
Robustness analysis
Robustness reflects whether the encryption algorithm can resist the interference an image may encounter during transmission. When an image suffers uncertain interference such as an attack during transmission, some pixel values are damaged and some information is lost; a good algorithm can still recover useful information of the plaintext image from a partially destroyed ciphertext image. This section performs noise attacks and cropping attacks on the image to test the robustness of the algorithm. The cropping attack crops the channels of the ciphertext image to three different degrees, namely 1/49, 1/25, and 1/16; the cropped ciphertext images, displayed as false-color images, are shown in Fig. 15a, b and c, and the corresponding decrypted Landsat4-5 images are shown in Fig. 15d, e and f. It can be seen that the algorithm can still restore the original image under different degrees of pixel loss in the ciphertext, so its security is good. The noise attack uses the classical salt-and-pepper noise attack and Gaussian noise attack. Fig. 16a, b and c show the ciphertext images after adding salt-and-pepper noise with densities of 0.01, 0.03 and 0.05, respectively, and the corresponding decrypted Landsat4-5 images are shown in Fig. 16d, e and f. Figs. 17a, b and c show the ciphertext images after Gaussian noise attacks with means 0, 0.05 and 0.05 and variances 0.05, 0.05 and 0.1, respectively; the decrypted images are shown in Figs. 17d, e and f. Clearly, the algorithm can effectively resist noise attacks, and the original plaintext image can still be recognized under different levels of noise.
Differential attack
A differential attack compares and analyzes the encryption results of different plaintexts in order to break a cryptographic algorithm; resisting it requires that a small change in the pixels of a plaintext image produce two completely different ciphertext images. The two metrics that measure resistance to differential attacks are the number of pixels change rate (NPCR) and the unified average changing intensity (UACI), which are often used to quantitatively analyze encrypted images. NPCR and UACI are calculated with Eqs. (21)-(23), where c_1 and c_2 in Eq. (22) are the ciphertext images obtained before and after changing one pixel value of the plaintext image. The NPCR and UACI results are shown in Table 6. The averages of NPCR and UACI are 99.6089% and 33.4374%, respectively, very close to the ideal values of 99.609% and 33.464%, which indicates that the proposed algorithm is highly sensitive to small changes in the plaintext; compared with other algorithms, it has a good ability to resist differential attacks. Known-plaintext and chosen-plaintext attacks are also common attacks in the field of image encryption; the performance of the encryption algorithm against special pixel values is tested by encrypting two special images, a pure white image and a pure black image, whose pixel values are 255 and 0, respectively. Figure 18 shows the experimental result of encrypting the pure white image, and Fig. 19 shows the experimental result of encrypting the pure black image. Table 7 lists some common test data, including the χ² test, information entropy test, and correlation coefficient analysis. The results are close to the ideal values, so the algorithm is sufficient to resist known-plaintext and chosen-plaintext attacks.
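The following Python sketch computes NPCR and UACI as commonly defined for 8-bit images (Eqs. (21)-(23) are not reproduced above, so the formulas used here are the standard ones); two independent random images serve as a sanity check because they behave like an ideal cipher pair.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI between two ciphertext images that differ in one plaintext pixel;
    the ideal values are about 99.609 % and 33.464 % for 8-bit images."""
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = 100.0 * (c1 != c2).mean()
    uaci = 100.0 * np.abs(c1 - c2).mean() / 255.0
    return npcr, uaci

# Toy check with two independent random images.
rng = np.random.default_rng(2)
a = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
b = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))
```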
Operational efficiency analysis
The running efficiency of an algorithm is one of the important criteria for judging its quality. The runtime tests are performed with MATLAB 2020b on an Intel Core i5-7300 CPU with 8 GB RAM under the Windows 10 operating system. For the remote sensing image encryption scheme presented in this article, the encryption time for a 512 × 512 × 6 image is 7.32 s and the decryption time is 6.91 s, so the running efficiency of the algorithm is fairly satisfactory.
Simulation results and performance analysis of multi-type image joint encryption
According to the characteristics of multi-channel encryption, this section jointly encrypts remote sensing images, color images, and grayscale images. In theory, joint encryption imposes no limit on the number or size of images: the maximum length among all images is taken as the length of the three-dimensional matrix, the maximum width as its width, and the sum of the channel numbers of all images as its height, and positions where no pixel exists are filled with 1. Although there is no size limit, an excessive size difference between the images wastes space, so it is recommended to choose images of similar size for joint encryption. The remote sensing image is the same 512 × 512 × 6 Landsat4-5 image as in Chapter 5; the color image is the classic 512 × 512 × 3 Baboon image; the grayscale image is the classic 256 × 256 Lena image, as shown in Fig. 20. The three images form a three-dimensional matrix with 10 channels of size 512 × 512 × 10, in which the undersized Lena image is padded with 1. The ciphertext image obtained by joint encryption is 512 × 512 × 10, the ciphertext image of each channel is shown in Fig. 21, and the decrypted image is shown in Fig. 22.
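The packing rule described above (maximum length and width, summed channel count, fill value 1) can be sketched in Python as follows; the image sizes in the example match those quoted in the text, and random arrays stand in for the actual pictures.

```python
import numpy as np

def pack_images(images, fill_value=1):
    """Pack images of different sizes and channel counts into one 3-D matrix.

    Length/width of the packed matrix are the maxima over all inputs, the height is
    the total channel count, and positions with no pixel are filled with 1, as
    described above.  Grayscale inputs are treated as single-channel.
    """
    images = [img[..., None] if img.ndim == 2 else img for img in images]
    M = max(img.shape[0] for img in images)
    N = max(img.shape[1] for img in images)
    K = sum(img.shape[2] for img in images)
    packed = np.full((M, N, K), fill_value, dtype=np.uint8)
    k = 0
    for img in images:
        m, n, c = img.shape
        packed[:m, :n, k:k + c] = img
        k += c
    return packed

# Example shapes from the text: 512x512x6 + 512x512x3 + 256x256 -> 512x512x10.
rng = np.random.default_rng(3)
rs    = rng.integers(0, 256, (512, 512, 6), dtype=np.uint8)
color = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)
gray  = rng.integers(0, 256, (256, 256),    dtype=np.uint8)
print(pack_images([rs, color, gray]).shape)   # (512, 512, 10)
```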
Common statistical analysis
The histograms of the Landsat4-5 remote sensing image are shown in Figs. 11 and 12a of Chapter 6, the histograms of the color image Baboon and the grayscale image Lena are shown in Fig. 23a and b, and the histogram of the ciphertext image is shown in Fig. 23c. The correlation coefficient test results are given in Table 8.
Robustness analysis
This section tests the robustness of the algorithm by performing noise attacks and cropping attacks on the images. Figure 24a and b show the ciphertext images after a 1/49 cropping attack and a 1% salt-and-pepper noise attack, respectively, displayed as false-color images; the corresponding decrypted images are shown in Figs. 25 and 26. Table 9 gives some common security analysis results, including the χ² test, the information entropy test, and the NPCR and UACI values of the differential attack test. The encryption time of the 512 × 512 × 10 joint image is 10.03 s and the decryption time is 9.84 s, so the running efficiency is good.
Conclusion
In this paper, a remote sensing image encryption scheme based on a low-dimensional chaotic system is proposed, which is also applicable to the joint encryption of remote sensing images, grayscale images, and color images. Compared with existing remote sensing image encryption schemes, this paper mainly solves two problems: first, the encryption of remote sensing images with multiple bands, since the proposed algorithm has no limit on the size or number of bands of the remote sensing image; second, the joint encryption of remote sensing images, grayscale images, and color images with good encryption results. Based on the traditional Chebyshev chaotic system, a new Chebyshev chaotic system is proposed, and the performance tests show that the new system performs better. The encryption process includes scrambling and diffusion: the scrambling consists of pre-scrambling driven by spiral-curve scanning, index scrambling, and one-dimensional Arnold scrambling, while the diffusion adopts the classical addition-modulo cyclic-left-shift method. The simulation results and performance analysis show that the algorithm is sufficient to resist common attacks, with good security and good running efficiency. In addition, the data volume of the multi-type images encrypted by the proposed scheme is large, so the scheme is often combined with image compression techniques to improve its efficiency; the author will continue to study this issue in depth. | 8,038.8 | 2023-05-30T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Evolution of distributed detection performance over balanced binary relay tree networks with bit errors
Target detection based on wireless sensor networks can be considered as a distributed binary hypothesis testing problem. In this paper, the evolution of detection error probability with the increase in the network scale is studied for the balanced binary relay tree network with channel noise. Firstly, the iterative expressions of false-alarm probability and missed-detection probability depending on the number of tree network layers are given. Then, the iterative process of two types of error probabilities in the network space is described as a discrete nonlinear switched dynamic system, and the dynamic properties of two types of error probabilities are analyzed in a plane rectangular coordinate system. A globally attractive invariant set of the state of the dynamic system, which is not related to the channel noise, is derived. The switching mode of the system and the total error probability in the invariant set are further analyzed, and a necessary and sufficient convergence condition of the total error probability is provided. Based on this condition, the following detection properties of the network are revealed: (1) as long as the channel bit error probability is not zero, the total error probability does not tend to zero as the network size grows to infinity; (2) when the channel bit error probability is greater than (2 − √3)/2, the total error probability will continue to increase with the increase in the network size.
Introduction
Target detection based on wireless sensor networks (WSNs) has a very wide and important application background and has received constant attention from industry and academia [1][2][3][4][5][6][7]. In particular, the relationship between distributed detection performance and sensor network size has been studied in detail for different network structures. Parallel structure is the first to receive attention. In the network of this structure, each sensor directly submits its detection results to the fusion center (FC). When each sensor has independent observations, the parallel structure can ensure that the total error probability of FC decreases exponentially with the increase in the network size [8]. However, when the physical distance between a sensor and FC is far away, the parallel structure is not conducive to energy saving and is usually difficult to implement in the case of large-scale networks. An alternative structure is tandem network [9][10][11]. In tandem networks, each node fuses its own detection results with the detection results of lower-level nodes and then transmits the fusion results to an upper-level node. Under this structure, the decision-making error probability at the FC decreases sub-exponentially with the increase in the network size [11]. A tree network is something between parallel structure and tandem structure. In this kind of network, in addition to the FC and the leaf nodes which directly detects the target, there are also a number of relay nodes which are only responsible for local information fusion and transmission. Tree networks have attracted more and more attention because they can take into account both the energy consumption and detection performance in practical applications.
For the tree networks with finite tree height (i.e., when the network size tends to infinity, the tree height is finite), paper [12] proves that under Bayesian criterion the error probability of FC exponentially decays with the increase in the network size although the decay exponent is smaller than that of parallel network. Paper [13] studies a kind of tree network with infinite tree height (i.e., when the network size tends to infinity, the tree height also tends to infinity), the so-called balanced binary relay tree (BBRT) network. Under the premise that all leaf nodes' observations are independent and all relay nodes adopt the same likelihood ratio test (LRT), it is proved by the Lyapunov method that the total error probability of FC converges to zero. In [14], the derivation of optimal fusion rules in the BBRT network is investigated by using dynamic programming method. Under the unit threshold LRT (UT-LRT) rule, paper [15] studies the binary hypothesis testing in the BBRT network, analyzes the dynamical properties of false-alarm probability (probability of Type I error) and missed-detection probability (probability of Type II error), and derives a globally attractive invariant set in the state space of error probabilities of two types. Based on the dynamic property analysis in this invariant set, it is proved that the decay exponent of the total error probability of the network is the square root of the number of network layers.
All the above-mentioned researches are aimed at dealing with ideal networks. In practical applications, however, WSNs are often deployed in complex environments. Many environmental factors may interfere with the communication between sensor nodes, which will reduce the stability and accuracy of communication to a certain extent [16,17]. More and more attention has been paid to the distributed detection performance of WSNs with network failure or communication noise. For the problem of network failure, paper [18] introduces node and link failure probabilities and studies the asymptotic detection performance of the BBRT network under the UT-LRT criterion. In paper [19], the asymptotic detection performance of the M-ary tree network with node and link failures is studied under the majority dominance rule. The upper and lower bounds of the convergence rate of the FC's error probability with respect to the network size are also given. Both [18] and [19] show that the existence of node and link failures with a certain probability does not overthrow the fact that the error probability at the FC decreases to zero as the network size grows to infinity. Besides, the influence of Gaussian noise in communication channels on the distributed detection performance of WSNs has also been extensively investigated in the literature [20][21][22].
The bit error caused by the communication noise in binary communication channels has a completely different impact in comparison with the network failure, and it can change the transferred information from "true" to "false." Paper [16] studies the impact of errors on a distributed detection conducted by belief propagation algorithm in WSN and shows that the detection can effectively be modeled as a distributed linear data fusion scheme. Paper [17] discusses fusion rules for distributed detection in clustered WSNs with communication channels experiencing additive white Gaussian noise and derives an optimal log-likelihood ratio fusion rule. Paper [23] studies the design of a parameter estimator with binary quantitative observations under the binary symmetric channels (BSC) model. Paper [24] compares the advantages and disadvantages of various information fusion rules in the BSC model. A distributed detection method based on Rao test is studied for noisy channels in [25]. Paper [26] considers the case that the bit noise exists in the communication between sensor node and FC, and constructs an extended maximum likelihood estimator. However, there are few relevant reports concerning the questions such as: How does the FC's error probability evolve in WSNs with bit errors with respect to the network size? Will it decay to zero as the networks size grows to infinity? Recently, the authors' group has investigated these questions for the M-ary tree network and has given a negative answer to the second question [27].
In this paper, we consider the distributed detection problem in the BBRT network with bit errors and analyze the asymptotic detection performance of the network described by the BSC model. Firstly, we derive the iterative expression of the false-alarm probability and missed-detection probability at the FC with respect to the number of tree network layers, and model the iterative process of the two kinds of error probabilities as a state-dependent nonlinear switched dynamic system. Comparing the dynamic model of the BBRT derived in this paper with the model of the M-ary tree network given in [27], we find that the former is much more complicated, because it is a switched system in addition to being nonlinear, while the latter is just a nonlinear stationary system. Since it is therefore difficult to carry out an equilibrium analysis as in [27] for this nonlinear switched system, we analyze the dynamic properties of the system by dividing the state space into mode-holding regions and mode-switching regions. Based on this state-space partitioning, a globally attractive invariant set of the state of the dynamic system is obtained. Finally, we study the switching mode of the dynamic system in the globally attractive invariant set and give a necessary and sufficient condition for the convergence of the total error probability at the FC. It is pointed out that the critical value of the bit error probability beyond which the total error probability cannot converge is (2 − √3)/2. This critical value is meaningful for the design of large-scale wireless sensor networks.
The research method of this paper is greatly inspired by paper [18]. However, unlike the node and link failure discussed in [18], the BSC channel discussed in this paper will produce a signal with the opposite meaning to the source signal. The introduction of bit error probability in the dynamic model makes the evolution process of detection error probability much more complex, and the derivation and proof of the invariant set become much more difficult. And the result obtained in this paper is also quite different from that of [18]: The conclusion that the total error probability will decay to zero with the infinite growth of network size is no longer true for the BSC model.
The following notations are used in this paper. P(·) denotes probability. α_k, β_k and d_k = 1 − β_k denote the false-alarm probability, missed-detection probability and detection probability of a node at level k in a BBRT network, respectively. The channel bit error probability is denoted by b, and the closure of a set A is denoted by Ā.
The rest of this paper is organized as follows. The problem formulation and dynamic model of detection performance over the BBRT network are presented in Sect. 2. In Sect. 3, we study the evolution of the detection error probabilities of the two types by dividing their state space into different regions. In Sect. 4, a channel-noise-independent and globally attractive invariant set of the detection error probabilities is constructed; the switching mode of the dynamic system in the invariant set is also studied in this section. The asymptotic property of the total error probability is analyzed in Sect. 5. The conclusion is drawn in Sect. 6.

We consider the target detection as a binary hypothesis testing problem: hypothesis H_1 corresponds to the appearance of the target and hypothesis H_0 corresponds to its absence. Let such a detection be conducted in a BBRT network with channel noise as shown in Fig. 1, where circles represent sensors (also called leaf nodes), which make measurements, squares represent relay nodes, which only fuse messages, the rectangle represents the fusion center, which makes the overall decision, k denotes the layer number of the tree, and N is the total number of sensors. Each leaf node makes independent detections of the same event, summarizes the result into a binary message, and sends the message to its parent node. Each parent node receives the messages sent by its two child nodes, fuses the two binary messages into one binary message, and forwards the new binary message to its own parent node. This process is repeated until the message reaches the root node, which is the fusion center. The channel noise modeled by the BSC is shown in Fig. 2, and the bit error probability of the BSC is denoted by b. When a child node sends X to its parent node, the parent node receives 1 − X with probability b and receives X with probability 1 − b.
The following assumptions are needed for our further study. Assumption 1: All sensors are independent, with identical false-alarm probability α_0 and identical missed-detection probability β_0, which satisfy α_0 + β_0 < 1.
Assumption 2
All channels in the network have the same bit error probability b ∈ [0, 1/2).
The part of Assumption 1 stating that all sensors are independent, with identical false-alarm probability and identical missed-detection probability, is made to allow an iterative analysis of the BBRT network (see, e.g., [15,18]). For the BSC model, the assumption α_0 + β_0 < 1 and Assumption 2 do not reduce the generality of the results of this paper, because the case α_0 + β_0 > 1 or the case b ∈ (1/2, 1] can be considered as a folded case: if we redefine the binary decision 0 as 1 and 1 as 0, we return to the case α_0 + β_0 < 1 or the case b ∈ [0, 1/2). Suppose that the UT-LRT rule is applied at each relay node and the fusion center. It has been shown that this fusion rule is locally optimal if the detected event has equal a priori probabilities, i.e., P(H_0) = P(H_1) = 1/2 (see, e.g., [13]).
Considering x as the type I (or type II) error probability of a decision, F(b, x) is the type I (or type II) error probability of that decision after it passes through the BSC channel with bit error probability b. Apparently, for any x ∈ [0, 1] and y ∈ [0, 1], the following equivalence holds. Theorem 1 Under Assumptions 1, 2 and the UT-LRT rule, the relation between the false-alarm probability and missed-detection probability of two consecutive layers of the BBRT network with bit errors can be described by Eq. (4), where k ≥ 0 denotes the layer of the tree.
Proof First, let us analyze a local network consisting of a parent node and two child nodes, where both child nodes are leaves. By Assumption 1, the leaf nodes have identical false-alarm probability α_0 and identical detection probability d_0 = 1 − β_0. Denote by X the message that the parent node receives from one child node, by Y the message received from the other child node, and let Z = (X, Y). By Assumption 1 on the independence of the leaf nodes and Assumption 2 on the bit error probability, the distribution of Z under each hypothesis is easily obtained. Denote by D the decision made by the parent node based on the observation Z and the UT-LRT rule. By the definitions of the false-alarm probability and the missed-detection probability, we consider the following cases. Case 1: Z = (0, 0). In this case, taking the decision D = 1 under the UT-LRT rule would lead to a condition that contradicts Assumption 1, so when Z = (0, 0) the decision D = 1 can never be taken under UT-LRT.
Case 2: Z = (1, 0). In this case, the UT-LRT rule implies that Case 3: Z = (0, 1). It is easy to see that this case is equivalent to the case that Z = (1, 0).
Case 4: Z = (1, 1). In this case, the UT-LRT rule implies that the decision D = 1 is always taken when Z = (1, 1). Summarizing the above four cases gives the fused error probabilities for the case α_0 ≤ β_0 and for the case α_0 > β_0, and thus Eq. (4) in Theorem 1 holds for k = 0. This equation also shows that all nodes at level 1 have identical false-alarm probability α_1 and identical missed-detection probability β_1. Moreover, by Eqs. (2), (3) and (4), it follows from α_0 + β_0 < 1 that α_1 + β_1 < 1 also holds. Since the UT-LRT rule is applied to all nodes in the tree, and all channels in the tree have the same bit error probability, one can repeat the above proof procedure from k = 1 to any positive integer and obtain the conclusion that Eq. (4) holds for all k ≥ 0.
The iterative Eq. (4) can be regarded as a discrete dynamic system, which determines how the false-alarm probability α_k and the missed-detection probability β_k evolve in the BBRT with respect to the tree level k. Obviously, it is a state-dependent nonlinear switched system. Paper [15] has studied this system for the special case b = 0, i.e., the case of no bit error: an invariant set for the system state has been constructed, and with the help of the analysis of the system dynamics in this invariant set it has been shown that both the false-alarm probability and the missed-detection probability converge to zero as the number of sensors in the network increases to infinity. However, in this paper we will show that neither the false-alarm probability nor the missed-detection probability converges to zero in the case b ≠ 0, and that the system has very rich and complicated nonlinear dynamic properties. In the following sections, we will study the relationship between the convergence property of the system and the channel parameter b by dividing the state space according to the iteration mode of the system.
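Eq. (4) is not reproduced here, but its form can be pieced together from expressions that appear later in the text (e.g. α_{k+1} = 1 − (1 − F(b, α_k))² and L_{k+1} = 1 − (1 − F(b, α_k))² + (F(b, β_k))² for the case α_k ≤ β_k, with F(b, x) the error probability after the BSC). The following Python sketch iterates that reconstructed recursion and prints the total error probability after 30 levels for a few bit error probabilities; it is an illustration of the claimed behavior, not the authors' exact formulation.

```python
def F(b, x):
    """Error probability after a decision with error probability x passes through a BSC
    with bit error probability b (the role of F(b, x) described in the text)."""
    return (1.0 - b) * x + b * (1.0 - x)

def bbrt_step(alpha, beta, b):
    """One level of the BBRT recursion under UT-LRT with bit errors.

    Sketch only: the two iteration modes follow the expressions appearing later in the
    text for the case alpha_k <= beta_k, with the symmetric mode used otherwise.
    """
    fa, fb = F(b, alpha), F(b, beta)
    if alpha <= beta:                            # mode f1 (state in the upper region U)
        return 1.0 - (1.0 - fa) ** 2, fb ** 2
    return fa ** 2, 1.0 - (1.0 - fb) ** 2        # mode f2 (state in the lower region L)

# Evolution of the total error probability L_k = alpha_k + beta_k over 30 levels.
for b in (0.0, 0.05, 0.15):
    a, be = 0.2, 0.3
    for _ in range(30):
        a, be = bbrt_step(a, be, b)
    print(f"b = {b:.2f}:  L_30 = {a + be:.4f}")
```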
Mode-switching region in state space
As mentioned in the Proof of Theorem 1, α_0 + β_0 < 1 implies that α_k + β_k < 1 for all k ≥ 0. The state space of the dynamic system (4) is therefore a right triangle in the rectangular coordinate system denoted by (α, β), which consists of the upper part U and the lower part L given below. We note that Eq. (4) is symmetric about the line α = β. Hence, if a dynamic property of system (4) holds in region U, it must also hold in a symmetric way in region L. We refer to this as the dynamic symmetry of system (4) and use it to shorten the proofs of most conclusions in this paper. The iteration of system (4) switches between two modes depending on whether the state of the system falls in L or U at each step. To study the switching mode of the system, we define the mode-switching region of the dynamic system (4) as follows.
Definition 1 A set A_s in the plane (α, β) is referred to as a mode-switching region of dynamic system (4) if the iteration mode of the system switches at the next step whenever the state falls in A_s. Apparently, B_1U and B_1L are symmetric with respect to the line β = α.
According to the dynamic symmetry of the system, the symmetric statement is also true. Proposition 1 thus says that B_1 := B_1U ∪ B_1L is a mode-switching region of system (4).
The following proposition reveals the relationship of the mode-switching regions of the system in the noisy channel case and noise-free case.
Proof of Proposition 2 is given in Appendix A, and Proposition 2 is illustrated by Fig. 3. Corresponding to the mode-switching region, the mode-holding region is defined as follows.
Definition 2 A set A_h in the plane (α, β) is referred to as a mode-holding region of dynamic system (4) if the iteration mode of the system does not switch at the next step whenever the state falls in A_h.
For convenience of discussion, let us define a function D(b, ·), denote the mth composition of this function as D_m(b, ·), and then further divide the state space of system (4) as follows: B_mU = {(α, β) : … F(b, β)) < 1}, where F(b, ·) is defined by (1). Apparently, B_mU and B_mL are symmetric about the line β = α.
Proof of Proposition 3 is given in Appendix B. According to Proposition 3, if the initial state (α_0, β_0) ∈ B_mU, where m ≥ 2, the iterative trajectory of (α_k, β_k) in the (α, β) plane follows the order B_mU → B_(m−1)U → · · · → B_2U → B_1U; similarly, if (α_0, β_0) ∈ B_mL, where m ≥ 2, the iterative trajectory of (α_k, β_k) follows the order B_mL → B_(m−1)L → · · · → B_2L → B_1L. Therefore, the regions B_m, m ≥ 2, are mode-holding regions, because the iterations from B_mU to B_1U, or from B_mL to B_1L, share the same iteration mode f_1 or f_2.
Evolution in the whole state space
We have shown that B_1 is a mode-switching region and that the sets B_m := B_mU ∪ B_mL are mode-holding regions for all m ≥ 2; in other words, · · · ∪ B_m ∪ B_(m−1) ∪ · · · ∪ B_2 can be considered as a mode-holding region composed of upper and lower parts. In the state space of the system, is there any other area outside ∪_{m=1}^∞ B_m? The following proposition gives a negative answer to this question.
Proof of Proposition 4 is given in Appendix C. Proposition 4 says that ∪_{m=1}^∞ B_m covers the whole state space U ∪ L. Therefore, by Proposition 3, all of the area outside B_1 in the state space is mode-holding, and the following corollary is obvious.
Corollary 1
For any initial state (α_0, β_0) ∈ U ∪ L, there must exist a positive integer k* such that the trajectory of system (4) starting from (α_0, β_0) enters the region B_1 after k* iterations, i.e., (α_{k*}, β_{k*}) ∈ B_1. Figure 4 presents a simulation example of the iterative trajectory when (α_0, β_0) ∈ U \ (B_1U ∪ B_2U). From the trajectory, we see that before it enters B_1 there is no switching from U to L or from L to U, which implies that there is no switching between the iteration modes f_1 and f_2. Once the trajectory enters B_1, at the next step it must jump from U to L or from L to U, which implies that the iteration mode switches from f_1 to f_2 or from f_2 to f_1. However, the results obtained up to now do not tell us into which region it falls after such a jump, a mode-switching region or a mode-holding region. This problem is left for the next section.
Globally attractive invariant set
In this section, we investigate the dynamic properties of system (4) after its trajectory enters the region B 1 .
Firstly, let us define the regions R_U and R_L as below and denote R = R_U ∪ R_L.
Proposition 5 Consider dynamic system (4).
The case of (α k , β k ) ∈ B 1U can be proved similarly.
Proof of Proposition 6 is given in Appendix D. The upper boundary curves of R_U and B_1U are given by Eqs. (7) and (8), respectively. Observing these equations, we find that the upper boundary of R_U goes through the origin of the coordinate system, but the upper boundary of B_1U does not when b ≠ 0. Combining Eqs. (7) and (8) yields Eq. (9), which can be considered as a quartic equation in t = √α.
Note that the admissible value range of α is [0, 1/4), because the abscissa of the intersection of the upper boundary of R_U with the line α + β = 1 is 1/4. Considering this value range of α and using the discriminant of the quartic equation, we know that when b ∈ ((3 − 2√2)/2, 1/2), Eq. (9) has no solution, and when b ∈ (0, (3 − 2√2)/2), Eq. (9) has only one solution. Substituting this solution into (8) gives the corresponding ordinate of the intersection. Symmetrically, the intersection coordinate of the lower boundary curves of B_1L and R_L is obtained. Now, we are ready to prove the following theorem.
Theorem 2
The region R is a globally attractive invariant set in the state space of system (4). Proof By Corollary 1, we know that for any initial state (α_0, β_0) ∈ U ∪ L, there must exist a positive integer k* such that the trajectory of system (4) starting from (α_0, β_0) enters the region B_1 after k* iterations, i.e., (α_{k*}, β_{k*}) ∈ B_1. Then, by Proposition 5, we have (α_{k*+1}, β_{k*+1}) ∈ R. The rest of this proof is to show that R is an invariant set, i.e., that the system trajectory remains in R for all k > k*.
By Proposition 6 we know R ⊂ B_1 ∪ B_2. If R ∩ B_2 = ∅, i.e., R ⊂ B_1, using Proposition 5 again we have (α_{k*+2}, β_{k*+2}) ∈ R, and so on; the theorem is thus true in this case (see Fig. 5 for illustration). Now, we only need to consider the remaining case R ∩ B_2 ≠ ∅. For this case we just need to prove that once (α_{k*}, β_{k*}) ∈ R_U ∩ B_2U, its next step (α_{k*+1}, β_{k*+1}) will not fall in the region B_1U \ R_U. By Proposition 7, R ∩ B_2 ≠ ∅ implies that there is only one intersection point between the two boundary curves, a part of the region R is inside B_2, and the other part of R is outside B_1 (see Fig. 6 for illustration). Since R ⊂ B_1 ∪ B_2 by Proposition 6, to ensure that R is an invariant set in this case it suffices to prove the following fact: once the system state falls in the region R_U ∩ B_2U, its next step will not enter the region B_1U \ R_U. This fact is indeed true because the region B_1U \ R_U is on the left of the region B_2U ∩ R_U (i.e., the abscissa of any point in B_1U \ R_U is less than the abscissa of any point in B_2U ∩ R_U), and α_{k+1} = 1 − (1 − F(b, α_k))² is increasing in α_k. The following theorem characterizes the dynamic property of the system trajectory in the invariant set R.
and system (4) in R is in a switching iteration mode with the maximum beat number of 2.
Proof The proof is obvious based on Propositions 6, 7 and Theorem 2.
Asymptotic property of total error probability
The total error probability L_k at layer k of the tree network is the sum of the false-alarm probability and the missed-detection probability at the same layer, i.e., L_k = α_k + β_k. The total error probability is a very important performance index of a detection network. Since the false-alarm probability and the missed-detection probability usually do not increase or decrease simultaneously, the asymptotic property of the total error probability with the increase in the detection network size is an important topic worthy of attention.
Proposition 8 Consider dynamic system (4). L_{k+1} (<, >, =) L_k is equivalent to a corresponding inequality in (α_k, β_k).
Proof We give the proof only for the ">" part, because the proofs for "<" and "=" are the same. Moreover, because of the dynamic symmetry of the system, we only need to consider the case α_k ≤ β_k. Since L_k = α_k + β_k and L_{k+1} = 1 − (1 − F(b, α_k))² + (F(b, β_k))², the inequality L_{k+1} > L_k can be rewritten accordingly, and by Eq. (1) it is equivalent to the stated condition. The proposition is thus proved.
Now, let us define
Based on these definitions and Proposition 8, the proof of the following theorem is obvious.
Theorem 4 Consider dynamic system (4).
Theorem 4 provides a necessary and sufficient condition of the convergence of the total error probability. From Theorems 2 and 4, it is natural to draw the following corollary.
Corollary 2
If b ≠ 0, then L_k does not approach zero as k → ∞.
Proof We prove this corollary by contradiction. Suppose that L_k → 0 as k → ∞. By Theorem 2, we know that (α_k, β_k) will remain in the invariant set R when k is large enough. If L_k → 0, then α_k → 0 and β_k → 0. This implies that (α_k, β_k) will enter a sufficiently small neighborhood of (0, 0) as k becomes large enough. Noticing that R is not related to b, we know that the intersection of R with this neighborhood must belong to M as long as b ≠ 0. By Theorem 4, once (α_k, β_k) enters M, L_{k+1} > L_k always holds. This contradicts the hypothesis L_k → 0.
Although (0, 0) is an equilibrium point of dynamic system (4), Corollary 2 tells us that when there are bit errors in communication channels of the tree detection network, it is impossible to make the total error probability of the fusion center tend to zero by increasing the scale of the network.
Since M is related to b but R is not, it can be expected that when b is large enough, the entire invariant set R may fall into M, which means that when k is large enough, the total error probability L k is continuously increasing. Obviously, this is a situation that should be avoided in network design. In addition, since B 1 is the only mode-switching region in the state space, clarifying the relationship between region M and region B 1 is helpful to analyze the iterative mode of the dynamic system when (α k , β k ) falls into the non-decreasing region M for L k .
Theorem 5
The following statements are true for dynamic system (4).
Corollary 3 Consider dynamic system (4). If b ∈ ((2 − √3)/2, 1/2), then for any initial state (α_0, β_0), there must exist a positive integer k* such that the total error probability L_k is continuously increasing when k ≥ k*.
In order to avoid the situation mentioned in Corollary 3, in the design of wireless sensor networks the bit error probability must be kept in the range b < (2 − √3)/2.
Conclusion
In this paper, the evolution of detection performance of balanced binary tree networks with channel noise is studied. Firstly, the iterative rules of false-alarm probability and missed-detection probability are modeled as a state-dependent nonlinear switched system. Then, according to the channel bit error probability b, the system state space is divided into two parts: the mode-switching region and the mode-holding region. The iterative trajectory of the system state is described, and a globally attractive invariant set which is independent of the channel bit error probability is found. The necessary and sufficient condition for the convergence of the total error probability of the detection network is obtained. Through our analysis, the following important conclusions are obtained: (1) as long as the channel bit error probability is not zero, the total error probability of the detection network will not tend to zero with the increase in the network size, which is essentially different from the conclusion obtained when the network has node and link failures [18]; (2) when the channel bit error probability is greater than (2 − √3)/2, the total error probability keeps increasing with the network size. Several questions remain open. Firstly, for smaller bit error probabilities, since R\M ≠ ∅ and M\R ≠ ∅ (see Theorem 5), it is currently not clear whether the total error probability converges to a positive constant. Secondly, for the researchers of nonlinear dynamics, the unknown (possibly aperiodic) switching law of the system in the case b ∈ (0, (3 − 2√2)/2) (see Theorem 3 and Fig. 8) may seem attractive. Finally, all our results are obtained under the assumption that the sensors make independent detection observations and have identical detection properties, which is of course restrictive. The asymptotic detection performance of the network with correlated sensors is worthy of further study.
It implies that the abscissa of the intersection point of the upper boundary of B_{1U} with the line α + β = 1 is less than 1/2. Considering the fact that 1 − 2b > 0 by Assumption 2 and 1 − F(b, α) − F(b, β) ≥ 0 by (3), we get β_b > 0. The proposition is thus proved.
Proof of Proposition 3
Suppose that (α_k, β_k) ∈ B_{mU}, m ≥ 2. By the definition of B_{mU}, we get that and Note that here we let D_0(b, y) = y 2 for simplicity of notation.
By (4), it is easy to get Hence, from (10) and (11) it follows that Note that here we assume D_{−1}(b, F(b, x)) = x and D_{−1}(b, 1 − F(b, x)) = 1 − x for simplicity of notation. The above two inequalities imply that (α_{k+1}, β_{k+1}) ∈ B_{(m−1)U}. Due to the dynamic symmetry of the system, the case of (α_k, β_k) ∈ B_{mL} can be proved similarly.
Proof of Proposition 4
Due to the dynamic symmetry of the system, we just need to prove U ⊆ ∪_{m=1}^{∞} B_{mU}. Let us check the intersection point of the upper boundary of B_{mU} with the line α + β = 1. The upper boundary equation of B_{mU} is given by D_{m−1}(b, 1 − F(b, α)) + D_{m−1}(b, F(b, β)) = 1.
Combining this equation with the line equation α + β = 1, we get In the proof of Proposition 2, we have shown that for m = 1, this equation implies β = Using mathematical induction, we can show that Eq. (12) implies which indicates that the ordinate of the intersection point of the upper boundary of B_{mU} with the line α + β = 1 is not less than 1 as m → ∞. This also implies that the ordinate of the intersection point of the upper boundary of B_{mU} with the axis α = 0 is not less than 1.
Since the convexity of the upper boundary curve of B_{mU} is obvious, the upper triangular region of the state space of the system lies below this boundary as m → ∞, i.e., U ⊆ ∪_{m=1}^{∞} B_{mU}. Proposition 4 is thus proved.
Proof of Proposition 6
Due to the dynamic symmetry of the system, we just need to prove R_U ⊂ B_{1U} ∪ B_{2U}, i.e., to prove that the upper boundary of B_{2U} is always higher than the upper boundary of R_U. Our proof will use the following lemma, which is proved in [15].
Proof of Theorem 5
By the dynamic symmetry of the system, the proof is just provided for the region U.
"Computer Science"
] |
Lung Function Measurements in Rodents in Safety Pharmacology Studies
The ICH guideline S7A requires safety pharmacology tests including measurements of pulmonary function. In the first step – as part of the “core battery” – lung function tests in conscious animals are requested. If potential adverse effects raise concern for human safety, these should be explored in a second step as a “follow-up study”. For these two stages of safety pharmacology testing, both non-invasive and invasive techniques are needed, which should be as precise and reliable as possible. A short overview of typical in vivo measurement techniques is given, their advantages and disadvantages are discussed, and from these the non-invasive head-out body plethysmography and the invasive but repeatable body plethysmography in orotracheally intubated rodents are presented in detail. For validation purposes, the changes in the respective parameters such as tidal midexpiratory flow (EF50) or lung resistance have been recorded in the same animals in typical bronchoconstriction models and compared. In addition, the technique of head-out body plethysmography has been shown to be useful to measure lung function in juvenile rats starting from day two of age. This allows safety pharmacology testing and toxicological studies in juvenile animals as a model for the young developing organism as requested by the regulatory authorities (e.g., EMEA Guideline 1/2008). It is concluded that both invasive and non-invasive pulmonary function tests are capable of detecting effects and alterations on the respiratory system with different selectivity and area of operation. The use of both techniques in a large number of studies in mice and rats in the last years has demonstrated that they provide useful and reliable information on pulmonary mechanics in safety pharmacology and toxicology testing, in investigations of respiratory disorders, and in pharmacological efficacy studies.
INTRODUCTION
Safety pharmacology studies are necessary for the development of drugs and for protection of clinical trial participants and patients from potential adverse effects. The ICH guideline S7A recommends safety pharmacology tests including measurements of pulmonary function. There are no differences in the guidelines of the European Union, the USA and Japan since ICH S7A has been adopted by the EMA, the FDA, and the MHLW. Their objective is to identify potential adverse or undesirable effects of a compound in relation to dosage within the compound's therapeutic range and above. Those safety pharmacology studies on the respiratory system are typically small studies, mostly independent of toxicological studies, with single treatment or inhalation exposure conducted in accordance with GLP guidelines for regulatory submission. Usually these studies are performed in rodents (mostly in rats, rarely in mice).
The principles governing ventilation, air flow, lung volume, and gas exchange are common among most if not all mammals (Costa and Tepper, 1988). Inhalation toxicological studies and studies using specific experimentally induced lung diseases in animals have shown that functional responses of man and animals to different types of lung injury are similar (Mauderly, 1988, 1995). Examination of pulmonary function is a non-destructive procedure of assessing the functional consequences of alterations of lung structure or (temporary) changes in the tonus of airway smooth muscle cells, providing information on the presence, the type, and the extent of alteration (Mauderly, 1989). Existing methods for measuring respiratory function in vivo include non-invasive and invasive technologies.
Changes in respiratory function can result either from alterations in the pumping apparatus including nervous and muscular components that controls the pattern of pulmonary ventilation or from changes in the mechanical properties of the gas exchange unit consisting of the lung with its associated airways, alveoli, and interstitial tissue (Murphy, 2002). Defects in pumping apparatus and reflex-related alterations can change the breathing pattern and are tested non-invasively in a conscious animal model. Defects in mechanical properties of the lung can result in obstructive or restrictive disorders which often can also be detected by non-invasive lung function parameters but can be better evaluated by invasive lung function tests and pulmonary maneuvers in anesthetized animals due to their higher sensitivity and specificity.
LUNG FUNCTION MEASUREMENT TECHNIQUES FOR RODENTS -AN OVERVIEW
Lung function is a relevant endpoint in pharmacological studies (e.g., in models for asthma, COPD, or infection), in safety pharmacological studies performed according to the ICH guideline S7A (core battery, part lung, and follow-up), and finally in toxicological studies, particularly if the airways are in the focus of interest (e.g., tests on allergenic or irritant potential and functional tests on obstructive or restrictive lung alterations or diffusion disorders). The safety pharmacological and toxicological studies have to be performed in compliance with the GLP Principles.
The existing methods to measure pulmonary function in rodents in vivo are divided into invasive and non-invasive approaches, which all have their advantages and disadvantages (for a short overview, see Table 1). However, such experiments present particular technical challenges, and each method lies somewhere along a continuum on which non-invasiveness must be traded off against experimental control and measurement precision (Bates and Irvin, 2003). As an extreme of non-invasiveness, unrestrained plethysmography (Penh) in conscious mice or rats is highly convenient but provides respiratory measures that are so tenuously linked to respiratory mechanics that they were seriously questioned recently by several authors (Lundblad et al., 2002; Mitzner and Tankersley, 2003; Adler et al., 2004; Bates et al., 2004), discussed in detail in Section "Advantages and Disadvantages of Invasive and Non-Invasive Techniques". At the other extreme, the measurement of input impedance in anesthetized, paralyzed, tracheostomized mice is precise and specific but requires that an animal be studied under conditions far from natural (Bates and Irvin, 2003).
In the following, two well-established methods - an invasive and a non-invasive technique - are presented, which we use in our labs at the Fraunhofer ITEM to record lung function repetitively in rats and mice in vivo. In safety pharmacology studies of stage one (core battery), as well as in tests on irritant potential, the non-invasive head-out plethysmography technique is used. In safety pharmacology stage two - the follow-up studies - or in some toxicological studies, which both need the most sensitive endpoints, the invasive technique in intubated rodents is used.
NON-INVASIVE HEAD-OUT PLETHYSMOGRAPHY IN CONSCIOUS RODENTS FOR SAFETY PHARMACOLOGY CORE BATTERY STUDIES
The ICH guideline S7A requires the assessment of effects on the respiratory system as one of the three "vital organ systems that should be studied in the core battery". In this first stage of the safety pharmacology package mostly non-invasive techniques are used in conscious rodents which avoid the need of anesthesia. Yves Alarie and coworkers have shown already in 1993-1994 that this technique is highly useful to assess effects on breathing pattern and to detect sensory irritation, pulmonary irritation, and airflow limitation (Vijayaraghavan et al., 1993, 1994). Subsequently, a lot of studies have been performed using head-out plethysmography for validation purposes and to test on adverse effects of chemicals and drugs, amongst others with inhalation exposure to allergens or bronchoconstricting agents (Neuhaus-Steinmetz et al., 2000; Glaab et al., 2001, 2002, 2005; Hoymann et al., 2009; Legaspi et al., 2010; Nirogi et al., 2012). Today, the head-out body plethysmography is a well-established and widely accepted technique which has been proven as a reliable method to measure pulmonary function (for reviews, see Murphy, 2002; Hoymann, 2006, 2007; Renninger, 2006; Glaab et al., 2007).
Prior to the measurements, the animals are trained for 5 days in increasing time periods to get accustomed to the plethysmograph. For lung function measurements for the core battery, the animals are placed in body plethysmographs while the head of each animal protrudes through a neck collar of a dental latex dam into a head exposure chamber (see Figure 1). This chamber consists of a 5.9 L Plexiglas® cylinder in the rat system, which is ventilated with a continuous bias flow of ca. 1 L/min for two rats for lung function measurements.
Table 1 | Short overview of in vivo methods used to measure lung function in rodents.
Monitoring of pulmonary function is started when the animals and the individual measurements have settled down to a stable level ("steady state", after about 4-5 min in rats). The respiratory flow is measured as the flow through a calibrated pneumotachograph connected with the plethysmograph and caused by the thoracic movements of the animal. The flow is measured by using a differential pressure transducer connected with the pneumotachograph. From the amplified flow signals several parameters are obtained: the tidal volume (V T ) of the spontaneously breathing animal in mL, its respiratory rate (f, in min⁻¹), the respiratory minute volume (MV, mL), the tidal midexpiratory flow (EF 50 , mL/s, see below), and the times of inspiration and expiration (TI, TE; time taken to inspire/expire, ms) are calculated for each breath with commercial software (HEM, Notocord, France). In addition, two parameters can be evaluated which can indicate irritation effects: the time of brake (TB) quantifies an elongation of the period from the end of the inspiration until the start of the expiration, and the time of pause (TP) quantifies an elongation of the period from the end of the expiration until the start of the new inspiration (in ms).
If an airflow limitation is present, e.g., caused by bronchoconstriction, edema, or accumulation of mucus, the main changes in the tidal flow signal occur during the midexpiratory phase: EF 50 (mL/s) is defined as the tidal flow at the midpoint (50%) of expiratory V T (see Figure 2), and is used as a measure of bronchoconstriction/-obstruction (Glaab et al., 2001, 2002, 2005; Hoymann, 2007). We and others have described the utility of EF 50 as a physiologically meaningful, non-invasive parameter of bronchoconstriction for rats and mice (Vijayaraghavan et al., 1994; Neuhaus-Steinmetz et al., 2000; Finotto et al., 2001; Glaab et al., 2001, 2002, 2005; Hantos and Brusasco, 2002; Hoymann, 2007; Nirogi et al., 2012). The degree of bronchoconstriction in response to inhalation challenge was determined from minimum values of EF 50 and was expressed as percent changes from corresponding baseline values.
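To make the EF 50 definition concrete, the following minimal sketch shows how the parameter could be derived from a digitized flow signal of a single breath. It is an illustrative Python example and not the HEM/Notocord implementation; the sign convention (inspiration positive), the sampling rate, and the synthetic sinusoidal breath are assumptions made for the sketch.

```python
import numpy as np

def ef50_and_tidal_volume(flow, fs):
    """Estimate tidal volume (VT) and tidal midexpiratory flow (EF50)
    for one breath from a digitized flow signal.

    flow : array of respiratory flow in mL/s for one full breath,
           inspiration positive and expiration negative (assumed convention).
    fs   : sampling rate in Hz.
    """
    dt = 1.0 / fs
    volume = np.cumsum(flow) * dt            # running volume (mL)

    # Tidal volume: volume inspired before expiration starts.
    vt = volume.max()

    # Expiratory phase: samples after the volume peak.
    i_peak = int(np.argmax(volume))
    exp_flow = flow[i_peak:]
    exp_vol = np.cumsum(-exp_flow) * dt       # expired volume so far (mL)

    # EF50: expiratory flow at the point where 50 % of VT has been expired.
    i_mid = int(np.searchsorted(exp_vol, 0.5 * vt))
    ef50 = -exp_flow[min(i_mid, len(exp_flow) - 1)]
    return vt, ef50

# Synthetic single breath (sinusoidal flow, 2 s period, 500 Hz sampling)
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
flow = 8.0 * np.sin(2 * np.pi * t / 2.0)      # mL/s
vt, ef50 = ef50_and_tidal_volume(flow, fs)
print(f"VT = {vt:.2f} mL, EF50 = {ef50:.2f} mL/s")
```

In a real recording, the same computation would be repeated breath by breath on the calibrated pneumotachograph signal and the EF 50 minima expressed as percent change from baseline, as described above.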
The group size recommended for safety pharmacology studies is n = 8 for standard studies. As an example the time course of typical lung function parameters from a core battery study is given in Figure 3: Male Brown Norway rats (n = 8/group) have been treated with a pharmaceutical test compound intragastrically in three dose groups and then lung function was monitored using head-out plethysmography. In the week prior to the measurements, the animals were trained for 5 days in increasing time periods to get accustomed to the plethysmograph ("tube training"). Typical parameters of pulmonary function have been measured in this study: the tidal midexpiratory flow (EF 50 ), the tidal volume (V T ), the respiratory frequency (f), and the time of inspiration (TI) and expiration (TE). From these parameters EF 50 and f are shown in Figure 3. No significant effect on EF 50 was observed during 4 h after treatment. In contrast, a dose-dependent decrease of the frequency was found -in this case probably due to a central-nervous effect of this test compound.
Experimental comparison of EF 50 with invasive lung function parameters
Several studies have been performed to compare the parameter EF 50 with invasively measured gold standard parameters for validation purposes. In particular, lung function was measured invasively and non-invasively in Brown Norway rats (see Figure 4) and BALB/c mice as standard strains for safety pharmacological and pharmacological studies. These experiments were performed with inhalation exposure to allergens such as ovalbumin and Aspergillus fumigatus extract as well as to the bronchoconstrictors methacholine and acetylcholine (Glaab et al., 2001, 2002, 2005, 2006). The results of these studies showed a good correlation of EF 50 with the classical invasive measurements of lung resistance and dynamic compliance, with a somewhat lower sensitivity and greater variability of EF 50 . The measurement of EF 50 is particularly appropriate for quick and repeatable screening of respiratory function in large numbers of rodents or if non-invasive measurement in mice and rats without use of anesthesia is required. These data support the use of this approach in the respiratory part of the safety pharmacology core battery.
Tests on irritant effects (the Alarie test)
A standard bioassay for testing airborne substances such as industrial chemicals for potential irritant effects is the well-established Alarie test. In the case that a drug is tested on irritant potential, this test can be a special form of a safety pharmacological study or an extension of it. The Alarie test uses the same equipment and technique as described above (see Non-Invasive Head-out Plethysmography in Conscious Rodents for Safety Pharmacology Core Battery Studies): the head-out plethysmography in conscious mice. The test was standardized by the American Society for Testing and Materials (ASTM, 1984). A decade later, the extended computerized version of the Alarie test was created to determine effects on three levels of the respiratory system (Vijayaraghavan et al., 1993, 1994): the upper respiratory tract (i.e., sensory irritation), the conducting airways (airflow limitation) and the alveolar level (pulmonary irritation). The Alarie test is very well validated: an excellent correlation was found between RD50 values (see below) of 89 substances and Threshold Limit Values (TLV) of human exposure, representing possibly the largest database in toxicology (Schaper, 1993).
If a substance stimulates the trigeminal nerve endings in the upper respiratory tract of mice, which in humans may result in a burning and painful sensation, it causes a reflexively induced decrease in the respiratory rate (f; Vijayaraghavan et al., 1993). This decrease is caused by an elongation of the period from the end of the inspiration until the start of the expiration, termed "TB". Therefore, the sensory irritation can be detected by a decrease in f using the concentration at which a respiratory rate decrease of 50% is reached ("RD50"), but a more specific detection is possible by directly using TB in the modern form of the Alarie test. Additionally, airflow limitation can be detected by using the EF 50 as already described above.
FIGURE 4 | Non-invasive determination of the decline in EF 50 to ACh was followed by invasive recording of simultaneously measured decreases in EF 50 and G L (G L = 1/R L ) to ACh exposure in the same animals 24 h later. EF 50 and G L were allowed to return to baseline before each subsequent challenge. Results are means ± SD (n = 8 rats) of percent changes to corresponding baseline values, which were taken as 0%. No significant differences in dose-related changes were observed between non-invasively and invasively measured EF 50 . [Reprinted with permission from Glaab et al. (2002)].
Stimulation of vagal nerves at the alveolar level may result in two types of respiratory effects. One effect - usually found at lower concentrations and at the beginning of the irritation effect - is rapid, shallow breathing, which increases f and reduces V T . At higher concentrations and effect levels, an increase in TP is observed, which is the time period from the end of the expiration to the initiation of the following inspiration. Therefore, the latter form of pulmonary irritation causes a decrease in f and can be quantified by it and the term RD50, but TP is the more specific of the two parameters (Vijayaraghavan et al., 1994).
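As an illustration of how an RD50 could be derived from such recordings, the sketch below interpolates the concentration producing a 50% decrease in respiratory rate on a log-concentration scale. The concentration-response values are hypothetical, and log-linear interpolation is only one of several possible fitting choices used in practice.

```python
import numpy as np

def rd50(concentrations, percent_decrease_f):
    """Estimate the RD50 (concentration producing a 50 % decrease in
    respiratory rate f) by linear interpolation on a log-concentration
    versus response curve.

    concentrations     : ascending exposure concentrations (e.g. ppm)
    percent_decrease_f : corresponding mean decrease in f (%) vs. baseline
    """
    logc = np.log10(np.asarray(concentrations, dtype=float))
    resp = np.asarray(percent_decrease_f, dtype=float)
    # Interpolate the log-concentration at which the response crosses 50 %.
    log_rd50 = np.interp(50.0, resp, logc)
    return 10.0 ** log_rd50

# Hypothetical concentration-response data (ppm vs. % decrease in f)
conc = [5, 15, 45, 135]
drop = [12, 28, 55, 78]
print(f"RD50 ≈ {rd50(conc, drop):.1f} ppm")
```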
INVASIVE LUNG FUNCTION MEASUREMENT IN OROTRACHEALLY INTUBATED RODENTS FOR SAFETY PHARMACOLOGY FOLLOW-UP STUDIES
The ICH guideline S7A requires extended measurements of pulmonary function as a follow-up study if adverse effects are suspected based on the pharmacological properties of the test compound or if the results of a conducted core battery study give rise to concerns. In particular, if the core battery indicates, e.g., flow limitation by a decrease in EF 50 or a rapid shallow breathing pattern, the mechanical properties of the lung can be further evaluated functionally by invasive lung function tests or pulmonary maneuvers in anesthetized animals, exploiting their higher sensitivity and specificity. For the measurement of lung resistance and compliance, a pressure-sensitive catheter has to be inserted into the pleural cavity or the esophagus for the measurement of pleural, airway, or transpulmonary pressure. Therefore, the animals are generally anesthetized.
Invasive measurements of pulmonary function in rodents with the option of simultaneous aerosol inhalation are facilitated by careful orotracheal intubation in spontaneously breathing animals such as the rat (Hohlfeld et al., 1997; Hoymann and Heinrich, 1998; Hoymann, 2007) or the mouse (Glaab et al., 2004, 2005; Hoymann, 2006, 2007), which allows repetitive experiments in the same individuals. Briefly, the animals are anesthetized by injection and/or inhalation of volatile anesthetics: e.g., for rats with a combination of 13 mg/kg pentobarbital sodium i.p. plus 0.8% isoflurane by inhalation or with 1.3-1.7% isoflurane alone, and for mice with a combination of 35 mg/kg pentobarbital sodium i.p. plus 0.8-1% isoflurane or with a combination of 23 mg/kg Etomidate i.p. plus 0.05 mg/kg Fentanyl i.p. When an appropriate depth of anesthesia is achieved, the rodents are intubated carefully orotracheally under visual control by transillumination of the neck. For rats, the tracheal cannula is prepared from a Cathlon IV 14G intravenous catheter (ID 1.78 mm, OD 2.1 mm, length reduced to 52 mm). For mice, a tracheal tube made from Teflon® (Abbocath®-T 20Gx32, ID 0.80 mm, OD 1.02 mm, length 32 mm) or, alternatively, made from steel is used. After intubation, the spontaneously breathing animal is placed in supine position in a body plethysmograph (see Figure 5). In 2002-2004, we developed a plethysmograph system for invasive lung function measurement with simultaneous inhalation administration in anesthetized mice in cooperation with Hugo Sachs Elektronik/Harvard Apparatus (Glaab et al., 2004). A thermostat-controlled warming pad [mice: water basin (37˚C)] built into the plethysmograph chamber ensures a normal body temperature. The animals can breathe air spontaneously out of a tubing system providing the animal with air containing 30-40% oxygen to prevent hypoxia by using a source of compressed air (bias flows of ca. 0.8-1.2 L/min per animal used for two rodents simultaneously).
The orotracheal cannula of each animal is directly attached to a capillary pneumotachograph tube (rats: PTM 378/1.2, mice: PTM T16375; HSE-Harvard) installed in the front part of the chamber. The pneumotachograph tube is connected to a differential pressure transducer to determine tidal respiratory flow. A water-filled PE-90 tubing is inserted into the esophagus to the level of the midthorax and coupled to a pressure transducer to measure transpulmonary pressure (P TP ). By processing these two primary signals, lung resistance (R L ) and dynamic compliance (C dyn ) can be calculated, which are known as the gold standard parameters of lung function, especially when assessing bronchoconstricting or -obstructing effects. These parameters are defined as the ratio of the change in transpulmonary pressure to the corresponding change in airflow (R L ) and the ratio of the tidal volume change to the change in transpulmonary pressure (C dyn ), respectively. By using the isovolumetric method (Amdur and Mead, 1958) or by using an integration method applied to flow, volume and pressure signals (Roy et al., 1974; Glaab et al., 2004) - which we prefer in our labs - R L and C dyn are calculated over a complete respiratory cycle (software: HEM/Notocord). Recording of pulmonary function is started when the measured signals have reached a stable level ("steady state"). The recommended group size is comparable to the non-invasive technique: for standard safety pharmacology studies n = 8 rats or mice.
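As an illustration of how R L and C dyn can be obtained from the two primary signals, the sketch below fits the single-compartment equation of motion to flow and transpulmonary pressure by least squares over one breath. This is a simplified regression variant shown for illustration only; the isovolumetric method and the integration method cited above differ in detail, and the synthetic breath, units, and parameter values are assumptions made for the example.

```python
import numpy as np

def rl_cdyn_from_breath(flow, ptp, fs):
    """Estimate lung resistance R_L and dynamic compliance C_dyn for one
    breath by least-squares fitting of the single-compartment equation
    of motion:  P_tp = R_L * flow + (1 / C_dyn) * volume + P0.

    flow : respiratory flow (mL/s)
    ptp  : transpulmonary pressure (cmH2O), same length as flow
    fs   : sampling rate (Hz)
    """
    flow = np.asarray(flow, float)
    ptp = np.asarray(ptp, float)
    volume = np.cumsum(flow) / fs                 # integrated volume (mL)

    # Design matrix [flow, volume, 1] -> coefficients [R_L, 1/C_dyn, P0]
    A = np.column_stack([flow, volume, np.ones_like(flow)])
    coeffs, *_ = np.linalg.lstsq(A, ptp, rcond=None)
    r_l, elastance, _ = coeffs
    return r_l, 1.0 / elastance                   # R_L (cmH2O·s/mL), C_dyn (mL/cmH2O)

# Synthetic breath generated from known mechanics (R_L = 0.2, C_dyn = 0.5)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
flow = 5.0 * np.sin(2 * np.pi * t)                # mL/s
vol = np.cumsum(flow) / fs
ptp = 0.2 * flow + vol / 0.5 + 2.0                # cmH2O
print(rl_cdyn_from_breath(flow, ptp, fs))         # recovers approx. (0.2, 0.5)
```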
This plethysmography technique is combined with an appropriate inhalation system for intubated mice or rats which allows simultaneous lung function measurement. Inhalation treatments or provocations via the orotracheal tube are performed using an effective and computer-controlled aerosol generator such as the Bronchy III (constructed at the Fraunhofer ITEM) with an effective drying system: solutions of drugs or provocative agents are sprayed into an evaporation chamber warmed to, e.g., 40˚C and dried, the solvent is removed and the aerosol re-cooled to 25˚C in a diffusion dryer module and is then conducted to the animal (Hoymann, 2006). The exact dose is calculated and controlled by a computerized feedback dose-control system (Fraunhofer ITEM) which has been successful in generating constant dosing in rats and mice (Hoymann and Heinrich, 1998; Glaab et al., 2004, 2005; Hoymann, 2006). Based on the inspiratory aerosol concentrations continuously measured by a gravimetrically calibrated photometer and on the respiratory MV (mL/min), the exact inhalation doses are calculated by this system. The software allows pre-selecting a dose in µg and an exposure time (Hoymann, 2006). For example: if an inhalation dose of 300 µg and an exposure time of 10 min is preselected, the system controls the aerosol concentration to reach 300 µg in 10 min in each animal independently of the MV of this animal. If the MV rises, the concentration is decreased (by increasing a dilution air flow via mass flow controllers) and vice versa. This results in a constant dose and also a constant dose/time relation in each animal. An example is given in Figure 6: a marked increase in R L and a decrease in C dyn is shown during and after inhalational exposure of a sensitized rat to the allergen.
FIGURE 6 | Characteristic time course of an early airway response in an anesthetized, orotracheally intubated Brown Norway rat during and after inhalational challenge with ovalbumin. Marked increase in lung resistance (R L ) is shown and a decrease in dynamic compliance (C dyn ) is paralleled by decreases in lung conductance (G L ), midexpiratory flow (EF 50 ) and tidal volume (V T ). Also a slight increase in respiratory frequency (f ) was observed. The x-axis represents the experimental time (unit = 1 min).
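A minimal sketch of the feedback idea behind such a dose-control system is given below: at each control step, the target inspiratory concentration is recomputed from the remaining dose, the remaining exposure time, and the currently measured minute volume. The numbers and the control interval are hypothetical, and the real Fraunhofer ITEM system (photometer feedback, mass flow controllers) is of course more elaborate than this illustration.

```python
def target_concentration(dose_target_ug, time_total_min, elapsed_min,
                         dose_delivered_ug, minute_volume_ml):
    """Illustrative feedback rule: choose the inspiratory aerosol
    concentration (ug/mL) so that the remaining dose is delivered evenly
    over the remaining exposure time, given the current minute volume.
    """
    remaining_dose = max(dose_target_ug - dose_delivered_ug, 0.0)
    remaining_time = max(time_total_min - elapsed_min, 1e-6)
    required_rate = remaining_dose / remaining_time        # ug/min
    return required_rate / minute_volume_ml                # ug/mL

# Hypothetical 10-min exposure aiming at a 300 ug inhaled dose
dose_target, t_total = 300.0, 10.0
dose, dt = 0.0, 0.5                                         # dose so far, step (min)
for step in range(int(t_total / dt)):
    mv = 180.0 + 40.0 * (step % 4) / 3.0                    # fluctuating MV (mL/min)
    c = target_concentration(dose_target, t_total, step * dt, dose, mv)
    dose += c * mv * dt                                     # dose actually inhaled
print(f"delivered dose ≈ {dose:.1f} ug")                    # ≈ 300 ug
```

The design choice mirrors the behavior described above: when the minute volume rises, the computed target concentration falls, and vice versa, so that the pre-selected dose and the dose/time relation remain constant for each animal.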
LUNG FUNCTION MEASUREMENTS IN JUVENILE RATS
Most drugs intended for use in children have not been formally developed for use in this age group. As juvenile animal studies are requested more and more often by the regulatory authorities during drug development, the need to modify standard experimental procedures to be applied to neonatal and juvenile animals and to provide basic data on respiratory parameters is becoming urgent (European Commission, 2006a,b). The European Medicines Agency (EMA, before 2009: "EMEA") published a "Guideline on the Need for Non-clinical Testing in Juvenile Animals of Pharmaceuticals for Pediatric Indications" in 2008 (EMEA, 2008): approval of these medicinal products for pediatric use "requires a special risk/benefit assessment, where the possible effects of the product on the ongoing developmental processes in the age group(s) to be treated are also taken in consideration. This risk/benefit assessment should be based on safety and pharmacokinetic data from non-clinical and clinical studies". Therefore, we have developed a technique based on modified head-out plethysmography (see chapter Non-Invasive Head-out Plethysmography in Conscious Rodents for Safety Pharmacology Core Battery Studies) to measure pulmonary function non-invasively in juvenile rats between post-natal days (PNDs) 2 and 50 (Lewin et al., 2010). Briefly, the measurements in juvenile rats were taken with equipment designed for mice on PND 2, 4, 7, and 10, and with equipment designed for rats on PND 21, 25, 30, 35, 40, 45, and 50. For airflow measurement, a calibrated pneumotachograph (for rats up to PND 10: a capillary tube PTM 378/1.2, for rats > PND 10: a wire mesh pneumotachometer with six layers of wire mesh cloth, HSE-Harvard, March-Hugstetten, Germany) and a differential pressure transducer (Validyne DP 45-14, HSE-Harvard) coupled to an amplifier were attached to each plethysmograph. Typical lung function parameters were recorded for approximately 15 min in four animals simultaneously. Two examples are given in Figure 7. Each time point was represented by two litters of four males and four females each. All pups were weighed before each measurement and observed for possible clinical symptoms after the measurement.
The methods proved to be feasible and did not interfere with the normal growth and development of the animals. This technique in juvenile rats therefore permits new insights to support human neonatal risk assessment, and this animal model is thus suitable for regulatory studies.
[Reprinted with permission from Lewin et al. (2010); see there for additional data.]
DISCUSSION: ADVANTAGES AND DISADVANTAGES OF INVASIVE AND NON-INVASIVE TECHNIQUES
As has been shown, invasive and non-invasive methods for the measurement of lung function both have their advantages and disadvantages (see Table 1). The non-invasive head-out body plethysmography in conscious rodents is simple to handle and the breathing pattern is nearly natural since no anesthesia is required (see Table 2). On the other hand, there is a certain amount of stress for the animal (though limited by using "tube training"), and lung resistance and compliance cannot be obtained since a P TP signal is not available (therefore EF 50 is used to describe flow limitation). Murphy et al. (1998) have introduced a technique in which a pressure-sensitive catheter that resides below the serosal layer of the esophagus is implanted to enable direct measurement of subpleural pressure; however, this does not appear to be widespread due to its limitations and the surgical procedure necessary. Ewart et al. (2010) tried to validate this telemetry technique, but they found that the pressure signal in the telemetered rats was extremely variable and concluded that assessment of airway resistance is best confined to the anesthetized rat. In addition, the inhalation exposure when using conscious rodents includes nasal and gastro-intestinal uptake, which is not desired in special cases.
Table 2 | Comparison of the non-invasive and the invasive plethysmography techniques.
Non-invasive head-out body plethysmography: is performed in spontaneously breathing conscious rats or mice; includes measurement during inhalation exposure. [+] Nearly natural breathing pattern, simple handling, higher throughput. [−] Stress; only volume- and flow-derived parameters (no resistance and compliance measurable); inhalation exposure includes nasal and gastro-intestinal uptake.
Invasive body plethysmography (orotracheally intubated animals): is performed in anesthetized but spontaneously breathing rats or mice; includes measurement during inhalation exposure with optimal dose control. [+] No stress, gold standard parameters resistance and compliance available, inhalation exposure is focused on the lungs.
In comparison, the invasive body plethysmography in orotracheally intubated but spontaneously breathing rodents requires anesthesia of the animals, more training of the technicians, and more time to conduct the measurements. Anesthesia has a depressant effect on respiration, which decreases the breathing frequency and changes the breathing pattern. On the other hand, the parameters airway resistance and dynamic compliance are available, which are known to be the "gold standard" for detection and quantification of bronchoconstriction and -obstruction (Glaab et al., 2005; Hoymann, 2007; Ewart et al., 2010). Inhalation exposure in the orotracheally intubated animal is focused on the lungs since nose and skin exposure as well as oral intake are excluded.
Therefore, advantages and disadvantages have to be compared in relation to the aim of the study to decide whether invasive or non-invasive lung function testing should be chosen. Non-invasive measurement of lung function in conscious animals, preferentially by head-out plethysmography, is recommended for the core battery of safety pharmacology testing or is used if a natural breathing pattern is important (with parameters such as midexpiratory flow EF 50 , time of expiration, TB, and TP) or if "high throughput" measurements and a simple technique are important. Invasive lung function testing in orotracheally intubated animals is preferred if the most sensitive and specific parameters such as lung resistance and dynamic compliance are required, and it is therefore recommended for follow-up studies of safety pharmacology testing, or if controlled inhalation administration into the lungs without other pathways (nose, stomach, skin) or with high deposition doses of drugs or agents is required, or if pulmonary maneuvers are desired.
Reliable non-invasive measurement -comparison of EF 50 with Penh
Some years ago, the application of the empiric variable enhanced pause (Penh) had gained widespread popularity - also in safety pharmacology core battery studies - due to its simple and convenient handling. But criticism then arose from the experts in the field and the reviewers of the scientific journals. Unrestrained plethysmography (Penh) provides respiratory measures that are so tenuously linked to respiratory mechanics that they were seriously questioned recently by several authors (Lundblad et al., 2002; Mitzner and Tankersley, 2003; Adler et al., 2004; Bates et al., 2004). Penh is an empiric variable which has been shown to be primarily related to ventilatory timing and unrelated to airway resistance (Mitzner and Tankersley, 1998, 2003; Lundblad et al., 2002). Several studies have shown that changes in Penh and respiratory resistance sometimes do not correlate (Petak et al., 2001; DeLorme and Moss, 2002; Flandre et al., 2003; Adler et al., 2004; Pauluhn, 2004), which leads to misinterpretation. Therefore, a correspondence written by 22 leading experts in the field (Bates et al., 2004) to the editors of the AJRCMB emphasized the danger of the increasing uncritical use of Penh, with potentially misleading assessment of pulmonary function in animal models of lung disease. In addition, many authors have cautioned against using this simple technique and the parameter Penh to reflect airway function without an independent assessment of airway resistance (Drazen et al., 1999; Hantos and Brusasco, 2002; Bates and Irvin, 2003; Kips et al., 2003).
In order to compare and contrast differences in methods, a short comparison of the non-invasively measured EF 50 (head-out plethysmography) with the non-invasively measured Penh (whole body plethysmography) is given in Table 3. In addition, to support the argument that non-invasive EF 50 measurement is valid in contrast to Penh, we conducted an experiment to examine whether EF 50 , unlike Penh, parallels the actual changes in pulmonary mechanics in response to hyperoxia in C57BL/6 mice. Whereas Petak et al. (2001) showed a significant ∼4.5-fold increase in Penh following 48 h exposure to 100% O 2 , which did not correlate with a slight decrease in resistance measured in the same animals, in our study no significant change in either EF 50 or lung resistance was measured in the same animals after 48 h exposure to 100% O 2 (see Figure 8; Glaab et al., 2005). Therefore, in contrast to the Penh results, head-out plethysmography has been proven to provide a reliable correlation between EF 50 and pulmonary resistance.
FIGURE 8 | No impact of hyperoxia: good correlation of invasively and non-invasively measured respiratory parameters in mice.
Additionally, Legaspi et al. (2010) reported a concurrent validation of volume, rate, time, flow, and ratio variables in head-out plethysmography. They confirmed "the suitability of head-out plethysmography in rats for respiratory safety pharmacology as previously reported by Hoymann (2006)" and found flow derived parameters such as EF 50 as "highly valuable complement for interpretation of respiratory response". In an infection model, head-out plethysmography has been reported to be very useful for monitoring infection with Pseudomonas aeruginosa in mice showing a decrease in V T and EF 50 (Wölbeling et al., 2010). Recently, Nirogi et al. (2012) compared whole body and head-out plethysmography using respiratory stimulant and repressant in conscious rats and concluded that "ventilatory function can be accurately assessed using head-out plethysmography compared to whole body plethysmography".
The ICH guideline S7A defines the respiratory system as a "vital organ system" that is considered one of the most critical ones, and as such it should be assessed with the same scientific rigor as the other organ systems (i.e., CNS and cardiovascular; Nirogi et al., 2012). Therefore, the ability to accurately and reliably evaluate respiratory function in animals has become increasingly important (Renninger, 2006). Since the intent of safety pharmacology is to minimize the human risk on the one hand but also to minimize the unnecessary removal of potentially useful drugs from further development on the other hand, only such methods should be used which are proven to be reliable and are well validated for detecting adverse effects. Since it has been shown (see above) that Penh and the barometric plethysmography in some cases predicted false adverse effects and, also due to the lack of a theoretical basis, are not really predictive of human risk, it follows that this method is not suitable for the safety pharmacology core battery. In contrast, the head-out plethysmography including the parameter EF 50 has been proven to be a valid and reliable method (Vijayaraghavan et al., 1994; Glaab et al., 2002, 2005), which is easy to use, allows high throughput measurements, and was recommended to determine lung function non-invasively (Hantos and Brusasco, 2002; Glaab et al., 2005, 2007; Hoymann, 2006), which was recently confirmed by results of the methodological comparison studies of Legaspi et al. (2010) and Nirogi et al. (2012). Therefore, head-out plethysmography is recommended to be used in the core battery of safety pharmacology testing (Renninger, 2006; Hoymann, 2007; Legaspi et al., 2010; Nirogi et al., 2012).
CONCLUSION
The intent of safety pharmacology is to minimize the human risk on the one hand but also to minimize the unnecessary removal of drugs from the development line on the other hand. Therefore, only such methods should be used which are proven to be reliable and are well validated for detecting adverse effects on the respiratory system. The head-out plethysmography including EF 50 measurement fulfills this requirement: it has been proven to be a valid and reliable method, is easy to use, allows measurements with a relatively high throughput, and is therefore recommended to be used in the core battery of safety pharmacology testing. It has also been shown to be successfully used in high throughput studies using, e.g., asthma models, in lung function measurements in infection models, or as the Alarie test for irritant potential in mice. In contrast, since it has been shown that the barometric plethysmography (Penh) in some cases predicted false adverse effects and leads to misinterpretation, it is recommended not to use this technique without an independent assessment of airway resistance.
In safety pharmacology follow-up studies, repetitive invasive lung function testing in orotracheally intubated rodents is recommended since the most sensitive and specific parameters such as lung resistance and dynamic compliance are required. This technique also allows tightly controlled inhalation administration directly into the lungs - excluding other pathways - or with high deposition doses. It has also been used extensively in pre-clinical studies using, e.g., asthma or infection models, e.g., to assess airway hyperresponsiveness.
"Biology",
"Medicine"
] |
Involving resilience in assessment of the water–energy–food nexus for arid and semiarid regions
Due to the unsustainable consumption of resources and climate change, it is increasingly difficult to maintain the security of the water–energy–food nexus, especially in regions with low availability. Thus, the need to design more resilient systems is key to sustainable development. Quantifying resilience in integrated systems such as the water–energy–food nexus is a useful way to identify vulnerable areas of the system and thereby take corrective actions to reduce the incidence of interruptions to basic services. Therefore, this work presents a systematic approach to assess the resilience of the water–energy–food nexus in arid and semi-arid regions. Through the proposed approach, it is possible to evaluate how the system failures caused by hurricanes, low-temperature events, and droughts affect the supply of water, energy and food. A resilience index is proposed, which involves penalization costs associated with the resource supply failures of the system. To apply the proposed approach, scenarios corresponding to past conditions and future projections were evaluated for two Mexican arid cities. The results show that in future years the nexus will be vulnerable to extreme events if the conditions for the resource management do not change. The proposed approach allows estimating the economic losses associated with the existence of natural disasters, making it an efficient decision-making tool to implement strategies and improve the security of the water–energy–food nexus.
Introduction
Nowadays, satisfying the needs of basic resources for life, such as water, energy, and food, represents one of the biggest problems around the world; this is mainly due to urbanization, resource constraints and inadequate management, governance structures, policies, and the impacts of climate change. The water-energy-food nexus (WEF) approach emerges with the potential to study the interlinkages among water, energy, and food, reduce tradeoffs, and promote resource security and efficiency to ensure urban sustainability and facilitate greater climate change adaptation. Climate change puts stress on water security due to extreme events such as floods and droughts. Regarding the energy sector, renewable energy sources are impacted by changes in precipitation patterns, temperature, solar radiation, wind speed, or water availability (Yalew et al. 2020). On the other hand, impacts of climate change also put food security at risk since food supply can be interrupted and temperature changes may affect the plant's growth (Fahad et al. 2021a). In addition, climate extreme events can impact the infrastructure of the water-energy-food systems and affect energy transmission and water distribution, which in turn causes severe consequences on food production.
The security of the water-energy-food nexus has become a high concern due to future uncertainties (Yadav et al. 2021), especially concerning the impacts of climate change, as it can potentially affect the productivity of resources (Sönmez et al. 2021); therefore, tackling natural disasters is a crucial challenge for sustainable cities. Recently, the frequency in the occurrence of different natural disasters (such as freezing, droughts, and flood) has been increased (Botzen et al. 2019). These events have put the production and availability of resources at risk; therefore, resilience is a critical factor that must be included in the nexus assessment for urban sustainability. The term resilience has been employed in various areas, but in general, resilience aims to resist and adapt to a disturbance and recover its normal state (Ribeiro and Pena Jardim Gonçalves 2019). Núñez-López et al. (2021) defined resilience as the ability of the system to deliver its functional service(s) during and after an interruption process. There are different means to address resilience. In the first contributions, resilience was evaluated using qualitative methods (Bruce et al. 2020;Pawar et al. 2021), but lately, special attention has been paid to the development of quantitative methods mainly in the process engineering area (Nezamoddini et al. 2017;Ahmadi et al. 2021). The quantification of resilience in optimization systems has brought great benefits since it allows identifying the scenarios where some failures that decrease the system performance may occur and thus be able to make decisions on how to address the problem before, during, or after the design of the system. This shows the importance of implementing resilient planning frameworks compared to conventional planning methods.
Addressing resilience is a key factor for a sustainable system. In the food sector, resilience has been incorporated to supply chain systems to improve food security (Martínez-Guido et al. 2020). Moreover, studies about mitigation processes to develop climate-resilient crops have been addressed (Fahad et al. 2021b), and new management strategies that consider the interactions between soil and plant interaction have been proposed (Fahad et al. 2021c;Turan 2021). In the water sector, studies have focused on water supply resilience to disasters (Balaei et al. 2021), water management (Behboudian and Kerachian 2021), and water quality (Imani et al. 2021). In previous studies, resilience in the energy sector has been addressed focusing on weather events, which is the cause for most of the energy supply disruptions (Jasiūnas et al. 2021), geopolitical conditions (Wilson 2019), and technical failures (Cadini et al. 2017). In this regard, quantitative resilience assessments (Moslehi and Reddy 2018;Abdin et al. 2019), indicators or metrics for planning energy systems (Gholami Rostam and Abbasi 2021), and simulation models (Senkel et al. 2021) have been proposed. The resilience of energy supply chains exposed to disturbances has been studied (Emenike and Falcone 2020). Recently, particular attention has been paid to the study of building energy prediction (Sun et al. 2020) and energy distribution (Fontenot et al. 2021) considering climate change, providing this way recommendations to reduce energy consumption (Ashouri et al. 2020). In the context of the water-energy-food nexus, models that propose strategies to reduce tradeoffs between sectors and facilitate the decision-making process have been reported; however, some of them are limited in the sense that they do not consider the uncertainties associated with unexpected events and risks. Hence, resilience methods are capable of improving decision-making, mitigating emerging risks, and motivating system planning. For instance, Govindan and Al-Ansari (2019) addressed the WEF resilience by developing a computational framework based on reinforcement learning to quantify the emerging risks and determine rewards or the best action policy for network and adaptation to face risks. Nevertheless, a limitation of these frameworks is that they require dense information to infer the transition probabilities and that resilience depends on the reward awarded in a particular state since its location in the system could take an environmental, economic and social dimension. To mitigate these limitations and facilitate the quantification of resilience, other indexes have been proposed. In this context, Shu et al. (2021) developed indexes to quantify the resilience and security of the WEF in industrialized nations through the measure of the availability and accessibility of resources. However, although this methodology does not require much data or information to calculate the resilience indicator, subindicators must be weighted based on experts´ opinions, and this could bring uncertainty in the resilience calculation.
Considering that there is wide research on resilience for water, energy, and food systems independently, but minor research to quantify the WEF resilience of systems involving uncertain unexpected events, this paper aims to develop a systematic approach for evaluating the resilience in the security of the water-energy-food nexus at different periods. The main contributions of this research are discussed below:
• The proposed methodology can identify and solve the blind spots of the system through alternatives that help to improve its functioning.
• This approach has wide applicability since it can analyze the functionality of macroscopic systems and can be used to evaluate the WEF resilience in any region.
• According to the discretization used, it is possible to estimate the system resilience in future years.
• One of the primary benefits of this approach is to evaluate the resilience and identify critical disturbances to prevent unwanted events, propose corrective actions, and/or carry out an adequate design of the system in an anticipated way.
On the other hand, the main limitations lie in modifying the scale at which the methodology was designed, as well as the difficulty in accessing parameters to make use of another discretization measure. This could modify the results of the predictions for future years.
The paper is organized into 6 sections. In section "Problem statement", the problem statement is explained. Section "Methodology" describes the methodology used for the quantification of the resilience index. Section "Case study" presents the applicability of the proposed approach to two case studies for two economically important areas of Mexico. In section "Results and discussion", the obtained results are presented and discussed. Finally, the conclusion of the study is summarized in section "Conclusions".
Problem statement
There are many regions with unfavorable climatic conditions in which satisfying the water, energy, and food needs of their population is a challenging task. In the last years, more frequent and extreme events have occurred, causing disruptions in the management of water, energy, and food. Frequently, places are not prepared to overcome failures related to natural disasters. Furthermore, these places are not able to cope with the growth in demand associated with the mobilization of residents with economic and social aspirations. Resilience analysis is an efficient tool to estimate the future conditions or failures that affect the security of the water-energy-food nexus. Therefore, this problem consists in determining the resilience of the water-energy-food nexus security in arid/semiarid regions that are exposed to natural disasters (Fig. 1). To address this problem, the methodology proposed by Núñez-López et al. (2021) to assess the resilience in process systems engineering, together with the methodology proposed by CENAPRED (Jiménez et al. 2012) for risk assessment, was considered for the analysis. One of the advantages of quantifying resilience in process systems engineering is that it allows estimating the repair costs of system components in the event of failure, as well as the costs associated with the loss of functional service. The identification of certain risks allows overestimating the considered system and, based on that, adapting it to avoid the total or partial functional loss of any service.
Methodology
In this paper, a methodology for quantifying the resilience in the security of the water-energy-food nexus in arid/semiarid regions is presented. A mathematical model is proposed to analyze the resilience of different scenarios to identify those in which a significant risk may occur that compromises the satisfaction of functional services. The quantification of resilience consists in first defining the system, its characteristics, needs, types of failures that can occur, inputs, and outputs. Subsequently, the functional services that would be affected if a failure mode occurs, together with their respective penalty costs, should be established; in this case, the functional services are the amounts of water (m³), energy (kWh), or food (ton). The next step is to identify the three-dimensional matrix, which is composed of scenarios formed by the functional services that can be affected in a certain period due to some type of failure, the specific period, and the possible failures that may occur (Fig. 2). These sequential steps are described as follows:
a. Identify a three-dimensional matrix (Fig. 3) by analyzing the scenarios obtained from the mathematical model.
b. Calculate a bidimensional matrix (Fig. 4) through the estimation of the costs imposed on the system, considering the priorities of each of the scenarios according to Eq. (1).
The imposed cost related to each possible failure f in each period t is calculated as follows:
ImpCost_{f,t} = Σ_fs PenalizationCost_fs · Unit_fs    (1)
Here, the penalty cost (PenalizationCost_fs) of each functional service (fs) in every analyzed year is multiplied by the amount of water (m³), energy (kWh), or food (ton), represented as Unit_fs, which also corresponds to each value in the three-dimensional matrix. PenalizationCost_fs is a parameter that represents the unitary cost of water ($/m³), energy ($/kWh), or food ($/ton).
In this work, the CENAPRED methodology is used to analyze the risk indices for hurricane, low-temperature, and drought events, which are described below.
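A minimal sketch of Eq. (1) applied to the three-dimensional matrix is shown below; the functional-service quantities and penalty costs are hypothetical placeholders and not data from the case studies.

```python
import numpy as np

# Hypothetical functional-service quantities at risk, indexed as
# [failure mode f, period t, functional service fs];
# fs order = (water in m3, energy in kWh, food in ton)
units = np.array([
    [[1.2e6, 3.0e5, 800.0], [1.5e6, 3.4e5, 950.0]],    # hurricane
    [[0.9e6, 2.5e5, 600.0], [1.1e6, 2.8e5, 700.0]],    # low temperature
    [[2.0e6, 1.0e5, 1200.0], [2.4e6, 1.2e5, 1400.0]],  # drought
])

# Hypothetical penalization costs ($/m3, $/kWh, $/ton)
penalization_cost = np.array([0.8, 0.1, 450.0])

# Eq. (1): ImpCost[f, t] = sum_fs PenalizationCost[fs] * Unit[f, t, fs]
imp_cost = np.tensordot(units, penalization_cost, axes=([2], [0]))
print(imp_cost)   # bidimensional matrix: one imposed cost per (f, t)
```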
• For hurricane
To calculate the hurricane risk index, Eq. (2) is used together with a count of the tropical cyclone trajectory segments within a 1° × 1° geographic box; for this count, the highest category reached by the tropical cyclone is considered, and the count is performed with the computer program "Cyclone count" (León-Aristizábal and Peréz-Betancourt 2018).
where IPCT is the risk index for hurricanes, v_i is the exceedance rate for intensity I, and I is the intensity (Table 1). The risk index for a hurricane (IPCT) is calculated by Eq. (2), in which the intensity I, associated with the Saffir-Simpson category of the tropical cyclone (presented in Table 1), is multiplied by the exceedance rate for intensity I, v_i (Table 2).
• For low temperature
The high incidence of freezing in arid/semiarid regions generates severe health problems, as well as damage to the population's property. To determine the risks associated with this phenomenon, the low-temperature risk index (IPBT) is used (Eq. 3), which involves parameters related to days with freezing in a certain region (H_hel) (Table 3) and extreme minimum temperatures (H_tmext) (Table 5).
where IPBT is the risk index for low temperatures, H_tmext is the index for extreme low temperatures, and H_hel is the index for low-temperature days. It is important to mention that the parameters presented in Table 1 to Table 3 were previously calculated and reported in the methodology proposed by CENAPRED (Jiménez et al. 2012).
• For drought
The drought phenomenon has been evaluated by government institutions in such a way that studies have been made for each of the country's municipalities, where the rainfall deficit and its duration have been considered. Escalante-Sandoval (2005) has proposed a classification of the risk index for droughts, which is shown in Table 4.
Table 1 | Intensity I associated with the Saffir-Simpson category (fragment): Tropical storm; 3 = Hurricane category 1; 4 = Hurricane category 2; 5 = Hurricane category 3; 6 = Hurricane category 4; 7 = Hurricane category 5.
The resilience index is calculated using the maximum and minimum imposed costs and the imposed cost for the current scenario:
Re_{f,t} = (ImpCost_Max − ImpCost_{f,t}) / (ImpCost_Max − ImpCost_Min)
The resilience index ranges between 0 and 1. Re_{f,t} = 0 represents the scenarios in which the system would not be able to deliver any of its functional services, and Re_{f,t} = 1 is when the functionality of the system would not be interrupted at all.
e. Identify the critical components of the system based on the obtained resilience indexes and propose a strategy to improve them.
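The following sketch illustrates the resilience-index calculation and the identification of critical scenarios. The normalization by the maximum and minimum imposed costs follows the description above (the exact algebraic form is inferred from it), and the imposed-cost values and the 0.2 threshold used to flag critical scenarios are hypothetical.

```python
import numpy as np

def resilience_index(imp_cost):
    """Resilience index per (failure mode, period), normalized with the
    maximum and minimum imposed costs of the scenario matrix:
    Re = (ImpCost_max - ImpCost) / (ImpCost_max - ImpCost_min),
    so Re = 1 when the functionality is not interrupted at all
    (minimum imposed cost) and Re = 0 when no functional service
    can be delivered (maximum imposed cost).
    """
    c_max, c_min = imp_cost.max(), imp_cost.min()
    return (c_max - imp_cost) / (c_max - c_min)

# Continuing the hypothetical imposed-cost matrix from the previous sketch
imp_cost = np.array([[1.32e6, 1.66e6],
                     [0.99e6, 1.23e6],
                     [2.15e6, 2.56e6]])
re = resilience_index(imp_cost)
print(re)

# Flag the most critical (lowest-resilience) scenarios, e.g. Re < 0.2
critical = np.argwhere(re < 0.2)
print("critical (failure, period) pairs:", critical.tolist())
```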
It is important to mention that the assumptions that were made in this framework are that the system operates statically, the penalty costs associated with the lack of functional services do not vary for the failure modes, and that the population increase (on which the resource demand for future scenarios depends) shows a linear growth.
Case study
To show the applicability of the proposed approach, two case studies were selected, which correspond to the city of Monterrey, Nuevo León, and the city of Hermosillo, Sonora, both located in Mexico. The selection of case studies was made based on the limited resource conditions that these cities faced, a consequence of the climate change impacts that have put the security of water, energy, and food at risk.
Monterrey
The Monterrey Metropolitan Area (AMM) is selected as a case study because of the economic importance that it represents to the national gross domestic product. Its main activity is the manufacturing industry, which allows having a strong economic relationship with the USA. It is located in the Northeast Region of Mexico, in the state of Nuevo León. The AMM has the second-highest human development index in the entire country, after Mexico City. Around 88% of the total population of the state is concentrated in the AMM due to the developed economic activities, making the city a point of migration for the population of neighboring states. However, the current situation suffered by natural resources puts at risk the ability to cope with accelerated growth and achieve sustainable development. In addition, there are severe weather conditions in the city, which are dry and extreme (SET-NL 2019). Water is supplied to the population by surface and underground sources, which completely depend on the rainfall registered in the area and upstream, causing conflicts derived from the rights to use water. The agriculture and livestock sector is exploited to a lesser extent due to weather conditions. The main crops harvested are pastures and forage sorghums and some vegetables to a lesser extent. Livestock activities consist of carcass meat, milk, and egg production. The energy sector is supplied mainly from non-renewable sources, such as combined cycle power plants. The renewable energy sources used are wind and solar; however, due to topographic characteristics, they could be exploited to a greater extent. According to the historical climate recorded (CONAGUA 2021a), there are great probabilities of having long periods of drought and sometimes heavy rainfall that leads to flooding. These floods are a consequence of the location and local topography in the face of tropical storms and have, on several occasions, saved the population from being without water in their aquifers. The most devastating hurricanes in the AMM have been "Gilbert, 1988," "Emily, 2005," "Alex, 2010," and "Hanna, 2020," the most recent (CONAGUA 2021b). The damages recorded range from economic losses, interruption of basic services such as water and energy, to human losses. In 2013, there was a lasting drought considered one of the worst in the last 50 years, where Hurricane Ingrid saved the city from a real emergency due to water shortages (CONAGUA 2021c). Figure 6 describes the case study, as well as the most important hurricanes and droughts that have affected the city.
Hermosillo
Hermosillo, the capital of the state of Sonora, is the city with the greatest water supply problem in the country, mainly due to its geographical location. Sonora is almost entirely covered by the hottest desert in the country; therefore, Hermosillo is located in an arid region where low availability of resources predominates. Given that Hermosillo is the most economically developed city in the state of Sonora, satisfying the demands for energy, water, and food is a challenging problem that in recent years has been aggravated by the impacts of climate change.

Fig. 5 Resilience matrix representation

Natural disasters have caused great economic losses due to damage to hydraulic infrastructure and to the agricultural, livestock, fishing, and energy sectors. The natural disasters that have occurred in Sonora have generally been of hydrometeorological origin, generated by extraordinary rains and tropical storms (Fig. 7). Even though the city of Hermosillo is located in an arid region with low levels of precipitation, the presence of torrential and short-lived rains causes large floods that have social, economic, and environmental consequences. Likewise, another meteorological phenomenon that also causes damage to the population is the low-temperature environment. This phenomenon has caused great losses in agriculture; in the 1983-2004 period alone, more than 17,000 hectares of crops were lost, with a production value of more than 3 MMUSD. The regions most affected by low-temperature events in Sonora are those of the north, northeast, and east, with occasional effects on the coastal regions. In the case of Hermosillo, the presence of low temperatures is scarce, covering the months of December to February, with an incidence of 0 to 20 days per year. In addition to the problems caused by excess precipitation and low temperatures, droughts have a direct impact on all sectors, producing large losses, since according to the information provided by the National Water Commission (CONAGUA 2021d) and the National Institute of Statistics, Geography, and Informatics (INEGI 2020), there are approximately 70,093 km² of permanent drought, which comprises 38% of the total surface of the state. According to the drought severity calculation, the state of Sonora is located between severe and very strong drought zones, while the Municipality of Hermosillo has an index of very strong. One of the main consequences of droughts is that the capacity of the dams located in Hermosillo is at its minimum level; thus, water has been transported from other regions to provide the water required in the city. However, these regions are the areas with the highest agricultural production in the state, and therefore the agricultural users of these regions demand that their irrigation rights be respected. It is estimated that water scarcity events have caused a decrease in the sowing area of more than 160,000 hectares.

Fig. 6 Monterrey case study

The cities of Monterrey and Hermosillo have faced the impacts of climate change and unexpected events during the past few decades; due to this, strategies for better management of resources are needed to improve the security of the water-energy-food nexus.
Results and discussion
A methodology to evaluate the water-energy-food nexus resilience was implemented to assess the considered case studies. Figures 8 and 9 show the tridimensional matrices for the proposed case studies of Hermosillo and Monterrey, respectively. The functional services referring to the amount of water (m³), energy (MW), and food (tons) demanded by each city are presented in the corresponding matrices. Scenarios corresponding to five different years were evaluated, covering previous and future years. Due to the frequency of natural disasters in the case studies, the possible failure modes considered are hurricanes, freezing, and drought. It can be seen that there is a variation through each of the years in the functional services. However, the functional service value is the same for each of the failure modes because the maximum value of the functional service, which is at risk in each of the periods, is the same regardless of the reason why it is at risk. Then, each of the functional services was multiplied by the respective penalty cost (Table 5) to construct the dimensional matrices (Tables 6 and 7). The values for penalty costs have been reported by different government agencies in Mexico (FAO 2021; CONAGUA 2021e; CFE 2021). For both cases, it can be observed that functional services vary through the years; however, these are the same for each of the possible failure modes. In this way, a risk index associated with each of the possible failure modes was used. The indices were calculated based on the methodology established by CENAPRED (Jiménez et al. 2012). According to Tables 6 and 7, an increase in the projection of imposed costs of 41.52% and 43.10% can be observed for the cities of Hermosillo and Monterrey, respectively, from 2019 to 2030. Although a decrease in energy costs is expected in the coming years due to the increase in the use of renewable energy (38.28% for both cities), the trend shows an increase in water and food costs, as well as in the number of functional services required due to population growth. The increase in the cost of water is 22.33% for Hermosillo and 20.40% for Monterrey, while the cost of food will increase around 15% for both cities.
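To make the construction of the imposed-cost matrices concrete, the sketch below (Python, with hypothetical service quantities and penalty costs, not the values of Tables 5-7) multiplies each functional service by its penalty cost for every year and repeats the result across the failure modes, as described above.

```python
import numpy as np

years = [2019, 2021, 2024, 2027, 2030]
failure_modes = ["hurricane", "freezing", "drought"]

# Hypothetical functional services at risk per year: water (m3), energy (MW), food (tons).
services = np.array([
    [1.2e6, 350.0, 9.0e4],
    [1.3e6, 365.0, 9.4e4],
    [1.4e6, 380.0, 9.9e4],
    [1.5e6, 400.0, 1.05e5],
    [1.6e6, 420.0, 1.10e5],
])

# Hypothetical penalty costs (US$ per unit of water, energy, and food).
penalty = np.array([0.8, 90.0, 450.0])

# Imposed cost per year = sum over services of (quantity x penalty cost);
# the same value applies to every failure mode, as in the described framework.
imposed_cost_per_year = services @ penalty
imposed_cost = np.tile(imposed_cost_per_year, (len(failure_modes), 1))

for mode, row in zip(failure_modes, imposed_cost):
    print(mode, np.round(row, 0))
```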
According to the methodology with which the indices were calculated, the risk levels do not vary significantly over the proposed years, even for the projections for future years. Therefore, the risk index for each of the case studies regarding each possible failure is the same throughout all the years. These indices are shown in Figs. 10 and 11, where VL is a very low risk, L is a low risk, M is a medium risk, H is a high risk, and VH is a very high risk. Note that even though the risk indices do not vary over the years, they remain very significant for the results because they represent the probability of occurrence of the failure modes in each of the time periods.
Subsequently, the resilience indices were calculated for each of the case studies. As the risk indices do not vary through the selected years, only one resilience matrix for each case was calculated. For both cases, only medium to very high-level risks are considered. Table 8 presents the resilience indices obtained for the case study of Hermosillo, and it can be observed that some of the resilience indices are equal to 1, which represents a scenario that can completely satisfy the demands of resources even when any possible failure appears. However, in the event of a drought, there is a resilience index value of 0.85 for a medium-risk index (with a probability of 15% according to Fig. 10), which indicates that this natural phenomenon could have important consequences for the interrelation of the water-energy-food nexus. This implies an imposed cost of $US 42,016,162 for the year 2019 and a cost of $US 59,463,607 for 2030 to obtain the necessary nexus resources to satisfy the needs of the population. Similarly, lower values can be observed in the failure modes for low and very low-risk indices, but because they are low risks, they are not considered. Table 9 presents the results for the case study of Monterrey; in this case, high resilience values for the case of hurricanes are obtained, which indicates that this city will not present a major problem if any occurs (probability of 92% for very low risk). However, the occurrence of low temperatures or drought presents resilience values below 0.50, which indicates that these phenomena can drastically alter food and/or energy production, as well as the availability of water in this area. Besides, Monterrey presents a probability of 55% of medium risk in case of scenarios with low temperature, and a probability of 56% of high risk in case of drought (Fig. 11). Both case studies present similar conditions concerning climate and resource scarcity; for both cities, the main issue is related to the availability of water. Therefore, the most threatening factor for the security of the water-energy-food nexus is the intensification of droughts as a result of climatic changes due to global warming. The analyzed cities are of national importance; however, their current distribution and management systems of basic resources are not resilient to climate change disturbances, nor to the growth demanded by society, limiting sustainable development. The resilience index indicates that in future years the nexus will be vulnerable to extreme events if conditions do not change; then, actions to improve the integration of resources are essential. (Table fragment: imposed costs of US$ 201,220,890; 230,369,099; 280,107,749; 343,405,393; and 396,424,044 across the five evaluated years, identical for the drought, low-temperature, and hurricane failure modes.) To overcome the low availability of water, water reuse is an effective way to reduce freshwater consumption and increase its accessibility (Tan and Foo 2018; Livia et al. 2020); consequently, this can be used to enhance the security of the nexus and its resilience. Furthermore, the optimization of water management in the agricultural sector is key to decreasing water consumption. In this sense, different types of irrigation could help significantly; in addition, new forms of cultivation can contribute to the change in land use and the production of food that under normal conditions would not be achieved (Fahad et al. 2021b). Likewise, a balance in food production and consumption must be found to reduce food waste.
On the other hand, renewable energy must be included in the energy sector, and the incorporation of flexible energy systems to enhance the security and the resilience of the nexus is urgent. Therefore, the resources and climatic conditions of these cities must be taken advantage of to create an environment of a circular economy. Addressing resilience is a key factor for a sustainable system; it has many contributions in the ecological, economic, and social sectors, and it has been used in many practical and social applications. Practical applications of the assessment of resilience in the water-energy-food nexus include designing systems that could recover quickly from disturbances by performing corrective actions. In addition, through the study of the systems and the probability of disruptive events that may occur, preventive actions can be implemented to build resilience optimally and effectively as well as to maximize the security of resources under those conditions. The water-energy-food nexus has important societal applications, but even more so if the nexus thinking involves a resilience approach. For instance, the study of these concepts together is essential for the development of long-term policies that help to regulate the unsustainable consumption of resources and in this way reduce the impacts of climate change. Furthermore, planning resilient systems could ensure access to sufficient resources, reduce the poverty percentage, improve living conditions, and create job opportunities. All of this contributes to improving human well-being.
The main limitation of the proposed approach is that it requires accurate data for the studied resources in the considered region; therefore, a seasonal analysis would enhance the accuracy, but in cases where predictions for future years are made, a range of uncertainty is linked to the results. In addition, the proposed model is designed for application at a regional level, and its application to larger scales could bring more uncertainty to the results. On the other hand, it is important to mention that the main uncertainties related to the model lie in the hydrometeorological variables used for the calculation of the risk indexes. Available data related to natural disasters usually correspond to monthly average parameters, but this type of data is constantly changing; therefore, this uncertainty is linked to the model.
Conclusions
In this work, a systematic approach for analyzing resilience in the security of the water-energy-food nexus in areas that present significant climatic variations throughout the year has been presented. The interruption of functional services such as water, energy, and food due to weather conditions (droughts, floods, and low-temperature events) is analyzed, where penalty costs are associated with total or partial failures in the systems. Two case studies from Mexico that have characteristically severe weather conditions have been presented to show the applicability of the proposed approach. Through the results of the resilience indices, it is possible to identify that, among the failure modes considered (hurricane, drought, and low-temperature events), droughts are the phenomenon that most threatens the security of the water-energy-food nexus. The imposed cost by drought for Hermosillo in 2019 is $US 42,016,162, while for the same event in Monterrey the imposed cost is 6.5 times higher than for Hermosillo ($US 269,026,017). It is estimated that by 2030, the imposed costs of the case studies evaluated will increase due to the growing resource demand, reaching $US 59,463,607 for Hermosillo and $US 384,985,528 for Monterrey. Projections for future years do not present significant variations regarding the resilience of the nexus; nevertheless, improvements can be made to enhance the availability and sustainability of resources.
The assessment of the resilience of integrated systems has the advantage of identifying vulnerable sectors even when the system is analyzed statically. Therefore, even with the variations presented by the required resources in different periods of time, it is possible to satisfy the resource demands by overestimating the system. It should be noted that overestimating the design and operation of a system entails an increase in the total system cost; however, it ensures a better performance. With the presented framework, it is possible to make predictions about the trend in the availability of resources such as water, energy, and food and look for alternatives to improve their sustainability. Furthermore, it offers the possibility to estimate the economic losses associated with the occurrence of certain natural disasters due to climate change, making it an efficient decision-making tool. The interruption of each functional service can be reduced to a greater extent with the help of security policies by optimally building resilience. This type of study makes it possible to design strategies to deal with events in advance and to determine the costs of damage repair when overestimating the analyzed system. In this way, the proposed approach allows maximizing the security of the resources that make up the water-energy-food nexus. | 7,386.4 | 2021-09-15T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
The system optimization perspective for multiproduct supply chain network
In this paper, a multiproduct supply chain network model is developed from a system optimization perspective. Each kind of product has an individual cost function and, at the same time, contributes to its own and the other products' cost functions in an individual way. The well-known equilibrium algorithm is extended to find the system optimization pattern for such a multiproduct supply chain network.
Introduction
Today, supply chains are more extended and complex than ever before. At the same time, the current competitive economic environment requires that firms operate efficiently, which has spurred interest among researchers as well as practitioners in determining how to utilize supply chains more effectively and efficiently.
Furthermore, although there are numerous articles discussing multi-echelon supply chains, the majority deal with a homogeneous product, whereas, in applications, one deals with multiproduct supply chains. Therefore, the homogeneity restriction is relaxed and several products are handled in the same supply chain network. In this paper, a system optimization perspective is utilized, which minimizes the total cost in the system. Thus, we extend the equilibrium algorithm to construct the system-optimizing flow pattern [3].
Note that Min and Zhou [5] provided a synopsis of supply chain modeling and the importance of planning, designing, and controlling the supply chain as a whole. Nagurney [6], subsequently, proved that supply chain network equilibrium problems, in which there is cooperation between tiers but competition among decision-makers within a tier, can be reformulated and solved as transportation network equilibrium problems. Cheng and Wu [2] proposed a multiproduct, multicriterion supply-demand network equilibrium model. Davis and Wilson [4], in turn, studied differentiated product competition in an equilibrium framework.
Supply Chain Network Structure
Assume that the multiproduct supply chain network involves a firm A, as depicted in figure 1. Let G = [N, L] denote the graph consisting of nodes [N] and directed links [L]. Firm A is involved in the production, storage, and distribution of J products; a typical product is denoted by the superscript j. Assume that firm A has n_M manufacturing facilities M. A path consists of a sequence of links originating at node A and denotes supply chain activities comprising manufacturing, storage, and distribution of the products to the retail nodes. Take node A to be the origin, let R_k, k = 1, ..., n_R, be the destinations, and let every origin-destination (O/D) pair be denoted by w. Let x_p^j denote the nonnegative flow of product j on path p, let f_a^j denote the flow of product j on link a, and let P_w denote the set of paths connecting the origin/destination pair w. Also, let P denote the set of all paths in the network and W the set of all O/D pairs of nodes. The path flows are grouped into the vector x and the link flows into the vector f; this notation is used throughout. The links from the top-tiered node A to the manufacturing nodes M_1, ..., M_{n_M} in figure 1 represent the manufacturing links. The links from the manufacturing nodes, in turn, to the distribution center nodes correspond to the shipment links. The links joining first distribution center nodes and second distribution center nodes correspond to the storage links for the products. Finally, the links joining second distribution center nodes to the retail nodes correspond to the distribution links for the products.
In this paper, the supply chain model is constructed to include several products, and the flow of each product affects the other products, so that each product has an individual cost function and, at the same time, contributes to its own and the other products' cost functions in a particular way. An extended equilibrium algorithm is developed to construct an optimal flow pattern in which the total cost in the network is minimized. The modified equilibrium algorithm is defined as a composition of operators for several products in the same network, in contrast with prior methods in which the multiproduct supply chain network was first converted into a single-product network. Therefore, this method needs fewer iterations to converge in comparison with other methods, as discussed in [8].
Let c_a^j(f_a^1, ..., f_a^J) denote the cost of shipping one unit of product j, j = 1, ..., J, on link a, which is a function of the other products' flows on the same link. That is,

c_a^j = c_a^j(f_a^1, ..., f_a^J), j = 1, ..., J, ∀a ∈ L. (1)

In other words, C ≡ {c_a : a ∈ L}, with c_a ≡ (c_a^1, ..., c_a^J). The total cost of shipping f_a^j units of product j on link a, denoted by ĉ_a^j(f_a^1, ..., f_a^J), is defined as follows:

ĉ_a^j(f_a^1, ..., f_a^J) = c_a^j(f_a^1, ..., f_a^J) · f_a^j. (2)

That is, the total cost on a link a is equal to the link cost on the link times the flow on the link. The total cost on a path p, Ĉ_p, is given by the sum of the product costs on the links that comprise the path, that is,

Ĉ_p = Σ_{a∈L} Σ_{j=1}^J ĉ_a^j(f_a^1, ..., f_a^J) δ_ap, (3)

where δ_ap = 1 if link a is contained in path p and δ_ap = 0 otherwise.
In the system-optimized problem, the total cost in the network is minimized, where the total cost in the network is given by

TC(f) = Σ_{a∈L} Σ_{j=1}^J ĉ_a^j(f_a^1, ..., f_a^J). (4)

Perhaps the simplest nontrivial example of a cost function is provided by the linear model

c_a^j(f_a^1, ..., f_a^J) = Σ_{l=1}^J g_a^{jl} f_a^l + h_a^j, (5)

and accordingly the total cost functions are nonlinear (quadratic), given by

ĉ_a^j(f_a^1, ..., f_a^J) = (Σ_{l=1}^J g_a^{jl} f_a^l + h_a^j) f_a^j, (6)

for all a ∈ L and j = 1, ..., J, where g_a^{jl} and h_a^j are given constants.
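As an illustration of these definitions, the following Python sketch (hypothetical link data, not the paper's example network) evaluates the quadratic link cost model of equations (5)-(6) and the total network cost TC(f) of equation (4).

```python
import numpy as np

def total_network_cost(f, g, h):
    """Total cost TC(f) for the quadratic model.

    f : array of shape (A, J), link flows f_a^j
    g : array of shape (A, J, J), interaction coefficients g_a^{jl}
    h : array of shape (A, J), fixed unit-cost terms h_a^j
    """
    # Unit shipment cost c_a^j = sum_l g_a^{jl} f_a^l + h_a^j   (eq. 5)
    c = np.einsum("ajl,al->aj", g, f) + h
    # Total link cost  c_hat_a^j = c_a^j * f_a^j                (eq. 2/6)
    c_hat = c * f
    # Total network cost TC(f) = sum over links and products    (eq. 4)
    return float(c_hat.sum())

# Small hypothetical instance: 3 links, 2 products.
rng = np.random.default_rng(0)
f = rng.uniform(0, 10, size=(3, 2))
g = rng.uniform(0.1, 0.5, size=(3, 2, 2))
h = rng.uniform(1.0, 3.0, size=(3, 2))
print(total_network_cost(f, g, h))
```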
The following conservation of flow equations must hold for firm A, for each product j and each retail outlet R_k:

Σ_{p∈P_{w_k}} x_p^j = D_{w_k}^j, j = 1, ..., J, k = 1, ..., n_R. (7)

That is, the demand for each product must be satisfied at each retail outlet.
A flow pattern x with the demand D is called feasible if equation (7) is satisfied for every w ∈ W.
Every flow pattern x generates a load pattern f, and x is called compatible with f. A load pattern f is called feasible if there exists at least one feasible flow pattern x compatible with f, and the following flow equation holds:

f_a^j = Σ_{p∈P} x_p^j δ_ap, j = 1, ..., J, ∀a ∈ L, (8)

where δ_ap = 1 if link a is contained in path p and δ_ap = 0 otherwise. Expression (8) states that the flow on a link a is equal to the sum of the flows on all paths p containing link a.
The path flows must be nonnegative, that is, x_p^j ≥ 0 for j = 1, ..., J and every p ∈ P. (9) Assume that the total cost function for each product on each link is convex, continuously differentiable, and has bounded third-order partial derivatives. The multiproduct supply chain cost minimization problem can then be formulated jointly as follows:

Minimize TC(f) = Σ_{a∈L} Σ_{j=1}^J ĉ_a^j(f_a^1, ..., f_a^J) (10)

Subject to:
Σ_{p∈P_{w_k}} x_p^j = D_{w_k}^j, j = 1, ..., J, k = 1, ..., n_R,
Σ_{p∈P} x_p^j δ_ap = f_a^j, j = 1, ..., J, ∀a ∈ L,
x_p^j ≥ 0, j = 1, ..., J, ∀p ∈ P.

Observe that this problem is a system optimization problem.
The triple T = (G, D, C) will be called a supply chain network, where G is a directed network, D is a demand vector and C is a cost vector as defined before.
Definition 1. For a given multiproduct supply chain network T = (G, D, C), a feasible flow pattern x that minimizes the total cost function TC(f) in (10) is called a system-optimizing flow pattern.
Theorem 1. For a given multiproduct supply chain network T = (G, D, C), suppose that in (10) TC(f) is strictly convex and its feasible set is convex. Then there is a unique system-optimizing flow pattern f̄ such that TC(f̄) is the minimum of TC.
Theorem 1 implies that every feasible flow pattern x compatible with f̄ is a system-optimizing flow pattern. Thus, a system-optimizing flow pattern always exists and, in particular, is unique if and only if there exists a unique feasible flow pattern x compatible with f̄.
In addition to the above existence and uniqueness theorem, the assumption of convexity of TC(f) implies that the system-optimizing flow pattern satisfies the Karush-Kuhn-Tucker (KKT) conditions [1].
Theorem 2. For a given T = (G, D, C), x is a system-optimizing flow pattern if and only if it enjoys the following property. Let w ∈ W be connected by the paths p_1, ..., p_m; then, for each product, these paths can be numbered in order of nondecreasing total marginal cost, with all paths carrying positive flow appearing first. Therefore, all used paths have equal and minimal total marginal costs, and unused paths have total marginal costs higher than (or equal to) those of the used paths.
In fact, a system-optimizing solution corresponds to Wardrop's second principle [7] and is one that minimizes the total cost and all utilized paths connecting each O/D pair have equal and minimal marginal total costs.
Algorithm
In this section, an extended variant of the well-known equilibrium algorithm is developed to find the system optimization pattern for a multiproduct supply chain network. The algorithm constructs a system-optimizing flow pattern by iteration, i.e., starting from an arbitrary initial feasible flow pattern x^0, it generates a sequence {x^n} of feasible flow patterns converging to the set of system-optimizing flow patterns. The passage from x^{n−1} to x^n is attained by applying an operator E, i.e., x^n = Ex^{n−1}. Once E has been defined, the description of the algorithm is complete.
Let Z[T] stand for the set of all feasible flow patterns of the supply chain network T. The operator E is defined as the composition of operators E = E_{w(m)} ∘ ... ∘ E_{w(1)}, where w(1), ..., w(m) is an arbitrary ordering of the set W. In turn, E_{w(l)} is defined as the composition of operators E_w = E_w^J ∘ ... ∘ E_w^1, where E_w^j sends a feasible flow pattern x to another feasible flow pattern x̄ = E_w^j x, which is constructed by the following procedure.
Among the elements of P_w, determine the path q with minimum and the path r (carrying positive flow) with maximum total marginal cost for product j, where Ĉ'^j_p(f) denotes the total marginal cost on path p for product j, and then set

x̄_q^j = x_q^j + δ, x̄_r^j = x_r^j − δ, with all other path flows unchanged,

where δ (0 ≤ δ ≤ x_r^j) is selected so that the total cost TC(f̄) is minimized over the class of admissible load patterns f̄ induced by the class of flow patterns x̄ given by (8). In the case of the quadratic model, δ can be determined explicitly through a closed-form formula. In fact, the equilibrium algorithm conveys flow from the path with positive flow and maximum total marginal cost to the path with minimum total marginal cost until the flow in the network is equilibrated. Theorem 3. Let f̄ be the (unique) system-optimizing load pattern. For any x^0 ∈ Z, let x^n ≡ E^n x^0, where f^n denotes the load pattern induced by x^n. Then TC(f^n) → TC(f̄) as n → ∞. Proof. The proof is similar to the proof of Theorem 3.1 in Dafermos and Sparrow [3].
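A minimal sketch of one equilibration step is given below (Python, hypothetical path/link data). It shifts flow for a single product and O/D pair from the used path with the highest total marginal cost to the path with the lowest one, choosing the shift by a simple line search rather than the paper's closed-form expression.

```python
import numpy as np

def total_cost(path_flows, delta_ap, g, h):
    """TC(f) under a diagonal quadratic model; path_flows has shape (P,), one product."""
    f = delta_ap.T @ path_flows            # link loads f_a = sum_p x_p * delta_ap
    c = g * f + h                          # unit link costs
    return float(np.sum(c * f))

def marginal_path_costs(path_flows, delta_ap, g, h):
    f = delta_ap.T @ path_flows
    marginal_link = 2.0 * g * f + h        # d(c_a * f_a)/d f_a for the quadratic model
    return delta_ap @ marginal_link        # total marginal cost per path

def equilibrate_once(path_flows, delta_ap, g, h, steps=200):
    mc = marginal_path_costs(path_flows, delta_ap, g, h)
    q = int(np.argmin(mc))                             # cheapest path
    used = np.where(path_flows > 0)[0]
    r = int(used[np.argmax(mc[used])])                 # most expensive used path
    best = path_flows.copy()
    best_cost = total_cost(path_flows, delta_ap, g, h)
    for delta in np.linspace(0.0, path_flows[r], steps):
        trial = path_flows.copy()
        trial[q] += delta
        trial[r] -= delta
        cost = total_cost(trial, delta_ap, g, h)
        if cost < best_cost:
            best, best_cost = trial, cost
    return best

# Hypothetical single-product example: 2 paths over 3 links.
delta_ap = np.array([[1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])    # delta_ap[p, a]
g = np.array([0.5, 0.2, 0.1])
h = np.array([2.0, 1.0, 3.0])
x0 = np.array([5.0, 5.0])                  # demand of 10 split equally
print(equilibrate_once(x0, delta_ap, g, h))
```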
Numerical Example
In this section, a numerical example is presented to demonstrate the algorithm for a simple two-product supply chain network, so that the reader can become familiar with the realization of the operators E_w^j, E_w, and E. In practice, it is important to decide for which n the iterate x^n is sufficiently close to a system-optimizing flow pattern in order to stop the algorithm.
Characteristics of the Network
Set of nodes: N (as depicted in figure 1).
Set of links: L = {1, 2, 3, 4, 5, 6, 7}, as defined in Table 1.
Set of admissible paths and set of paths which connect w_1: P_{w_1} = {p_1, p_3}.
Set of paths which connect w_2: P_{w_2} = {p_2, p_4}.
Cost structure: suppose that the quadratic model is applied, i.e., the total shipment cost for a typical product j on a link a of the network is of the form ĉ_a^j = (Σ_l g_a^{jl} f_a^l + h_a^j) f_a^j, where g_a^{jl} and h_a^j are given constants. Also, the set of demands is given by D. The marginal costs corresponding to each typical product along the paths of the network are evaluated by using equation (20), and then the application of the algorithm can be performed. First, an initial feasible flow pattern x^0 is selected so that it equally distributes the demands among the available paths, where x_i^j is an abbreviation of x_{p_i}^j, i = 1, 2, 3, 4. The resulting feasible flow pattern is
Now, by applying the operators E_w^j, the feasible flow pattern can be improved until a system-optimizing flow pattern is obtained.
The paths with minimum and maximum total marginal costs in the network, the total marginal costs for these paths, and the δ values in each iteration are reported in Table 2.
Tables 3 and 4 contain the link and path flow patterns obtained when E_{w_1}^1 through E_{w_2}^2 are applied, so that the resulting optimal flow patterns in Table 5 are the system-optimizing flow patterns for this network.
Therefore, the minimum total cost in the supply chain network for the optimal flow pattern is obtained as follows:
Conclusion
In this paper, a multiproduct supply chain network model is constructed utilizing a system-optimization perspective, and the total cost in the network is minimized. Therefore, a system-optimizing pattern is necessary in order to optimize the network. Since several products are dealt with, the modified equilibrium algorithm is defined as a composition of operators. In fact, the system-optimizing flow pattern can be determined with the extended equilibrium algorithm, so that the total cost for sending products through the supply chain network is minimized.
Table 1 .
Definition of links and associated personal link cost functions
Table 2.
The paths with minimum and maximum total marginal costs in the network and the δ values in each iteration by applying E_{w_1}^1 through E_{w_2}^2
Table 3 .
The resulting path flow patterns by applying E_{w_1}^1 through E_{w_2}^2
Table 4 .
The resulting link flow patterns by applying E_{w_1}^1 through E_{w_2}^2
Table 5 .
The resulting optimal flow patterns | 3,288.6 | 2013-05-29T00:00:00.000 | [
"Business",
"Engineering"
] |
Quantitative transportation assessment in simulated curved canals after large apical preparations
Aim: To evaluate the ability of rotary (ProTaper Universal [PTU] and ProTaper Next [PTN]), reciprocating (Reciproc [R] and WaveOne [WO]) and adaptive (Twisted File Adaptive [TFA]) systems in maintaining the original canal profile in straight and curved parts after apical preparations up to size 40. Methods: Resin blocks with simulated curved canals were randomly assigned to five groups: PTU, PTN, R, WO and TFA. Images were captured from each block before and after canal preparation (n=10). Assessment of canal transportation was obtained for the straight and curved parts of the canal. ANOVA followed by Tukey’s test was used (α = 5%). Results: Transportation values were increased at the curved part (P = .00). For both canal levels, TFA system induced the lowest mean of canal transportation followed by PTN, R, WO and PTU systems. At the straight portion, transportation for R and TFA systems were similar (P > .05), and these values were significantly lower than for WO, PTN and PTU (P = .00). At the curved portion, TFA resulted in less canal transportation, followed by PTN, R, WO and PTU systems (P = .00). Conclusions: TFA system produced less canal transportation than other systems tested during large apical preparations.
Introduction
Root canal shaping is considered one of the most challenging tasks during endodontic treatment.Although several techniques were developed to minimize errors arising from root canal instrumentation 1 , accidents such as zips, ledges, root perforation and canal/apex transportation can occur, especially in narrow and curved canals 2,3 .Overall, nickel-titanium (NiTi) instruments have improved canal preparation procedures and reduced the odds of iatrogenic defects 4,5 .Briefly, NiTi files have improved mechanical canal preparation by offering better centering ability, less extrusion of debris and a reduced learning curve for the clinician 4,5 .Further developments of novel NiTibased root canal preparation systems have been primarily centered on modifications in instruments design, alloy and shaping movements.
Canal preparation with reciprocating movement has been shown to reduce file fracture [6][7][8]. The reciprocation-based systems available on the market [Reciproc (R) (VDW, Munich, Germany) and WaveOne (WO) (Dentsply Maillefer, Ballaigues, Switzerland)] allow single-file preparation of the entire root canal, thus requiring less time when compared to multi-file rotary systems. These files are made of a new NiTi alloy (M-wire) that, in conjunction with the reciprocating kinematics, provides an increase in flexibility and an improved resistance to cyclic fatigue 8.
Recently, the Twisted File Adaptive system (TFA) (SybronEndo; Orange, CA, USA) has been introduced onto the market.In theory, this system claims to maximize the advantages of using the reciprocation movement while downgrading possible disadvantages associated with this kinematic.The TFA system uses a patented unique motion technology, which automatically adapts the movement according to the instrumentation stress input to the file.According to the manufacturer, when stresses are imposed to the TFA instrument inside the canal, the motor performs a conventional clockwise movement, allowing better efficiency and removal of debris.In contrast, during increased torsional stress, the movement automatically changes into a reciprocation mode.Furthermore, TFA files have three unique design features: a special surface conditioning, an R-phase heat treatment and a twisting of the metal 9,10 .
Recent scientific evidence shows that larger apical canal preparations promote more effective irrigation into the apical area, improved infection control and better quality of the root fillings [11][12][13] .However, limited data regarding canal transportation after larger apical preparations using these new NiTi systems are available.Thus, the aim of this study was to assess the ability of rotary [ProTaper Universal (PTU) (Dentsply Maillefer) and ProTaper Next (PTN) (Dentsply Maillefer)], reciprocating (R and WO) and adaptive (TFA) systems in maintaining the original profile in straight and curved parts of the canal after apical preparations up to size 40.The null hypothesis tested was that there would be no differences in canal transportation values among the tested systems.
Digital image acquisition
A total of sixty Endo Training ISO 15 simulated curved canals in clear resin blocks (Dentsply Maillefer) with 2% taper, 70° angle of curvature, 10-mm radius of curvature and 17-mm length were assigned to five experimental groups and one control group (n = 10) according to the system used for canal preparation: PTU, PTN, R, WO and TFA.A circular base with a rectangular slot matching the resin block dimensions was inserted into the microscope base used to record the digital images (1005t Opticam stereomicroscope; Opticam, São Paulo, Brazil).After that, each specimen was positioned again in the slot, and color images were obtained and saved in TIFF format using a dedicated digital camera (CMOS 10 megapixels; Opticam).After the preparation procedures, new images were taken from each block following the same described protocol.In order to check the accuracy of the repositioning method, ten resin blocks were used as a control group where no canal preparation was performed.In this group, two stereoscopic images of each block were taken after consecutively inserting and removing each specimen from the silicon slot 14 .
R. R40 (40/0.06) instruments were used with the pre-set program (RECIPROC ALL) powered by a torque-controlled motor (Silver Reciproc). The instrument was gradually advanced into the root canal using a pecking motion with a 3 mm amplitude limit. After 3 complete pecking movements, the instrument was removed from the simulated canal and cleaned using a sponge.
WO. WO Large (40/0.08) files were used similarly to the R group, under the WAVEONE ALL pre-set program.
A single experienced operator performed all preparation procedures, and only new instruments were used.Apical patency was established with a 10 K-file (Dentsply Maillefer) just beyond the WL between each preparation step.Canals were irrigated with 1.0 mL sterile water using 30G Max-i-Probe needle (Dentsply Rinn; Elgin, IL, USA) placed to a depth immediately short of binding.After the completion of each preparation, the canal was irrigated with 1.0 mL sterile water and post preparation images were acquired, as described previously.
Image processing and analysis
Filtering, registration, segmentation and extraction of attributes from the acquired images were performed using an open source software (FIJI) and its associated plugins 15. Image processing and analysis were based on a previously published methodology 14. In short, the images were first converted to 8-bit grayscale. Then, each pair of images (before and after canal preparation) was registered using the "Rigid Registration" plugin. An iterative polygon tracing tool was used as a threshold method to segment each canal (sound and instrumented) from the background. Canal boundaries were visually determined, aided by an automatic edge-segmentation algorithm. After tracing, a simple binarization scheme (0 for background, 255 for the defined polygon) was applied and, after that, a skeletonization algorithm was applied to the segmented images. This algorithm finds the centerlines (skeleton) in segmented images by applying binary thinning procedures (symmetrical erosion) 16. The distance (in mm) between each XY coordinate in the sound and in the instrumented skeleton images was calculated using the following formula:

d = sqrt((x_b − x_i)² + (y_b − y_i)²),

where x_b and y_b are the coordinates for the sound canal and x_i and y_i are the coordinates for the instrumented canal. Figure 1 shows images taken before and after preparation and the corresponding centerlines obtained after the analysis.
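A small Python sketch of this distance computation is shown below (hypothetical coordinate arrays; the actual analysis was performed in FIJI). It pairs corresponding centerline points and converts pixel distances to millimeters.

```python
import numpy as np

def transportation_distances(sound_xy, instr_xy, mm_per_pixel):
    """Euclidean distance between paired centerline points.

    sound_xy, instr_xy : arrays of shape (N, 2) with matching (x, y) coordinates
                         for the sound and instrumented canal skeletons
    mm_per_pixel       : calibration factor from the microscope magnification scale
    """
    diff = sound_xy - instr_xy
    d_pixels = np.sqrt((diff ** 2).sum(axis=1))   # d = sqrt((x_b - x_i)^2 + (y_b - y_i)^2)
    return d_pixels * mm_per_pixel

# Hypothetical example: 5 paired skeleton points, 0.02 mm per pixel.
sound = np.array([[10, 5], [11, 7], [12, 9], [13, 11], [14, 13]], dtype=float)
instr = np.array([[10, 5], [12, 7], [13, 10], [15, 12], [16, 15]], dtype=float)
print(transportation_distances(sound, instr, mm_per_pixel=0.02))
```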
The transportation measurements were converted from pixels to millimeters (mm) with the aid of a microscope magnification scale.After that, the transportation values were quantified for the complete canal or for two independent regions (straight and curved parts), as depicted in Figure 2.
Statistical analysis
Quantification of deviation by pixels resulted in a great number of data points (straight canal part = 21,960 and curved canal part = 33,600).In this study, each pixel was considered as a unit for statistical analysis.Considering the data size, a bell-shaped distribution has been assumed, and a Univariate analysis of variance (two-way) procedure, with a significance level of α = 5%, has been selected considering root canal portion and instrumentation systems as independent variables and canal transportation (in mm) as the dependent.Tukey Honestly Significant Difference test was used for pair-wise comparisons.
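For readers wishing to reproduce this type of analysis, the sketch below (Python with pandas/statsmodels, on hypothetical per-pixel data rather than the study's dataset) runs a two-way ANOVA with canal portion and instrumentation system as factors, followed by a Tukey HSD post-hoc test.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
systems = ["PTU", "PTN", "R", "WO", "TFA"]
portions = ["straight", "curved"]

# Hypothetical transportation values (mm), one row per analyzed pixel.
rows = []
for s in systems:
    for p in portions:
        base = 0.10 + 0.05 * systems.index(s) + (0.05 if p == "curved" else 0.0)
        for v in rng.normal(base, 0.02, size=50):
            rows.append({"system": s, "portion": p, "transport": max(v, 0.0)})
df = pd.DataFrame(rows)

# Two-way ANOVA: transportation ~ system * canal portion.
model = ols("transport ~ C(system) * C(portion)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD pairwise comparisons among systems.
print(pairwise_tukeyhsd(df["transport"], df["system"], alpha=0.05))
```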
Results
The control group showed no canal transportation, confirming the consistency and the reliability of the current methodology. As seen in Table 1, canal transportation was significantly influenced by the different instrumentation systems (P = .00). Compared with the straight part of the root canal, transportation values were increased at the curved canal parts (P = .00) for all instrumentation systems. When the whole canal extension was considered, the TFA system induced the lowest mean canal transportation, followed by the PTN, R, WO and PTU systems. However, a significant interaction between canal part and instrumentation system was found. At the straight portion, the R and TFA systems produced similar transportation (P > .05), which was significantly lower than that of the WO, PTN and PTU systems (P = .00); at the curved part, the TFA system resulted in the lowest canal transportation, followed by the PTN, R, WO and PTU systems (P = .00). Figure 3 illustrates the transportation values for each system in each part of the simulated canal.
Discussion
Biomechanical preparation during canal treatment aims to prevent or eliminate apical periodontitis, contributing to higher endodontic predictability 17. Recent scientific evidence shows that larger apical preparations present several advantages, such as increasing the irrigant solution volume at the apical third of the root canal system 13,18, more debris elimination and a reduction of non-instrumented areas in the root canals 19,20. Moreover, the increase in apical diameter is related to a reduction in bacterial population and improved canal filling procedures [11][12][13]. However, it is certainly not an easy clinical task to achieve a larger apical diameter, especially in curved, narrow and long canals. One major point of concern is associated with the higher incidence of mishaps such as canal transportation, which can occur during preparation of curved canals 2,3. Thus, it is relevant to assess the efficacy of root canal preparation instruments. Therefore, the current study compared the ability of the PTU, PTN, R, WO and TFA systems to maintain the original canal anatomy after large apical preparations.
The standardization of the experimental design is important during the evaluation of the shaping ability of different NiTi systems.However, due to the great number of confounding variables present in the experimental design of the commercially available instruments such as the manufacturing process, the number of files and kinematics, it is not always possible to isolate the influence of each variable on the obtained results.
Overall, the findings of the present study showed that the TFA system presented lower canal transportation than the other systems tested. The null hypothesis was then rejected. Previous studies have already shown that the TFA system induced lower canal transportation and produced a better centering ability when compared to rotary 14 and reciprocating systems 10,21. This result may be explained by the unique design features of the TFA system (special surface conditioning, R-phase heat treatment and twisting of the metal) 9,10 associated with its lower taper (0.04). Regarding the adaptive motion, Silva et al. 14 (2015) evaluated canal transportation using the Twisted File system both in adaptive and in rotary motions and concluded that the latter produced overall lower canal transportation 10. Therefore, this new kinematics cannot be considered the sole factor influencing the good results of the TFA system herein. On the other hand, the PTU system exhibited the worst results, showing higher canal transportation, which is in agreement with previous studies 5,14. Some aspects may provide a rationale to support these results. First, the PTU system used seven instruments to prepare the canals. In addition, it is conceived that the PTU system has a tendency to straighten curved canals, causing transportation toward the furcation at the middle-coronal thirds and toward the outer aspect of the curvature at the apical third 22.
Although the R and WO systems have some similarities, such as the reciprocating motion and the M-wire NiTi alloy, the R system showed less transportation in the straight portion and when the whole canal was considered. These results may be explained by differences in the cross-sectional design: the larger cross-sectional area of the WO system influences the bending resistance of the instrument, making it less flexible and increasing the straightening tendency in curved canals 23. The results obtained by the PTN system may be explained by the use of progressive tapers on a single file and the unique offset mass of rotation. This design serves to minimize the contact between the file and dentine, decreasing dangerous taper lock, screw effect and root canal transportation.
Simulated curved canals in clear resin blocks were used in the current study.This method has been previously validated in order to evaluate the centering ability and canal transportation provided by endodontic instruments 3,14 .In addition, it is particularly attractive due to the possibility of standardizing the full canal anatomy.Nevertheless, the use of resin blocks has a few disadvantages such as micro-hardness differences between the resin material and the root dentin, and the possible side effects created by heat generated during preparation procedures, which may soften the resin, leading to binding of the cutting blades and enhancing the chance of instrument fracture 24 .Thus, care should be taken before extrapolating these results directly to a clinical situation.
The present investigation used a recently described methodology to study transportation in simulated curved canals by comparing images before and after canal preparation 14. This procedure has the potential to reduce the bias related to subjective operator-based image superimposition schemes and evaluation of canal transportation, since it is almost independent of user input and also provides information from the whole canal length, not only from selected slices. Although the bi-dimensional approach can be considered a limitation of this method, it is of utmost importance to state that current three-dimensional techniques used to assess canal transportation have not yet provided fully quantitative volumetric data 10,25, which again results in the evaluation of limited slices and manual selection of gravity center points.
Conclusions
Under the conditions of this study, it can be concluded that TFA system produced less canal transportation than the other systems tested during large apical preparations.
Mean (mm) ± Standard Deviation.Different lowercase letters indicate a significant difference (P < 0.05) in columns confirmed by Tukey Honestly Significant Difference for both straight and curved canal portions and for all canal extension.
Fig. 1 - Image of a simulated canal block taken before (A) and after (B) preparation and corresponding centerlines obtained after the analysis. (C) Superposition of baseline and final centerlines showing that transportation occurred after instrumentation.
Fig. 2 -
Fig. 3 -
Fig. 3 -Transportation in the straight and curved portions for all instrumentation systems.
Table 1 -
Canal transportation after large apical preparations using different instrumentation systems. | 3,378.2 | 2017-08-11T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Development of Dosimetric Verification System for Patient-Specific Quality Assurance of High-Dose-Rate Brachytherapy
Purpose: The aim of this study was to develop a dosimetric verification system (DVS) using a solid phantom for patient-specific quality assurance (QA) of high-dose-rate brachytherapy (HDR-BT). Methods: The proposed DVS consists of three parts: dose measurement, dose calculation, and analysis. All the dose measurements were performed using EBT3 film and a solid phantom. The solid phantom made of acrylonitrile butadiene styrene (ABS, density = 1.04 g/cm3) was used to measure the dose distribution. To improve the accuracy of dose calculation by using the solid phantom, a conversion factor [CF(r)] according to the radial distance between the water and the solid phantom material was determined by Monte Carlo simulations. In addition, an independent dose calculation program (IDCP) was developed by applying the obtained CF(r). To validate the DVS, dosimetric verification was performed using gamma analysis with 3% dose difference and 3 mm distance-to-agreement criterion for three simulated cases: single dwell position, elliptical dose distribution, and concave elliptical dose distribution. In addition, the possibility of applying the DVS in the high-dose range (up to 15 Gy) was evaluated. Results: The CF(r) between the ABS and water phantom was 0.88 at 0.5 cm. The factor gradually increased with increasing radial distance and converged to 1.08 at 6.0 cm. The point doses 1 cm below the source were 400 cGy in the treatment planning system (TPS), 373.73 cGy in IDCP, and 370.48 cGy in film measurement. The gamma passing rates of dose distributions obtained from TPS and IDCP compared with the dose distribution measured by the film for the simulated cases were 99.41 and 100% for the single dwell position, 96.80 and 100% for the elliptical dose distribution, 88.91 and 99.70% for the concave elliptical dose distribution, respectively. For the high-dose range, the gamma passing rates in the dose distributions between the DVS and measurements were above 98% and higher than those between TPS and measurements. Conclusion: The proposed DVS is applicable for dosimetric verification of HDR-BT, as confirmed through simulated cases for various doses.
INTRODUCTION
High dose-rate brachytherapy (HDR-BT) can be used to effectively treat cancer by delivering high doses of radiation locally and improving both target coverage and organ sparing. Its effectiveness is remarkably high for large clinical targets with complex topologies (1). However, the risks during HDR-BT are higher than those during external beam radiotherapy when a treatment accident occurs and the causing error is not immediately identified. Errors in HDR-BT have been reported in the previous studies (2)(3)(4)(5), and they are mainly caused by inappropriate radiation source selection, source strength units, entry into a treatment planning system (TPS), and source dwell position.
To prevent such errors, recommendations and guidelines for quality assurance (QA) of HDR-BT have been proposed (6)(7)(8)(9)(10). Most existing QA procedures are performed as basic tests of specific dosimetric parameters and include measuring the source activity, verifying the source position, and checking the timer accuracy and linearity using well-type chambers, special rulers, and established techniques for teletherapy sources. Although existing QA procedures are suitable for conventional HDR-BT, they may fail for the latest HDR-BT interventions. This is because the accuracy assessment paradigm for HDR-BT has shifted from determining the conventional source dwell pattern using 2D imaging to patient-specific 3D image-based optimization and inverse planning. Therefore, QA of HDR-BT based on dosimetric verification has been addressed recently (11)(12)(13)(14).
Qi et al. (11) performed treatment plan verification for HDR-BT using a specific water phantom and metal-oxidesemiconductor field-effect transistor. Palmer et al. (12) proposed dose distribution verification using radiochromic film dosimetry in clinical brachytherapy and found that the EBT3 GAFCHROMIC TM film can perform accurate dose verification in HDR-BT owing to its excellent spatial resolution, tissue equivalence, and self-development properties. In addition, they compared planned with measured dose distributions using an in-house water phantom. However, these methods can only be used to measure dose in a specific water phantom. To overcome this limitation, various methods have been proposed to replace water phantoms with solid phantoms by adopting water equivalent materials (13,14). Meigooni et al. (13) obtained the conversion factor (CF) for media between water and various materials, such as solid water, polystyrene, and acrylic, using Monte Carlo (MC) simulations. Aldelaijan et al. (14) used the CF to perform dosimetric QA of HDR-BT with the Solid Water TM phantom replacing a water phantom. However, the CF has not been applied to compare the planned and measured dose distributions but only to evaluate point doses due to limitations of commercial TPSs. In fact, dosimetric verification of HDR-BT using a commercially available TPS generally provides dose distributions by using embedded algorithms based on the AAPM (American Association of Physicists in Medicine) Task Group 43U1 report (15). These algorithms assume that the overall dose calculation is done in water, and access to the algorithm is restricted. Therefore, several previous studies have been either limited to specific water phantoms to measure and verify the dose distribution in the treatment plan or focused on evaluation of point doses by applying the CF (11)(12)(13)(14).
In this study, a dosimetric verification system (DVS) intended for the solid phantom was developed for HDR-BT. First, we fabricated a solid phantom that can be used for film dosimetry and calculate CF(r), a CF value that is a function of the radial distance between water and the phantom material. Second, we developed an independent dose calculation program (IDCP) to apply the obtained CF(r). Third, we compared the gamma evaluation for the dose distribution calculated by the IDCP with the measured dose distribution obtained by using EBT3 film in the solid phantom. Thus, this study aimed to demonstrate the feasibility of the proposed DVS as a patient-specific QA tool for HDR-BT through various simulated cases.
MATERIALS AND METHODS
Simple Solid Phantom

Figure 1 shows the solid phantom made of acrylonitrile butadiene styrene (ABS, density: 1.04 g/cm³) used to measure the dose distribution. The solid phantom comprises a normal slab (dimensions 30 × 30 × 1 cm³) and a catheter-inserted slab (dimensions 30 × 30 × 2 cm³). The catheter-inserted slab contains a parallel hole in the center of one side, and the depth of the hole is 133 mm. The hole accommodates a 3 mm catheter, and the catheter tip is positioned at the end of the hole.
Conversion Factor Between Water and ABS
The CF(r) between water and ABS was determined as a function of radial distance r by MC simulations using the Geant4 Application for Tomographic Emission (GATE4, Version 8.1). For the MC simulations, the 192 Ir mHDR-v2 source was modeled as described by Granero et al. (16), and the calculated grid size was 1 × 1 × 1 mm 3 . The electromagnetic standard model, option3 (Emstandard_opt3), was selected from the physics engine list. The virtual phantoms with water and ABS had the same dimensions, 30 × 30 × 20 cm 3 . The ABS was composed of 45.5% carbon, 51.5% hydrogen, and 3% nitrogen (17). The CF(r) was obtained by the ratio of the dose profiles for ABS and water, ABS/water, along the vertical axis with respect to the source.
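The following short Python sketch (hypothetical radial dose profiles, not the actual MC output) illustrates how a CF(r) curve can be formed as the ratio of the ABS and water dose profiles and then interpolated at arbitrary radial distances.

```python
import numpy as np

# Hypothetical radial dose profiles from MC simulations (arbitrary units),
# sampled at the same radial distances for water and ABS.
r_cm = np.array([0.5, 1.0, 1.5, 2.1, 3.0, 4.0, 6.0, 8.0])
dose_water = np.array([400.0, 100.0, 44.0, 22.0, 10.0, 5.4, 2.1, 1.0])
dose_abs = dose_water * np.array([0.88, 0.93, 0.99, 1.00, 1.03, 1.05, 1.08, 1.04])

cf = dose_abs / dose_water          # CF(r) = D_ABS(r) / D_water(r)

def cf_at(r):
    """Interpolate CF(r) at radial distance r (cm)."""
    return np.interp(r, r_cm, cf)

print(cf_at(1.0), cf_at(2.5))
```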
Independent Dose Calculation Program (IDCP)
The proposed IDCP combines the obtained CF(r) to calculate the dose in the solid phantom. The IDCP is based on the AAPM Task Group 43U1 report and the dose calculation algorithm proposed in our previous study (18). The line source model is implemented in the IDCP, and the equation of the model is expressed as

Ḋ(r, θ) = S_K · Λ · [G(r, θ) / G(r_0, θ_0)] · g(r) · F(r, θ),

where r is the distance from the center of the source, θ is the polar angle with respect to the source longitudinal axis, r_0 and θ_0 are the reference distance (1 cm) and angle (90°), respectively, and the air-kerma strength (unit U: cGy·cm²·h⁻¹), dose-rate constant (unit: cGy·h⁻¹·U⁻¹), geometry factor, radial dose function, and anisotropy function are denoted as S_K, Λ, G(r, θ), g(r), and F(r, θ), respectively. The values of S_K and Λ are obtained from the TPS, and those of g(r) and F(r, θ) are provided by the source manufacturer. Then, CF(r) can be obtained through the MC simulations.
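A compact sketch of this line-source calculation in Python is given below; it assumes tabulated g(r) and F(r, θ) values (hypothetical placeholders here, not the mHDR-v2 consensus data) and multiplies the water dose rate by CF(r) to estimate the dose in the ABS phantom.

```python
import numpy as np

def geometry_factor(r, theta_deg, L=0.35):
    """TG-43 line-source geometry factor G_L(r, theta) for source length L (cm)."""
    theta = np.deg2rad(theta_deg)
    if np.isclose(theta, 0.0):
        return 1.0 / (r * r - L * L / 4.0)
    beta = np.arctan2(L / 2.0 + r * np.cos(theta), r * np.sin(theta)) \
         - np.arctan2(-L / 2.0 + r * np.cos(theta), r * np.sin(theta))
    return beta / (L * r * np.sin(theta))

def dose_rate_abs(r, theta_deg, S_K, Lam, g_of_r, F_of_rtheta, cf_of_r):
    """Dose rate (cGy/h) at (r, theta) in the ABS phantom.

    S_K         : air-kerma strength (U)
    Lam         : dose-rate constant (cGy/h/U)
    g_of_r      : callable, radial dose function g(r)
    F_of_rtheta : callable, anisotropy function F(r, theta)
    cf_of_r     : callable, water-to-ABS conversion factor CF(r)
    """
    G = geometry_factor(r, theta_deg)
    G0 = geometry_factor(1.0, 90.0)      # reference point (1 cm, 90 degrees)
    d_water = S_K * Lam * (G / G0) * g_of_r(r) * F_of_rtheta(r, theta_deg)
    return d_water * cf_of_r(r)

# Hypothetical, simplified lookup functions for illustration only.
print(dose_rate_abs(1.0, 90.0, S_K=40000.0, Lam=1.109,
                    g_of_r=lambda r: 1.0,
                    F_of_rtheta=lambda r, t: 1.0,
                    cf_of_r=lambda r: 0.93))
```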
Validation of the Developed Dosimetric Verification System Using Film Dosimetry
All the measurements were performed using EBT3 film (Ashland ISP Advanced Materials, NJ, USA) from a single batch. Before film dosimetry, doses of 0-19 Gy were irradiated using a 6 MV external photon beam generated by VitalBeam (Varian Medical Systems, Palo Alto, CA, USA) to calibrate the film. The netOD curve was obtained from two channels (i.e., red and green), as shown in Figure 2, because each channel has a different sensitivity depending on the dose range. Specifically, the red channel provides the optimal performance at doses below 10 Gy, whereas the green channel is adaptable to high doses above 10 Gy. The netOD curve was applied with an appropriate channel selection for the dose range in the simulated cases. The measurements for DVS validation were performed 1 cm below the source. The radiation exposure was implemented by a NUCLETRON microSelectron Afterloader (Elekta, Stockholm, Sweden) with an 192 Ir mHDR-v2 source. The measurements adhered to the procedure for handling EBT3 film recommended by the AAPM Task Group 53 report. The DVS validation was executed by dosimetric verifications for three cases. First, the single dwell position was measured to evaluate the feasibility of the CF(r) and the DVS operation. The source was at the catheter tip in the phantom, and a dose of 4 Gy was irradiated 1 cm below the source. The corresponding dose distributions and profiles according to the angle were then measured. Second, the elliptical dose distribution produced with the same dwell time at a linear dwell position of 5 mm was measured. Third, the concave elliptical dose distribution was established using different dwell times, and the corresponding dose distribution was measured. We also evaluated the applicability of the DVS for high doses by performing dosimetric verification for various high-dose ranges including 9.5, 10.75, 13.5, and 15 Gy.

FIGURE 2 | Net optical density curve for GAFCHROMIC EBT3 film calibration using red (square) and green (circle) channel.
Each dose distribution on the same measurement plane was calculated using the Oncentra Brachy software (Elekta) and the proposed IDCP. The gamma analysis developed by Low et al. (19) was used to evaluate the measured and calculated dose distributions using global normalization with a 3% dose difference and 3 mm distance to agreement (3%/3 mm criterion).

Percentage Dose and CF(r) for Water and ABS Phantom Using MC Simulation

Figure 3 shows the calculated percentage dose profiles according to the radial distances obtained from the virtual water and ABS phantoms in the MC simulations. The CF(r) obtained from the ratio between both profiles is also depicted. All profiles were normalized to the calculated dose at a radial distance of 1 cm from the virtual source for water. The percentage dose at the radial distance of 1 cm was 93.20% in ABS, being lower than that in water. In addition, the acquired profiles for water were higher than those for ABS up to a radial distance of 2.1 cm and smaller for radial distances above 2.1 cm. The percentage difference between both profiles was 47% at a radial distance of 0.5 cm and gradually decreased until the difference between the profiles became negligible. At a radial distance above 1.5 cm, the difference was below 1%.
The CF(r) was 0.88 at a radial distance of 0.5 cm and gradually increased up to a radial distance of 6 cm, reaching 1.08 [Figure 3 (bottom)]. At radial distances above 6 cm, the value slowly decreased, reaching 1.04 at 8 cm. The CF(r) was 1 at a radial distance of 2.1 cm, where the water and ABS profiles intersect. In this study, values of CF(r) were obtained only within 8 cm because the percentage dose difference between the two profiles at radial distances above 8 cm was negligible (< 1.5%).

Validation of the Developed Dosimetric Verification System

Figure 4 shows the isodose maps obtained from the film measurements, the IDCP, and the TPS at a single dwell position. The central point dose was 3.70 Gy in the film measurement, 3.73 Gy for the IDCP, and 4 Gy for the TPS. The IDCP and film measurement profiles were similar. Although the TPS profile was higher than the other profiles within 0.8 cm of the center of the isodose map, there was no considerable difference among the angular profiles.

Figure 5 shows the gamma analysis between the measured and calculated dose distributions for the three simulated cases. For the single dwell position, the gamma passing rates using the 3%/3 mm criterion were 99.41 and 100% for the TPS and DVS, respectively; near the source, the passing rate was lower for the TPS than for the DVS. For the elliptical dose distribution, the passing rates analyzed with the 3%/3 mm criterion were 96.80 and 100% for the TPS and DVS, respectively, with more gamma failures for the TPS than for the DVS near the source. For the concave elliptical dose distribution, the gamma passing rates were 88.91 and 99.70% for the TPS and DVS, respectively.

FIGURE 5 | Gamma analysis between measured and calculated dose distributions using 3%/3 mm criterion for three simulated cases: single dwell position, elliptical dose distribution, and concave elliptical dose distribution.

Figure 6 shows the gamma analysis for the concave elliptical dose distributions at high doses. The gamma passing rates using the 3%/3 mm criterion between the DVS results and the measurements were 98.12, 98.32, 99.68, and 98.36% for 9.50, 10.75, 13.50, and 15 Gy, respectively. For the same doses, the gamma passing rates between the TPS results and the measurements were 84.76, 89.71, 91.92, and 89.75%, respectively. Compared with the measured distributions, the dose distributions calculated by the DVS thus had higher gamma passing rates than those calculated by the TPS at all the high doses.
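For illustration, a brute-force two-dimensional global gamma evaluation in the spirit of Low et al. (3% dose difference, 3 mm distance to agreement) can be sketched as follows; the grid spacing, low-dose cutoff, and toy dose planes are assumptions for demonstration, not the analysis code used here.

```python
# Sketch: brute-force 2-D global gamma index in the spirit of Low et al.,
# with a 3% dose difference and 3 mm distance-to-agreement criterion.
# Grid spacing, low-dose cutoff, and the toy dose planes are assumptions.
import numpy as np

def gamma_passing_rate(measured, calculated, spacing_mm,
                       dose_crit=0.03, dist_crit_mm=3.0, low_dose_cutoff=0.10):
    ny, nx = measured.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    d_norm = dose_crit * measured.max()            # global normalisation
    gamma = np.full(measured.shape, np.nan)
    for iy in range(ny):
        for ix in range(nx):
            if measured[iy, ix] < low_dose_cutoff * measured.max():
                continue                           # exclude the low-dose region
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (calculated - measured[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(np.min(dist2 / dist_crit_mm ** 2
                                           + dose2 / d_norm ** 2))
    valid = ~np.isnan(gamma)
    return 100.0 * np.mean(gamma[valid] <= 1.0)

# Toy example: a calculated plane shifted 1 mm relative to the "measurement".
y, x = np.mgrid[-25:26, -25:26] * 1.0              # 1 mm grid, 51 x 51 points
measured = 4.0 * np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2))
calculated = 4.0 * np.exp(-((x - 1.0) ** 2 + y ** 2) / (2 * 8.0 ** 2))
print(f"gamma passing rate: {gamma_passing_rate(measured, calculated, 1.0):.1f}%")
```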
DISCUSSION
Accurate QA procedures for HDR-BT are important to increase the likelihood of desired treatment outcomes, minimize the risk of errors in clinical practice, and ensure the efficacy of clinical trials. In previous studies (11-14), dosimetric verification was performed as point-dose measurements in a dedicated water phantom or a Solid Water phantom. However, these QA procedures had limited practical applicability because a dedicated water phantom is inconvenient to use and a point-dose measurement alone is insufficient to verify a treatment plan. Therefore, the DVS, which allows straightforward QA and verifies the dose distribution as well as the point dose, was developed in this study.
The CF(r) is the most important factor enabling the DVS to perform dosimetric verification with a solid phantom. The concept of the CF has been reported in previous studies (13, 14, 20). However, because the CF was obtained only at a certain depth and was used only to verify point doses, it could not be applied to convert dose distributions in a solid phantom. In this study, we determined a modified CF as a function of radial distance, the CF(r), and applied it to the IDCP to calculate an accurate dose distribution in the solid phantom. The DVS validation demonstrated that the QA procedure using the IDCP and the ABS solid phantom can be applied to the dosimetric verification of HDR-BT, provided that the CF(r) is appropriately considered.
To investigate the feasibility of the DVS for QA of HDR-BT, we compared gamma evaluations of the dose distributions calculated by the DVS and the TPS against film measurements for the three cases over various dose ranges. For all cases, the gamma passing rates between the DVS calculations and the measurements were higher than those between the TPS calculations and the measurements; the dose distributions calculated by the DVS were therefore more consistent with the film measurements. The results also suggest that the DVS provides an effective verification method for complex dose distributions delivered at high doses. We therefore believe that the DVS can be used as a QA tool for pretreatment verification in HDR-BT.
We consider that the DVS can support both the verification of the dosimetric parameters of the source and the QA of HDR-BT. If dosimetric parameters such as the radial dose function and the anisotropy function are not accurate, the dose distribution cannot be calculated correctly; this can be verified simply by comparison with the measured dose distribution at a single dwell position. In addition, the evaluation of the elliptical dose distribution can support various QA procedures for HDR-BT, such as source position verification, timer accuracy, and linearity testing.
For pretreatment verification of HDR-BT, the DVS was derived from patient-specific QA used in external beam radiotherapy delivery techniques, such as intensity modulated radiation therapy and volumetric modulated arc therapy. The HDR-BT treatment plan was established by determining the dwell position of the source with respect to the shape of the applicator and using multiple catheters depending on the number of channels. To perform pretreatment verification in the DVS, the complex dwell position of each channel was modified to a linear dwell position that fitted the catheter hole in the solid phantom. Thus, the verification of the treatment plan for HDR-BT can be performed using the DVS and the solid phantom. However, the treatment plan converter was not applied in this study because only a simulated plan instead of a clinical plan was used. In a future study, we will perform dosimetric verification in clinical cases using the DVS to apply the treatment plan converter.
To evaluate the DVS performance, film dosimetry was used in this study. Although films are generally energy dependent, some studies have demonstrated that the EBT3 film can be used in dose measurements for HDR-BT. Parmer et al. (11) reported the successful application of the EBT3 film to dose measurements in HDR-BT. In addition, Devic et al. (21) noted that the energy response of the film does not change by more than 0.3%. Consistent with these previous studies, the DVS relies on the EBT3 film to verify treatment plans delivered at high doses as well as at doses used in actual clinical conditions.
"Physics",
"Medicine"
] |
The impact of the Xpert MTB/RIF screening among hospitalized patients with pneumonia on timely isolation of patients with pulmonary tuberculosis
In South Korea where the tuberculosis (TB) burden is intermediate, the risk of in-hospital transmission of TB remains high. We conducted a retrospective cohort study of 244 inpatients diagnosed with pulmonary TB (2015–2018) to evaluate the impact of the Xpert MTB/RIF assay (Xpert) screening on timely isolation. TB screening was performed with smear microscopy and a polymerase chain reaction test, and the Xpert was additionally used from November 2016. Among all patients with pulmonary TB, the median time-to-isolation was significantly reduced (22.6 vs. 69.7 h; p < 0.001) and segmented regression analysis adjusting for the time trend showed a reduction in time-to-isolation with the introduction of the Xpert (− 39.3 h; 95% CI − 85.6, 7.0; p = 0.096). Among 213 patients who were timely screened (≤ 72 h after admission), time-to-isolation decreased significantly (− 38.2 h; 95% CI − 70.6, − 5.8; p = 0.021) with the introduction of the Xpert, and its decreasing trend continued. The Xpert provided a shorter turnaround time (4.8 vs. 49.1 h; p < 0.001) and higher sensitivity (76.6% vs. 47.8%; p < 0.001) than smear microscopy. Thus, the Xpert can be a useful screening test for pulmonary TB in real-life hospital settings with an intermediate TB burden.
Tuberculosis (TB) is one of the most common infectious diseases worldwide. Despite the effort to control TB, the disease burden in South Korea is intermediate with the incidence of TB being 51.5/100,000 population in 2018 1 . The number of Air-borne Infection Isolation Rooms (AIIRs) is limited in the majority of Korean hospitals, and therefore, patients with respiratory symptoms are usually hospitalized in multi-bed rooms despite continuous admission of patients with pulmonary TB. If patients are not clinically suspected to have pulmonary TB, they usually remain in multi-bed rooms until they are diagnosed with pulmonary TB, increasing the risk of TB exposure in other patients and healthcare workers (HCWs) 2 . However, predicting pulmonary TB based on patients' symptoms, signs, and chest X-ray (CXR) findings is unsatisfactory 3 , and it is particularly challenging among old individuals as they often present with atypical symptoms and radiological findings 4 . Therefore, microbiological screening for pulmonary TB in patients with respiratory symptoms and abnormal CXR findings is critical for the timely identification and isolation of patients with pulmonary TB. Smear microscopy has been performed as a conventional screening modality for TB; however, its low sensitivity (48.9-67.1%) and possible false positivity in patients with non-tuberculous Mycobacterium infection are concerns 5,6 . The Xpert MTB/RIF assay (Cepheid Inc., Sunnyvale, CA, USA; Xpert), which is a rapid, automated, and cartridge-based real-time polymerase chain reaction (PCR) test, has been endorsed by the World Health Organization and is widely used for detecting Mycobacterium tuberculosis (MTB) 7 . Because the Xpert can deliver a result of MTB detection and rifampin resistance in approximately 2 h with higher sensitivity than conventional smear microscopy, its utility has been explored in various clinical settings with high or low TB burden 6,8,9 .
Results
During the study period, a total of 82,610 adult patients were admitted to the hospital, and 79,771 (96.6%) of them were examined with a CXR (Fig. 2). Among them, 3,675 (4.6%) patients with pneumonia on the admission CXR were identified, and 137 (3.7%) of them had been receiving anti-TB medication for recently diagnosed TB (Fig. 2). Among the 3,538 patients indicated for pulmonary TB screening, 244 (6.9%) were microbiologically diagnosed with pulmonary TB, after excluding 530 (15.0%) patients who were not screened during their hospitalization, 2,744 (77.4%) patients who were negative for pulmonary TB, and 20 (0.6%) patients who were pathologically diagnosed without microbiological evidence of TB. The microbiological diagnosis of TB was based on MTB culture plus PCR or Xpert (n = 174), MTB culture alone (n = 52), or either PCR alone (n = 7), Xpert alone (n = 5), or both without a positive culture (n = 6). Of these 244 patients diagnosed with pulmonary TB, 213 (87.3%) were timely screened (≤ 72 h of admission). Timely screened patients were more likely than others to be admitted to the departments of pulmonary or infectious diseases or thoracic surgery (odds ratio [...]).

Comparison of characteristics between the pre- and post-intervention periods.

The characteristics of all patients diagnosed with pulmonary TB and of timely screened patients are summarized in Table 1. Their median age was 71 years (range 19-97), and 92 (37.7%) patients were ≥ 80 years old. There were no significant differences in the distributions of sex and age, accompanying symptoms, or TB-suspected radiological abnormalities between the pre- and post-intervention periods (Table 1). The median time-to-screen was 12.3 h (interquartile range [IQR]: 4.2-31.4) and did not differ significantly between the pre- and post-intervention periods. The median turnaround time (TAT) of the Xpert was 4.8 h (IQR: 3.4-8.4), which was significantly shorter than that of the PCR (p < 0.001) and of smear microscopy (p < 0.001; Table 2). Time-to-isolation was significantly reduced during the post-intervention period (median: 22.6 h; IQR: 2.3-75.7) compared with the pre-intervention period (median: 67.7 h; IQR: 20.9-219.8; p < 0.001). Consistent with this finding, patients were more likely to be isolated within 8 h after admission during the post-intervention period than during the pre-intervention period (p < 0.001; Table 2). Among patients who were timely screened, time-to-isolation was further reduced during the post-intervention period (median: 15.1 h; IQR: 1.1-50.9; Table 2); otherwise, the results were similar to those for all patients diagnosed with pulmonary TB.

In terms of clinical outcomes, 226 (92.6%) patients received anti-TB treatment, and 25 (10.3%) patients, including seven (3.3%) who died before the diagnosis, died of pulmonary TB during their hospitalization. Patients in the post-intervention period received anti-TB treatment more frequently and earlier than those in the pre-intervention period, although the mortality attributable to TB was not significantly different between the two study periods (Table 2). In a multivariate analysis, age ≥ 80 years (OR 3.7; 95% CI 1.5-9.0; p = 0.005) and immune suppressed states (OR 3.5; 95% CI 1.4-9.0; p = 0.009) were significantly associated with in-hospital mortality attributable to TB.

[...] (Table 3; Fig. 3a). For patients who were timely screened (Group I), the time trend of decreasing time-to-isolation was significant.
During the pre-intervention period, the trend in time-to-isolation was nearly flat (1.0 h/month; 95% CI − 1.7, 3.7; p = 0.462), whereas a significant decrease in time-to-isolation (− 41.9 h; 95% CI − 82.2, − 1.7; p = 0.041) was observed at the intervention point, and a decreasing trend (− 0.6 h/month; 95% CI − 1.7, 0.5; p = 0.246) was maintained during the post-intervention period in the unadjusted segmented regression analysis. In the multivariate segmented regression analysis, a significant step change (− 38.2 h; 95% CI − 70.6, − 5.8; p = 0.021) and a decreasing trend during the post-intervention period (− 0.4 h/month; 95% CI − 1.3, 0.5; p = 0.368) were similarly observed (Table 3; Fig. 3b). In Group II (n = 191) and Group III (n = 150), the results were similar, with a larger reduction in time-to-isolation at the intervention point (Table 3; Fig. 3c,d).
Discussion
This study demonstrated that the application of the Xpert as a screening test significantly reduced time-to-isolation in patients with pulmonary TB. More patients with pulmonary TB were detected and isolated earlier after introducing the Xpert than before, because the Xpert has a shorter TAT and higher sensitivity than smear microscopy. After accounting for the time trend during the study period, the impact of the Xpert in reducing time-to-isolation remained significant. The Xpert showed higher sensitivity than smear microscopy and comparable specificity for TB diagnosis in previous studies 5,6,10,11. Furthermore, the Xpert can be performed with a short hands-on time and requires minimal training, which might allow point-of-care positioning even in settings with limited resources 7. In a developed country with a low TB incidence, the Xpert was cost-effective for reducing unnecessary isolation of TB-suspected patients 12. Its use also reduced the duration of isolation and hospitalization and the medical costs of TB-suspected patients 9,13-15. However, in countries with a high or intermediate TB incidence, early isolation of patients with TB, rather than early de-isolation of TB-suspected patients, should be emphasized for the prevention of in-hospital TB transmission. In South Korea, where the TB burden is intermediate, a continuous influx of patients with pulmonary TB into hospitals can increase the risk of in-hospital TB exposure among hospitalized patients and HCWs. Moreover, the TB incidence is highest among older patients, who often present with atypical symptoms and radiological findings, which can make early diagnosis of TB and early isolation more challenging 1,2,16. In this study, the majority of patients were not pre-emptively isolated on admission, and screening for pulmonary TB was more likely to be delayed in patients without symptoms consistent with pulmonary TB, highlighting the limitation of predicting pulmonary TB based on clinical and radiological characteristics. Further, the median age of patients was 71 years, 28.3% of them had no TB-related symptoms, and 36.1% showed no TB-suspected radiological findings. This is particularly concerning as patients are often admitted to multi-bed rooms in South Korea, which might further facilitate the spread of TB in hospitals.

Table 1. Comparison of demographics and clinical characteristics between the pre- and post-intervention periods. Data are numbers (%) or medians (interquartile range). TB tuberculosis, COPD chronic obstructive pulmonary disease, HIV human immunodeficiency virus. *Immune suppressed states included connective tissue disease, leukemia, lymphoma, and solid tumor.

Given that the Xpert provides more sensitive results with minimal technical expertise, the Xpert can be a more useful screening test for TB than smear microscopy. In addition, the strategy of screening patients with pneumonia for pulmonary TB can help timely identification of patients with pulmonary TB who are not clinically or radiologically suspected to have pulmonary TB at their initial presentation. According to the Korea Disease Control and Prevention Agency guideline, individuals who stayed with a patient with active pulmonary TB in a closed space for more than 8 consecutive hours are potentially at risk of contracting TB, and initiation of a contact investigation for these individuals is recommended 17. Although time-to-isolation was markedly reduced and the proportion of timely screened patients increased in the post-intervention period, less than half of the patients in the post-intervention period were isolated within 8 h of admission, and a number of patients were still neither timely screened nor tested with the Xpert. A recent study showed that the introduction of the Xpert alone did not significantly increase the proportion of early isolated patients after admission in those with TB 16. Therefore, multifaceted efforts, including continuous education and campaigns to improve compliance with the TB prevention strategy and to promote pre-emptive isolation of patients with suspected pulmonary TB, should be performed in parallel with the application of the Xpert.
In this study, the Xpert demonstrated a sensitivity of 76.6%. Previous clinical field studies in South Korea reported sensitivities of the Xpert of 74.1% and 79.5% 18,19, lower than the 89% reported in a meta-analysis including well-controlled clinical trials 5. In our study, less than half of the included patients were smear-positive, suggesting that more than 50% of the study patients had a low MTB burden in their respiratory specimens 4,18. Considering that the semi-quantitative results of the Xpert correlate with the MTB burden in the respiratory specimen 11,19,20, the low MTB burden of these patients might have reduced the sensitivity of the Xpert. The relatively low sensitivity of the Xpert in this study is a concern because the infectivity of TB patients with a negative Xpert result is not known. Thus, it is recommended that de-isolation of patients with negative Xpert results be decided in conjunction with the patients' clinical and radiological findings.
Consistent with a previous study 6, this study did not show a significant decrease in mortality despite earlier diagnosis and treatment of TB with the introduction of the Xpert. Old age was a significant factor for mortality attributable to TB in this study. This emphasizes that more efforts are needed for screening and early detection of pulmonary TB among the elderly population at the primary care and community level in South Korea, to prevent the elderly from presenting to a referral hospital with advanced pulmonary TB. Screening for TB or latent TB infection in the elderly admitted to long-term care facilities may be useful, and screening for latent TB infection seems more suitable in countries with a low or intermediate incidence of TB 21,22. In South Korea, annual TB screening for the elderly aged ≥ 65 years and for the homeless population was established in 2018 23. Its cost-effectiveness for reducing TB incidence among the elderly at the national, regional, and hospital levels should be determined in the future.
This study has some limitations. First, this study was conducted over a 4-year period, and thus the HCWs' increased awareness through continuing education and campaigns on early isolation of patients with TB might have contributed to the overall reduction in time-to-isolation; indeed, the proportion of pre-emptively isolated patients increased and time-to-screen was shortened during the post-intervention period. However, trends and the step change were analyzed using a segmented regression analysis to account for such changes over time, which demonstrated a significant reduction in time-to-isolation after the introduction of the Xpert and a decreasing trend thereafter. Second, an economic analysis of the Xpert for deciding isolation based on its positive result was not performed. The cost-effectiveness of the Xpert should be examined in a future study from the perspectives of reducing in-hospital TB transmission and the consequent investigation of TB contacts, as well as the clinical outcomes of patients with TB. Third, this was a single-center study, making generalization of the results to other healthcare settings challenging. However, this study may provide a strategy to limit in-hospital TB transmission that is potentially applicable in countries with an intermediate TB incidence.
In conclusion, the introduction of the Xpert as a screening test for pulmonary TB promoted the early diagnosis and isolation of patients with pulmonary TB. Application of the Xpert for TB screening is expected to reduce in-hospital TB transmission as well as the cost and labor for investigation of TB contacts.
Materials and methods
Study design and patients. Daejeon St. Mary's Hospital is a 630-bed, university-affiliated hospital in Daejeon, South Korea, with four AIIRs in the hospital wards and two AIIRs in the emergency department. Daejeon has a population of 1.5 million people, and the hospital has about 24,300 admissions per year. A multifaceted strategy for early identification and prompt isolation of hospitalized patients with pulmonary TB was implemented in January 2015 (Fig. 1): (1) prompt pre-emptive isolation and diagnostic tests in patients with clinically suspected pulmonary TB; (2) screening for pulmonary TB among hospitalized patients with pneumonia regardless of clinical or radiological suspicion; (3) regular education and campaigns to increase the compliance of HCWs with this strategy; and (4) direct notification of positive screening results to the attending physicians by text message since April 2016. CXR was routinely performed in hospitalized patients before or at the time of admission, except for those planned for a minor surgery/procedure or short-term hospitalization. In November 2016, the Xpert was introduced and has been used as a TB screening test since then.
We conducted a retrospective cohort study of adult inpatients (aged ≥ 18 years) screened for pulmonary TB from January 2015 to December 2018. Among this cohort, patients who were microbiologically diagnosed with pulmonary TB (positive results for at least one of the Xpert, PCR, or MTB culture) were included for study analysis, and among them, those who were timely screened (≤ 72 h of admission) were also separately analyzed to exclude the effect of non-compliance to the strategy. Patients who were under anti-TB treatment for previously diagnosed TB and who were pathologically diagnosed without microbiological evidence of TB were excluded. Demographics, clinical characteristics, radiological study results (CXR and chest computed tomography), time-to-screen (time between a patient's arrival and submission of the first specimen to the laboratory), time-to-isolation (time between a patient's arrival and isolation of the patient in an AIIR or in a single room), TAT of each screening test (time between submission of the first specimen to the laboratory and reporting the test results), and time-to-treatment (time between a patient's arrival and the initiation of anti-TB medication) were retrieved from electronic medical records. TB-suspected radiological findings were identified and classified based on radiologists' reports. The study period was divided into two by the introduction of the Xpert: a pre-intervention period (January 2015 to October 2016) and a post-intervention period (November 2016 to December 2018). The primary outcome was time-to-isolation before and after the intervention (introduction of the Xpert for TB screening), and secondary outcomes were clinical outcomes and the sensitivities and specificities of the Xpert and other screening tests. This study was approved by the Institutional Review Board of Daejeon St. Mary's Hospital that waived the need for informed consent (Approval No.: DC19RESI0030). This study was performed in accordance with the Declaration of Helsinki and relevant guidelines and regulations.
Microbiological tests. For TB screening tests, three consecutive sputum specimens with an interval of 8-24 h and/or single bronchial washing fluid specimen were collected from patients. All screening tests (smear microscopy, PCR, and the Xpert) and MTB culture were performed using the first sputum specimens collected simultaneously or single bronchial washing fluid specimen during patients' stay in the hospital. Smear microscopy and MTB culture were performed for two additional sputum specimens.
Smear microscopy was performed using concentrated specimens following liquefaction and decontamination with N-acetyl-L-cysteine-sodium hydroxide. Auramine-rhodamine fluorescence staining was used for the acid-fast bacilli stain and confirmed by Ziehl-Neelsen staining. PCR tests were performed using a commercially available kit (AdvanSure TB/NTM real-time PCR kit, LG Life Science, Seoul, South Korea) according to the manufacturer's recommendations. The Xpert was performed according to the manufacturer's recommendations. If the initial result was invalid, re-testing was performed using the same specimen when the residual volume of the specimen was adequate; if the collected specimen was inadequate, re-sampling was requested. Smear microscopy (Monday to Saturday) and the PCR test (Mondays, Wednesdays, and Fridays) were performed during working hours, whereas the Xpert was performed daily at any time during working and duty hours.
For the MTB culture, solid (3% Ogawa medium, Asan Pharmaceutical Co., Seoul, South Korea) and liquid (BACTEC MGIT 960 system, Becton Dickinson diagnostic instrument systems, Sparks, MD, USA) media were used. Cultures on liquid media were continuously and automatically monitored in the incubator for 6 weeks, and cultures on solid media were monitored weekly for 8 weeks.
Statistical analysis. Comparisons were performed using a Pearson χ2 test for categorical variables and a Wilcoxon signed rank test for continuous variables. Because time-to-isolation was right-skewed, a quantile regression analysis was used to examine the crude effect of the intervention and of other variables on reducing time-to-isolation. To adjust for the time trend and for confounding variables that were statistically significant in the univariate analyses (p < 0.1) when assessing differences in the step change and monthly trend of time-to-isolation between the pre- and post-intervention periods, we performed multivariate segmented regression analyses. The 95% CIs for the level and trend were obtained using a bootstrap.
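As an illustration of the interrupted time-series model described above (level and slope terms before and after the intervention), a minimal sketch in Python is shown below; the study itself used Stata with bootstrap confidence intervals and additional covariates, and the monthly data here are simulated.

```python
# Sketch: segmented (interrupted time-series) regression of monthly
# time-to-isolation with a step change and a slope change at the intervention.
# The data are simulated; the study used Stata, bootstrap CIs, and covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(48)                              # Jan 2015 .. Dec 2018
post = (months >= 22).astype(int)                   # Xpert introduced Nov 2016
months_since = np.where(post == 1, months - 22, 0)

# Toy monthly mean time-to-isolation (h): flat-ish pre-period, step drop,
# then a slow decline during the post-intervention period.
tti = 70 + 1.0 * months - 40 * post - 1.6 * months_since + rng.normal(0, 10, months.size)
df = pd.DataFrame({"tti": tti, "month": months, "post": post,
                   "months_since": months_since})

fit = smf.ols("tti ~ month + post + months_since", data=df).fit()
print(fit.params)   # 'post' = step change at the intervention;
                    # 'months_since' = change in monthly trend afterwards
# A median-based alternative for skewed data:
# smf.quantreg("tti ~ post", data=df).fit(q=0.5)
```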
Since not all patients were screened or tested with the Xpert during the post-intervention period, and pre-emptive isolation (isolation < 2 h of admission) was expected to be more frequent during the post-intervention period than during the pre-intervention period, the same analyses were performed for the following three sub-groups: all timely screened patients (Group I); timely screened patients excluding those not screened with the Xpert in the post-intervention period (Group II); and timely screened patients excluding those pre-emptively isolated during the whole study period and those not screened with the Xpert in the post-intervention period (Group III). Factors associated with delayed screening and with TB-attributable in-hospital mortality were analyzed with a logistic regression analysis.
The sensitivity, specificity, and positive and negative predictive values of three types of screening tests were determined with 95% CIs and compared to one another using the results of MTB culture from simultaneously sampled specimens as a standard reference. Stata version 13.0 software (Stata Corporation, College Station, TX, USA) was used for all data analyses.
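A minimal sketch of how sensitivity and specificity with 95% confidence intervals can be computed against the culture reference is shown below; the 2x2 counts are invented for illustration, and Wilson intervals are used as one reasonable choice, since the study does not specify the interval method beyond reporting 95% CIs.

```python
# Sketch: sensitivity and specificity of a screening test against MTB culture
# as the reference, with Wilson 95% CIs. The 2x2 counts below are invented.
from statsmodels.stats.proportion import proportion_confint

def sens_spec(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
    return sens, sens_ci, spec, spec_ci

sens, sens_ci, spec, spec_ci = sens_spec(tp=95, fn=29, tn=1400, fp=20)
print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```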
Data availability
The data is available only upon a reasonable request to the corresponding author.
The role of imperforate tracheary elements and narrow vessels in wood capacitance of angiosperm trees
Summary – There is a broad diversity of imperforate tracheary elements (ITEs) — libriform fibers, fiber-tracheids, true tracheids and vasicentric/vascular tracheids — described thoroughly by Sherwin Carlquist. However, in a quantitative sense, the functional meaning of different ITE types present in the wood of vessel-bearing angiosperms remains unclear because very few structure–function studies measure ITEs' properties. ITEs with abundant pits and wide pit borders — vascular tracheids, vasicentric tracheids, and true tracheids sensu Carlquist — have been shown to conduct water and, thanks to this conductive ability and the multitude of pits, they could also contribute to wood capacitance. A dataset of 30 temperate angiosperm tree species was reanalysed to record the presence/absence of true, vasicentric, and vascular tracheids, including data on conduits 15 fraction and vessel-conduit 15 contact fraction (conduits 15 were defined as cells resembling vessels and with a maximum lumen diameter of 15 μm; they encompassed narrow vessels, vasicentric tracheids, and vessel tails). The presence of tracheids, conduits 15 fraction, and contact fraction had no effect on wood capacitance, except that, per given wood volumetric lumen water content, species with true tracheids tended to have lower capacitance. These results suggest that the presence of tracheids or conduits 15 properties do not limit wood capacitance, but the results do not exclude the potential role these cells may play in internal water dynamics.
Introduction
Properties of wood anatomical structure underpin tree functions; for instance, vessel diameter determines water conductivity (Tyree & Zimmermann 2013). Capacitance – the amount of water released from storage into the transpiration stream – is considered to be an important component of tree hydraulic strategies because it allows stems to buffer transient imbalance between water supply and demand on a daily (Goldstein et al. 1998; Meinzer et al. 2004; Scholz et al. 2007; Carrasco et al. 2015) and seasonal basis (Hao et al. 2013). Capacitance might also aid in drought coping strategies, but this remains to be determined experimentally (Hölttä et al. 2009). It has been commonly assumed that wood parenchyma is a primary capacitance reservoir (Borchert & Pockman 2005; Scholz et al. 2011; Morris et al. 2016; Pratt & Jacobsen 2017; Nardini et al. 2018), but more recent studies showed a lack of or weak negative (opposite to the expected) correlation between parenchyma fraction and capacitance (Pratt et al. 2007, 2021; Jupa et al. 2016; Fu et al. 2019; Ziemińska et al. 2020), suggesting that parenchyma abundance does not limit capacitance. So what anatomical properties are the primary determinants of wood capacitance?
Most insightful and progressive structure–function studies rely on careful anatomical observations. Sherwin Carlquist was a champion of scrupulous and detailed descriptions of plant anatomical structure in diverse species, demonstrated in his multitude of papers and compiled in his book (Carlquist 2001). Carlquist used these observations and other authors' physiological work to develop structure–function hypotheses. Capitalising on Carlquist's detailed descriptions of anatomical structure, the following questions can be asked: (1) what did Carlquist say about the anatomy–capacitance link? (2) what is currently known about this link? (3) what other observations of Carlquist's could be relevant to the anatomy–capacitance link, particularly observations of angiosperm tracheids? and (4) the subsequent hypotheses can be tested across 30 angiosperm tree species (Ziemińska et al. 2020).
What did Sherwin Carlquist say about the anatomy-capacitance relationship?
Carlquist uses the term "water storage", meaning water stored in cells that could be utilized for transpiration or some form of water stress survival strategy. Here, it is referred to as "capacitance" because this term is used in the current literature and, more precisely, it indicates the amount of water released from a given wood volume per water potential change. It needs to be noted that Carlquist's perspective encompasses a wide range of species, from cacti and perennial herbs to woody shrubs and emergent trees, and across both gymnosperms and angiosperms. However, this work focuses on wood capacitance in vessel-bearing angiosperm shrubs and trees, primarily because these organisms are crucial components of forest ecosystems worldwide and understanding capacitance may improve our predictions of forest responses to drought, and also because angiosperm shrubs and trees have been the focus of recent literature on capacitance.
Carlquist identified several cell types that could store and release water: axial and ray parenchyma; wide, thin-walled libriform fibers; living fibers (when devoid of starch); and the gelatinous layer in tension wood fibers (Carlquist 2012b, 2015, 2018). Higher capacitance is expected in species with abundant axial and ray parenchyma, large diameter cells, thin cell walls, and more isodiametric cell shape (particularly ray cells, but not applicable to fibers). In vessel-bearing woods, these parenchyma cells would be devoid of cell content such as starch and less densely pitted than cells whose primary role is photosynthate storage and transport. Carlquist recognized, however, that water content in stems of "typical" woody angiosperms is relatively small in comparison with succulent woody angiosperms, and that capacitance strategies may vary across species and growth forms. That is, in some species axial parenchyma may play an important role, in others ray parenchyma (Carlquist 2015, 2018).
What do we currently know about the wood anatomy-capacitance link?
In general, experimental studies linking wood anatomy with capacitance in angiosperm shrubs and trees are not common and, together, cover 78 species (Pratt et al. 2007, 2021; Jupa et al. 2016; Fu et al. 2019; Ziemińska et al. 2020). Contrary to Carlquist's hypothesis, the fraction of axial and/or ray parenchyma was not correlated or was negatively correlated with capacitance (Pratt et al. 2007, 2021; Jupa et al. 2016; Fu et al. 2019; Ziemińska et al. 2020). However, these studies encompassed the lower half of the worldwide parenchyma fraction variation in angiosperm trees (Morris et al. 2016). So, perhaps it is not implausible that in species with more abundant parenchyma (up to 0.9 fraction in baobabs, and up to 0.7 in other tropical trees), parenchyma may indeed contribute capacitance water. Instead of parenchyma, vessel lumen fraction showed a positive, although weak, correlation with capacitance (Fu et al. 2019; Ziemińska et al. 2020; Pratt et al. 2021), suggesting that cavitating vessels may partake in capacitance to a larger extent than parenchyma (Hölttä et al. 2009; Vergeynst et al. 2015; Knipfer et al. 2019; Yazaki et al. 2020). In terms of the size of water storage cells, Jupa et al. (2016) found a positive relationship between fiber/tracheid lumen area and capacitance across four angiosperm and one gymnosperm species (and across both branches and roots). In principle, the size of the cell lumen should influence capillary water release, as the larger the lumen, the smaller the capillary tension (Tyree & Yang 1990; Hölttä et al. 2009). The hypothesis linking cell size and capacitance is therefore plausible, at least for capillary water, but remains to be tested on a larger species set and other cell types (e.g., parenchyma cells, living fibers, and tracheids). Jupa et al. (2016) also found that axial parenchyma double wall thickness (the combined thickness of the walls of two adjacent parenchyma cells) was negatively correlated with capacitance, supporting the Carlquist hypothesis. Yet, similar to the previous hypothesis, it needs to be tested on a larger number of species. It is not clear, however, how high parenchyma wall elasticity (presumed to be associated with thinner walls) would contribute to capacitance. Cells in wood are densely packed and joined by a middle lamella, so shrinkage of living cells due to water release would need to occur concurrently with changes of volume in surrounding tissues (Holbrook 1995). This synchronous shrinkage, higher in more elastic stems, may indeed be the strongest driver of stem capacitance, and not traits of any individual tissue (but see earlier about vessel fraction). This hypothesis is supported by frequent reports that capacitance is negatively correlated with wood density, more strongly than with tissue fractions, both in nature (Wolfe & Kursar 2015; Li et al. 2018; Ziemińska et al. 2020) and in laboratory observations (Meinzer et al. 2003; Richards et al. 2014; Jupa et al. 2016; Pratt et al. 2021). Taking into account that stems can indeed shrink and expand on a daily basis (Irvine & Grace 1997; Scholz et al. 2008; Sevanto et al. 2011; Lintunen et al. 2017; Hölttä et al. 2018), it is reasonable to suggest that capacitance is not limited by tissue fractions but rather by an emergent property of wood structure, i.e., wood density. The amount of contact between vessels and other cell types may have an additional but minor effect on capacitance through the facilitation of horizontal movement of water between the different cell types (Ziemińska et al. 2020).
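As a rough numerical illustration of the lumen-size argument, the Young-Laplace relation P = 2γ/r (assuming a fully wetting wall and the surface tension of water, an idealization of real cell walls and pit geometry) gives the tension at which capillary water would be released from a lumen of radius r:

```python
# Sketch: capillary tension needed to empty a water-filled lumen of radius r,
# assuming the Young-Laplace relation P = 2*gamma/r and a fully wetting wall.
# Illustrative only; real cells differ in wall chemistry and pit geometry.
GAMMA_N_PER_M = 0.0728          # surface tension of water at ~20 degrees C

def capillary_tension_mpa(lumen_radius_um):
    radius_m = lumen_radius_um * 1e-6
    return 2.0 * GAMMA_N_PER_M / radius_m / 1e6    # Pa -> MPa

for r_um in (2.5, 5.0, 10.0, 20.0):
    print(f"lumen radius {r_um:5.1f} um -> ~{capillary_tension_mpa(r_um):.3f} MPa")
# Wider lumina release their capillary water at much less negative water potentials.
```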
A lack of or negative relationship between parenchyma fraction and capacitance in the studies conducted to date, however, does not exclude a hypothesis that parenchyma may play a role in capacitance. It has been suggested that parenchyma generates osmotic gradients which could promote the refilling of embolized vessels (Secchi & Zwieniecki 2011; Brodersen et al. 2013b; Knipfer et al. 2016; Pagliarani et al. 2019). Perhaps a similar mechanism could be utilized for capacitance water movement between different cells. This hypothesis is also supported by the finding that more vessel-axial parenchyma contact fraction contributed to higher capacitance (Ziemińska et al. 2020). Alternatively, or in addition, ray parenchyma may transport water from bark or pith via rays (Goldstein et al. 1984; Cochard et al. 2001; Pfautsch et al. 2015a, b; Mason Earles et al. 2016). The contribution of pith and bark to stem capacitance remains to be quantified.
In parallel with the structure–function correlative studies, in vivo visualizations using high-resolution computed tomography (microCT), nuclear magnetic resonance (NMR), and cryo-scanning electron microscopy (cryo-SEM) provide unique insights into the movement and spatial distribution of water in the wood. However, only a handful of species have been examined thus far, and so there is no consensus on the generality of the evidence. MicroCT and NMR studies showed that in samples (seedlings or cut shoots) experiencing decreasing (more negative) water potential, water was released first from embolizing vessels and concurrently or afterward from surrounding imperforate tracheary elements (ITEs) – a cell group encompassing fibers and vascular/vasicentric tracheids (see below for definitions); or ITEs did not empty at all or were already empty before the experiment (Fukuda et al. 2015; Umebayashi et al. 2016; Knipfer et al. 2017, 2019; Yazaki et al. 2020; Baer et al. 2021). This evidence suggests that: (1) capacitance water comes simultaneously from both vessels and ITEs on a daily basis (if it can be refilled overnight) and/or (2) water stored outside vessels is not used for quick, daily release and prevention of embolism in conduits but, instead, may be used during drought or gradually, throughout the seasons. The first idea aligns with the finding of a positive correlation between vessel fraction and capacitance discussed above and is supported by a modeling study too (Hölttä et al. 2009). Additionally, the examination of capacitance and conductivity loss curves in Frangula californica (Eschsch.) A. Gray (Rhamnaceae) showed that considerable cavitation occurs in the initial phases of water release (Pratt & Jacobsen 2017), further supporting the idea that capacitance water comes primarily from vessels. However, this is in contrast with several other studies which show water present in vessels and absent in surrounding ITEs in samples in their native state (without experimental drought) (Utsumi et al. 1996, 1998; Fukuda et al. 2015; Yazaki et al. 2015, 2019; Umebayashi et al. 2016; Ogasa et al. 2019). Sap flow studies also provide contrary evidence. A comparison of sap flow in branches and trunks showed that trunks lagged behind the branches at the beginning of the day; sap flow evened up around midday, and then branch sap flow decreased in comparison to trunks (Goldstein et al. 1998; Carrasco et al. 2015). This higher initial sap flow in branches supposedly occurred due to water being drawn from storage outside vessels; otherwise, embolized vessels would presumably cut off the water supply from trunks. Overall, the several in vivo visualization, cryo-SEM, and sap flow studies seem to be in contradiction as to where capacitance water comes from on a diurnal time scale. The idea that stored water is used on a seasonal basis is supported by measurements of water content changes throughout the seasons (Hao et al. 2013) and by cryo-SEM and dye perfusion studies showing diverse water distribution patterns within growth rings (Utsumi et al. 1996, 1998; Umebayashi et al. 2007, 2008, 2010, 2016). For example, in Fraxinus mandshurica Rupr. (Oleaceae) vessels remained water filled throughout the growing season, while fibers gradually emptied starting from earlywood and progressing into latewood (Utsumi et al. 1996). The authors suggested that the emptying of fibers was due to fiber maturation, but that does not exclude the possibility that water left from disintegrating cytoplasm could be used for seasonal capacitance.
Altogether, the evidence suggests that capacitance water in wood originates from dead cells – vessels and possibly ITEs – but there seems to be a disparity as to which one of these is the main source of capacitance water and at what time scales they would be operating. There is one anatomical feature, however, that has not been quantified by any (with one exception) of the above-mentioned work and which may aid in clarifying these disparities, namely, the type of ITE.
What other anatomical structures described by Carlquist could play a role in wood capacitance?
The wood anatomical diversity associated with ITEs is large and described in detail, although not quantitatively, by Carlquist. He defined an ITE as "a cell with a secondary wall, derived from a fusiform cambial initial (in secondary xylem; derived from procambium in primary xylem) that neither has perforations (or a single perforation) nor is subdivided into a strand of cells each surrounded by a secondary wall. The last item in this definition permits strand parenchyma to be distinguished from septate fibers" (Carlquist 2001). An additional diagnostic feature implemented by Carlquist, which helps to tell apart ITEs from non-stranded axial parenchyma, is that ITEs typically elongate intrusively beyond the length of the fusiform cambial initial they originated from, in contrast to axial parenchyma cells, which do not elongate intrusively.
In short, ITEs encompass fibers and tracheids. More specifically, Carlquist recognized five ITE types (Carlquist 1986a, b, 2001): (1) vasicentric tracheids resemble small vessels but have no perforation plate; they occur only adjoining vessels, and their pit border diameter and pit density are similar to those in vessels; (2) vascular tracheids are anatomically the same as vasicentric tracheids but occur at the end of the growth ring (in species with growth rings present) and do not adjoin vessels (but, for a discrepancy, see Fig. 6 in Carlquist (2012a), which shows vascular tracheids in contact with small vessels); (3) true tracheids make up the ground tissue (the bulk of ITEs outside vessels, irrespective of growth ring presence/absence) and have pit border diameter and frequency similar to those in vessels; (4) fiber-tracheids are ground tissue ITEs which have either infrequent pits with a border diameter similar to that of vessels or small-bordered pits; and (5) libriform fibers have simple pits. It needs to be noted that the literature recognizes different definitions of ITEs (Baas 1986; Carlquist 1986a, b; IAWA Committee 1989; Olson et al. 2020). The most commonly used terminology follows the IAWA list of microscopic features for hardwood identification (IAWA Committee 1989). According to IAWA terminology, vascular and vasicentric tracheids are imperforate tracheary elements with pits resembling those of small vessels, where vascular tracheids intergrade with narrow vessels while vasicentric tracheids surround the vessel circumference. Next, there are ground tissue fibers: (1) with simple to minutely bordered pits (border diameter <3 μm), (2) with distinctly bordered pits (border diameter >3 μm), and (3) with pits common in both radial and tangential walls. In the present study, Carlquist's classification is applied for the following reasons: (1) measurements of pit properties were not available and (2) the functional meaning of the IAWA's border diameter cut-off at 3 μm is unclear and at odds with a previous study which found that conductive ITEs had a pit diameter of at least 4 μm (Sano et al. 2011). In the author's view, neither of these classifications is perfect because they are subject to either observer bias or arbitrarily chosen numerical thresholds which may not be functionally relevant. Ideally, ITE types would be replaced by continuous variables (e.g., pit border diameter, pit density) and linked with measurements of function, as utilized by Sano et al. (2011). This approach should be a goal for future studies. Nevertheless, in the absence of unbiased, quantitative data, relying on categories is a reasonable starting point.
Because of their frequent and large pits, Carlquist suggested that tracheids (true, vasicentric, and vascular) play a part in water transport, particularly as a substitute conductive tissue when vessels embolize (Carlquist 1980, 1984, 1985, 1988, 2001; Carlquist & Hoekman 1985), while fiber-tracheids and libriform fibers would primarily contribute to mechanical strength. The functional meaning of the different ITEs, however, has not been quantified, because the overwhelming majority of current structure–function studies group all ITEs – particularly true tracheids, fiber-tracheids, and libriform fibers – into one category of "fibers". Living versus dead ITEs (fiber-tracheids or libriform fibers) are typically not identified either, despite their very different roles: living fibers may store starch, whereas dead fibers cannot.
Limited work carried out on ITE types showed that tracheids (vascular, vasicentric, and true) can conduct water, but their contribution to the overall conductivity in vessel-bearing angiosperms is minimal (<2.2% of total conductivity), at least in the handful of species studied (Sano et al. 2011; Cai et al. 2014; Pan & Tyree 2019), suggesting that tracheid bridges – connections between vessels via tracheids – could play a role in the sideways movement of water between vessels and perhaps decrease the probability of air seeding and of vessel embolism spread (by separating vessels from each other). Based on these reports, it can be hypothesized that, owing to the tracheids' capacity for sideways water transport, they may play a role in water capacitance by releasing water into the transpiration stream. Very narrow and short vessels could potentially contribute to the sideways water movement, too (Brodersen et al. 2013a). To test this hypothesis, the dataset of anatomical and capacitance traits for 30 angiosperm tree species (Ziemińska et al. 2020) was re-examined. Specifically, the following question was addressed: do species with tracheids and/or abundant narrow vessels have higher capacitance?
Materials and methods
Transverse and longitudinal sections of the 30 species studied in Ziemińska et al. (2020) were re-analysed, and the presence/absence of true tracheids (TTs) and vasicentric tracheids, following Carlquist's definitions (see above, Carlquist 2001), was recorded. Tracheids adjoining vessels were classified as vasicentric tracheids. In several cases, vasicentric tracheids were located at the end of a radial vessel multiple in latewood. Vascular tracheids – not adjoining vessels – were not observed in any of the studied species. While true tracheids were relatively easy to identify (Fig. 1), it was more difficult to unequivocally confirm the presence/absence of vasicentric tracheids, because the presence/absence of a perforation plate can only be reliably identified in macerations. Hence, additional criteria were developed for these tracheids: (1) wall thickness similar to a nearby vessel and (2) outer circumference similar to or smaller than the circumference of the nearest ground tissue ITEs from the same growth ring (fiber-tracheids or libriform fibers). In reality, these cells could be either tracheids or small vessels and are therefore referred to as vasicentric tracheids/vessels (VTVs). In addition to these categorical variables, data on the conduits 15 fraction and the contact fraction between vessels and conduits 15 (the proportion of vessel circumference in contact with conduits 15, hereafter called the vessel-conduits 15 contact fraction) were reanalysed. Conduits 15 were defined as conduits that were either narrow vessels, vessel tails, or vasicentric tracheids (based on their large pit borders and high pit density) and had a maximum diameter <15 μm (hence conduits 15). Although these data were included in Ziemińska et al. (2020), their relationship to capacitance was not described in that work.
Variance homogeneity and normality in the four groups (TTs present, TTs absent, VTVs present, and VTVs absent) were tested using Levene's and Shapiro-Wilk tests, respectively (α = 0.05). Based on these results, t-tests were used to compare capacitance between the groups. Next, multiple regression models were fitted. The independent variables in the models included: tracheid presence (TTs and VTVs), conduits 15 fraction, vessel-conduits 15 contact fraction, wood density (WD), and wood lumen volumetric water content at predawn (VWC L-pd). WD and VWC L-pd were used here because they were the strongest predictors of capacitance in Ziemińska et al. (2020). The structure of the models and the results are presented in Table 2. All statistical analyses excluded Paulownia tomentosa, which was an outlier in capacitance value; it did not have TTs, VTVs, or conduits 15. All analyses were done in R (R Core Team 2018).
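A minimal sketch of the kind of analysis described above (assumption checks, a group comparison, and a multiple regression of capacitance on VWC L-pd and TT presence) is given below in Python; the original analysis was run in R, and the column names and data here are invented for illustration.

```python
# Sketch: assumption checks, group comparison, and multiple regression of day
# capacitance on lumen water content and true-tracheid presence. Column names
# and data are invented; the original analysis was run in R on 29 species.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 29
df = pd.DataFrame({
    "capacitance": rng.normal(40, 10, n),   # day capacitance, kg m-3 MPa-1
    "TT": rng.integers(0, 2, n),            # true tracheids present (1) / absent (0)
    "VWC_L_pd": rng.normal(0.25, 0.05, n),  # lumen volumetric water content, predawn
})

with_tt = df.loc[df.TT == 1, "capacitance"]
without_tt = df.loc[df.TT == 0, "capacitance"]
print(stats.levene(with_tt, without_tt))     # variance homogeneity
print(stats.shapiro(df["capacitance"]))      # normality
print(stats.ttest_ind(with_tt, without_tt))  # group comparison

# Does TT presence explain capacitance beyond VWC_L_pd?
fit = smf.ols("capacitance ~ VWC_L_pd + TT", data=df).fit()
print(fit.params, fit.rsquared_adj)
```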
The measurements were carried out in the peak summer of 2017 in the Arnold Arboretum of Harvard University, Boston, MA, USA. All sampled species were temperate, winter-deciduous, vessel-bearing angiosperm trees. Three individuals per species were sampled, and data analysis was run on species average values. All measurements were done on twigs of approx. 5 mm wood diameter (excluding bark and pith). In contrast to most wood capacitance studies, the capacitance was measured in nature and calculated as the amount of water released from wood tissues per change in stem water potential between predawn and midday (kg/m3 per MPa), referred to as day capacitance. Detailed information about the study location, climate, material, and methodology for the anatomy, water content, and capacitance measurements is described in Ziemińska et al. (2020).
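A minimal sketch of the day-capacitance calculation as described above (water released per wood volume per change in stem water potential between predawn and midday) is shown below; the numerical values are invented, and the exact water-content bookkeeping in the original study may differ.

```python
# Sketch: day capacitance as water released per wood volume per unit change in
# stem water potential between predawn and midday. Example values are invented.
def day_capacitance(vwc_predawn, vwc_midday, psi_predawn_mpa, psi_midday_mpa):
    """Capacitance (kg m-3 MPa-1) from volumetric water contents (kg of water
    per m3 of fresh wood) and stem water potentials (MPa)."""
    water_released = vwc_predawn - vwc_midday           # kg m-3
    delta_psi = psi_predawn_mpa - psi_midday_mpa        # MPa, positive by convention
    return water_released / delta_psi

# e.g. 310 vs 295 kg m-3 water content, -0.4 MPa predawn vs -1.6 MPa midday
print(f"{day_capacitance(310.0, 295.0, -0.4, -1.6):.1f} kg m-3 MPa-1")
```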
Results
Seven out of 30 species had TTs (23%) and 20 species had VTVs (77%; Table 1). Four species had both TTs and VTVs, and seven species had neither of these cell types. In comparison, in the Californian flora, TTs were present in 24% of the analyzed species (similar to the present study), vasicentric tracheids in 33%, and vascular tracheids in 13%, while 30% had none of these ITEs (Rosell et al. 2007). Note, however, that the Californian dataset represented a warmer and drier climate than that of the North-East USA, where the Ziemińska et al. (2020) study was conducted, and its species set was primarily composed of subshrubs (two-thirds) followed by other growth forms (shrubs, trees, herbs, and vines). According to InsideWood descriptions (InsideWood 2004; Wheeler 2011), only four species from my species set had vascular/vasicentric tracheids (the description for one species was absent from InsideWood). This disparity in vasicentric tracheid presence between the present and other records implies that the tracheid circumference size criterion for VTVs applied in this study was inappropriate and/or that InsideWood descriptions misidentified the presence/absence of tracheids.
There was no difference in mean day capacitance between the species with and without TTs, or with and without VTVs (Fig. 2). Linear regression models (Table 2) showed that the presence of TTs or VTVs did not affect day capacitance. Only per given VWC L-pd did species with TTs tend to have lower capacitance (r2 adj = 0.37, p < 0.01; the presence of TTs explained an additional 8% of the variation in capacitance, Fig. 3). Conduits 15 fraction and vessel-conduits 15 contact fraction were not correlated with day capacitance either in pairwise relationships or as independent variables in the regression models.

[...] protect them from embolism. This interpretation, however, is counter to microCT and some cryo-SEM studies which found that ITEs embolize simultaneously with or shortly after vessels (Fukuda et al. 2015; Umebayashi et al. 2016; Knipfer et al. 2017, 2019; Yazaki et al. 2020; Baer et al. 2021). However, thus far these studies have encompassed only a handful of species, and they did not quantify the type of ITEs, instead combining them into a "fibers" category. One study (Yazaki et al. 2020) identified ITE types and found that in a species with true tracheids (Cercidiphyllum japonicum, also studied here) vessels embolized as the water potential became more negative, but ITEs did not, which rejects my hypothesis (note that the authors identified the ITEs as libriform fibers, while according to my observations, C. japonicum ITEs are composed of TTs and fiber-tracheids). The third possibility is that the TTs were emptied throughout the season and devoid of water by the time the day capacitance was measured, while species without TTs still had reservoirs of stored water. This would align with the evidence showing species-specific seasonal changes in water distribution within a growth ring, such as, for instance, the emptying of ITEs throughout the growing season (Utsumi et al. 1996, 1998; Umebayashi et al. 2016). It is also supported by observations that in some species, in their native state, water is present in vessels and absent in the surrounding ITEs. Altogether, the available evidence does not exclude the role of TTs in capacitance. The major task for the future would be to quantify the types of ITEs present and their properties.
The reason why VTVs were not correlated with day capacitance might be related to the diverse spatial distribution and abundance of these cells across the growth ring. In some species, VTVs surrounded vessels and were present across the entire growth ring (Q. muehlenbergii; see Table 1 for full names). In other ring-porous species, VTVs occurred in groups in latewood (e.g., C. speciosa, C. kentukea, M. pomifera, M. alba, P. amurense, T. danieli), and could also be present in earlywood in contact with vessels (Z. sinica). In diffuse-porous species, VTVs were either intermixed with earlywood vessels (e.g., F. grandifolia, L. styraciflua, O. arboretum, S. pseudocamellia) or only appeared at the very end of latewood (e.g., B. dahurica, A. saccharum, L. tulipifera, S. obassia, T. japonica). It is possible that the latewood VTVs primarily function as water-conducting cells, while in the earlywood they could contribute to water storage. Yazaki et al. (2020) found that as water potential decreased, vessels and libriform fibers embolized while vasicentric tracheids retained water for longer in Quercus serrata Roxb. (Fagaceae). This implies that vasicentric tracheids may chiefly function as an alternative water-conductive pathway. This evidence, together with the observed differences in VTV distribution and abundance, suggests disparate functional roles and quantitative contributions to water movement in the wood and could explain the lack of relationship between VTV presence and day capacitance. Also, as mentioned earlier, the size threshold used for VTV identification in the present study could obscure possible patterns. Altogether, the present study does not exclude a role for VTVs in capacitance. A promising path forward would be to quantify the different characteristics of VTVs as well as TTs (spatial distribution, abundance) concurrently with the seasonal and spatial variation of water content and distribution. This would help clarify the diversity of tree hydraulic strategies and the functional role of ITEs.
The lack of relationships between day capacitance and the presence of tracheids/small vessels (TTs or VTVs) could also be due to the crude nature of the presence/absence metric, as alluded to in the previous paragraph. Similar to VTVs, variation in the spatial distribution, abundance, lumen area, and wall thickness of TTs was observed but not quantified. For example, F. grandifolia and C. kousa had abundant TTs and VTVs in earlywood, but these were scarcer in latewood, where the abundance of fiber-tracheids (ITEs with infrequent bordered pits) increased. The size of the pit border and the density of pits in tracheids are examples of continuous variables that could also influence capacitance, as they do conductivity (Sano et al. 2011). Identifying ITE types and their presence/absence is an essential starting point and was implemented in the current study. Yet, to clearly identify ITE functions, continuous ITE properties need to be measured, such as pit border diameter, pit frequency, wall thickness, cell spatial distribution, abundance, and the amount of contact between ITEs and other cell types. Because identifying tracheid presence on a cross-section is very difficult (especially for vascular and vasicentric tracheids, because they are so similar to small vessels), establishing criteria, like those for the conduits₁₅, allows for unbiased quantification of the anatomical structure. Ideally, such a criterion would be developed based on observations of cross- and longitudinal sections and, importantly, on macerations, because these show the presence or absence of tracheids most clearly. Quantification of these characteristics would allow a robust data analysis; it would also circumvent disagreements in the definitions of the different cell categories and researchers' biases when identifying them.
The continuous variables measured in the current study, the conduits₁₅ fraction and the vessel-conduits₁₅ contact fraction, were not related to capacitance. The reasons might be the same as discussed above for TTs and VTVs. It is possible that these cells contribute water to capacitance, but that the amount of water is not limited by either their fraction or their contact fraction with vessels. The inaccuracy resulting from the grid method might be at play, too. This method counts the grid points that fall within a given tissue; that number, divided by the total number of grid points analyzed per area, gives the tissue fraction. The technique provides reliable results for abundant tissues, but less so for scarce tissues such as conduits₁₅.
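To make the point-count logic concrete, here is a minimal sketch in Python; the counts are hypothetical and are not data from this study:

    # Point-count (grid) estimate of tissue fractions on a cross-section.
    # Each grid point is assumed to have already been classified by tissue type.
    counts = {"vessels": 112, "conduits15": 9, "fibers": 541, "parenchyma": 138}
    total = sum(counts.values())  # 800 grid points analyzed

    fractions = {tissue: n / total for tissue, n in counts.items()}
    print(fractions["conduits15"])  # ~0.011

    # A scarce tissue such as conduits15 is hit by only a few grid points,
    # so its estimated fraction carries a large relative sampling error,
    # which is the inaccuracy discussed above.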
Conclusions
Based on the present analyses and previous structure-function studies, there is no convincing evidence that TTs, VTVs, or conduits₁₅ contribute to day capacitance, but this possibility cannot be excluded either. To answer the question of the functional role of ITE types in tree hydraulics, future work needs to combine detailed anatomical quantification of vessel and ITE properties with in vivo (microCT, cryo-SEM, NMR) and physiological observations across variable time scales (daily and seasonal).
The present study was an exercise in understanding the function of ITEs, which are often neglected in current quantitative anatomical studies and merged into a single category of "fibers". Although for some studies, depending on the question, this approach is satisfactory, for others it simplifies the anatomical structure to the point where we fail to see a structure-function link where one might really exist. Highly detailed studies of comparative anatomy, like those carried out by Carlquist, provide a crucial wealth of knowledge on anatomical diversity and can generate better-informed functional hypotheses. The way forward is to harness the wealth of comparative anatomical descriptions, translate them into numerical variables, and measure them in association with physiological function.
Fig. 1. Examples of true tracheids (a-c) and vasicentric tracheids (d-f) in six species, as seen on twig wood cross-sections. The species with vasicentric tracheids shown here were also reported to have tracheids by InsideWood. The smaller photo is a twice-enlarged fragment of the photo above it. The scale bar corresponds to 25 μm in the large photos and to 10 μm in the small photos. All large photos are at the same magnification, and all small photos are at the same magnification. Arrowheads outside the photos indicate a growth ring boundary, with earlywood at the top and latewood at the bottom. Arrowheads in the large photos: true tracheids, vasicentric tracheids, or small vessels. Arrowheads in the small photos: bordered pits. A, Eucommia ulmoides; B, Oxydendrum arboreum; C, Liquidambar styraciflua; D, Zelkova sinica; E, Quercus muehlenbergii; F, Betula dahurica.
Fig. 2. Boxplots showing differences in day capacitance between the groups of species with/without true tracheids (a) and with/without vasicentric tracheids (b). Symbols correspond to species means. Groups were compared using t-tests; the differences were not statistically significant (ns).
Fig. 3. Relationship between day capacitance and volumetric lumen water content at predawn (VWC_L-pd). Symbols correspond to the species' average values and error bars represent one standard deviation. **p < 0.01.
Table 2. Models predicting day capacitance, their r²adj and p values.
"Materials Science"
] |
Strength analysis in regulatory design documents and computational software
Modern building design standards have a long history. During this time they have undergone a number of changes, but some of their provisions and recommendations, once proclaimed, remain unchanged; although these no longer match the modern possibilities of computational analysis, they continue to exist owing to established tradition. In this paper, attention is paid to only some of the resulting conflicts, namely those related to the software implementation of regulatory requirements. The first is connected with the two distinct parts of the process of design justification: calculation of the stress-strain state and verification of the accepted cross-sections. It is noted that when the calculation model adopted for computer analysis of a structure does not correspond to the model that was meant when the regulatory document was compiled, contradictions or inaccuracies may arise that cannot be resolved without decoding the approach adopted in the standards; unfortunately, such a decoding is not provided in our rationing system. Another group of conflicts is connected with conducting structural analysis taking into account the geometrical and physical nonlinearity declared by the standards: some problems cannot be solved using nonlinear calculation, for instance dynamic analysis using eigenmode decomposition with subsequent summation of modal reactions. The problem of choosing an unfavorable combination of loads is also on this list. In the final part of the article, proposals are formulated aimed at eliminating the contradictions between the desire to develop simple and understandable design rules and the ability of modern computers to solve problems without the use of dubious simplifications.
Introduction. The experience of design activity in recent decades shows that the automation of engineering calculations has had the most serious impact (unfortunately, both positive and negative) on the quality of the justification of design decisions. The level of detail and accuracy of calculation that is now available to designers en masse was until recently unattainable even for the most qualified organizations and professionals. At the same time, the availability of modern powerful computing systems creates a number of new problems. One of them is the growing number of inconsistencies between the capabilities of software systems, which are oriented toward a detailed analysis of the behavior of structures, and the requirements of regulations, which are oriented toward established experience.
Almost all modern tools of building design automation implement, to some extent, the requirements of existing regulations. At the same time, the inclusion of regulatory requirements in software systems is a problem not only for their developers but also for a wide range of users. The point is that users have to understand which requirements of the regulatory documents can and should be imposed on the relevant software when deciding on its use. Today there is almost complete disarray here: some users would like everything to be implemented (including departmental, company and other detailed instructions); others would like the developers to let them decide which rules should be followed and which can be ignored; still others want detailed references justifying the regulatory requirements, etc.
Of course, one can rely on the following principle: the implementation of regulatory requirements in software must strictly adhere to the text of the regulatory document. In cases where this is not possible in general (examples of such situations are given below), the program should refuse to perform the corresponding function in the part that does not adequately reflect the norms, notifying the user. In this case, an accurate description of such limitations in the program documentation should be a prerequisite.
Another set of problems is due to the fact that modern software systems rely on the universal provisions of disciplines such as the theory of elasticity, the theory of plasticity, and structural mechanics, while some provisions of the norms are based on simplified approaches, test results, and the operating experience of existing structures. But once presented in a regulatory document, such provisions suddenly take precedence over scientifically sound and more accurate solutions, which are absent from the codes only because of the complexity of the calculations.
Meanwhile, there are certain problems of a technical, legal, and economic nature, which often arise because the developers of regulations did not foresee the possibility (and necessity!) of their software interpretation.
Two interpretations of the concept of "calculation of structures"
The design justification of design decisions is a multi-stage process in which at least two main parts should be distinguished: calculation of the stress-strain state (SSS) and verification of the accepted cross-sections (or of their reinforcement). Unfortunately, this distinction is not emphasized, and when the calculation of structures is discussed, it is not always clearly stated which of the two is meant.
At the same time, from the point of view of rationing, the differences here are fundamental: the calculation of the SSS is a problem of structural mechanics, and this process in principle should not be the subject of rationing, while checking the bearing capacity of sections is a conditional procedure aimed at achieving a certain degree of safety. Rationing, i.e., the establishment of certain requirements of society, is quite appropriate there.
Returning to the stage of the SSS calculation, we can say that only certain "permitting procedures", which establish acceptable simplifications of the problem, can be controlled by the design code. It is important to note that these are allowable simplifications, not mandatory ones, although in the texts of regulatory documents this fundamental difference is not stipulated in any way. The question then arises of the inequality between the results of a simplified calculation performed in accordance with the design standards and the possible result of a more accurate analysis.
It should be noted that modern software systems can often perform the structural calculation in much more detail and with more accuracy than the regulations require. Details of the stress-strain state and of the behavior of the structure under load can be captured that were not taken into account by the authors of the normative document or, more often, were taken into account in the design standards through some special coefficient of working conditions or another means of accounting for additional bearing capacity. Since these techniques are not deciphered in detail in the regulations, the corresponding feature may be taken into account twice: first within the computer simulation and a second time in the regulatory verification performed using the above additional coefficient. As a result (and this has happened many times), a project with a more thorough calculation justification turns out to be less economical than one based on a rougher calculation according to the standards.
The situation may be even more complicated when the normative document provides for a calculation procedure in which empirical correction factors are used. A typical example is the standards for seismic analysis of structures [3], where the results of the response spectrum method are adjusted by the reduction factor K1, introduced to take into account plastic behavior and local damage. Since the degree of plasticization of structural elements and the amount of local damage are not specified, it remains unclear what to aim for when using other methods of calculation (direct integration of the equations of motion, deformation-based checking of ultimate forces, etc.).
Another example is the calculation for temperature effects. The design standards for structures set the maximum distances between temperature seams (see, for example, section 1.13.2 of the State Building Codes of Ukraine DBN B.2.6-198:2014 [6]). Traditionally, it is considered that the calculation for temperature effects can be omitted when the compartment length does not exceed these limits. But it has been repeatedly found that such a calculation leads to the conclusion that the load-bearing structures are significantly overstressed, which causes surprise and numerous discussions.
The discovered contradiction is due to the fact that the standard force-calculation models do not take into account a certain flexibility of the nodal joints (for example, slippage of the base of a steel column on the foundation within the oversized holes for the anchor bolts). Such shifts, which are absolutely insignificant under force loading, are decisive under kinematic influences of the thermal-deformation type: their values are comparable with the thermal elongations, and they dramatically affect the stress-strain state. Here the rules, which are based on many years of practical experience, are "smarter" than the traditional analysis.
Thus, it can be stated that if the calculation model adopted for computer analysis of the structure does not correspond to the model that was meant when compiling the regulatory document, there may be contradictions or inaccuracies that cannot be resolved without decoding the approach adopted in the standards. Unfortunately, such a decoding is not provided in our rationing system.
On regulation of calculation methods
Although the science of "structural mechanics" cannot set standards as far as methods and rules of calculation are concerned, when it comes to choosing a calculation model the question is not so clear.
The fact is that the design standards are a chain of trade-offs, where inaccuracies in the calculation of some parameters (e.g., internal forces in the system) are offset by safety factors embedded in other parameters (e.g., in the design strength). In addition, the method used by the authors of the rules may be based on a certain calculation model, and this model happens to be fixed in the normative document.
Traditionally, building design standards have focused on a certain set of calculation schemes. Most often these were plane bar systems loaded in one plane or in mutually orthogonal planes and operating in a uniaxial stress state. Spatial structures, especially of shell type, are considered much less often, although they are almost standard in software-based calculation. Here a certain imbalance of possibilities arises: many cases that are normalized for traditional calculations are simply absent for calculations of spatial systems.
As an example, let us mention the fact that the design standards for steel and reinforced concrete structures provide a material stress-strain diagram only for the uniaxial state, and there are no recommendations for assessing the performance of structures in a 2D or 3D stress state. Meanwhile, the normative documents on the design of reinforced concrete structures calculated by a nonlinear deformation model, for example, are focused on checking the values of ultimate deformations, but such criteria are given only for the uniaxial stress state. How they should be transformed for the 2D stress state is completely unclear; after all, there is no theoretical justification for the use of deformation criteria here. Moreover, any theory of plasticity is based on the concept of a boundary surface in the stress space, whereas the concept of a boundary surface in the space of deformations simply does not exist.
Another problem concerns the interpretation of the results of a spatial calculation model in accordance with the regulatory documents. For bar elements, for example, we obtain six internal forces, N, Mx, Qy, My, Qx, Mz, instead of three (N, M, Q), and even if an element works "in plane", all six internal forces can have nonzero (though probably small) values. How small certain forces must be for them to be neglected is not specified.
For example, the concept of a beam used in [6] and [5] obviously implies the ability to neglect the influence of the longitudinal force in comparison with the influence of the moments. But while in the first case, for steel structures, clause 1.6.2.2 states that for values of the reduced relative eccentricity m_ef > 20 the calculation may be performed as for a bent element (i.e., neglecting the influence of the longitudinal force), for reinforced concrete structures no such idealization is defined. Moreover, for steel structures the limit m_ef = 20 is specified for stability checks, and whether this recommendation of the standards allows a common interpretation is unknown.
Of course, a competent engineer can determine this limit in each case, but some rule is required for a software implementation, and its absence creates grounds for unnecessary controversy.
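A minimal sketch of one possible, entirely non-normative, screening rule a program could apply is given below: a force component is neglected when its contribution to the utilization of the section falls below a stated tolerance. The capacities and the tolerance are illustrative assumptions, not values taken from any code:

    # Hypothetical rule: neglect an internal force component when its
    # normalized contribution is below a tolerance. All values are invented.
    forces = {"N": 5.0, "Mx": 120.0, "Qy": 40.0, "My": 0.8, "Qx": 0.5, "Mz": 0.1}
    capacity = {"N": 900.0, "Mx": 250.0, "Qy": 300.0,
                "My": 110.0, "Qx": 300.0, "Mz": 60.0}
    TOL = 0.02  # 2% of the section capacity

    kept = {k: v for k, v in forces.items() if abs(v) / capacity[k] >= TOL}
    print(kept)  # {'Mx': 120.0, 'Qy': 40.0}: the element is treated as bent in plane

The point is not this particular rule, but that some explicit rule of this kind has to be fixed in code, whereas the standards leave it to engineering judgment.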
The above is a fairly typical situation: the normative document contains some information (for example, tabular values), but in a program it is more convenient to calculate such values than to borrow them from the table. What degree of disagreement is then permissible (or not) is the subject of many meaningless discussions. But the requirements of design norms are not laws of nature; they only approximate these laws with one or another degree of accuracy. Unfortunately, information about the errors that are permissible according to regulatory documents can be found nowhere; the only exception that can be found is the use of 10.0 instead of the exact value of the acceleration of gravity, 9.81 (this example is discussed in more detail below).
The problem of joining results at conditional borders is connected with the delimitation of basic concepts. Since different simplifying hypotheses were used for the various variants of the stress-strain state belonging to one or another category of normalization (compressed-bent bar, bent beam, etc.), it is often difficult to implement a smooth border crossing.
Especially many problems are connected with the necessity (possibility? desirability?) of performing the general static calculation taking into account the geometrical and physical nonlinearity declared by the standards.
For example, in the State Building Codes of Ukraine DBN B.2.6-198:2014 [6] this is formulated as follows: "5.3.6 … Steel structures should, as a rule, be calculated as an entire spatial system, taking into account the factors that determine the stress and strain state and, if necessary, taking into account the nonlinear properties of the design schemes".
And at the same time there are statements such as: "When dividing the system into separate elements, the design forces (longitudinal and shear forces, bending moments and torques) in statically indeterminate structures may be calculated without taking into account geometry changes and under the assumption of elastic behavior of the steel … Calculation of statically indeterminate structures as entire systems can be performed using a deformed model within the limits of elastic work of the steel".
However, the following question remains unclear: is it possible to apply the results of a calculation performed taking geometry changes into account where the coefficient of longitudinal bending φ is concerned? This coefficient is of great importance for the stability analysis of compressed steel bars and is calculated using a deformed model (but for the element, not the entire system).
Problems that cannot be solved when using nonlinear calculation
For a number of computational cases that inevitably arise in actual design, the regulations establish rules that necessarily require a linear approach to solving the problem. An example is dynamic analysis, which is closely related to such concepts of linear structural dynamics as the frequencies and modes of the natural vibrations of the system. For a nonlinear system, the very concept of individual natural vibration modes disappears, and all recommendations based on it (i.e., the procedure of decomposing the motion into a superposition of normal modes) lose their meaning.
An alternative approach suitable for accounting for nonlinear effects is sometimes (though rarely) present in the standards, such as direct dynamic calculation using instrumental or synthesized accelerograms; more often it is not only not mentioned, but simply not developed. The analysis of the response to pulsating wind loading is typical here.
Another problem that cannot be solved within nonlinear analysis is the choice of an unfavorable load combination. In practice, there are virtually no structures that carry only one load; it is usually necessary to anticipate the possible occurrence of many temporary loads and, therefore, to somehow determine their design combination. This problem has a solution within a linear approach to the calculation, where the principle of superposition can be used. If one focuses on nonlinear analysis, it must also be specified for which combination of loads the strength and stability analysis should be performed; instructions of this kind are often missing from the regulations.
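A minimal sketch of why the linear case is tractable: with superposition, the response to each load case is computed once, and an unfavorable combination is found by scaling and summing. The load cases, factors, and responses are illustrative assumptions:

    from itertools import product

    # Response of one stress resultant to each load case (illustrative).
    response = {"dead": 35.0, "snow": 18.0, "wind": -22.0}
    # Dead load always present; each variable action absent or factored.
    factors = {"dead": [1.0], "snow": [0.0, 1.4], "wind": [0.0, 1.4]}

    cases = list(response)
    combos = product(*(factors[c] for c in cases))
    totals = [sum(f * response[c] for c, f in zip(cases, combo)) for combo in combos]
    print(max(totals), min(totals))  # envelope: 60.2 and 4.2

In a nonlinear analysis each candidate combination requires a full re-analysis of the structure, so the enumeration above, based on superposition, is no longer valid.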
Stable equilibrium
Examination of the equilibrium stability of a complex bar-type structure in the general case requires a calculation accounting for geometric nonlinearity and the inelastic behavior of the structural elements. A calculation of this type, in addition to its computational complexity, also requires overcoming a number of other difficulties associated with the great uncertainty of the design assumptions (patterns of load change, idealization of material properties, initial imperfections, residual stresses, etc.). In this regard, engineering practice has a tradition of performing an idealized elastic calculation of the stability of the system as a whole, combined with checks of individual elements in which the inelastic behavior of the material, initial bends and eccentricities, and other circumstances are taken into account in more detail.
Most often, the stability problem is replaced by a refined calculation on the deformed model with increased bending moments in the compressed bars, or in some similar way, e.g., by multiplying by a buckling coefficient φ or by the moment amplification coefficient η = 1/(1 - N/Ncr). The critical value (in the sense of loss of stability) of the compressive force takes part in the choice of these coefficients, and this fact ties the deformed-model calculation to the stability analysis of the idealized model.
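A short numerical sketch of the amplification just mentioned; E, J, l, and N are illustrative values, not data from any code:

    import math

    E = 2.1e11   # Pa, elastic modulus of steel
    J = 2.0e-5   # m^4, second moment of area
    l = 6.0      # m, buckling length (pinned-pinned bar)
    N = 3.0e5    # N, acting compressive force

    Ncr = math.pi**2 * E * J / l**2   # Euler critical force, ~1.15e6 N
    eta = 1.0 / (1.0 - N / Ncr)       # moment amplification, ~1.35
    print(Ncr, eta)  # first-order bending moments are multiplied by eta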
A natural question arises about the relationship between these two approaches: to what extent and for what purposes can their results be used separately, and what is the link between them? It is believed that the bridge connecting the two approaches is the buckling lengths of the elements of the system; therefore, the method of determining the buckling lengths is the fundamental question.
Note also that since the use of the concept of buckling length involves dividing bar-type systems into separate elements, it is necessary to take into account the interaction of each element with the foundation and with the other elements (primarily those adjacent to it at the nodes).
The buckling lengths of the bars of one and the same system are different for different combinations of loads, although design practice usually uses a simplified approach (allowed, for example, by paragraph 13.3.2 of [7]), according to which the buckling lengths may be determined only for the load combination that gives the largest values of the longitudinal forces, the resulting values then being used for the other load combinations. It is implicitly assumed that there is a combination in which the compressive forces in all elements take their maximum values. But it is easy to imagine a structure for which this assumption does not hold, and therefore the problem of choosing the load combination for the stability analysis remains relevant.
The logic of most of the standards' recommendations is focused on plane computational models or, at least, on separate consideration of a spatial scheme in two orthogonal planes. If we turn to spatial systems, difficulties of a completely different kind may arise, associated with the use of the concept of slenderness in the two orthogonal principal planes of inertia of a bar element.
Following the classical approach of F.S. Yasinski, the buckling length of a bar is usually understood as the conditional length of a simple bar whose critical force, with hinged ends, is the same as that of the given bar. In terms of physical content, the buckling length of a bar with arbitrary fixings is the largest distance between two inflection points of the bent axis, as determined from the stability analysis of the bar by Euler's method.
In the works of Yasinski himself, and in the numerous subsequent works in which the concept of the buckling length of a bar appears, the use of plane calculation models and, accordingly, plane deformation modes is implicitly assumed. Only for them does it make sense to consider the distance between the inflection points of the curved axis, taken as the buckling length.
Since even for plane models the buckling length of compressed bars should be determined both in the plane and out of the plane of the system, a mismatch with F.S. Yasinski's definition already arises here. Indeed, imagine a spatial cantilever bar whose cross-section has moments of inertia Jx and Jy = 4Jx. Under central compression, such a bar loses stability under the load Pcr,x = π²EJx/(2l)² (lef,x = 2l).
From the point of view of the standards, one can apparently imagine a situation in which two stability calculations are performed, with deformation in one or the other principal plane of inertia alternately forbidden (for example, by setting Jx = ∞ and then Jy = ∞), after which the buckling length coefficients μx and μy are determined. But, as far as we know, such paired calculations are not performed in design practice for any complex system.
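A sketch of how a buckling-length coefficient is recovered from a critical load, using the cantilever example above (E, Jx, and l are illustrative; Jy = 4Jx as in the example). In a real frame, Pcr would come from a stability eigenvalue analysis rather than from the closed-form expression used here:

    import math

    E, Jx, l = 2.1e11, 1.0e-5, 4.0
    Jy = 4.0 * Jx

    def mu(Pcr, J):
        # Invert Euler's formula Pcr = pi^2*E*J/(mu*l)^2 for mu.
        return math.sqrt(math.pi**2 * E * J / Pcr) / l

    # Deformation allowed about one principal axis at a time (cantilever).
    Pcr_x = math.pi**2 * E * Jx / (2.0 * l)**2
    Pcr_y = math.pi**2 * E * Jy / (2.0 * l)**2
    print(mu(Pcr_x, Jx), mu(Pcr_y, Jy))  # 2.0 and 2.0 for the cantilever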
Other problems arise when, in a spatial system, the principal axes of inertia of the elements are not parallel to each other, so that the mode of stability loss, as well as the buckling lengths, depends on the orientation of these axes.
A fairly typical example is shown in Fig. 1, which presents the modes of stability loss and the values of the critical loads for two structures that differ only in the orientation of the principal axes of inertia of the struts' cross-sections.
The model shown in Fig. 1(a) has a buckling length coefficient in the plane of minimum rigidity of μx = 0.597, while the model shown in Fig. 1(b) has μx = 0.523. In the first case, the mode of stability loss is such that all the columns are deformed in the plane of least rigidity. In the second case, such deformation is observed in only two columns, while the other two are deformed in the plane of greatest rigidity.
It should be noted that F.S. Yasinski's solution refers to an elastic, centrally compressed bar of constant cross-section which, on losing stability, buckles in the form of a plane curve. Since the magnitude of the buckling length does not depend on the transverse load and is determined only by the boundary conditions, the concept has been extended to elastic eccentrically compressed elements that bend in one of the principal planes of inertia. In-plane bending is thus implicitly assumed, because only in this case does it make sense to consider the distance between the inflection points of the bent axis, taken as the buckling length. However, even a single bar can lose stability along a spatial bending curve; this occurs, for example, when the ends of the bar have cylindrical hinges whose axes are not parallel to each other [2]. Another example that limits the scope of the classical concept of the buckling length is the torsional mode of stability loss. A number of other examples indicating the difficulties that arise here are given, for example, in [9].
However, the convenience of the buckling length concept has made this method extremely popular: in almost all countries it is included in the regulations governing the verification of the equilibrium stability of bar structures.
The buckling length of the elastic bar came to be used in normative calculations for the inelastic stage of bar loading. It should be recognized that there is, in fact, no clear theoretical justification for this, and it should be considered a heuristic technique. The widespread use of this technique is most likely due to the fact that engineers needed at least some practical method of calculating bar structures for stability; thus the clarity associated with the solution of the simplest problems took the place of concerns about accuracy.
Dynamic calculations
Almost all regulations in the field of dynamics focus on decomposition into natural vibration modes. Thus, the use of linear equations is implicitly assumed, and only in a few cases do software systems consider the linearized behavior of a nonlinearly deformable structure, i.e., analyze small oscillations around the deformed equilibrium position.
When focusing on eigenmode decomposition, many regulatory documents indicate the number of natural vibration modes to be taken into account, with no reference to the calculation model used. As a result, it has repeatedly happened that the first few natural frequencies (namely, the ones the standards recommend taking into account) correspond to local, partial modes of motion, while the main mode of deformation is not among the first.
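A sketch of a common alternative criterion (selecting modes by cumulative effective modal mass, e.g., at least 90% of the total, instead of a fixed count of the lowest modes); the modal mass fractions below are illustrative assumptions:

    # Effective modal masses as fractions of the total mass (illustrative);
    # in practice they come from the eigenvalue analysis.
    mass_fraction = [0.02, 0.01, 0.58, 0.03, 0.21, 0.08, 0.04]  # modes 1..7

    selected, cum = [], 0.0
    for mode, m in enumerate(mass_fraction, start=1):
        selected.append(mode)
        cum += m
        if cum >= 0.90:
            break
    print(selected, cum)  # [1, 2, 3, 4, 5, 6], 0.93

    # A fixed-count rule taking only the first two modes would capture
    # 3% of the mass: they are exactly the local, partial modes at issue.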
The second problem of dynamic calculations, often indirectly encouraged by the regulations, is the excessive simplification of dynamic models. Due to tradition, such simplification is often perceived as a characteristic of real behavior, which can lead to misunderstandings. Thus, the long-standing habit of using a cantilever calculation model in seismic analysis has led to the detection of torsional vibrations among the lower modes being treated as a shortcoming, although no one could say what the defect of such a design actually is.
One more aspect of dynamic calculations using eigenmode decomposition must be mentioned. It is associated with the summation of modal contributions, which often follows the well-known "root-sum-squares" (RSS) rule. This approach is based on the hypothesis that all modal reactions are normally distributed random variables with the same correlation coefficients, which is consistent with many observations but is not an established fact; the absolutization of the RSS rule is therefore rather doubtful. An example is calculation using an accelerogram in models where the equations of motion are solved by eigenmode decomposition and the summation is performed according to the RSS rule: if the integration of the equations of motion is instead performed, for example, by the Adams method, a completely different result is obtained. Nevertheless, since one and the same problem is being solved, the result should not depend on the method of its solution.
The summation of internal forces, which are calculated by the usual rules for each of the eigenmodes, is also performed by the RSS method, and here another disappointment is possible. The use of the absolute values of the moments and the longitudinal and shear forces leads, for example, to the disappearance of compressed-bent bars: all of them become stretched-bent. Similar sign-loss effects are possible in shell-type elements. To overcome this phenomenon, some software systems assign the total values of the internal forces the signs of the corresponding forces in the first eigenmode. It is difficult to substantiate such an approach, even if we assume that it is the first eigenmode that makes the main contribution to the total value of each component of the response vector.
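A minimal sketch of the two operations criticized above: the RSS combination of the modal values of one internal force, and the assignment of the first-mode sign to the unsigned total (the modal values are illustrative):

    import math

    modal_N = [-120.0, 45.0, -30.0, 10.0]  # modal values of one force, modes 1..4

    rss = math.sqrt(sum(v * v for v in modal_N))  # ~132.0, always positive
    signed = math.copysign(rss, modal_N[0])       # sign borrowed from mode 1
    print(rss, signed)

    # The unsigned rss value would, for a longitudinal force, turn every
    # compressed bar into a stretched one; borrowing the first-mode sign
    # "fixes" this only if mode 1 indeed dominates the response.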
Accuracy requirements
Verification of compliance with structural design standards sometimes leads to uncertainties or errors because the standards describe only one load or one stress-strain state. Detailed recommendations are given for this isolated situation, and in such an "ultimate" formulation (for example, as a single calculation formula) that it is impossible to understand what assumptions and simplifications were used. But in a real calculation it may be necessary to consider a case outside this idealized situation, and then a number of difficulties arise.
As an example, we can point to the analysis of the stability of plane bending of steel structures. The coefficient φb, whose value is calculated in accordance with DBN B.2.6-198:2014, depends, inter alia, on the location of the load over the beam height (see Table N4). But it may happen that the design combination of loads contains loads located both above and below the beam; in this case, direct use of the rules becomes impossible.
If we take the opportunity to study a shell model of a thin-walled bar and, with sufficiently detailed modeling, solve the problem of plane bending stability using the finite element method, it turns out that even when the loading exactly coincides with the normative situation, we obtain a solution that does not coincide with the provisions of the design codes. This is because certain approximations of the exact expressions were embedded in the formulas of Appendix N of [7], by means of which the coefficients φb are calculated. The discrepancy may be small, but the rules by which it could be deemed acceptable are unknown.
What degree of discrepancy is acceptable is the subject of much pointless debate. But the requirements of design standards are not laws of nature; they only approximate those laws with one or another degree of accuracy. Unfortunately, information about the errors allowed by the authors of the standards can be found nowhere. The only exception that can be found is the use of the value 10.0 instead of the exact value of the acceleration of gravity, 9.81, when translating normative load values from kPa to kgf/m² in the building regulations SNiP 2.01.07-85* (1985 edition), or of 0.1 instead of 1/π² in formula (108) of the building rules SP 16.13330.2017.
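The numerical effect of that rounding is easy to check (a worked example, not taken from the codes):

    p_pa = 2000.0            # a 2.0 kPa normative load
    exact = p_pa / 9.81      # 203.87 kgf/m^2
    rounded = p_pa / 10.0    # 200.00 kgf/m^2
    print((exact - rounded) / exact)  # ~0.019: a ~1.9% deviation silently accepted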
The problem of the permissible discrepancy of results also arises when the rules admit alternatives. The developers themselves were more likely to compare results (if at all) for a "typical case", but good agreement in that case does not imply good agreement in general. An example is the analysis of methods for determining crack widths presented in [21], where the use of different alternative solutions allowed by the standards showed more than 59% variance in the results.
There should be some measure allowing the result of such a comparison to be assessed; after all, in engineering calculations there is never complete coincidence of results. The generally accepted similarity norm of a five percent discrepancy must also be specified: it is necessary to know to which results (displacements, forces, etc.) and to which values (extreme, average or other) it should refer. This problem would be greatly mitigated if the comparison were conducted only by the designer; however, submitted to experts, such comparisons become the subject of numerous and often pointless discussions.
Programming as a means of controlling a regulatory document
In the pre-computer period, vague or ambiguous recommendations, although an evil, were not as dangerous as they are today. Today, formal compliance with the rules inside a software package is hidden from the eyes of the end user, and an unambiguous interpretation of new paragraphs of the rules is needed first of all by software developers. These paragraphs should therefore be set out in wording that has the character of a clearly defined algorithm of actions. It seems to us that this cannot be achieved without certain organizational changes.
Software implementation of a normative document is a good test procedure that reveals discrepancies, logical inconsistencies, incompleteness and vagueness of formulation, and other shortcomings of draft rules, in particular their incompatibility with computer methods of analysis. As an example, we can refer to the construction of the bearing-capacity area of an element taking into account the full range of proposed requirements [11,16], which revealed some inconsistencies that lead to rupture of the boundary and non-convexity of the permissible-load area. The construction of this area is based on the analysis of calculations containing several hundred variants of the internal force values; such mass verification was simply impossible in the era of manual arithmetic.
In addition, programming reveals those aspects of a normative document that are not formulated explicitly, since the developers of the norms counted on a qualified user who could independently decide on the application of a provision based on the specifics of the calculation situation. This is not possible for a computer program, so a definite decision is inevitably fixed during programming.
It is important that such verification work is performed without the participation of the developers of the regulatory document, which would ensure the purity of the experiment.
Possible actions
How can the contradiction be eliminated between the desire to develop simple and understandable design rules (the traditional approach to rationing) and the ability of modern computer systems to solve problems without the use of dubious simplifications (the modernist approach)?
It seems to us that two solutions are possible here:
• develop different versions of the regulations for manual and for computer calculation;
• create a special regulatory and methodological document on the rules for implementing the requirements of design standards in software.
The first option can be implemented in the traditional form: general requirements and the necessary hypotheses are formulated, on the basis of which a software implementation can be created, and these are followed by text such as "it is allowed …", presenting a simplified version of the standardized provision.
In the second option, the document should reflect:
• requirements for the accuracy of calculations and the permissible deviations from the literal implementation of regulatory guidelines;
• the procedure for the verification of, and coordination with the authors of the standards of, numerical methods for solving design problems that expand the possibilities of verifying regulatory requirements but are not available for manual calculation;
• requirements for software developers to inform users about the peculiarities of the implementation of regulatory requirements in cases of deviation from their literal implementation.
"Engineering",
"Computer Science"
] |
Formulation and In-Vitro Comparison Study between Lipid-Based and Polymeric-Based Nanoparticles for Nose-to-Brain Delivery of a Model Drug for Alzheimer's Disease
Certain challenges, such as the presence of a highly complex structure (the blood-brain barrier, BBB), P-glycoprotein efflux, and particular enzymatic activity, stand in the way of the successful delivery of drug moieties to the brain and render such efforts fruitless. Many attempts have been made to overcome these obstacles. Direct delivery of drugs to the brain after intranasal application is one such strategy, since it holds great promise for raising the chances of drug moieties reaching the brain. Nanoparticles have the potential to improve nose-to-brain drug delivery, since they are able to protect the encapsulated drugs from biological and/or chemical degradation and increase their penetration across biological barriers. Based on the fact that neuroinflammation is associated with neuronal death and neurodegenerative diseases such as Alzheimer's, nonsteroidal anti-inflammatory drugs (NSAIDs) might play a positive role in the disease. The present study aimed to employ the QbD approach for the first time in optimizing polymeric and lipid-based nanoparticles for the nose-to-brain delivery of meloxicam (MEL), and to compare the pure drug with the formulated nanosystems regarding dissolution profiles, permeability, and mucoadhesiveness.
Introduction
The presence of the blood-brain barrier (BBB) forms the major obstacle to the successful delivery of brain-targeting moieties to their site of action. In addition, P-glycoprotein efflux reinforces the protective role of the BBB by transporting particles out of the CNS. This physiological protective barrier is considered a key challenge for the pharmaceutical fraternity, which needs to circumvent it in order to deliver drugs to the brain in various CNS disorders [1][2][3].
The nose-to-brain delivery route holds great promise for overcoming the BBB, since it transports the drug directly to the brain along the olfactory and trigeminal nerve pathways. These pathways originate in the nasal cavity at the olfactory neuroepithelium and terminate in the brain [4].
Several clinical studies have concluded that long-term use of non-steroidal anti-inflammatory drugs (NSAIDs) may protect against the onset of Alzheimer's disease (AD) [5]. Among them, meloxicam exerts its pharmacological properties by inhibiting the enzymatic activity of cyclooxygenase-2 (COX-2) [6], which has been linked with a neuroprotective action in Alzheimer's disease [7][8][9][10]. Unfortunately, meloxicam is a lipophilic drug with high potency and poor water solubility [11], which decreases its bioavailability when it is applied via the routine routes of administration.
The strategy of designing drugs in the encapsulated form of nanoparticles (NPs) targeting the olfactory epithelium could potentially improve the direct CNS delivery of drugs, including biologics [2]. Nanostructured drug delivery systems have achieved increased nasal permeability and controlled drug release [12]. Lipid-based and polymeric nanoparticles have recently attracted attention for this purpose. Solid lipid NPs (SLNs), as a lipid-based formulation, offer an improvement in nose-to-brain drug delivery, since they are able to protect the encapsulated drug from biological and/or chemical degradation and provide good nasal retention, favorable application properties, and adhesion of the SLNs to mucous membranes [13]. On the other hand, polymeric nanoparticles offer higher stability, a variety of preparation methods, and controlled release of the drug [14].
Since many factors can affect the characteristics and properties of the aforementioned nanosystems, the application of the QbD approach presents a logical strategy that saves time, cost, and effort when developing complex systems such as nanosystems [14]. It starts with the definition of the quality target product profile (QTPP), followed by the critical process parameters (CPPs) and critical material attributes (CMAs) that can strongly affect the critical quality attributes (CQAs) of the product [15].
In the present research, two types of nanoparticles were prepared following a QbD approach. Physical, chemical, and morphological characterizations were conducted. The next step was to evaluate their in vitro behavior regarding release profile, permeation, and mucoadhesion properties. Correspondingly, a thorough comparison was performed, followed by the selection of the optimized nanocarrier system as a candidate for nose-to-brain delivery of an anti-AD drug formulation.
MEL Loaded SLNs
MEL-loaded SLNs were prepared following a modified double emulsion (W1/O/W2) solvent evaporation (DESE) technique [45]. MEL was dissolved in 0.1 M NaOH solution, forming the W1 phase. The oily phase was prepared by dissolving phosphatidylcholine in cyclohexane. The primary emulsion was formed by adding the W1 phase dropwise into the organic phase using a homogenizing mixer (Hielscher, Germany) (0.5 cycles, 75% amplitude) for 1 min. The resulting nanoemulsion was then added dropwise into the aqueous surfactant solution using the homogenizing mixer (0.5 cycles, 75% amplitude) for 1 min. The final mixture was left to stir overnight on a magnetic stirrer to allow evaporation of the organic solvent and thus formation of the SLNs.
MEL Loaded PLGA NPs
MEL-loaded PLGA NPs were prepared using a double emulsion (W1/O/W2) solvent evaporation (DESE) technique [16]. First, a primary W1/O emulsion was formed, in which the aqueous solution of MEL was added to the PLGA solution in ethyl acetate under sonication in an ice bath. This was followed by the formation of a double emulsion (W1/O/W2) by dispersing the primary emulsion in an external aqueous phase containing polyvinyl alcohol (PVA) as a stabilizer, again under sonication in an ice bath. Finally, evaporation of the organic solvent overnight resulted in the formation of the MEL-loaded NPs.
Both types of NPs were harvested by centrifugation at 16,000× g for 1 h at 10 °C (Sigma, Germany) and washed three times with deionized water to remove unentrapped drug, surfactants, and remaining organic solvent. The NPs were then resuspended in 2 mL of 10% (w/v) trehalose aqueous solution, frozen at −20 °C, and finally freeze-dried (Christ, Germany) at −40 °C for 72 h.
Mean Particle Diameter, Size Distribution and Zeta Potential
The average hydrodynamic diameter (Z-average), polydispersity index (PDI), and surface charge (zeta potential) of the NPs were analyzed in folded capillary cells using a Malvern Nano ZS instrument (Malvern Instruments, Worcestershire, UK).
Encapsulation Efficacy and Drug Load
The obtained NPs were separated from the preparation medium by centrifugation and washed three times, each wash followed by centrifugation to obtain NP pellets. Meloxicam was then extracted using chloroform. The mixture was transferred to a separatory funnel, and the aqueous phase was withdrawn to determine its drug content by HPLC; the EE and DL were calculated according to the following equations:
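The equations themselves are not reproduced in the source text. The standard definitions for this kind of determination, assuming the drug quantified is the encapsulated fraction recovered from the NPs, are:

EE (%) = (amount of drug in the NPs / total amount of drug added) × 100
DL (%) = (amount of drug in the NPs / total weight of the NPs) × 100

The exact form used by the authors is not shown here, so these should be read as the conventional definitions rather than a verbatim reproduction.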
Scanning Electron Microscopy (SEM)
The morphological appearance of NPs was investigated using scanning electron microscopy (SEM) (Hitachi S4700, Hitachi Scientific Ltd., Tokyo, Japan) at 10 kV.
Fourier-Transform Infrared Spectroscopy (FTIR)
The chemical interactions between the drug and the excipients were analyzed with a Thermo Nicolet AVATAR FTIR spectrometer (Thermo Fisher, Waltham, MA, USA).
X-ray Powder Diffraction (XRPD)
The X-ray powder diffractograms of the MEL-loaded SLNs, the excipients, and the corresponding physical mixtures were obtained in the angular range of 3-40° 2θ, at a step time of 0.1 s and a step size of 0.007°, at ambient temperature. Monochromatic CuKα1 radiation (λ = 1.5406 Å) at 40 kV and 40 mA was used as the X-ray source. The same procedure was repeated with the polymeric NPs.
Dissolution Test
The dissolution of meloxicam and the drug release from the MEL-NPs were determined using a dialysis bag diffusion technique [17][18][19]. The samples were analyzed spectrophotometrically at a λmax of 346 nm (Jasco V-730 UV-VIS spectrophotometer, ABL&E-JASCO Ltd., Budapest, Hungary).
Permeation Test
The in vitro permeation of the prepared NPs was investigated using a side-by-side type apparatus. Accurately weighed amounts of MEL and MEL-NPs equivalent to 1 mg of MEL were suspended in 9 mL of simulated nasal electrolyte solution (SNES) and placed in the donor chamber, while 9 mL of pH 7.40 phosphate buffer was placed in the acceptor chamber. A semi-permeable cellulose membrane, previously impregnated with isopropyl myristate for 1 h, was placed between the two chambers to mimic the nasal mucosa.
Diffusion was investigated for 1 h, comparing pure MEL, the optimized SLN formulation, and the optimized PLGA NPs.
Mucoadhesiveness Test
Mucoadhesion was determined following the direct (turbidimetric) method as follows: briefly, mucin dispersed in pH 6.4 PBS (0.5 mg mL−1) and the NPs were mixed and incubated at 37 °C under continuous stirring for predetermined times of 1, 2, 3, and 4 h [20,21].
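The binding efficiency reported in the Results is typically derived from the mucin remaining free in suspension; a sketch of that calculation, assuming absorbance-based quantification of the free mucin (the concentrations are illustrative):

    # Mucin binding efficiency from a turbidimetric assay (illustrative values).
    # Free mucin is quantified in the supernatant after the NP-mucin mixture
    # is centrifuged.
    def binding_efficiency(c_initial, c_free):
        """Percentage of mucin bound to the nanoparticles."""
        return (c_initial - c_free) / c_initial * 100.0

    print(binding_efficiency(0.50, 0.212))  # ~57.6%, cf. the SLN value reported below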
Risk Assessment
Risk assessment was conducted to rank and prioritize the factors with the highest impact on product quality. The first step of the QbD-based risk assessment study was to set the QTPP encompassing the desired quality attributes of the MEL-NPs, followed by the selection of the CPPs and CMAs. These are summarized and ranked in Figure 1.
Morphology, Size, ZP and EE
The resulting NPs were smooth and spherical in shape, as shown in Figure 6.
Compatibility Study
The FT-IR spectra and XRD patterns of MEL, of both loaded NP formulations, and of the excipients used are presented in Figure 8.
In vitro Release
The in vitro dissolution profiles of pure MEL and of the MEL-loaded NPs were investigated under simulated intranasal conditions, using simulated nasal electrolyte solution (SNES) medium (pH 5.6); the results are shown in Figure 4.
In Vitro Permeation
The permeation test was performed in vitro for the pure MEL solution and the MEL-NPs, under conditions similar to those of the nose-to-brain delivery route (Figure 5).
In Vitro Mucoadhesion
Mucoadhesion was determined by the turbidity analysis method to understand how the NPs would be retained, since strong mucoadhesion suggests close contact with the absorption site, thus ensuring effective absorption following nasal administration (Figure 6).
Discussion
Analysis of the risk assessment results, as part of QbD and based on the risk priority number (RPN), demonstrated that the most influential CPP was the sonication time, while the most influential CMAs were the lipid/polymer type, the lipid/polymer concentration, the surfactant type, and the surfactant concentration, as shown in Figure 7. Both types of NPs comply with the size requirement for administration via the nose-to-brain route, which is preferably up to 200 nm [22,23]. The zeta potential values obtained for the PLGA NPs and the SLNs were −16.2 ± 1.81 and −43.65 ± 1.47 mV, respectively. This is consistent with the negative charge of phosphatidylcholine, estimated at between −10 and −30 mV at neutral pH [24] owing to the presence of phosphate and carboxyl groups, while for the PLGA NPs only the negatively charged carboxyl groups on PLGA account for the negative charge.
The FTIR spectra of the NPs showed that no changes occurred in the MEL chemical structure and revealed no significant difference in the main functional groups of MEL. The absorption band at 3290 cm−1, corresponding to the -NH stretch, appears to overlap with the -OH band of phosphatidylcholine, which occurs at 3200-3400 cm−1. Likewise, the absorption band at 1650 cm−1 corresponds to a slightly shifted C=O stretch. Hence, there is no interaction between MEL and the other SLN excipients, and they are compatible with each other.
Similarly, the absorption peaks of the materials used for the polymeric NP preparation (PLGA, poloxamer) showed compatibility with MEL, since the two aforementioned chemical groups of MEL remained unchanged after formulation of the PLGA NPs. The XRD pattern of MEL gave unique fingerprint peaks owing to its crystalline structure; however, neither the SLNs nor the PLGA NPs showed these characteristic fingerprints in their XRD patterns, indicating that the drug is no longer present in its original crystalline state within the NPs [25]. These results are in agreement with the FTIR findings.
Based on Figure 4, it is evident that pure MEL demonstrates poor solubility (5.10 ± 0.9 µg/mL over 360 min at 35 °C), owing to its chemical structure and its weak acidic character in this medium (pKa = 3.43) [26]. Formulating MEL in the nanoformulations produced a significant increase in the dissolution rate compared with pure MEL (approximately 4-5 times higher); this is in line with the results previously reported by Katona et al. and may be due to the nanosize and increased specific surface area of the NPs [27].
The release behavior of the NPs showed a sustained-release pattern, starting with a mild initial burst during the first hour, in which 12.94 ± 0.86% and 11.79 ± 0.74% of MEL was released from the PLGA NPs and SLNs, respectively; such bursts have been frequently reported for polymeric NPs [28,29] and SLNs [13,17]. This could be explained by the presence of surface-adsorbed drug on the NPs, in addition to drug molecules close to the surface that interact only weakly with the NP matrix. The burst was followed by a slow-release profile up to 6 h, at which point only 25.26 ± 2.39% and 21.37 ± 1.47% cumulative MEL release was observed for the PLGA NPs and SLNs, respectively, as the encapsulated drug slowly diffused through the NP core [17]. These results indicate that the majority of the drug remained in the NPs after contact with the nasal-mimetic conditions and remains available for release at the target site.
The mucoadhesive strength was assessed by calculating the binding efficiency of mucin to the PLGA NPs and SLNs, which was 36.55% and 57.59%, respectively, by the end of the experiment, as Figure 6 shows. Since mucin is a highly glycosylated and negatively charged protein, the NPs showed an affinity driven mainly by electrostatic interactions between mucin and the particles; because the SLNs carry a higher negative charge, their electrostatic interactions with mucin were stronger than those of the PLGA NPs. This observation is in close agreement with previous studies [30,31]. A significant enhancement of MEL permeability through the semipermeable membrane was achieved when MEL was formulated in NPs, in comparison with the pure MEL solution. This could be due to the nanoscale size of the prepared nanosystems, which confers the best nasal permeation properties, as previously reported by Gänger et al. [22]. Moreover, the spherical and smooth surface of both NPs, as confirmed by the SEM images, leads to the least friction with the membrane surface in comparison with needle-shaped particles [32]. Stabilizing these NPs with poloxamer, a permeation enhancer, further improves their permeation properties [33] by inhibiting efflux pumps and lowering membrane fluidity when used in vivo [34].
Interestingly, the SLNs showed superior permeability over the PLGA NPs, which might be explained by the lipophilic properties of these lipid-based nanosystems [35], which exceed those of the PLGA NPs [32].
Conclusions
The present research work put forward solid lipid nanoparticles (SLNs) and polymer-based nanoparticles (PBNs) for the nose-to-brain delivery of meloxicam. The QbD concept was employed for the first time to analyze previous research efforts in order to define the critical process parameters and the critical material attributes.
All the measurements were found to be within acceptable ranges. Spherical nanoparticles were obtained for both SLNs and PBNs, with diameters of 142.06 and 94.76 nm and zeta potentials of −43.65 and −16.2 mV, respectively. The resulting nanoparticles showed good compatibility with the materials used, based on the FTIR and XRD measurements. Higher entrapment efficiency and drug loading were observed for the SLNs (87.26% and 2.64%, respectively). Better in vitro drug release, permeation, and mucoadhesion also favored the formulation of meloxicam in SLNs over PBNs. However, ex vivo data are still needed to investigate cell viability, permeability, and cytotoxicity, and in vivo measurements are essential to determine the brain concentration and the distribution among different brain regions and to evaluate the risk/benefit ratio.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| 3,624.6 | 2020-12-01T00:00:00.000 | ["Biology"] |
On the use of low-cost 3D stereo depth camera to drive robot trajectories in contact-based applications
In production systems characterized by small batches and high customization levels, operations are required to be flexible in order to adapt to different products within the shortest possible time and with the minimum effort for system setup. Contact-based operations such as surface finishing, polishing, deburring, and material deposition are mandatory in the fabrication of numerous products. To maintain consistent performance over time, many of these operations require a high level of accuracy, both in end-effector positioning and in contact force level. This paper proposes a robotic solution to generate the robot working trajectory for contact-based operations over the external surface of unknown objects, for which a digital model is either not available or different from the actual state of the workpiece. The paper introduces the integration of RGB-D images to construct a 3D model and its elaboration to extract the workpiece. Different searching subroutines have been developed to select different areas of the workpiece, based on the operation to be carried out, and to generate the related trajectory. The evaluation metric of the proposed robotic solution is given by the stability of the contact force exerted by the robotic tool and by the error between the generated and the actually followed trajectory, which is due to the depth estimation of a low-cost camera. Trajectory inaccuracies of a few millimeters are obtained; these inaccuracies are compensated using force control. Different tests with different nominal values of the force control loop are carried out. Statistical analysis shows that the mean values of the contact force obtained coincide with the nominal values of the single tests.
Introduction
The production in small and medium enterprises (SMEs) has been moving, in recent years, toward small volumes and a high level of customization. To answer the different customer needs, production systems have to be agile and flexible to increase productivity while maintaining a consistent production quality [1]. In manufacturing systems, many products are fabricated through casting processes, and to obtain a high-end final product, contact-based and surface treatment operations such as polishing are required [2].
In SMEs, many manufacturing steps are still carried out manually by experienced operators, mainly due to the lack of flexible, easy-to-use, and fast-to-configure robotic tools able to adapt to highly variable environments. In contact-based operations, robotic manipulators have been exploited to reduce production costs and increase productivity. The trajectory generation for a workpiece still represents a bottleneck to the full automation of these processes, since it is time-consuming [3].
In the case of contact-based robotic applications, trajectory planning requires a high level of accuracy, since the robot end-effector has to exert a proper contact force while following the desired trajectory [2]. The CAD model of the workpiece is usually used to generate the geometric path to be followed by the robot tool center point (TCP), as in [4] and [5], where solutions for welding and for polishing are proposed, respectively.
When trajectory planning is based on a digital CAD model of the workpiece, operations are carried out under the following assumptions: the production process is executed in a well-defined and constrained environment, and the object to be elaborated is exactly equal to its CAD model and precisely fixed in a constrained place.
However, these assumptions are hardly applicable in SMEs, due to their need to produce and process highly customized products, the CAD model of which is not always available or does not match the actual state of the workpiece. In addition, the work cells in these working environments are very dynamic, and the operator often interacts with the process.
To carry out contact-based robotic operations in such an environment, 3D vision sensors are a feasible tool to generate the working trajectories. They allow the development of flexible, fast, and easy-to-configure tools enabling the mapping of the workpiece within the workspace and, consequently, the planning of the trajectories needed to execute the desired task.
In the literature, 3D vision sensors have been mainly used for operations not requiring contact with the workpiece during the operation, involving at most contact during the gripping of the component. In [6] and [7], object grasping solutions that use 3D vision sensors capturing a single-view 3D image of the robot workspace have been proposed.
Other research works have developed grasping solutions based on multi-view RGB-D images of the robot workspace. In [8], a solution based on the reconstruction of the workpiece point cloud using multi-view images is proposed for the estimation of the grasping point in pick-and-place applications. Using multi-view RGB-D images of the workpiece, in [9] a neural network-based solution was introduced to reconstruct the point cloud of a workpiece, which is then used to determine its grasp pose. For the mentioned robotic grasping operations, the efficiency of these solutions is given by the success of the grasping operation.
3D vision sensors have also been used for the 3D reconstruction of workpieces for quality monitoring purposes. In [10], a registration algorithm is introduced to reconstruct a workpiece point cloud by aligning multi-view point clouds captured using a laser scanner; the accuracy of the output 3D model of the workpiece is assessed by comparing it to a ground truth model. In [11], the authors propose a point cloud 3D reconstruction for quality inspection using multi-view RGB-D images captured by a moving 3D camera attached to a robot arm. At first, a few images are used to construct an initial 3D model that is then updated: if the model is incomplete, due to the presence of holes or a non-smooth surface, movement commands are generated to capture images of the missing parts.
Considering solutions for robot trajectory planning, fixed 3D vision sensors have been used, as in [12] and [13]. In the former, the trajectory to be followed by a spray gun at a distance between 10 and 15 mm from the workpiece is generated; the accuracy of the robotic operation is given by the good execution of the painting task. In the latter, the trajectory to be followed by a glue deposition system is generated to automate robotic glue deposition in footwear manufacturing.
To automate trajectory planning for welding applications, solutions based on multi-view images captured by 3D vision sensors have been proposed. In [14], the trajectory for a robotic welding application is generated by exploiting multi-view point clouds, which are integrated to construct the point cloud of the entire workpiece. In that solution, the assistance of a human operator is necessary to generate the camera-capturing poses, based on the use of a hand gesture-based methodology; the accuracy of the generated trajectory is assessed by comparing it to a ground truth welding trajectory. In [15], multi-view RGB-D images of a workpiece are used to identify its edges and generate the welding trajectory, and a comparison between different welding seam detection techniques is shown.
To guarantee good execution of contact-based operations, an accurate trajectory and a stable contact force exerted over the surface of the workpiece are required. In contrast to the mentioned trajectory generation techniques, in the solution proposed in this paper the evaluation metric of the contact-based robotic operation is given by the accuracy of the generated trajectory and by the stability of the contact force exerted by the robotic tool during the operation, despite inaccuracies due to the depth estimation of a low-cost camera.
This paper introduces the steps to automate contact-based robotic operations without a previous digital model of the workpiece, based on the use of a low-cost 3D camera and a built-in force sensor. Specifically, we introduce how to integrate RGB-D images to construct a 3D model, extract the workpiece, and select within the workpiece the area to be covered by the robotic tool to execute the contact-based operation (examples of searching subroutines within the workpiece are shown). The proposed system addresses operations like surface treatment, welding, and material deposition, and investigates the possibility of exploiting low-cost sensors, based on the assumption that inaccuracies in the defined trajectories can be, to some extent, compensated by a closed-loop force control adopted to guarantee the proper contact force.
Following the generated trajectory in position control mode (i.e., by defining only the poses to be reached by the tool center point) may indeed not guarantee a stable contact force between the TCP and the workpiece surface, even when very accurate trajectories are generated. Closed-loop force control should therefore be applied [3,5,13], thereby also compensating for inaccuracies in the trajectories.
In our previous conference paper [16], we introduced two 3D reconstruction techniques: an odometry-based technique and a technique that calculates the camera poses by exploiting the robot poses. The generated 3D models were evaluated by comparing them to the CAD model of the considered workpiece and by considering different KPIs that influence the usability in industrial applications, such as elaboration time. The focus of the current paper is to analyze the trajectory generated from these techniques. This is done by explaining, besides the odometry reconstruction algorithm, how to elaborate the 3D model and how to follow and assess the accuracy of the trajectory, steps that are common to the odometry-based and robot-based techniques. Different objects with different geometries, durability, and smoothness are considered in the tests. In addition, technical details of the process parameter configuration, which is common to the two techniques, are given in Section 3.1 (parameter tuning). Different tests and statistical analyses are carried out to show the effect of the configuration parameters on trajectory accuracy.
The paper is organized as follows: in Section 2, the proposed solution is introduced, focusing on the 3D model reconstruction of the observed scene from multi-view RGB-D images, the workpiece extraction from the 3D model, and the trajectory planning to cover the area of interest. In Section 3, the experimental setup and the tests carried out to assess the performance of the proposed solution in common situations of contact-based operations are described; reference is made to operations in which a pre-set contact force has to be applied on plane or curved-shaped objects. Finally, in Section 4, conclusions are drawn.
Proposed solution
In this section, the developed robotic solution is introduced; a summary is shown in Fig. 1.
Step 1, data acquisition, consists of moving the camera following a scanning trajectory to capture RGB-D images of the working space. In step 2, image matching and pose calculation, the gathered images are analyzed to estimate the camera poses from which they have been captured. In step 3, the information contained in the RGB-D images is integrated to reconstruct a 3D model of the observed scene. Step 4 consists of the extraction of the workpiece from the entire reconstructed 3D model. For trajectory planning, in step 5, the area to be involved in the manufacturing operation is selected to generate the working trajectory.
Step 6 covers the transformation of the trajectory into the robot frame and its execution in force control mode. To develop the robotic solution, the system hardware setup shown in Fig. 2 is used. The system is made of a 6-axis UR5e collaborative robot [17], a D415 Realsense stereo depth camera [18], and a 3D printed tool used for the assessment of the accuracy of the trajectories generated by means of the reconstructed 3D model, as will be discussed in Section 3. The developed algorithms are based on the use of open-source libraries such as the Intel Realsense Software Development Kit [18] for Intel Realsense 3D camera control and the Open3D library [19] for 3D data elaboration and integration. The point cloud elaboration and the search routines to select specific areas within the workpiece point cloud have been developed in Python.
Data acquisition
Sensor setup and data acquisition are the first steps in the algorithm development. The Realsense D415 camera captures color and depth images using different sensors and allows configuring the image quality based on the user's preferences. According to the sensor data sheet [18], the sensors can be configured with resolutions of up to 1280 × 720 pixels for the depth sensor and 1920 × 1080 pixels for the color sensor. Sensor resolution has a significant effect on the output accuracy, and the optimal configuration parameters for the considered case are discussed in Section 3.1.1. In these high-resolution configurations, the sensor can capture images at a frame rate of up to 30 frames per second (fps).
Color and depth images are captured by different sensors and could have different resolutions. So, the first step is to transform the two kinds of images to have the same characteristics. The two images are aligned to have the same size and to geometrically reference the same physical sensor origin. The alignment procedure, explained in detail in our previous work [13], allows the alignment of depth to color or vice versa.
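As an illustration of this step, the following Python sketch (not taken from the authors' code) uses the pyrealsense2 bindings of the Intel RealSense SDK to capture one frame pair and align depth to color; the stream settings follow the data-sheet values quoted above and are only indicative.

```python
# Minimal sketch: capture one RGB-D pair with the D415 and align depth to color.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # map depth pixels onto the color sensor frame

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()
finally:
    pipeline.stop()
```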
To acquire the images that allow the 3D reconstruction of an object, all parts/sides of the workpiece have to be covered and visible in the multi-view images. The scanning trajectory planning can be automated to optimize the visibility of the working area where the workpiece is placed. This process can be considered a 3D multi-goal path planning problem, where the multi-view 3D camera poses are determined to optimize the coverage of certain objects. In [20], a solution is proposed to find the optimal path to guarantee the visibility of a given set of objects.
In this work, the scanning trajectory is planned manually, making the following assumption. Supposing that the workpieces are always fixed inside a predefined area within the robot workspace, the scanning trajectory is planned in such a way that the 3D camera is moved and rotated toward different viewpoints that cover the working area. In these viewpoints, the 3D camera is always pointed toward the robot working area where the workpiece is placed. The scanning trajectory consists of acquisition movements that start by capturing images of the working area from the top; then the robot moves the camera to capture images of the different sides. The trajectory is approximately half of a rectangle, in which three sides of the working area are covered (the side facing the robot base and the two lateral sides). The outer face is not reachable by the robotic setup used, due to the size limit of the robot; a robot arm with a higher reach could be used to cover it.
The explained scanning movements are feasible for scanning flat and curved objects without cavities and undercuts. For objects with more complicated geometries or having undercuts, the scanning movements can be modified to also cover the undercuts or occluded parts by adding camera views of these areas. Limitations on workpiece size, position, and orientation depend on the reach of the robot used; when a bigger or smaller robot arm is used, a scaling factor can be applied to cover the new working area. The 3D camera is synchronized to capture the multi-view images of the working area while following the scanning trajectory. The total length of the scanning trajectory is divided into steps that define where the 3D camera captures an image of the working area. The step magnitude between one image and the following one, in millimeters, influences the total number of images and also the overlap between them. Very small steps may lead to an excessive number of images covering similar parts of the object, while very big steps may cause only partial coverage of the object.
Other factors to be considered in planning the scanning trajectory are the robot movement speed, the scanning time, and the 3D camera frame rate. The relationship between all the mentioned factors can be described by the following system of equations (1):

n = l / Δl
t = l / V_r
fps = n / t    (1)

where n is the number of RGB-D images in the dataset, l is the length of the scanning trajectory, Δl is the scanning step, V_r is the robot linear speed, t is the scanning time, and fps is the frame rate of the 3D camera.
The RGB-D elaboration algorithm is based on the technique provided in [21], where the authors suggest using a maximum of 100 images. Knowing the length of the scanning trajectory and choosing a scanning time that satisfies the production requirements allows calculating the other variables, namely the robot linear speed and the camera frame rate. With l = 1000 mm and n = 100 images (see Figure 3), from the first equation of system (1) the scanning step is Δl = 10 mm/frame. Supposing a feasible scanning time of t = 10 s, from the second equation of system (1) the robot speed is V_r = 100 mm/s, and from the third equation the 3D camera frame rate is fps = 10 frames per second.
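The same numbers can be checked with a short numeric sketch of system (1); the values below are those quoted in the text and in Figure 3.

```python
# Numeric sketch of the system in Eq. 1, using the example values
# quoted in the text (l = 1000 mm, n = 100 images, scanning time t = 10 s).
l = 1000.0   # scanning trajectory length [mm]
n = 100      # number of RGB-D images in the dataset
t = 10.0     # scanning time [s]

delta_l = l / n   # scanning step between consecutive images -> 10 mm/frame
v_r = l / t       # robot linear speed -> 100 mm/s
fps = n / t       # required camera frame rate -> 10 frames per second
print(delta_l, v_r, fps)
```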
Image matching and camera pose estimation
The color and depth images contain different information about the observed scene, as they have been captured from different view angles. To create a unique 3D model, it is necessary to refer each image to a common reference frame and then integrate the information contained in the images to create the 3D model. To calculate the motion between the camera poses capturing the RGB-D images of the observed scene, the RGB-D odometry technique [22] is used. This technique, comparing two images of a static scene captured by a moving camera, determines the homogeneous transformation matrix that, applied to the second image, maps it to match the first one. In our previous paper [16], the information on the end-effector poses from which each image is captured was exploited to integrate all the images and build the 3D model. In the present paper, we discuss in more depth the possibility of reconstructing the camera poses by exploiting the odometry technique, which can be a useful alternative when the poses from which the images are taken are not available (e.g., manual free scanning).
The RGB-D odometry technique [21] determines the camera motion, i.e., the homogeneous transformation matrix, by solving an optimization problem. The objective function used in the optimization problem is shown in Eq. 2.
E(T) = (1 − σ) E_p(T) + σ E_g(T)    (2)

where E(T) is the objective function, computed considering two terms. The first term, E_p(T), considers pixel photo-consistency [23,24] (if the same pixel is visible in two images, it has to be represented by the same color and brightness in both images) [25] and is expressed as squared differences of pixel intensities. The second term, E_g(T), is a geometric objective function that measures the error of pixel positions. T is the homogeneous transformation matrix that transforms an image into the coordinate system of the other image; the result of the optimization problem is the optimal T. σ ∈ [0, 1] is a weighting term that balances the two terms.
Like most iterative closest point registration algorithms, the one used needs an approximate initial value [26] of the transformation matrix. This initial value is usually obtained by a global coarse registration algorithm based on the point cloud features. Controlling the step length Δl in Eq. 1 allows having a small motion between consecutive images and a large overlapping portion. A feasible initial value of the homogeneous transformation matrix is therefore the identity matrix, which is modified at every iteration to find the best solution minimizing the error of pixel color and position.
The camera reference frame corresponding to the pose from which the first RGB-D image is captured is considered the common reference frame to which all images must be referred. The RGB-D odometry algorithm calculates the camera motion between only two images; if the two images do not have enough overlapping pixels, the calculated motion may have low accuracy. To match all images of the dataset, each image is compared to the following one to find the matching homogeneous transformation matrix between them. The calculated individual matrices, representing the camera motions between consecutive images, are finally used to calculate, for every image, a homogeneous transformation matrix of the cumulative motion that refers it to the first image.
The procedure is explained in Algorithm 1. The input is the dataset of RGB-D image pairs of the working area. The output is an array of homogeneous transformation matrices that align every image to the first image of the dataset. H_aux is computed considering the objective function in Eq. 2 and applying the optimization procedure from [21]. H_k is the homogeneous transformation matrix that transforms the k-th RGB-D image into the coordinate system of the first image; it is given by the cumulative motion of the camera, starting from the first image until the camera pose from which the k-th image has been taken.
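A minimal sketch of Algorithm 1 using the Open3D odometry API (module paths as in recent Open3D releases) is shown below; `colors`, `depths`, and `intrinsic` are placeholders for the acquired images and the camera intrinsics, and the sketch is illustrative rather than the authors' implementation.

```python
# Sketch of Algorithm 1 with Open3D's RGB-D odometry (hybrid photometric/geometric term).
import numpy as np
import open3d as o3d

def cumulative_poses(colors, depths, intrinsic):
    rgbds = [o3d.geometry.RGBDImage.create_from_color_and_depth(c, d)
             for c, d in zip(colors, depths)]
    option = o3d.pipelines.odometry.OdometryOption()
    jacobian = o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm()
    poses = [np.identity(4)]          # first camera frame is the common frame
    H = np.identity(4)
    for k in range(1, len(rgbds)):
        ok, H_aux, _ = o3d.pipelines.odometry.compute_rgbd_odometry(
            rgbds[k], rgbds[k - 1], intrinsic,
            np.identity(4),           # identity as initial guess (small motion)
            jacobian, option)
        if ok:
            H = H @ H_aux             # accumulate the motion back to the first image
        poses.append(H)
    return poses
```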
Volume integration for 3D model construction
All the multi-view color and depth images of the working area are integrated to create a unique 3D reconstruction representing the observed scene. The volume integration process, introduced in [27,28], is based on the generation of a voxel grid representing the scene, in which the color and depth information of each pixel of each RGB-D image is integrated and referred to a global reference frame. The voxel grid is therefore a three-dimensional representation of all the information contained in the two-dimensional color and depth images. From the voxel grid, after applying the truncated signed distance function (TSDF), the isosurface representing the scene is found. The isosurface is the smooth and continuous surface interpolating the non-empty voxels in the grid.
The procedure consists of the following steps. A dataset of (1, …, n) RGB-D images is discretized in a voxel grid. Then, the signed distance functions s_1(x), …, s_n(x) are calculated, where s_i(x) is computed for each voxel x from the i-th RGB-D image. These values represent the distance of a voxel from the nearest surface along the camera's line of sight. Voxels between the observed surface and the camera origin have a positive value that increases as they get closer to the camera; voxels on the observed surface have null values, while non-visible voxels have a negative distance.
The depth measurements are subject to noise, and two depth measurements of the same point, using the same 3D camera and from the same point of view, may have different values. To better estimate the depth of the observed surface, the signed distance values from the different images are averaged to calculate a cumulative signed distance function S(x). To account for the fact that the same point in two images taken from different view angles may have different values, more weight has to be given to the depth value in the image that covers that point better. Weight values w_1(x), …, w_n(x) are therefore assigned, where w_i(x) is the weight of voxel x in the i-th RGB-D image; this weight measures the degree of certainty of the depth measurement of a point. The weight assignment depends on the orientation of the surface normal vector and the viewing angle: if the camera line of sight is codirectional with the surface normal, the weight is the highest, and it decreases as the angle between them increases. This relation can be represented as the dot product between the two vectors.
The weighted integration of all RGB-D images is given by the following Eqs. 3 and 4:

S(x) = ( Σ_{i=1..n} w_i(x) s_i(x) ) / W(x)    (3)
W(x) = Σ_{i=1..n} w_i(x)    (4)

where S(x) is the global signed distance value for every voxel of the integrated scene, and W(x) is the global weight for every voxel. Discretizing the functions S(x) and W(x) in a voxel grid allows calculating the zero-crossing, i.e., the isosurface having S(x) = 0, which describes the observed scene.
From practical tests, the grid voxel size can affect the accuracy of the reconstructed 3D model. In Fig. 4, a comparison between voxel sizes and a color image of an object is shown.
In Fig. 4a, a color image from the dataset used for the 3D reconstruction is shown. In Fig. 4b, the point cloud obtained with a big voxel size is shown, while in Fig. 4c, the point cloud obtained with a smaller voxel size is shown. The smaller voxel size allows reconstructing a 3D model with a higher level of detail and hence a higher degree of similarity with respect to the real object.
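A possible Open3D-based sketch of the integration step is shown below; it is illustrative only, with the truncation distance chosen arbitrarily and the voxel length set to the 1.3 mm value discussed in Section 3. `rgbds`, `poses`, and `intrinsic` are assumed to come from the previous steps.

```python
# Sketch of volume integration with Open3D's scalable TSDF volume.
# rgbds here are assumed to be created with convert_rgb_to_intensity=False
# so that the color information is preserved in the integrated model.
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.0013,                       # 1.3 mm voxels (see Section 3)
    sdf_trunc=0.01,                            # truncation distance, assumed value
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for rgbd, pose in zip(rgbds, poses):
    # integrate() expects the extrinsic (world-to-camera) matrix, i.e. the
    # inverse of the camera pose expressed in the common reference frame
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

scene_pcd = volume.extract_point_cloud()       # isosurface points with colors and normals
```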
3D model elaboration and trajectory generation
The result obtained from the integration process is the 3D model of the observed scene in the form of a point cloud.
The information contained in the point cloud for every point is the coordinates with respect to the reference frame (i.e., the one corresponding to the camera in the first image), the color in the form of RGB values, and the normal vector. In the considered setup, the xy-plane is parallel to the workbench where the robot and the workpiece are placed. The point cloud z coordinates represent the distance of each point from the workbench, with higher z values corresponding to more distant points. These values have been exploited to develop the searching algorithm aimed at identifying and selecting the area of the workpiece to be considered in the contact-based operation. The procedure is described in Algorithm 2.
After the reconstructed 3D model is built, by combining the color and depth images of the scene covered by the camera during the scanning process, a filtering process has to be applied to remove irrelevant points from the reconstructed 3D model. The point cloud can be trimmed by removing points far from the camera, which represent unrelated background points. In Algorithm 2, the filtering process consists of the evaluation of the depth value, i.e., the distance of a point from the working table on which the robot is placed; if the distance is greater than a predefined threshold, the point is excluded from the point cloud.
In order to find the workpiece in the remaining part of the point cloud, a plane segmentation process is applied to find the working table on which the workpiece is placed. Clusters of points forming a plane are found (neighboring points having depth within a defined range, i.e., 2 mm; this value accounts for the inaccuracies of the 3D camera in calculating the depth of the flat surface of the working table). The points forming the working table can be found by searching for the biggest planar cluster; removing these points removes the working table from the overall point cloud, leaving only the workpiece. Plane segmentation is done using the RANSAC search algorithm. RANSAC is an iterative algorithm that samples a subset of the dataset and uses it to calculate a fitting model (in the considered case, a plane). All points of the dataset are then checked to evaluate whether they fit the model. If the number of points that fit the model is lower than a defined threshold, the process is repeated, sampling other points and calculating a new plane model, until the model that fits most of the points in the dataset is found. To configure the plane search algorithm, three parameters are defined: the distance error, i.e., the maximum distance at which a point is still considered part of the plane; the number of points randomly sampled from the dataset to estimate the plane; and the number of iterations of the algorithm to sample points, estimate, and verify the plane. The values used are Error = 2 mm, points = 3, and iterations = 1000.
The result is the plane equation coefficients a, b, c, and d satisfying, for every point of the plane with coordinates x_p, y_p, and z_p, the plane equation a x_p + b y_p + c z_p + d = 0. Once the plane equation is found, the points forming it are selected and excluded from the point cloud, leaving only the workpiece points. Knowing the dimension of the working area where the workpiece is placed and how it is positioned with respect to the camera coordinates, and hence with respect to the origin of the point cloud, the coordinates of the remaining points are checked to verify whether they fall within the working area. Upper and lower bound values are used for this check, and points outside the bounds of the working area are excluded from the point cloud. The considered points are those satisfying the conditions:

x_lowbound < x_p < x_highbound
y_lowbound < y_p < y_highbound    (5)

where x_p and y_p are the point coordinates, and x_lowbound, x_highbound, y_lowbound, and y_highbound are the boundaries of the working area in the camera coordinates.
To avoid having sparse outlier points far from the workpiece points, due to sensor noise or to the 3D reconstruction process, a statistical outlier removal filter is needed. The filter removes the points far from the average of the points. It is based on the calculation of the mean distance between each point and its surrounding n neighbors; assuming a Gaussian distribution of the result, points whose value falls outside a predefined interval (in terms of average distance and standard deviation) are excluded.
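The filtering chain of Algorithm 2 can be sketched with Open3D as follows; the plane-search parameters are those given above, while the background distance and working-area bounds are placeholders (coordinates assumed in meters).

```python
# Sketch of the filtering chain: background removal by depth, RANSAC plane
# removal of the working table, working-area cropping (Eq. 5) and statistical
# outlier removal. Threshold and bound values are placeholders.
import numpy as np
import open3d as o3d

def extract_workpiece(scene_pcd, z_background=0.6,
                      x_bounds=(-0.2, 0.2), y_bounds=(-0.2, 0.2)):
    pts = np.asarray(scene_pcd.points)

    # 1) drop background points farther than the assumed working-table distance
    keep = np.where(pts[:, 2] < z_background)[0]
    pcd = scene_pcd.select_by_index(keep)

    # 2) find the working-table plane (Error = 2 mm, 3 points, 1000 iterations)
    plane, inliers = pcd.segment_plane(distance_threshold=0.002,
                                       ransac_n=3, num_iterations=1000)
    pcd = pcd.select_by_index(inliers, invert=True)   # remove the table points

    # 3) keep only points inside the working-area bounds (Eq. 5)
    pts = np.asarray(pcd.points)
    inside = np.where((pts[:, 0] > x_bounds[0]) & (pts[:, 0] < x_bounds[1]) &
                      (pts[:, 1] > y_bounds[0]) & (pts[:, 1] < y_bounds[1]))[0]
    pcd = pcd.select_by_index(inside)

    # 4) statistical outlier removal on the remaining points
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```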
Different subroutines are developed to select the part of the workpiece to be covered in the operation. An example of selecting a specific part of the workpiece is when a central area should be covered, as shown in Fig. 5c. To select that area, the coordinates of the points remaining after outlier removal are used. The first step is to calculate the centroid of the set of points considering the x and y coordinates. For a set of points P_1, …, P_n with coordinates (x_1, …, x_n) and (y_1, …, y_n), the centroid is given by Eq. 6:

x_c = (1/n) Σ_{i=1..n} x_i ,  y_c = (1/n) Σ_{i=1..n} y_i    (6)
After calculating the centroid of the points, offsets on the x and y axes are used to define the dimension of the area to be selected. The points within the limits of that area are selected, and the rest are excluded.
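A minimal sketch of this selection, assuming hypothetical offset values, is the following.

```python
# Sketch of the central-area selection: centroid of the workpiece points
# (Eq. 6) and selection of the points within given x/y offsets around it.
import numpy as np

def select_central_area(workpiece_pcd, dx=0.03, dy=0.03):
    pts = np.asarray(workpiece_pcd.points)
    cx, cy = pts[:, 0].mean(), pts[:, 1].mean()      # centroid, Eq. 6
    mask = ((np.abs(pts[:, 0] - cx) < dx) &
            (np.abs(pts[:, 1] - cy) < dy))
    return workpiece_pcd.select_by_index(np.where(mask)[0])
```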
In other cases, when the operation targets the edges of the workpiece, for example in polishing operations, the edges can be found by elaborating the normals of the workpiece point cloud and selecting the areas whose neighboring points show a large variation in the directions of the normals [29].
The selected portion of the workpiece is made up of a high number of close points; before the generation of the working trajectory, it is therefore necessary to downsample the point cloud. This removes points that are too close together and enforces a minimum distance between consecutive points, which helps the robot controller to better interpolate the movement commands and, in the applications considered, to achieve good contact force control.
Based on the operation to be executed, the order in which the selected points are followed can be changed, and parameters like movement speed and contact force have to be defined to generate a trajectory that guarantees good execution performance.
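A simple illustrative sketch of the downsampling and of one possible raster (zigzag) ordering of the selected points is given below; the spacing and row height values are assumptions to be adapted to the specific operation.

```python
# Sketch of trajectory preparation: voxel downsampling to enforce a minimum
# spacing between consecutive points, followed by a raster (zigzag) ordering.
import numpy as np

def order_trajectory(area_pcd, spacing=0.005, row_height=0.005):
    pcd = area_pcd.voxel_down_sample(voxel_size=spacing)
    pts = np.asarray(pcd.points)
    rows = np.round(pts[:, 1] / row_height).astype(int)   # group points into rows
    ordered = []
    for i, r in enumerate(np.unique(rows)):
        row_pts = pts[rows == r]
        row_pts = row_pts[np.argsort(row_pts[:, 0])]       # sort each row along x
        if i % 2 == 1:                                      # reverse every other row
            row_pts = row_pts[::-1]
        ordered.append(row_pts)
    return np.vstack(ordered)
```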
Figure 5 shows the process of 3D model elaboration to generate the trajectory for the contact-based operation.
In Fig. 5a, the overall point cloud of the observed scene, including the working table and the workpiece, is shown. In Fig. 5b, the workpiece considered in the contact-based operation is shown, and in Fig. 5c, the trajectory targeting the central area of the upper surface of the workpiece is shown.
In other cases, the operation may target one of the lateral faces of the workpiece. An example is shown in Fig. 6: in Fig. 6a, the left face of the workpiece is selected; in Fig. 6b, the front face is selected; and in Fig. 6c, the trajectory covering the right face is selected. It is possible to note that the normal vectors are available for every point; comparing the variation of their inclination in neighboring points, it is also possible to detect the edges of the point cloud, which is useful for polishing operations.
Having developed the integration algorithm based on the elaboration of color and depth images, the 3D model of the scene also preserves the color information. This allows selecting parts of the object by searching for areas having a specific color, which is useful in the case of surface finishing and cleaning. In Fig. 7, an example of a workpiece with a colored spot is shown: in Fig. 7a, the workpiece is shown; in Fig. 7b, the generated trajectory covering the colored spot is highlighted by the blue circle; and in Fig. 7c, the robot end-effector is shown while following the trajectory.
Trajectory transformation and force control configuration
The part of the workpiece to be covered by the robot end-effector is represented in the form of a point cloud referred to the camera reference frame of the first RGB-D image. To drive the robot end-effector to touch the trajectory points, a homogeneous transformation matrix H_CT is calculated. The matrix H_CT is computed using information about the geometry of the 3D camera support attaching it to the robot end-effector, the origin of the captured images with respect to the 3D camera frame, and the robotic tool dimension. The calculated homogeneous transformation matrix transforms the trajectory poses from the camera frame C to the robot end-effector frame T shown in Fig. 2.
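The application of H_CT to the trajectory points can be sketched as follows; the matrix itself is assumed to be already available from the geometric data described above.

```python
# Sketch of the frame transformation: trajectory points expressed in the
# camera frame C of the first image are mapped to the robot tool frame T
# by the homogeneous matrix H_CT.
import numpy as np

def transform_trajectory(points_camera, H_CT):
    pts = np.asarray(points_camera)                       # (N, 3) points in camera frame
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
    pts_tool = (H_CT @ pts_h.T).T                         # apply H_CT to each point
    return pts_tool[:, :3]
```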
In contact-based robotic operations, to obtain consistent outcomes, a constant force has to be applied on the workpiece surface. The generated trajectory, referred to the robot tool reference frame, is followed in force control mode along the axis Z_T, which points toward the workpiece surface. The force control loop adjusts the z-axis value for each point of the trajectory, moving it upward or downward to keep the contact force as stable as possible and close to the desired contact force setpoint. The Universal Robots manipulator is equipped with a built-in force sensor and a built-in PID force controller, which is used in the considered experiments.
The force control is also used for the accuracy assessment of the generated trajectory. A stable contact force while following the trajectory means that trajectory inaccuracies (e.g., due to the depth estimation of the 3D camera) have been compensated. A comparison between the generated trajectory and the one actually followed is used for the accuracy evaluation.
Several workpieces have been considered for experimentation (e.g., cardboard box, 3D printed flat object, 3D printed curved object, plastic container).The characteristics of the workpiece such as smoothness and durability highly influence the contact force behavior and error compensation.
Experimental setup and tests
In this section, a set of tests and experiments is shown to evaluate the performance of the developed algorithm. We consider cases that are most commonly found when contact-based robotic operations have to be executed over an object without any previous knowledge: operations on plane objects, operations on curved objects, and operations where it is necessary to apply a proper contact force to guarantee a correct execution. We also consider the effect that some parameters could have on the accuracy of the output of the developed algorithm, such as the camera resolution used to capture the RGB-D images and the voxel size considered during image integration.
The setup used during the experiments is the one shown in Fig. 2. In order to assess the accuracy of the trajectory generated from the reconstructed 3D model, the 3D printed robotic tool in the figure is moved toward the reconstructed object, following the generated trajectory. When the tool tip touches the object, the actual position is compared to the theoretical one generated from the 3D model of the object, and the difference between the actual and theoretical positions is used to calculate the error. This measurement is directly influenced by the rigidity of the workpiece; to overcome this limitation, the end-effector tip is stopped as soon as the load cell on the robot wrist starts measuring a contact force. Moreover, different objects with different rigidity are tested, i.e., a cardboard box with slightly lower rigidity and polylactic acid (PLA) 3D printed objects with higher rigidity. This experiment is done by exploiting the Universal Robots UR5e built-in force-torque sensor [17].
As a first test case, consider the one in which a robotic contact-based operation has to be executed over the center of the upper face of the cardboard box workpiece in Fig. 5. The working trajectory, selected from the workpiece point cloud and generated according to Section 2.4, is the one highlighted in red in Fig. 5c.
Applying force control and, in particular, a touch-stop function in which the robot stops as soon as it touches the workpiece, every point of the trajectory is reached separately, thus allowing the evaluation of its accuracy. Figure 8 shows a comparison between the trajectory points generated from the 3D model (blue line) and the tooltip feedback position obtained when it reaches the workpiece surface at each point of the trajectory (actual trajectory depth, orange line). The lines in the graphs are made by connecting the values measured by reaching every single point of the trajectory separately; the values indicate the height of the object at every point reached. The dashed line reports the mean values of the trajectory points of both the theoretically generated and the actually executed trajectory, highlighting a mean error of about 1.1 mm. The maximum discrepancy between the two curves is equal to 3 mm, and the minimum is 0.16 mm, as reported in Fig. 8b.
The values extracted from the reconstructed 3D model (blue line in Fig. 8a) have a relevant variance between consecutive points, due both to the 3D vision sensor accuracy in estimating the depth values and to the voxel size considered in reconstructing the 3D model. However, as further commented in the following, this can be acceptable in contact-based applications, where a certain geometrical error can be compensated thanks to the closed-loop force control, keeping the desired contact and thus guaranteeing the correct trajectory.
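The error statistics reported above can be computed with a trivial sketch of this kind, where the array names are placeholders for the generated and measured heights.

```python
# Sketch of the accuracy evaluation: for each trajectory point, the height
# estimated from the 3D model is compared with the height measured by the TCP
# when the touch-stop condition triggers.
import numpy as np

def trajectory_error(z_model, z_measured):
    err = np.abs(np.asarray(z_model) - np.asarray(z_measured))
    return {"mean": err.mean(), "max": err.max(), "min": err.min()}

# e.g. stats = trajectory_error(generated_depths, touched_depths)
# for the cardboard box of Fig. 5: mean ~1.1 mm, max 3 mm, min 0.16 mm
```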
Camera acquisition accuracy
The Intel Realsense D415 has several resolution configurations for the color and depth sensors that can be set by the user. To obtain the best resolution of the depth sensor, it is recommended to use 1280 × 720 pixels.
Changing the resolution settings has a direct effect on the minimum allowed distance between the depth sensor and the observed scene, i.e., the distance at which the depth processor starts to measure the depth values. For the maximum resolution of 1280 × 720 pixels, the minimum distance between the sensor and the scene is 450 mm; decreasing the resolution to 848 × 480 pixels, the minimum distance decreases to 310 mm.
In the application considered, where the D415 depth camera is moved by the robot arm to capture the images of an object, a required minimum distance influences the maximum height that the scanned workpiece can have since the height of the extended robot arm is limited.
Tests were carried out to compare the resolution configurations used to capture the multi-view images. The resolutions considered are 1280 × 720 and 848 × 480, and the 3D model reconstruction is made considering a voxel size of 5 mm. For each configuration, 10 different datasets of RGB-D images are acquired to assess the repeatability and consistency of the results of the proposed method. These images are used to reconstruct ten 3D models of the same object, which are used for trajectory generation, execution, and error measurement using the force control method (touch stop) explained at the beginning of this section. The output of each of the 10 tests is similar to the two plots in Fig. 8b, where the error for each point in the trajectory is calculated and these individual errors are used to calculate the overall mean error of the trajectory.
Results are shown in Fig. 9, in which the blue dots correspond to the mean error values of each of the 10 generated trajectories. The mean error values for the lower resolution fluctuate between 1.5 and 1.72 mm; for the higher resolution, the error is between 1.15 and 1.3 mm. Figure 10 reports the statistical analysis of the generated trajectories, considering every acquisition of the 10 tests for both the lower (Fig. 10a) and higher resolution (Fig. 10b). The data distribution is shown in terms of quartiles, median, maximum, and minimum values.
Increasing the resolution improves the accuracy of the generated trajectory but with the drawback of decreasing the maximum height of the workpiece.
Voxel size effect on Integrated volume using truncated signed distance function (TSDF)
The estimated surface describing the observed scene is generated by transforming the depth values stored in a voxel grid into an isosurface, as explained in Section 2.3. The accuracy of that surface depends on the dimension of the voxels forming the grid, as voxel values are obtained by averaging depth values from the depth images.
A bigger voxel dimension means that the depth values contained in a bigger area are averaged to calculate the single value representing the cell of the grid; hence, the output value is farther from the real value. A smaller size, instead, leads to considering the depth values of a smaller area and hence to a more accurate approximation.
Figure 11 shows the errors evaluated for two different voxel sizes adopted in the reconstruction of the 3D models from which the trajectories are extracted. Results correspond to the highest resolution of 1280 × 720 pixels. For the bigger voxel size of 5 mm, whose 3D model reconstruction is shown in Fig. 4b, higher errors are obtained, both in mean value and in maximum value (maximum error up to 1.35 mm). On the other hand, for the smaller voxel size of 1.3 mm, whose 3D model reconstruction is shown in Fig. 4c, lower errors are obtained (maximum error equal to 0.8 mm).
Performance evaluation
In this section, the performance of the generated mesh model is evaluated, considering the surface geometries most commonly found when contact-based operations are executed (i.e., plane and curved shapes). The parameters used for both data acquisition and 3D model reconstruction are the optimal ones emerging from the analysis in the previous paragraphs, i.e., a camera resolution of 1280 × 720 pixels and a voxel size of 1.3 mm.
Depth evaluation of a plane shape workpiece
The object shown in Fig. 4a is considered as an example of an object having a plane surface on which the contact-based operation has to be executed. The procedure is the one explained before, in which the end-effector is driven toward each of the points, one by one, belonging to the generated trajectory (covering the central area). The object used has a higher rigidity with respect to the object shown in Fig. 5, whose results are shown in Fig. 8. The object has a known height of 173 mm. In Fig. 12, the evaluation results are shown: the blue line represents the height extracted from the reconstructed 3D model, while the orange line represents the real height calculated from the measured robot TCP position when reaching every point of the trajectory. The average error is less than 0.5 mm, and the maximum error is equal to 0.8 mm.
Depth evaluation of a curved shape workpiece
This section focuses on the capability of the developed code to define a robot trajectory to be followed on a curved surface, which is another type of shape commonly considered in contact-based operations. In order to carry out the analysis, the curved object represented in Fig. 13a is adopted; Figure 13b reports the corresponding reconstructed 3D model.
The points defining the trajectory to be tested on the curved surface are highlighted in red. Figure 14 shows the test results: Figure 14a shows the comparison between the estimated height (blue line) and the actually measured height (orange line).
Figure 14b shows the corresponding error, having a mean value of 1.85 mm, minimum value of 1.2 mm, and maximum value of 2.2 mm.
In comparison with the values achieved for the plane surface, the reconstructed 3D model of the curved object has a lower accuracy.
Contact force behavior evaluation
In most contact-based robotic applications (e.g., polishing and surface treatment), the contact force applied by the tool on the workpiece surface must be kept rather close to the target value to guarantee the proper execution of the task. For this reason, the use of a force control loop is highly recommended in these applications, possibly exploiting a load cell on the robot's wrist. In such a case, the activation of the force control also enables compensation of the error in the generated trajectory, so that the low-accuracy/low-cost sensor exploited in this work can still be employed. As a final step of validation, this subsection presents the tests executed with a force control loop, following the trajectories generated and discussed in the previous sections. Different contact forces are applied on the plane-surface object of Fig. 4a, with the purpose of demonstrating that the low-cost sensor adopted in this work and the described procedure are suitable for generating trajectories accurate enough to guarantee the target contact force.
Figure 15 shows the contact force measured during the execution of the zigzag trajectory of Fig. 5. The same working trajectory is followed applying different contact force setpoints (2, 5, and 10 N).
In Fig. 15, the analysis of the force values is shown. The figure shows that the mean value in the three cases is equal to the setpoint, with some contact loss only in the case of the 2 N setpoint.
Conclusion
This paper discusses the possibility to exploit low-cost cameras for trajectory planning in robotic contact-based applications, in which the 3D model is not available in advance.
The proposed methodology is based on the integration of multi-view RGB-D images of the workpiece captured using a low-cost 3D camera. Compared to solutions exploiting more accurate but much more expensive sensors like laser scanners, this solution can be more feasibly exploited in small and medium enterprises, thanks to its low cost, and it also allows a quick reconfiguration of the robotic cell. Color and depth images are integrated to generate the 3D model without previous knowledge of the workpiece, so that trajectories for the robot end-effector can be generated in the absence of a CAD model of the workpiece. The information on the camera poses can either be used directly for the 3D reconstruction [16] or be estimated based on the RGB-D odometry technique; in this case, the presented methodology can also be used for free scanning, when the exploitation of a robot is not possible, or for quality monitoring of workpieces.
Experimental results show how to tune the relevant parameters that may improve the accuracy of the reconstructed 3D model and hence of the trajectory. The experimental tests also showed that the proposed solution can be properly applied over planar and curved workpieces, obtaining an error between the generated and the actually followed trajectory of ≤ 2 mm.
The exploitation of a contact force control to keep a proper contact force between the tool and the object surface allowed compensation of the positional errors, in the order of a few millimeters, between the generated and the actually required trajectory.
The scanning trajectory in the proposed solution is chosen offline, without consideration of the workpiece form. Future work will use an adaptable trajectory that, based on the reconstructed 3D model, generates viewpoints to cover undercuts, cavities, or occluded parts of the workpiece.
Data transparency All data and materials, as well as software applications or custom code, support the claims and comply with field standards.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 3 Acquisition trajectory to capture multi-view RGB-D images of a box
Figure 3 shows a practical example of a scanning trajectory, in which a scanning trajectory length of l = 1000 mm and a dataset size of n = 100 images are used. The figure shows the coordinate systems representing the camera pose at each acquired RGB-D image.
Fig. 4 Voxel size effect on point density
Fig. 5 Point cloud elaboration, object extraction, and trajectory generation: a point cloud of the reconstructed scene, b extraction of the workpiece, and c trajectory generation
Fig. 12 Height comparison of a plane object
Fig. 13 Curved object and trajectory generated
| 11,165 | 2023-08-12T00:00:00.000 | ["Engineering", "Computer Science"] |
Pesticides on food.
Pesticides on Food
On page 390 of the October issue of EHP (volume 101, number 5), there is a graph that purports to show the intake of pesticides by children in milligrams per kilogram per day. I think you owe a prominent correction/explanation to your readers. The original figure in the National Academy of Sciences report (figure 5-1, p. 172) shows that this is intake of food, not pesticide residue. The only point of the figure is that infants eat more of certain commodities than do adults on a body weight basis. If there is residue present, and if it survives processing, then they would get a correspondingly higher exposure. However, the situation is nothing like you imply.
Furthermore, I could not find any place in the report that says children consume 60 times more fruit than adults. Table 5-5 of the NAS report (p. 183) shows that apple juice may be consumed by non-nursing infants at 16 times the national average (again, relative to body weight). For 1-6 year olds, the factor is 3 or less for most foods.
Finally, while it is true that concern about this report generated much activity within Congress and several federal agencies, a careful reading of the actual report will show that the concerns are largely theoretical in nature. Improvements are desired in consumption data, toxicity testing, the overall regulatory approach, etc. There is nothing in the report, despite quotes to the contrary, that demonstrates that the food supply is unsafe for children or any other subset of the population.
Thomas D. Trautman
General Mills, Inc.
Minneapolis, Minnesota
Editor's Note: The caption for the graph on page 390 of volume 101 refers to infant intake of pesticides on raw agricultural commodities, but the graph shows only amounts of food intake in proportion to body weight, not pesticide residues. We regret any confusion resulting from this caption.
Heterocyclic Amine-induced Cancer and Myocardial Lesions in Nonhuman Primates
Two articles in this issue from Adamson and co-workers (p. 190) describe the effects of feeding monkeys carcinogens formed during the cooking of food derived from animal muscle. In the first paper, the frequency and descriptive pathology of 2-amino-3-methylimidazo[4,5-f]quinoline (IQ)-induced liver tumors are reported. In the second paper, a histopathological study of perfusion-fixed hearts of tumor-bearing monkeys showed a variety of myocardial lesions with exposure to IQ.
The major impact of this work is that a nonhuman primate species, the cynomolgus monkey, develops liver tumors after exposure to a heterocyclic amine that is ubiquitous in our cooked food supply (1,2). Not only do the monkeys under test get tumors, but 43 months was the average latent period for the high-dose animals, equivalent to 15-25% of the animals' life span, a very quick response.
An important question arises from this research: Do the high doses (10 and 20 mg/kg) used chronically in these experiments relate to human exposures, and if not, are the results still significant? Humans eating well-done muscle-derived meats consume 10,000-100,000 times less material daily per kilogram of body weight than do the monkeys (3). There are a number of studies that attempt to answer this question about high-dose extrapolation. They suggest that at 10^4-10^6 times lower doses than used in the feeding studies discussed here, heterocyclic amines survive the acid in the stomach, are taken up by the bloodstream from the intestine, and are metabolized by the liver cytochrome P450-A metabolizing enzymes (4,5). The N-hydroxy metabolites are then either reactive in the liver after further conjugation to form DNA adducts and presumably liver tumors, or are found as DNA adducts in numerous nonhepatic tissues where the conjugation reactions probably occur locally (6). The question then is: do these reactions happen when the reactant is at 10,000 times lower concentration?
Apparently, the answer is yes. In specific rodent experiments conducted over many orders of magnitude of dose, DNA binding for these compounds appears linear down to the levels found in a single hamburger (7). The data suggest that repair of DNA damage (heterocyclic amine adducts)
| 1,292.2 | 1994-02-01T00:00:00.000 | ["Biology"] |
Alternative space definitions for P systems with active membranes
The first definition of space complexity for P systems was based on a hypothetical real implementation by means of biochemical materials, and thus it assumes that every single object or membrane requires some constant physical space. This is equivalent to using a unary encoding to represent multiplicities for each object and membrane. A different approach can also be considered, having in mind an implementation of P systems in silico; in this case, the multiplicity of each object in each membrane can be stored using binary numbers, thus reducing the amount of needed space. In this paper, we give a formal definition for this alternative space complexity measure, we define the corresponding complexity classes and we compare such classes both with standard space complexity classes and with complexity classes defined in the framework of P systems considering the original definition of space.
Introduction
P systems with active membranes have been introduced in [27], considering the idea of generating new membranes through the division of existing ones. The exponential amount of resources that can be obtained in this way, in a polynomial number of computation steps, naturally leads to the definition of new complexity classes to be compared with the standard ones.
Initially, the research activity focused on the investigation of time complexity. It was proved that, to go beyond the complexity class P , the creation of new membranes is a necessary feature to gain enough computation efficiency [40], unless non-confluent systems are used [34]. In [35] it was proved that P systems with active membranes can solve all problems in the class PSPACE in polynomial time, a result which is valid also for uniform systems, as proved in [6]. Relations with the classes EXP and EXPSPACE were investigated in [33].
A series of works then defined various complexity classes characterized by P systems that make use of different features. For instance, the works [12,13] focused on the crucial role of membrane dissolution; polarizationless systems have been investigated in [4,5,11,14]; constraints on membrane division [22] or on the depth of membrane structure [16] have been the subjects of other works, while [37,38] focused on the role of cooperation.
More recently, other aspects have also been studied. In [1,25] a different kind of membrane division, called separation (since objects are separated between the new membranes, rather than duplicated), is considered in the framework of P systems with active membranes; in [24] this kind of rule is applied in a different variant of P systems, having proteins on membranes. In [7,10] solutions for SAT are proposed which use different strategies than previously proposed solutions. Systems of shallow depth are the subject of [17][18][19]. A recent survey on different strategies to approach computationally hard problems by P systems with active membranes can be found in [36].
A natural research topic that has been approached after the first works on time complexity concerns space complexity, a notion introduced for the first time in the framework of P systems in [29]. The definition was based on a hypothetical real implementation by means of biochemical materials such as cellular membranes and chemical molecules. Under this assumption, every single object or membrane requires some constant physical space, which is equivalent to using a unary encoding to represent multiplicities. The relations between standard computational complexity classes and the space complexity classes defined in these terms have been studied, both when at least a linear amount of space is used [30,31], and when only a sublinear [32] or even constant [15] amount of space is available. A recent survey concerning results obtained by considering different bounds on space can be found in [39].
When defining space complexity for P systems, a different approach can also be considered, basing the definition of space on a simulation-oriented point of view. In fact, by considering an implementation of P systems in silico (like the ones in, e.g., [8,9]), it is not strictly necessary to store information concerning every single object: the multiplicity of each object in each membrane can be stored using binary numbers, thus reducing the amount of space needed.
In this paper, we consider this option: we introduce a formal definition for this alternative space complexity measure, we define the corresponding complexity classes, and we compare such classes both with standard space complexity classes defined for Turing machines and with complexity classes defined in the framework of P systems considering the original definition of space [29]. We give results concerning the use of a constant, polynomial, or exponential amount of space, trying to understand whether or not the classes of solvable problems differ.
The paper is organized as follows. In Section 2 we recall some definitions concerning P systems with active membranes and space requirements in P systems computations. In Section 3, we introduce a different definition for measuring space (which we call binary space to underline that information concerning objects is stored in binary) and we give some results following immediately from this definition. In Section 4 we compare the new binary space complexity classes with standard complexity classes and with space complexity classes for P systems based on the standard definition of space. Finally Section 5 draws some conclusions and presents some future research topics on this subject.
Basic definitions
In this section, we briefly recall some definitions that will be useful while reading the rest of the paper. For a complete introduction to P systems, we refer the reader to The Oxford Handbook of Membrane Computing [28].

Definition 1 A P system with active membranes of initial degree d ≥ 1 is a tuple Π = (Γ, Λ, μ, w_{h_1}, …, w_{h_d}, R), where:
• Γ is an alphabet, i.e., a finite non-empty set of symbols, usually called objects; in the following, we assume the elements of Γ to be totally ordered;
• Λ is a finite set of labels for the membranes;
• μ is a membrane structure (i.e., a rooted unordered tree, usually represented by nested brackets) consisting of d membranes, labelled by elements of Λ in a one-to-one way, defining regions (the space between a membrane and all membranes immediately inside it, if any);
• w_{h_1}, …, w_{h_d}, with h_1, …, h_d ∈ Λ, are strings over Γ describing the initial multisets of objects placed in the d regions of μ;
• R is a finite set of rules over Γ and Λ.
Membranes are polarized, that is, they have an attribute called electrical charge, which can be neutral (0), positive (+ ) or negative (−).
A P system can make a computation step by applying its rules to modify the membrane structure and/or the membrane content. The following types of rules can be used during the computation:
• Object evolution rules, of the form [a → w]^α_h. They can be applied inside a membrane labelled by h, having charge α and containing at least an occurrence of the object a; the copy of the object a to which the rule is applied is rewritten into the multiset w (i.e., a is removed from the multiset in h and replaced by the objects in w).
• Send-in communication rules, of the form a [ ]^α_h → [b]^β_h. They can be applied to a membrane labelled by h, having charge α and such that the external region contains at least an occurrence of the object a; the copy of the object a to which the rule is applied is sent into h becoming b and, simultaneously, the charge of h is changed to β.
• Send-out communication rules, of the form [a]^α_h → [ ]^β_h b. They can be applied to a membrane labelled by h, having charge α and containing at least an occurrence of the object a; the copy of the object a to which the rule is applied is sent out from h to the outside region becoming b and, simultaneously, the charge of h is changed to β.
• Dissolution rules, of the form [a]^α_h → b. They can be applied to a membrane labelled by h, having charge α and containing at least an occurrence of the object a; the copy of the object a to which the rule is applied is replaced by b, the membrane h is dissolved and its contents are left in the surrounding region.
• Elementary division rules, of the form [a]^α_h → [b]^β_h [c]^γ_h. They can be applied to a membrane labelled by h, having charge α, containing at least an occurrence of the object a but having no other membrane inside (in this case the membrane is said to be elementary); the membrane is divided into two membranes having both label h and charges β and γ, respectively; the copy of the object a to which the rule is applied is replaced, respectively, by b and c in the two new membranes, while the other objects in the initial multiset are copied to both membranes.
• Non-elementary division rules. These rules operate just like division rules for elementary membranes, but they can be applied to non-elementary membranes, containing membrane substructures and having label h. Like the objects, the substructures inside the dividing membrane are replicated in the two new copies of it.
A configuration of a P system with active membranes is described by the current membrane structure (including the electrical charge of each membrane) and the multisets located in the corresponding regions. A computation step changes the current configuration according to the following set of principles: • Each object and membrane can be subject to at most one rule per step, except for object evolution rules: this means that inside each membrane several evolution rules can be applied simultaneously, but each membrane can be involved only in a single communication, dissolution, or division rule per step. • The application of rules is maximally parallel: each object appearing on the left-hand side of evolution, communication, dissolution or division rules must be subject to exactly one of them (unless the current charge of the membrane prohibits it, and according to the fact that a membrane can be involved in a single communication, dissolution, or division rule per step). The same principle applies to each membrane that can be involved in communication, dissolution, or division rules. In other words, the only objects and membranes that do not evolve are those associated with no rule, or only to rules that are not applicable due to the electrical charges. • When several conflicting rules can be applied at the same time, a nondeterministic choice is performed; this implies that, in general, multiple possible configurations can be reached as a result of a computation step. • In each computation step, all the chosen rules are applied simultaneously (in an atomic way). However, to clarify the operational semantics, each computation step is conventionally described as a sequence of micro-steps as follows. First, all evolution rules are applied inside the elementary membranes, followed by all communication, dissolution and division rules involving the membranes themselves; this process is then repeated on the membranes containing them, and so on towards the root (outermost membrane). In other words, the membranes evolve only after their internal configuration has been updated. For instance, before a membrane division occurs, all chosen object evolution rules must be applied inside it; in this way, the objects that are duplicated during the division are already the final ones. • The outermost membrane cannot be divided or dissolved, and any object sent out from it cannot re-enter the system again.
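To make the above semantics concrete, the following minimal Python sketch shows one possible in-silico representation of a configuration (membranes with a label, a charge, a multiset of objects and children) and the application of a single evolution rule. All names and the dictionary-based layout are illustrative assumptions made for this example, not a prescribed implementation.

```python
# Minimal sketch (illustrative assumptions): a membrane stores its label, charge,
# a multiset of objects (a Counter) and its child membranes.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Membrane:
    label: str
    charge: int = 0                      # 0, +1 or -1
    objects: Counter = field(default_factory=Counter)
    children: list = field(default_factory=list)

def apply_evolution(m: Membrane, a: str, w: dict, charge: int) -> bool:
    """Try to apply an evolution rule [a -> w]_h^charge inside membrane m."""
    if m.charge != charge or m.objects[a] == 0:
        return False                     # rule not applicable
    m.objects[a] -= 1                    # one copy of a is consumed ...
    m.objects += Counter(w)              # ... and replaced by the multiset w
    return True

# toy configuration: a skin membrane containing one elementary membrane h
h = Membrane("h", objects=Counter({"a": 2, "b": 5, "d": 6}))
skin = Membrane("s", children=[h])
apply_evolution(h, "a", {"b": 1, "c": 1}, charge=0)
print(h.objects)                         # multiset is now a^1 b^6 c^1 d^6
```

A full simulator would additionally select a maximally parallel, non-conflicting set of rules at every step; the sketch only shows the bookkeeping for a single rule application.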
A halting computation of the P system Π is a finite sequence of configurations C = (C_0, …, C_k), where C_0 is the initial configuration, every C_{i+1} is reachable from C_i via a single computation step, and no rule of R is applicable in C_k. If this last condition is never reached (that is, in each configuration of the sequence there is at least one applicable rule), then a non-halting computation C = (C_i : i ∈ ℕ) is obtained, consisting of infinitely many configurations, again starting from the initial one and generated by successive computation steps. P systems can be used as language recognizers by employing two distinguished objects yes and no: exactly one of these must be sent out from the outermost membrane, and only in the last step of each computation, to signal acceptance or rejection, respectively; we also assume that all computations are halting.
In order to solve decision problems (i.e., recognize languages over an alphabet Σ), we use families of recognizer P systems Π = {Π_x | x ∈ Σ⋆}. Each input x is associated with a P system Π_x in the family that decides the membership of x in the language L ⊆ Σ⋆ by accepting or rejecting. The mapping x ↦ Π_x must be efficiently computable for each input length [23].
These families of recognizer P systems can be used to solve decision problems as follows.
Definition 2 Let Π be a P system whose alphabet contains two distinct objects yes and no, such that every computation of Π is halting and during each computation exactly one of the objects yes, no is sent out from the skin to signal acceptance or rejection. If all the computations of Π agree on the result, then Π is said to be confluent; if this is not necessarily the case, then Π is said to be non-confluent and the global result is acceptance if and only if there exists an accepting computation.
Definition 3
Let L ⊆ Σ⋆ be a language, D a class of P systems (i.e., a set of P systems using a specific subset of features), and let Π = {Π_x | x ∈ Σ⋆} ⊆ D be a family of P systems, either confluent or non-confluent. We say that Π decides L when, for each x ∈ Σ⋆, x ∈ L if and only if Π_x accepts.
Complexity classes for P systems are defined by imposing a uniformity condition on Π and restricting the amount of time or space available for deciding a language.
Definition 4 Consider a language L ⊆ Σ⋆, a class of recognizer P systems D, and let f : ℕ → ℕ be a proper complexity function (i.e., a "reasonable" one, see [26, Definition 7.1]). We say that L belongs to the complexity class MC^⋆_D(f) if and only if there exists a family of confluent P systems Π = {Π_x | x ∈ Σ⋆} ⊆ D deciding L such that: • Π is semi-uniform, i.e., there exists a deterministic Turing machine which, for each input x ∈ Σ⋆, constructs the P system Π_x in polynomial time with respect to |x|; • Π operates in time f, i.e., for each x ∈ Σ⋆, every computation of Π_x halts within f(|x|) steps.
In particular, a language L ⊆ Σ⋆ belongs to the complexity class PMC^⋆_D if and only if there exists a semi-uniform family of confluent P systems Π = {Π_x | x ∈ Σ⋆} ⊆ D deciding L in polynomial time.
The analogous complexity classes for non-confluent P systems are denoted by NMC^⋆_D(f) and NPMC^⋆_D.
Another set of complexity classes is defined in terms of uniform families of recognizer P systems.

Definition 5 Consider a language L ⊆ Σ⋆, a class of recognizer P systems D, and let f : ℕ → ℕ be a proper complexity function. We say that L belongs to the complexity class MC_D(f) if and only if there exists a family of confluent P systems Π = {Π_x | x ∈ Σ⋆} ⊆ D deciding L such that: • Π is uniform, i.e., for each x ∈ Σ⋆ deciding whether x ∈ L is performed as follows: first, a polynomial-time deterministic Turing machine, given the length n = |x| as a unary integer, constructs a P system Π_n with a distinguished input membrane; then, another polynomial-time deterministic Turing machine computes an encoding of the string x as a multiset w_x, which is finally added to the input membrane of Π_n, thus obtaining a P system Π_x that accepts if and only if x ∈ L.
• Π operates in time f, i.e., for each x ∈ Σ⋆, every computation of Π_x halts within f(|x|) steps.
In particular, a language L ⊆ Σ⋆ belongs to the complexity class PMC_D if and only if there exists a uniform family of confluent P systems Π = {Π_x | x ∈ Σ⋆} ⊆ D deciding L in polynomial time.
The analogous complexity classes for non-confluent P systems are denoted by NMC_D(f) and NPMC_D.
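The division of labour between the two machines of a uniform family can be illustrated with a small sketch. The functions below are stand-ins chosen only for this example (they decide a toy language of binary strings containing at least one '1'); they are assumptions for illustration, not an actual P system construction.

```python
# Illustrative sketch of the uniform-family decision scheme of Definition 5.
def build_system(n: int) -> dict:
    """Stand-in for the first Turing machine: builds Pi_n from the input length."""
    return {"size": n, "input_multiset": None}   # a trivial 'P system' skeleton

def encode_input(x: str) -> dict:
    """Stand-in for the second Turing machine: encodes x as a multiset w_x."""
    return {"one": x.count("1"), "zero": x.count("0")}

def run(system: dict) -> bool:
    """Stand-in for executing Pi_x; here: accept iff at least one 'one' object."""
    return system["input_multiset"]["one"] > 0

def decide(x: str) -> bool:
    system = build_system(len(x))               # Pi_n depends only on |x|
    system["input_multiset"] = encode_input(x)  # w_x is added to the input membrane
    return run(system)                          # Pi_x accepts iff x is in L

print(decide("0010"), decide("0000"))  # True False
```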
As stated in the Introduction, the first definition of space complexity for P systems introduced in [29] considered a possible real implementation with biochemical materials, thus assuming that every single object and membrane requires some constant physical space. Such a definition (in the improved version from [20], taking into account also the space required by the labels for membranes and the alphabet of symbols) is the following:
Definition 6
Considering a configuration C of a P system Π, its size |C| is the number of membranes in the current membrane structure multiplied by ⌈log|Λ|⌉, plus the total number of objects from Γ they contain multiplied by ⌈log|Γ|⌉. If C = (C_0, …, C_k) is a computation of Π, then the space required by C is defined as |C| = max{|C_0|, …, |C_k|}. The space required by Π itself is defined as the supremum of the space required by all computations of Π: |Π| = sup{|C| : C is a computation of Π}. Following what has been done for time complexity classes, we can define space complexity classes. By MCSPACE_D(f(n)) (resp. MCSPACE^⋆_D(f(n))) we denote the class of languages which can be decided by uniform (resp. semi-uniform) families Π of confluent P systems of type D (for example, when we refer to P systems with active membranes, we denote this by setting D = AM), where each Π_x ∈ Π operates within space bound f(|x|).
In particular, the class of problems solvable in polynomial space by uniform confluent systems is denoted by PMCSPACE D , and the class of problems solvable in exponential space by uniform confluent systems is denoted by EXPMCSPACE D (adding a star in case of semi-uniform classes).
The corresponding classes for non-confluent systems are NPMCSPACE D and NEXPMCSPACE D .
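As a quick illustration of Definition 6, the following snippet computes the size |C| of a toy configuration; the alphabet, labels and multisets are assumed values chosen only for the example.

```python
# Toy computation of |C| under Definition 6 (unary-style accounting):
# every membrane costs ceil(log|Lambda|) bits and every object ceil(log|Gamma|) bits.
from math import ceil, log2

def unary_size(num_membranes: int, multisets, gamma: int, labels: int) -> int:
    """|C| = (#membranes) * ceil(log|Lambda|) + (total #objects) * ceil(log|Gamma|)."""
    total_objects = sum(sum(m.values()) for m in multisets)
    return num_membranes * ceil(log2(labels)) + total_objects * ceil(log2(gamma))

# two membranes over Gamma = {a, b, c, d} (|Gamma| = 4) with |Lambda| = 2 labels
multisets = [{"a": 2, "b": 5, "d": 6}, {"c": 1}]
print(unary_size(2, multisets, gamma=4, labels=2))   # 2*1 + 14*2 = 30 bits
```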
An alternative definition of space complexity for P systems
In this section, we first give a different definition of space complexity for P systems with active membranes. This definition considers the information stored in the objects of the systems, and not the single objects themselves. In other words, we store, using binary numbers, the multiplicity of each object in each membrane, thus reducing the amount of needed space with respect to the definition of space given in the previous section. We will do this considering, for each region, a sequence of couples, describing how many occurrences of each object are present (only for objects having at least one occurrence in the region). As an example, considering an (ordered) alphabet Γ = {a, b, c, d}, a multiset a^2 b^5 d^6 can be described by the sequence of couples (010, 00), (101, 01), (110, 11), where (010, 00) corresponds to 2 occurrences of the first symbol in Γ, that is a, (101, 01) to 5 occurrences of the second symbol b, and so on. Of course, different descriptions can also be considered: for instance, the bits describing the object can be avoided if we give, in order, the amount of each object, including objects having zero occurrences (sometimes this would allow saving space, but sometimes this would require more space, as in the case of sparse information; see, e.g., [21]). We leave as an open research topic the question whether or not different descriptions allow improvements in space usage.
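The encoding just described can be reproduced in a few lines of code. The fixed-width choice for the two fields of each couple is our assumption (the text leaves the exact description open); the snippet recovers exactly the couples of the example above.

```python
# Sketch (assumed fixed-width fields, ordered alphabet) of the couples-based encoding.
from math import ceil, log2

def encode(multiset: dict, alphabet: list) -> list:
    """Encode a multiset as couples (multiplicity in binary, object index in binary)."""
    idx_bits = max(1, ceil(log2(len(alphabet))))                 # bits for an object index
    mult_bits = max(1, ceil(log2(max(multiset.values()) + 1)))   # bits for a multiplicity
    couples = []
    for i, obj in enumerate(alphabet):
        n = multiset.get(obj, 0)
        if n > 0:                                                # skip absent objects
            couples.append((format(n, f"0{mult_bits}b"), format(i, f"0{idx_bits}b")))
    return couples

print(encode({"a": 2, "b": 5, "d": 6}, ["a", "b", "c", "d"]))
# [('010', '00'), ('101', '01'), ('110', '11')]
```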
We will refer to this definition of space as binary space, and we will add a symbol B where appropriate, to distinguish the definitions referring to this new measure from the definitions recalled in the previous section.
Definition 7
Consider a configuration C of a P system Π. Let us denote by h_1, h_2, …, h_z the membranes of the current membrane structure (we stress the fact that z can be smaller than, equal to, or greater than the initial number of membranes d, due to dissolution and duplication of membranes; we also stress the fact that we do not need to store unique IDs for membranes having the same label, as we can, for example, indicate the multisets of objects inside a string-like bracketed expression), and by |O_{i,j}| the multiplicity of object i within region j. The binary size |C|_B of a configuration C is defined as

|C|_B = z·⌈log|Λ|⌉ + Σ_{j=1..z} Σ_{i : |O_{i,j}|>0} ( ⌈log|Γ|⌉ + ⌈log(|O_{i,j}|+1)⌉ ),

that is, the number of membranes in the current membrane structure multiplied by log|Λ|, plus the number of bits required to store the description of the multiset in each region.
If C = (C_0, …, C_k) is a computation of Π, then the binary space required by C is defined as |C|_B = max{|C_0|_B, …, |C_k|_B}. The binary space required by Π itself is then obtained by computing the binary space required by all computations of Π and taking the supremum: |Π|_B = sup{|C|_B : C is a computation of Π}. Finally, let Π = {Π_x : x ∈ Σ⋆} be a family of recognizer P systems, and let s : ℕ → ℕ. We say that Π operates within binary space bound s if and only if |Π_x|_B ≤ s(|x|) for each x ∈ Σ⋆.
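For comparison with the unary-style measure, the next sketch computes |C|_B for the same toy configuration used after Definition 6; again, all concrete values are assumptions made only for the illustration.

```python
# Toy computation of |C|_B under Definition 7 (binary, couples-based accounting).
from math import ceil, log2

def binary_size(multisets, gamma: int, labels: int) -> int:
    """|C|_B: membrane labels plus one (index, multiplicity) couple per object present."""
    bits = len(multisets) * ceil(log2(labels))           # membrane structure: z * log|Lambda|
    for m in multisets:
        for count in m.values():
            if count > 0:                                # only objects actually present
                bits += ceil(log2(gamma)) + ceil(log2(count + 1))
    return bits

multisets = [{"a": 2, "b": 5, "d": 6}, {"c": 1}]
print(binary_size(multisets, gamma=4, labels=2))   # 19 bits, vs. 30 bits under Definition 6
```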
We can thus define space complexity classes considering this newly introduced size measure, as we did in the previous section. By MCBSPACE_D(f(n)) (resp. MCBSPACE^⋆_D(f(n))) we denote the class of languages which can be decided by uniform (resp. semi-uniform) families Π of confluent P systems of type D, where each Π_x ∈ Π operates within binary space bound f(|x|), considering this new definition of binary space. Similarly, we can define the usual complexity classes as in the previous section, simply adding a B to underline the use of this new definition of space. For instance, the class of problems solvable by uniform (resp. semi-uniform) systems in polynomial binary space will be denoted by PMCBSPACE_D (resp. PMCBSPACE^⋆_D). Once these notions have been defined, we are ready to state some results obtained by considering various complexity classes defined in terms of binary space. Just as happens with the classes based on the original definition of space given in [29], some results follow immediately from the definitions (we denote a result that holds for both semi-uniform and uniform systems by [⋆]):
MCSPACE^[⋆]_D(f(n)) ⊆ MCBSPACE^[⋆]_D(O(f(n))) for every function f : ℕ → ℕ, and in particular PMCSPACE^[⋆]_D ⊆ PMCBSPACE^[⋆]_D and EXPMCSPACE^[⋆]_D ⊆ EXPMCBSPACE^[⋆]_D.
The results describing closure properties and providing an upper bound for time requirements of P systems operating in bounded binary space are still valid, too:
Proposition 3 The complexity classes PMCBSPACE^[⋆]_D, NPMCBSPACE^[⋆]_D, EXPMCBSPACE^[⋆]_D, and NEXPMCBSPACE^[⋆]_D are all closed under polynomial-time reductions.
Proof Consider a language L ∈ PMCBSPACE^⋆_D and let M be the Turing machine constructing the family Π that decides L. Let L′ be reducible to L via a polynomial-time computable function f.
We can build a Turing machine M′ working as follows: on input x of length n, M′ computes f(x); then it behaves like M on input f(x), thus constructing Π_{f(x)} (we stress the fact that, for the corresponding result concerning the uniform case, the construction of the P system involves two Turing machines, both operating in polynomial time; in this case, we simulate the composition of the two machines). Since |f(x)| is bounded by a polynomial, M′ operates in polynomial time and Π_{f(x)} in polynomial binary space; it follows that Π′ = {Π_{f(x)} | x ∈ Σ⋆} is a polynomially semi-uniform family of P systems deciding L′ in polynomial binary space. Thus L′ ∈ PMCBSPACE^⋆_D. The proofs for the three other classes and for the corresponding uniform classes are analogous. ◻

Proposition 4 The complexity classes of confluent P systems defined in terms of binary space are closed under complement.

Proof By reversing the roles of the objects yes and no, the complement of a language can be decided. ◻
Proposition 5
For each function f : ℕ → ℕ, MCBSPACE^[⋆]_D(f(n)) ⊆ MC^[⋆]_D(2^{O(f(n))}).

Proof Let L ∈ MCBSPACE^⋆_D(f(n)) be decided by the semi-uniform family Π of recognizer P systems in binary space f; let Π_x ∈ Π with |x| = n and let C be a configuration of Π_x.
The configuration C is described by the membrane structure and the objects inside it. The information concerning objects is stored using bits, as described above. The membrane structure can be stored directly using a bracketed expression. For z membranes, the binary space allocated requires z × log(|Λ|) bits; even by adding a constant number of bits for each bracket corresponding to each membrane, the space required is O(z × log(|Λ|)). The binary space required to describe C is therefore at most f(n) (up to a constant factor), so the number of distinct configurations of Π_x is bounded by 2^{O(f(n))}. Since Π_x is a recognizer P system, by definition every computation halts: then it must halt within 2^{O(f(n))} steps in order to avoid repeating a configuration.
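For concreteness, the counting bound underlying the proof can be written as follows (a sketch, with all logarithmic factors absorbed into the constant c):

```latex
% Each configuration of \Pi_x is described by at most c \cdot f(n) bits, for some
% constant c, so the number of pairwise distinct configurations is at most
\[
  2^{\,c \cdot f(n)} \;=\; 2^{O(f(n))},
\]
% and a computation that visits only pairwise distinct configurations has length
\[
  k \;\le\; 2^{O(f(n))}.
\]
```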
The same argument, with only some small differences, also works in the non-confluent case. All possible computations halt, even if not necessarily agreeing on the answer. Due to non-confluence, each computation can also contain repeated configurations. Nonetheless, for each computation containing a repeated configuration, there exists an equivalent one obtained by removing cycles in the computation path.
The proof for the uniform classes is analogous. ◻
Comparison with standard computational complexity classes
In this section, we compare the standard computational complexity classes with the complexity classes defined in the framework of P systems working in binary space.
Most results are an immediate consequence of the results given in [29], simply considering that every P system operating within space bound f(n) under the original definition also operates within binary space bound O(f(n)); thus, recalling various results from [29], the corresponding statements carry over when the space classes are replaced by the binary space classes (Proposition 6, where EAM and AM^0 denote the classes of P systems with active membranes using only elementary membrane division and without polarizations, respectively).

For a constant amount of space, moreover, we have:

Proposition 7 P = MCSPACE^⋆_AM(O(1)) = MCBSPACE^⋆_AM(O(1)).

Proof The inclusion P ⊆ MCSPACE^⋆_AM(O(1)) follows immediately from the definition of semi-uniform P systems. Consider a language L in P and a string x; a deterministic Turing machine can create in polynomial time a P system having a single membrane and one single object yes or no, directly answering the question whether or not x ∈ L. The inclusion MCSPACE^⋆_AM(O(1)) ⊆ MCBSPACE^⋆_AM(O(1)) follows, as stated above, from the definition of binary space.
For the converse, we simply need to recall that a confluent semi-uniform P system without membrane division can be simulated, in polynomial time, by a deterministic Turing machine, as shown in [40]. It is easy to see that the proof works for both the standard space definition and the binary space definition for P systems. Even when the division of membranes is allowed, but the system can only use an amount of space that is limited to a constant, the total number of membranes is limited by a constant and, as a consequence, the total number of configurations is polynomially bounded. Hence, the same simulation is still valid. ◻

It follows that, for semi-uniform systems, when we allow only a constant amount of space, the improved storage allowed by binary space does not lead to improved efficiency.
Another interesting result concerning the standard definition of space in the framework of P systems was presented in [30], and it focuses on the type of resources used. In particular, a solution for the PSPACE-complete problem Quantified 3SAT was given for uniform systems using only communication rules (hence no evolution, membrane division, or dissolution rules were used), thus proving the inclusion of PSPACE in this class. Once again, since the definition of binary space allows a more efficient allocation of space, the result is still valid:

Proposition 8 Let us denote by AM(-ev,+com,-dis,-div) the class of P systems with active membranes using only communication rules (while rules for object evolution, dissolution, and division of membranes are not used). Then PSPACE ⊆ PMCBSPACE^[⋆]_{AM(-ev,+com,-dis,-div)}.
Once again, it would be interesting to understand whether or not the result remains valid for a smaller binary space class. In this case, the question can be answered negatively, by considering a result presented in [31]. In the article, it was shown that recognizer P systems with active membranes using polynomial space characterize the complexity class PSPACE . The result holds for both confluent and nonconfluent systems, and even in the case that non-elementary division is used. In particular, it was pointed out that such systems can be simulated by polynomial space Turing machines.
By considering the alternative definition for binary space, we can thus obtain the corresponding theorem:

Theorem 9 Let Π be a non-confluent P system with active membranes, running in binary space S. Then, it can be simulated by a deterministic Turing machine in space O(S^2).
Proof
We simulate Π by means of a non-deterministic Turing machine N. The current configuration of Π can be stored explicitly by N: the membrane structure is represented directly by using a bracketed expression, while multisets of objects inside each region are stored by means of tuples of integers encoded in binary. Of course, the same considerations we made in the proof of Proposition 5 hold also in this case.
For the simulation, we can use the same algorithm as in [31]: the space required by N to store the further information needed to carry out the simulation is then limited by S. It follows that the total amount of space required by N is of the same order as the one required by Π, that is, O(S).
Using Savitch's theorem [26], it is straightforward to see that N (and thus Π) can be simulated by a deterministic Turing machine in space O(S^2). ◻

It follows immediately from this theorem, from the results in [31], and from Proposition 8:

Corollary 10 PSPACE = PMCSPACE^[⋆]_AM = PMCBSPACE^[⋆]_AM.

Hence, even when a polynomial amount of space is used, the complexity classes defined on the basis of the definition of binary space coincide with the complexity classes defined in terms of the original definition of space (for systems using at least communication rules).
In [3] it was shown that exponential space Turing machines can be simulated by polynomially uniform exponential-space P systems with active membranes. In view of this result and of Theorem 9, and of the definition of binary space, we have the following:
Theorem 11
The following equivalences hold for an exponential amount of space: EXPSPACE = EXPMCSPACE^[⋆]_AM = EXPMCBSPACE^[⋆]_AM.
Conclusions
We have proposed an alternative space complexity measure for P systems with active membranes, where the multiplicity of each object in each membrane is stored by using binary numbers. We have defined the corresponding complexity classes and we have compared some of them both with standard space complexity classes and with complexity classes defined in the framework of P systems considering the original definition of space [29].
It turned out that, for the various systems considered, the computational classes defined on the basis of binary space do not differ from the corresponding classes defined on the basis of the original space definition for P systems. Among the various systems for which we proved such a result, we underline in particular that this is the case when we consider systems using all features of P systems with active membranes and a polynomial or exponential amount of space, as well as semi-uniform systems working in constant space.
It would be interesting to find other classes for which the improved storage efficiency obtained by considering binary space does not make any difference in computational efficiency, and to understand which features can be used, or are necessary, to obtain the same result. It also remains an open problem to find, on the contrary, specific classes where this difference exists, thus proving that storing the information concerning objects in an efficient way can really be exploited in some cases. We conjecture, for instance, that this is the case for complexity classes defined by systems using a logarithmic amount of space. Another possible research direction is to consider further variants of the definition of space, and compare them with standard and binary space, or to consider the space required to describe the whole system executing the computation, that is, including not only the data (objects and membranes, in our case) but also the program (the rules, in our case).
Local gate control of Mott metal-insulator transition in a 2D metal-organic framework
Electron-electron interactions in materials lead to exotic many-body quantum phenomena, including Mott metal-insulator transitions (MITs), magnetism, quantum spin liquids, and superconductivity. These phases depend on electronic band occupation and can be controlled via the chemical potential. Flat bands in two-dimensional (2D) and layered materials with a kagome lattice enhance electronic correlations. Although theoretically predicted, correlated-electron Mott insulating phases in monolayer 2D metal-organic frameworks (MOFs) with a kagome structure have not yet been realised experimentally. Here, we synthesise a 2D kagome MOF on a 2D insulator. Scanning tunnelling microscopy (STM) and spectroscopy reveal a MOF electronic energy gap of ∼200 meV, consistent with dynamical mean-field theory predictions of a Mott insulator. Combining template-induced (via work function variations of the substrate) and STM probe-induced gating, we locally tune the electron population of the MOF kagome bands and induce Mott MITs. These findings enable technologies based on electrostatic control of many-body quantum phases in 2D MOFs.
iii) The synthesis of a 2D-MOF on h-BN via supramolecular coordination chemistry is not new (see ref. 30). In addition, the gate control of the DCA3Cu2 2D-MOF electronic structures and charging was also already performed on graphene (ref. 31). iv) page 3 (lines 84-85): "a crystalline MOF domain" growth is claimed. However, compared to ref. 31, this is far from crystalline. As shown in Fig. 1(a) of the manuscript, the MOF domain is surrounded by excess DCA molecules, therefore the 3:2 stoichiometry was only successfully achieved on rather small regions of the sample (50 × 50 nm2). In addition, white islands embedded into the 2D-MOF domains may point to excess Cu islands. Can the authors identify such islands? There are also many additional defects in the MOF domain which are not discussed (black or dark blue points, wires or regions inside the domain). It is therefore almost impossible to find more than two h-BN pore regions together, covered by defect-free DCA3Cu2 2D-MOF. The authors should provide experimental evidence of a long-range ordered growth of the 2D MOF. Otherwise, the authors should discuss the defects appearing in the 2D-MOF domains, as well as their limitations to achieve a highly crystalline 2D-MOF growth. This is an important point since such small domain sizes and high defect density will preclude the use of such 2D-MOFs in electronic devices. v) page 4 (line 87): Here the growth of the 2D-MOF on top of the h-BN/Cu(111) electronic moiré is presented. However, it is known that the moiré periodicity can change approximately from 3 nm to 14 nm on the h-BN/Cu(111) samples grown with borazine in UHV (ref. 30). Is there any moiré periodicity preference for the successful growth of 2D-MOF islands on top? The authors should provide a discussion in this regard. vi) page 4 (lines 102-105): The authors should be transparent and mention that DCA3Cu2 2D-MOF (the very same system as the one shown here) has already been grown (as a full monolayer) on top of a rather good decoupling layer of graphene on Ir(111) (ref. 31).
vii) With respect to Figure 1: The authors only focus on the kagome band structure close to the Fermi level. However, it is known that more kagome bands should appear above and below the energy range considered here (ref. 19). Can the authors comment on this point? Do the authors have DFT calculations for a wider energy range? What does the band structure of the Mott insulating phase look like?
viii) The authors should provide STS performed over a larger voltage range (for instance from -2 V to +2 V) to see what happens to the additional kagome bands away from the Fermi level. Is there any strong molecular HOMO/LUMO or 2D-MOF valence/conduction band feature at lower/higher voltages, as already observed in ref. 33 and in A. Kumar et al., Nano Letters 18, 5596 (2018)? The latter reference should be added since DCA3Co2 2D-MOF was grown on the rather good decoupling graphene layer on Ir(111), whereby a 2D-MOF band structure signature was already observed. ix) page 6 (line 149): "the high crystalline growth makes a large disorder-related gap unlikely". If this is the case, how does the STS look at different points of the 2D-MOF island? Are the features identical if one performs STS on the dark pore regions? How do the electronic properties look at the edge of the 2D-MOF domain? At which position does the domain edge already play a role in disturbing the reported electronic properties?
x) With respect to Figure 2: Why are the spectra asymmetric? Why not show dI/dV maps at particular energies instead of STM images? xi) page 6 (lines 157-162): Figure 3 should be explained in more detail in the manuscript, especially panels 3e and 3f. The order of the figure description should be changed in the text (a, b, c, …). Regarding panel 3b, apart from the theoretically predicted peak at the Fermi level, which cannot be captured experimentally, there are many additional peaks in the unoccupied region (>0.2 V). These peaks or modulations are not present in the theoretical simulations. What are they? They have a similar amplitude to the relevant peaks close to the Fermi level and therefore most probably are not artefacts. The authors should describe them.
xii) Figure 3a,b: The authors perform STS at Cu adatom positions. For completeness, the authors should also perform the same STS sequence at the DCA anthracene lobe positions. Does the same Mott metal-insulator transition appear or evolve in the same fashion as on the Cu adatoms? xiii) Figure 3: Since the Mott metal-insulator transition is highly dependent on the work function variation of the h-BN/Cu(111) electronic moiré, the authors should provide more experimental evidence that this effect can be reproduced for different moiré periodicities. xiv) In Figure 4 the STS were performed at a DCA anthracene lobe position. This point is not mentioned in the manuscript and is important. For completeness, the authors should perform the gating experiment on a Cu adatom position. xv) Figure S10: It is not clear at all that the charging rings are increasing their perimeter around the DCA molecule (see ref. 31). It looks just the same intensity at the Cu positions. This is not consistent with the molecular charging ring features reported in ref. 31. These dI/dV maps do not support the interpretation of charging peaks given in Figure 4 of the manuscript.
Reviewer #2 (Remarks to the Author): In their manuscript entitled "Gate control of Mott metal-insulator transition in a 2D metal-organic framework", B. Lowe et al. claim that in a metal-organic network grown on an hBN/Cu(111) substrate it is possible to control a metal-insulator transition through two mechanisms. One of them is based on the modulation of the surface potential due to the presence of a moiré pattern between hBN and Cu(111). In the second part of the manuscript, they control the transition by varying the tip-sample distance. There are different aspects of the article that need to be clarified before the article can be published.
Figure 3 shows the spectra measured by moving the tip from the areas of the moiré called pores to the areas called wires. The experimental spectra show a modulation in the position of the bands depending on the moiré area. Upon reaching the areas called wires, the gap disappears and very intense peaks extend from the Fermi level up to +0.4 eV. These spectral features are not mentioned or discussed in the manuscript.
The calculations reproduce the energy position of the bands observed in the experiment until the wire areas, where the disagreement with the experiments is clear. The calculations corresponding to the wire areas show very pronounced and very narrow peaks. The authors attribute their origin to coherent quasiparticles. Contrary to what the authors say in the manuscript, the calculations do not show a gap collapse in the wire areas; on the contrary, it becomes wider and extends to almost +0.4 eV.
According to the authors, the control of the metal-semiconductor transition induced by the substrate is deduced from the comparison between the experimental results and the theoretical calculations shown in Figure 3. Before this can be stated it is necessary to discuss in detail the discrepancies mentioned above between theory and experiments.
In the second part, the authors show how the metal-insulator transition can be induced by changing the tip-sample distance. To do this, they carry out experiments in one of the areas called wires. Figure 4d shows how in this area the measured spectra may or may not show a gap depending on the tip-sample distance. This result somehow invalidates the manuscript's first claim that moiré registration controls the existence of a gap or not. It seems that the gap collapse depends on the parameters used to perform the measurements.
After reading the manuscript and the supplementary material, it is not clear to me how the assignments of the purple circles and red squares are made in Figure 4 or how it is decided whether the observed gap corresponds to a trivial insulator or a Mott insulator.
No details are given as to whether the energy levels shown in Figure S12 are a cartoon or the result of a calculation. This detail is important to understand how the authors identify the gap character.
Reviewer #3 (Remarks to the Author): In this work, the authors reported the experimental synthesis and characterization of a single-layer 2D DCA3Cu2 MOF on a wide bandgap BN substrate, which is further shown to host a robust Mott insulating phase and can achieve a Mott metal-insulator transition using electrostatic control. The experimental observations are quantitatively consistent with theoretical predictions (both DMFT calculations and DBTJ model). Direct experimental measurements on a Mott metal-insulator transition in 2D MOFs remain elusive. One of the challenges lies in the experimental synthesis of large single-crystal MOF samples. It is nice to see that the authors have succeeded in making one such sample and successfully demonstrated the Mott metal-insulator transitions induced via either template or tip. However, it is important for the authors to address a question that arises from the examination of the large-scale samples, as depicted in Figures 1a and S4. One can clearly see structural defects, such as vacancies and grain boundaries. As these defects may have a profound influence on the electronic properties of the MOF, it would be good for the authors to have some discussion about the potential impact of these bulk defects. This work represents a noteworthy contribution to the research field of 2D MOFs, shedding light on elucidating the Mott metal-insulator transition in MOFs. I would recommend its publication in Nature Communications after the authors address the aforementioned concerns.
Reviewer 1
B. Lowe et al. report on the DCA3Cu2 2D MOF fabrication (following the concepts of supramolecular coordination chemistry) on h-BN/Cu(111) under ultra-high vacuum (UHV). Its structural and electronic properties are studied by low-temperature scanning tunneling microscopy (STM) and spectroscopy (STS), supported with density functional theory (DFT) and dynamical mean-field theory (DMFT) calculations. The combination of the wide bandgap h-BN as a template (allowing the 2D-MOF to retain its intrinsic electronic properties), and of the adequate energy level alignment given by the h-BN/Cu(111) substrate (resulting in half-filling of the 2D-MOF kagome bands), promotes the realization of a correlated-electron Mott phase. With STS measurements, they find an electronic energy gap of around 200 meV, corresponding to a Mott insulating phase according to DMFT predictions. In addition, by tuning the electron population of the 2D-MOF near-Fermi band structure, via either template-induced (work function variation of the pores and wires of the h-BN/Cu(111) electronic moiré) or tip-induced gating (via STM probe-sample distance), a Mott metal-insulator transition in the 2D-MOF is proposed.
Even though this work claims the observation of an exotic many-body quantum phenomenon (a Mott metal-insulator transition) in a 2D-MOF, the experimental evidence and theoretical support are not strong enough. Therefore, the status of this work is still too preliminary for it to be published in Nature Communications. The Mott metal-insulator transition should be present in the entire 2D-MOF material (since this effect alters the band structure of the material). Therefore, the present claim that this effect can happen at the local scale (< 5 nm) sounds controversial.
Author reply:
Our dI/dV STS measurements show that: (i) the 2D kagome MOF exhibits an electronic energy gap of ~200 meV (Fig. 2), with occupied and unoccupied band edges following a spatial modulation (from pore to wire regions) given by local variations of the work function resulting from the hBN/Cu(111) moiré pattern (Fig. 3); (ii) when the STM tip is far enough from the surface (such as to not significantly shift the MOF energy levels via the double-barrier tunnelling junction effect), the entire 2D kagome MOF is in a Mott insulating phase, with Mott insulating energy gaps both at the pore and wire regions of the hBN/Cu(111) moiré pattern (Fig. 4 and SI Fig. S4); (iii) for smaller tip-sample distances, the wire region exhibits a metallic dI/dV spectrum, with no energy gap and a significant dI/dV signal at the Fermi level, while the pore regions remain Mott insulating (Fig. 3b). We interpret these experimental observations, which are reproduced by our DMFT calculations, as a transition at the wire regions from a Mott insulating phase to a metallic phase; this transition results from the depletion of the kagome MOF electronic bands due to the combination of: (i) the large local work function at the wire region (Fig. 3c), and (ii) tip-induced MOF energy level shifts due to the double-barrier tunnelling junction (Fig. 4).
In the Mott insulating phase (i.e., moiré pore regions, and moiré wire regions for large tip-sample distances), electronic states are intrinsically localised at kagome lattice sites due to strong on-site Coulomb repulsion, with an electronic mean free path that is smaller than the MOF lattice constant. That is, the concept of band structure within conventional band theory (i.e., eigenenergies of single-electron wavefunctions as a function of single-electron wavefunction wavevector k) can be ill defined, with near-Fermi Hubbard 'bands' that can be incoherent, and electronic phenomena are fundamentally local. Now, at moiré wire regions, when the tip-sample distance is reduced (Fig. 4), we claim that the combined effects of the local work function (larger than for the moiré pore regions; Fig. 3c) and the double-barrier tunnelling junction lead to a depletion of the localised electronic states of the MOF in the Mott insulating phase (i.e., effectively, a lowering of the MOF chemical potential), within the area of the tunnelling junction (with a typical characteristic length scale of ~10 nm given by the tip radius of curvature). This depletion of electronic states leads to the collapse of the Mott insulating phase into a metallic phase with no gap and an increased density of states at the Fermi level (Fig. 3b, e-g, and Fig. 4d). We acknowledge that this effect occurs within an effective area defined by the tunnelling junction cross-section (i.e., typically ~10 nm in length), and that electronic confinement at such moiré wire regions can lead to features in the local density of states that are not captured quantitatively by our DMFT calculations.
It is important to note that Mott insulating phases have been observed recently in monolayer 1T-TaS2 and 1T-TaSe2, which are 2D crystals with a hexagonal Bravais lattice and a lattice constant of ~2 nm, within crystalline domains with characteristic length scales of ~10 nm [1,2]. Importantly, 1T-TaS2 domains with characteristic length scales of ~5 nm can be metallic (i.e., ungapped and with a non-zero density of states at the Fermi level) when strained [3]. These phenomena are very similar to the case of our 2D MOF here, and support our interpretation of a Mott insulating phase in monolayer domains with characteristic length scales of ~5-10 nm, with possible local transitions to a metallic phase.
We have now clarified this in our main text discussion (before the conclusion). Our work is not preliminary; it provides a complete characterisation, description and interpretation of our experimental observations, supported by a theoretical formalism (DMFT) which depicts correlated-electron phenomena accurately and reliably (in contrast with other formalisms such as DFT) [4-6].

Additional experimental evidence, such as temperature dependence studies of the insulating gap size and angle-resolved photoemission spectroscopy measurements, is necessary.
Following this suggestion from Reviewer 1, we have now performed dI/dV STS measurements at DCA anthracene extremity sites of the 2D kagome MOF at 77 K, at pore and wire regions of the hBN/Cu(111) moiré pattern (with moiré periodicity similar to the moiré domain in the main text; see below and new Supplementary Fig. S31a). These measurements at 77 K reveal a ~200 meV electronic gap at pore regions, and no gap at the Fermi level and an increased Fermi-level dI/dV signal for wire regions (for an intermediate tip-sample distance, as for Fig. 3b in the main text), similar to what we observed at 4 K (with some trivial thermal broadening). We also performed further DMFT calculations (U = 0.65 eV) of the spectral function, for chemical potentials EF = 0.4 eV (Mott insulating phase) and EF = 0.2 eV (metal-like phase with no gap at the Fermi level), for temperatures between 29 and ~600 K (see below and new Supplementary Fig. S32). These DMFT-calculated spectral functions show no significant changes as the temperature varies within this range (except for trivial thermal broadening), in very good agreement with experiments. Importantly, these supplementary measurements and calculations indicate that: (i) the Mott insulating gap is robust up to high temperatures (in particular room temperature), and (ii) changes in dI/dV spectra are likely to become significant only for temperatures well above room temperature, at which STS measurements are challenging if not impossible. We have now added a new Supplementary Section S23 on this matter.
Reviewer 1 also suggests that angle-resolved photoemission spectroscopy (ARPES) measurements might be useful for supporting our claims. We agree that ARPES might be useful as a complementary technique in future studies on systems similar to the one we focus on here. However, ARPES is not adequate for the specific system that we consider here, a single-layer 2D MOF adsorbed on an atomically thin 2D insulator. ARPES does not allow for the detection of unoccupied states (useful for the observation of the Mott insulating energy gap). For our particular sample, k-resolution would be impossible due to the different rotational orientations of the hBN domains on Cu(111), which give rise to different moiré patterns (i.e., there would be multiple crystalline domains within the spot size of a UV light/X-ray source). Moreover, ARPES could lead to radiation-induced damage of the material of interest (in particular of compounds composed of organic molecules, as here). In our present case, where the MOF is adsorbed on insulating hBN, irradiation by UV light/X-rays is likely to result in charging (unless an electron flood gun is used), imposing further challenges on ARPES measurements. Importantly, conventional ARPES, a space-averaging technique, cannot provide the real-space resolution necessary for distinguishing MOF electronic properties at pore and wire regions of the hBN/Cu(111) moiré pattern (due to the large UV/X-ray spot size). It would also not allow for measuring MOF energy level shifts via the double-barrier tunnelling junction (DBTJ) effect (for which the STM tip is required), which drive the transitions from Mott insulator to metal at the wire regions. It is well established that Mott energy gaps and Mott insulating phases, including gate-controlled transitions to metallic phases, can be evidenced via dI/dV STS, without the need for ARPES [1,3,7-9]. Low-temperature dI/dV STS provides very good energy resolution, capability for detecting both occupied and unoccupied states non-invasively, and real-space atomic-scale resolution (useful here for distinguishing MOF electronic properties at wire and pore regions of the hBN/Cu(111) moiré pattern).

In addition, the authors should carefully address the following major concerns that I have (points i to xv).
i) The DCA3Cu2 2D-MOF has already been studied on a decoupling layer such as graphene (ref. 31). There, a long-range ordered 2D-MOF phase was achieved, by far exceeding the quality of the DCA3Cu2 2D-MOF presented in this manuscript. Even though the authors claim to have achieved a highly crystalline growth, this is not the case so far. The authors must provide better experimental proof to defend this point.
The DCA3Cu2 2D MOF has indeed been studied on graphene before, as acknowledged and referenced explicitly (previous Ref. 31; updated Ref. 33) in our manuscript [10]. This previous study is, however, fundamentally different from ours: graphene is a semimetal with no energy gap, whose electronic states hybridise with those of the 2D DCA3Cu2 MOF (see Fig. 3b
image in the Supplementary Information (SI) with a large amount of presumably unreacted Cu on the surface (bright features in the figure below, which the authors fail to explicitly comment on) and with clear domain boundaries. Note that the colour scale and resolution of this latter ~150x150 nm2 image hinder a reliable evaluation of the overall homogeneity or possible defects of the MOF. Moreover, no Fourier transforms (FTs) of these MOF domain STM images are provided, making a comparison of crystallinity impossible.
Synthesising a large single-crystal MOF is challenging. Here we succeeded in synthesising 2D MOF domains with characteristic lengths of several tens of nm, with very good quality in terms of monocrystallinity and defect density. This is evidenced by our large-scale STM images (60x60 nm2 in Fig. 1a, 300x300 nm2 in SI Fig. S7, and 100x100 nm2 in SI Fig. S8a), and, importantly, the FTs of these images, showing clear sharp diffraction peaks (inset in Fig. 1a; SI Fig. S8). Note that in our manuscript we explicitly acknowledge the presence of DCA-only domains, and that in the SI we explain that we sacrifice some of the overall MOF yield to obtain high crystallinity of the MOF domains.
Previous STM and dI/dV STS studies on 2D materials with a lattice geometry similar to that of our MOF here (e.g., single-layer 1T-TaS2, with a 2D hexagonal lattice and a lattice constant on the order of ~2 nm) have shown a Mott insulating phase on nanoisland domains with characteristic sizes on the order of ~10x10 nm2 [1]. That is, our monocrystalline MOF domains with characteristic lengths on the order of several tens of nm are sufficiently large to demonstrate a Mott insulating phase.
We have now updated our manuscript, nuancing our previous claim of "highly crystalline growth" and changing it to "monocrystalline growth of the MOF domains". We have now added comments to the main text discussing explicitly the presence of defects in our MOF domains and of DCA-only regions. We have also added a 300x300 nm2 image (Fig. S7) to Supplementary Section S6, highlighting the quality and homogeneity of the 2D MOF, and an SI section on how defects affect the MOF electronic properties (Supplementary Section S11).

ii) The authors claim in the abstract that correlated-electron phases, or 2D-MOF "quantum materials", have not been experimentally realized yet. However, in a recent paper, already uploaded to arXiv before this manuscript, Lobo-Checa et al. report 2D magnetism in a very similar 2D-MOF, i.e., the DCA3Fe2 2D-MOF grown on Au(111) (see https://arxiv.org/abs/2209.14994). The authors should give credit to other colleagues working in the same research field. This preprint should be mentioned in the introduction and cited.
We thank Reviewer 1 for drawing our attention to this relevant reference. We have now added it to the introduction as requested (updated Ref. 26). We acknowledge that Lobo-Checa et al. have reported ferromagnetism in the DCA3Fe2 MOF, as the result of exchange interactions (i.e., correlations) between unpaired
Fe 3d electrons across the organic DCA linkers [11]. The main message and physics reported in our manuscript are, however, fundamentally different: the Mott insulating phase results from strong Coulomb interactions between electrons populating the DCA3Cu2 MOF near-Fermi kagome bands, which have dominant DCA molecular orbital character, with the Cu 3d shell completely filled (see Kumar et al., DOI: 10.1002/adfm.202106474; updated Ref. 25 in our manuscript) [12]. We have now tried to emphasise and clarify this throughout the manuscript.
iii) The synthesis of a 2D-MOF on h-BN via supramolecular coordination chemistry is not new (see ref. 30). In addition, the gate control of the DCA3Cu2 2D-MOF electronic structures and charging was also already performed on graphene (ref. 31).
We agree with both Reviewer comments. We do not intend to claim otherwise on either point. We explicitly acknowledge that growth of a 2D MOF (with square lattice) on hBN [13,14] (previous Ref. 30; updated Refs. 31, 32; updated line 95 of main text), as well as tip-induced gating (i.e., charging via the double-barrier tunnelling junction effect; see Fig. 4a, b in old Ref. 31, now Ref. 33; line 247) of unoccupied molecular orbitals of the same DCA3Cu2 MOF on graphene, have been achieved previously [10]. We do emphasise, however, that our work represents the first experimental demonstration of: (i) growth of a single-layer 2D MOF with kagome crystal structure on an atomically thin wide bandgap 2D insulator, and (ii) electrostatic control over a Mott insulating phase therein. These findings are fundamentally different from those reported in old Refs. 30, 31.

iv) page 3 (lines 84-85): "a crystalline MOF domain" growth is claimed. However, compared to ref. 31, this is far from crystalline. As shown in Fig. 1(a) of the manuscript, the MOF domain is surrounded by excess DCA molecules, therefore the 3:2 stoichiometry was only successfully achieved on rather small regions of the sample (50 × 50 nm2). In addition, white islands embedded into the 2D-MOF domains may point to excess Cu islands. Can the authors identify such islands? There are also many additional defects in the MOF domain which are not discussed (black or dark blue points, wires or regions inside the domain). It is therefore almost impossible to find more than two h-BN pore regions together, covered by defect-free DCA3Cu2 2D-MOF. The authors should provide experimental evidence of a long-range ordered growth of the 2D MOF. Otherwise, the authors should discuss the defects appearing in the 2D-MOF domains, as well as their limitations to achieve a highly crystalline 2D-MOF growth. This is an important point since such small domain sizes and high defect density will preclude the use of such 2D-MOFs in electronic devices.
As outlined in our response to point i) above, we dispute the Reviewer's assessment of the comparison of the DCA3Cu2 MOF crystallinity between our manuscript here and previous Ref. 31 (updated Ref. 33). Previous Ref. 31 (updated Ref. 33) by Yan et al. provides two large-scale STM images of the DCA3Cu2 MOF on graphene: a ~40x40 nm² image in the main text of a MOF region showing good crystallinity but which is not defect-free [see Fig. provided in answer to point i)], and a ~150x150 nm² image in the SI with a large amount of presumably unreacted Cu on the surface [bright features in panel b of Fig. above in point i), which the authors fail to explicitly comment on] and with clear domain boundaries [10]. No Fourier transforms (FTs) of these MOF domain STM images are provided, making a comparison of crystallinity difficult.
We succeeded in synthesising 2D MOF domains with characteristic lengths of several tens of nm, with very good quality in terms of monocrystallinity and defect density. This is evidenced by our large-scale STM images (60x60 nm² in Fig. 1a and 100x100 nm² in Supplementary Fig. S8a), and, importantly, the FTs of these images, showing clear sharp diffraction peaks (inset in Fig. 1a of our manuscript; SI Fig. S8b, c). In our manuscript we explicitly acknowledge the presence of DCA-only domains in these images, and in the SI we explain that we sacrifice some of the overall MOF yield to obtain crystalline MOF domains.
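As an aside for the interested reader, a minimal Python sketch of this kind of crystallinity check (2D Fourier transform of an STM topograph, followed by a search for sharp diffraction spots) is given below. The file name, pixel size and peak threshold are hypothetical placeholders, and the actual analysis used for Fig. 1a and Fig. S8 may differ in detail.

import numpy as np

def stm_fourier_peaks(topograph, pixel_size_nm, min_rel_height=0.1):
    """Return the power spectrum of an STM topograph and the positions
    (in 1/nm) of Fourier components above a threshold.

    topograph      : 2D numpy array of heights (e.g. exported from the STM software)
    pixel_size_nm  : real-space sampling step in nm
    min_rel_height : keep components stronger than this fraction of the maximum;
                     sharp, isolated clusters of such components indicate
                     long-range crystalline order
    """
    # Remove the mean and apply a window to suppress edge streaks in the FT
    data = topograph - topograph.mean()
    win = np.outer(np.hanning(data.shape[0]), np.hanning(data.shape[1]))
    power = np.abs(np.fft.fftshift(np.fft.fft2(data * win))) ** 2

    # Spatial frequencies (1/nm) along both image axes
    fy = np.fft.fftshift(np.fft.fftfreq(data.shape[0], d=pixel_size_nm))
    fx = np.fft.fftshift(np.fft.fftfreq(data.shape[1], d=pixel_size_nm))

    # Mask out the central (zero-frequency) region before the search
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    radius = np.sqrt(FX**2 + FY**2)
    masked = np.where(radius > 0.05, power, 0.0)  # 0.05 1/nm cutoff (placeholder)

    threshold = min_rel_height * masked.max()
    bright = np.argwhere(masked > threshold)
    return power, [(fy[i], fx[j]) for i, j in bright]

# Hypothetical usage: a 100x100 nm image sampled with 512x512 pixels
if __name__ == "__main__":
    image = np.load("mof_topograph.npy")  # placeholder file name
    spectrum, spots = stm_fourier_peaks(image, pixel_size_nm=100 / 512)
    print(f"{len(spots)} Fourier components above threshold")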
We have now updated our manuscript, nuancing our previous claim of "highly crystalline growth" and changing it to "monocrystalline growth of the MOF domains". We have now added comments to the main text discussing explicitly the presence of defects in our MOF domains and of DCA-only regions. In the SI, we have also added a Supplementary Section S11 on how such defects might affect the MOF electronic properties, as requested by Reviewer 1.
We have also added a 300x300 nm² image of the DCA3Cu2 MOF on hBN/Cu(111) to Supplementary Section S6 (see Fig. S7). In this image, no DCA-only domains are observed, indicating that the DCA-to-Cu stoichiometry is close to 3-to-2 for this sample preparation (with some Cu clusters, similar to previous Ref. 31 by Yan et al.). The size of this image is ~four times the size of any image in previous Ref. 31, with similar MOF quality despite the insulating substrate (compared to a conductive substrate in previous Ref. 31). The bright islands are indeed excess Cu clusters forming after the final annealing step in our sample preparation; similar bright clusters are also present in previous Ref. 31. We have added a comment to the main text explicitly identifying these as Cu.

We emphasise that the main findings of our manuscript are: (i) the observation of a significant Mott gap in a single-layer 2D kagome MOF grown on an atomically thin insulator, and (ii) the potential for controlling such a Mott gap and insulating phase electrostatically. As mentioned above, previous STM and dI/dV STS studies on 2D materials with a lattice geometry similar to that of our MOF here (e.g., single-layer 1T-TaS2, with a 2D hexagonal lattice and a lattice constant of ~2 nm, similar to our system) have shown a Mott insulating phase on nanoisland domains with characteristic sizes on the order of ~10x10 nm² (see updated Ref. 49 in main text by Vano et al.). That is, our monocrystalline MOF domains with characteristic lengths on the order of several tens of nm are sufficiently large to support these findings. The growth of a perfectly crystalline monolayer 2D kagome MOF on an atomically flat insulator, across areas significantly larger than ~100x100 nm², is beyond the scope of our current manuscript and requires further studies.

v) page 4 (line 87). Here the growth of the 2D-MOF on top of the h-BN/Cu(111) electronic moiré is presented. However, it is known that the moiré periodicity can change approximately from 3 nm to 14 nm on h-BN/Cu(111) samples grown with borazine in UHV (ref. 30). Is there any moiré periodicity preference for the successful growth of 2D-MOF islands on top? The authors should provide a discussion in this regard.
We thank Reviewer 1 for this valid comment. In our manuscript we focus on the DCA3Cu2 MOF grown on hBN/Cu(111) moiré domains with periodicities of ~10-12 nm, since these hBN/Cu(111) regions seem to be the most commonly formed. We do observe examples where the 2D MOF grows on rarer hBN/Cu(111) domains with smaller moiré periodicity (e.g., ~5 nm), where the 2D MOF shows structural and electronic properties similar to those on hBN/Cu(111) moiré domains with a ~10-12 nm periodicity, with dI/dV spectra that are consistent with our physical interpretation.

We believe the MOF has no growth preference for particular moiré pattern periodicities. Establishing whether the 2D MOF growth quality depends on the hBN/Cu(111) moiré periodicity is, however, beyond the scope of this manuscript and requires further investigations.
We have now added a comment on this matter in Supplementary Section S6, and a new Supplementary Section S15 including data on the electronic properties of the MOF on hBN/Cu(111) domains with moiré periodicities of ~5 nm and ~10 nm.

vi) page 4 (lines 102-105): The authors should be transparent and mention that the DCA3Cu2 2D-MOF (the very same system as the one shown here) has already been grown (as a full monolayer) on top of a rather good decoupling layer of graphene on Ir(111) (ref. 31).

We have now included an explicit mention of the growth of the DCA3Cu2 MOF on graphene/Ir(111) on p. 4, lines 102-103, citing previous Ref. 31 (updated Ref. 33), as requested by Reviewer 1. We do emphasise that the findings of our work are fundamentally different to those of (previous) Ref. 31 by Yan et al. [10]. The electronic states of graphene (a semimetal with no energy gap and with significant electrical conductivity) do show some degree of hybridisation with those of the DCA3Cu2 2D MOF (see Fig. 3b of previous Ref. 31). That is, graphene is not a perfectly decoupling layer. In contrast, it is well established that electronic states of molecular and metal-organic systems on wide bandgap, single-layer hBN retain their intrinsic properties and remain unhybridized with substrate states, within a relatively large energy range (~5 eV) around the Fermi level (see previous Ref. 30 by Auwarter et al., updated Ref. 33, and Fig. 1d) [14]. We therefore think our growth upon an insulating hBN layer represents a significant advancement towards potential technological applications.
vii) With respect to Figure 1: the authors only focus on the kagome band structure close to the Fermi level. However, it is known that more kagome bands should appear above and below the energy range considered here (ref. 19). Can the authors comment on this point? Do the authors have DFT calculations for a wider energy range? What does the band structure of the Mott insulating phase look like?
We have added a plot of our DFT calculations for a wider energy range (-3 to 3 eV), for both the free-standing MOF and the MOF on hBN/Cu(111) (see below and new Supplementary Section S1). These calculations, in good agreement with Ref. 19 in the main text, show the three kagome bands near the Fermi level separated from lower-lying occupied and upper-lying empty bands by >1.5 eV. Note that DFT tends to underestimate energy gaps, so this energy separation could be even larger. Because this energy separation is so large, the essential correlated-electron physics can be described by considering solely electronic states near the Fermi level. More DFT calculations can also be found in our previous publication [15].

We have also added to the SI a plot of the energy- and wavevector-dependent spectral function, A(E, k), for the Mott insulating phase, calculated via DMFT (U = 0.65 eV; EF = 0.4 eV, i.e., near half-filling; new Supplementary Section S5). This spectral function shows a significant gap at the Fermi level, with an occupied lower Hubbard band with weakly dispersive features (in particular close to the Γ point), and an empty upper Hubbard band (UHB) with significantly less dispersion. Notably, the diffuse nature of these bands (as opposed to sharp and well-defined, as in conventional band theory) is indicative of band incoherence; this is expected for a Mott insulating phase.
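For clarity, by the energy- and wavevector-dependent spectral function we mean the standard quantity (written here in our own shorthand as a reading aid, not copied from the manuscript):

\[
A(\mathbf{k}, E) = -\frac{1}{\pi}\,\operatorname{Im} G^{R}(\mathbf{k}, E),
\qquad
A(E) = \frac{1}{N_{k}} \sum_{\mathbf{k}} A(\mathbf{k}, E),
\]

where G^R is the retarded interacting Green's function obtained from DMFT; the lower and upper Hubbard bands appear as broad maxima of A(k, E) separated by the Mott gap at the Fermi level.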
It is important to note that, as mentioned in the main text (lines 125-126, p. 5), DMFT captures many-body effects more accurately and reliably than DFT. As such, DMFT is a valid approach for determining the electronic structure of the Mott insulating phase in our case here, and of strongly correlated materials in general [4].

As requested by Reviewer 1, we have now added a new Supplementary Section S12 with a dI/dV spectrum at a MOF DCA lobe site, at a hBN/Cu(111) moiré pore region, on a larger voltage range, from -1 to +1 V (see below). Such a spectrum shows no clear prominent electronic features (e.g., HOMO/LUMO signatures) outside the -0.5 to +0.5 V energy range reported in the main text. This is consistent with our DFT calculations (see above; Supplementary Fig. S1) and those of Ref. 19, which show three kagome bands with dominant DCA LUMO character near the Fermi level, well separated in energy from other fully occupied and completely empty bands lying beyond ~1.5 eV below and above the Fermi level, respectively. This is the reason why in our manuscript we focus on the -0.5 to 0.5 V bias voltage range. Note that performing dI/dV STS within a large bias voltage range (e.g., -2 to 2 V) is not trivial due to challenges in preparing a spectroscopically functional STM tip on the MOF/hBN/Cu(111) system.
We have now added A. Kumar et al., Nano Letters 18, 5596 (2018) as a reference (new Ref. 52 in main text), as requested by Reviewer 1 [16]. Note that the (non-interacting) band structure of DCA3Co2 on graphene is fundamentally different from that of DCA3Cu2 on hBN, with the former hosting multiple bands near the Fermi level with very significant contributions from Co states [e.g., see Fig. 4b of Kumar et al., Nano Letters 18, 5596 (2018)] [16]. The DCA3Cu2/Cu(111) system of (old) Ref. 33 (new Ref. 35) is also fundamentally different from our DCA3Cu2/hBN/Cu(111) system, with the MOF interacting significantly with the underlying, more reactive Cu(111) substrate, with significant hybridization between MOF and Cu(111) states, and the near-Fermi kagome bands not half-occupied [see B. Field et al., npj Computational Materials 8, 227 (2022)] [15,17]. We therefore assert that a quantitative comparison between the electronic properties of our DCA3Cu2/hBN/Cu(111) system, and those of DCA3Cu2/graphene and DCA3Cu2/Cu(111), is not meaningful.

ix) page 6 (line 149): "the high crystalline growth makes a large disorder-related gap unlikely". If this is the case, how does the STS look at different points of the 2D-MOF island? Are the features identical if one performs STS on the dark pore regions? How do the electronic properties look at the edge of the 2D-MOF domain? At which position does the domain edge already play a role in disturbing the reported electronic properties?
We thank Reviewer 1 for these valid comments. We have now provided further dI/dV STS measurements at the high-symmetry MOF locations: Cu, DCA lobe, DCA centre and MOF pore sites (not to be confused with what we refer to as a hBN/Cu(111) moiré pore region); see new Supplementary Section S10. The dI/dV spectra at DCA centre and dark MOF pore sites are qualitatively similar to the spectra at adjacent Cu and DCA lobe sites, with the spectral features (i.e., lower and upper Hubbard bands) being stronger at the Cu and DCA lobe sites. This is consistent with the STM images and dI/dV maps taken at bias voltages corresponding to the lower Hubbard band maximum and upper Hubbard band minimum (see Supplementary Figs. S11-12), showing higher intensity at Cu and DCA lobe sites.
We also added a new Supplementary Section S24 where we compare dI/dV spectra acquired at different pore regions of a hBN/Cu(111) domain with a moiré period of λ ≈ 12.5 nm. The spectra in Supplementary Fig. S23 are very consistent across six distinct moiré pores. This shows that within the bulk of a MOF domain, the MOF local electronic properties are identical for equivalent sites with respect to the hBN/Cu(111) moiré pattern.
We have also added a new Supplementary Section S11 where we explore the effect of defects (e.g., MOF domain edges and boundaries, vacancies, cracks) on the MOF electronic properties. Supplementary Fig. S14 shows dI/dV STS measurements at the edge of a 2D MOF domain, where DCA molecules are coordinated to only one Cu atom (and not two as within the MOF bulk). At these MOF edge sites, the dI/dV spectra show remnants of the Hubbard bands, with a weaker upper Hubbard band, and an additional peak at ~0.6 V, resembling the (uncoordinated) DCA LUMO on hBN [see D. Kumar et al., "Mesoscopic 2D molecular self-assembly on an insulator", Nanotechnology 34, 205601 (2023); updated Ref. 47] [18]. Notably, one unit cell away from the MOF domain edge, the MOF local electronic properties are identical to those within the MOF domain bulk.
We have also performed additional dI/dV STS measurements at a defective MOF domain boundary (new Supplementary Fig. S15), qualitatively consistent with dI/dV spectra at the MOF domain edge. Similarly, no significant disturbance in MOF electronic properties is found at defective Cu vacancy sites (new Supplementary Fig. S16).

This is not consistent with the molecular charging ring features reported in ref. 31. These dI/dV maps do not support the interpretation of charging peaks given in Figure 4 of the manuscript.
Reviewer 2
In their manuscript entitled "Gate control of Mott metal-insulator transition in a 2D metal-organic framework", B. Lowe et al. claim that in a metal-organic network grown on an hBN/Cu(111) substrate it is possible to control a metal-insulator transition through two mechanisms. One of them is based on the modulation of the surface potential due to the presence of a moiré pattern between hBN and Cu(111). In the second part of the manuscript, they control the transition by varying the tip-sample distance. There are different aspects of the article that need to be clarified before the article can be published.
Figure 3 shows the spectra measured by moving the tip between the areas of the moiré called pores and the areas called wires. The experimental spectra show a modulation in the position of the bands depending on the moiré area. Upon reaching the areas called wires, the gap disappears and very intense peaks extend from the Fermi level up to +0.4 eV. These spectral features are not mentioned or discussed in the manuscript.
We thank Reviewer 2 for this valid comment. Indeed, the experimental dI/dV spectra in Fig. 3b for locations near the hBN/Cu(111) moiré wire region show significant peaks between the Fermi level and Vb ~0.4 V. At such locations close to the wire regions, we observed dI/dV features due to tip-induced charging via the double-barrier tunnelling junction (DBTJ) effect; see Fig. 4. We therefore propose that these dI/dV peaks in Fig. 3b could be the result of tip-induced charging via the DBTJ effect, as observed for moiré wire regions and described in Fig. 4. Note that the DMFT calculations do not consider the DBTJ or tip-induced effects; therefore, they cannot predict such features. We have now added a paragraph on this matter in the main text, and a related new Supplementary Section S17.
We want to emphasise that the main messages of our manuscript are: (i) the observation of a significant Mott gap in a single-layer 2D MOF, and (ii) the electrostatic control of such Mott gap and insulating phase. As such, in the main text we focus on dI/dV features at energies close to the Fermi level.

The calculations reproduce the energy position of the bands observed in the experiment until the wire areas, where the disagreement with the experiments is clear. The calculations corresponding to the wire areas show very pronounced and very narrow peaks. The authors attributed their origin to coherent quasiparticles. Contrary to what the authors say in the manuscript, the calculations do not show a gap collapse in the wire areas; on the contrary, it becomes wider and extends to almost +0.4 eV. According to the authors, the control of the metal-semiconductor transition induced by the substrate is deduced from the comparison between the experimental results and the theoretical calculations shown in Figure 3. Before this can be stated it is necessary to discuss in detail the discrepancies mentioned above between theory and experiments.
We thank Reviewer 2 for these very valid comments, with which we agree. Indeed, in the main text (previously, p. 7, line 181), we claimed that the DMFT calculations show, for a chemical potential corresponding to the moiré wire region, a collapse of the energy gap Eg. What we meant is that, for such a chemical potential (i.e., smaller than that for a moiré pore region), the DMFT calculations show no gap at the Fermi level, with a non-zero, increased spectral function (indicating the presence of electronic states) at the Fermi level. This is consistent with our experiments, which show a larger Fermi-level dI/dV signal at the moiré wire region in comparison to the moiré pore region (for the considered tunnelling conditions; see updated Fig. 3e and new Supplementary Sections S14-15), and with our claim of a metallic phase at the moiré wire region (i.e., for the tunnelling parameters, in particular tip-sample distance, used in Fig. 3). This increase in Fermi-level DMFT-calculated spectral function and measured dI/dV marks the onset of the Mott energy gap collapse, and of the transition from the Mott insulating phase to a metal (see Supplementary Fig. S2b).
As pointed out by Reviewer 2, the DMFT calculations for a chemical potential corresponding to the moiré wire region show a pronounced narrow peak near the Fermi level. These peaks are not observed in our experimental dI/dV curves (Fig. 3).
We have now updated the main text, clarifying what we mean by 'gap collapse', and providing a more detailed discussion on the discrepancies between experiment and theory.
In the second part, the authors show how the metal-insulator transition can be induced by changing the tip-sample distance. To do this, they carry out experiments in one of the areas called wire. Figure 4d shows how in this area the measured spectra may or may not show a gap depending on the tip-sample distance. This result somehow invalidates the manuscript's first claim that moiré registration controls the existence of a gap or not. It seems that the gap collapse depends on the parameters used to perform the measurements.
We thank Reviewer 2 for this valid comment. We acknowledge that our main text was perhaps not perfectly clear on this point. Both the considered MOF region relative to the underlying hBN/Cu(111) moiré pattern and the tunnelling parameters (i.e., tip-sample distance, bias voltage) determine the energy level alignment and population of the MOF electronic states, and hence the corresponding electronic phase (i.e., Mott insulator or metal) of the MOF region at the STM junction.
It is important to note that: (i) the MOF at the hBN/Cu(111) moiré pore regions exhibits a Mott gap and is in the Mott insulating phase regardless of the tunnelling parameters, (ii) the MOF at the hBN/Cu(111) moiré wire regions exhibits a Mott gap and is in the Mott insulating phase for large tip-sample distances (i.e., when MOF energy level shifts due to the double-barrier tunnelling junction are negligible), and (iii) the MOF at the hBN/Cu(111) moiré wire regions is in a metallic phase (with no energy gap at the Fermi level, and with a significant non-zero Fermi-level density of states) for intermediate tip-sample distances (i.e., when MOF energy level shifts due to the double-barrier tunnelling junction result in partial depopulation of MOF electronic states).
We have revised our main text to make this clear.
After reading the manuscript and the supplementary material, it is not clear to me how the assignments of the purple circles and red squares are made in Figure 4 or how it is decided whether the observed gap corresponds to a trivial insulator or a Mott insulator.
We thank Reviewer 2 for this valid comment. We agree that the explanation for the assignments of the purple and red markers, and of the observed gaps to a trivial or Mott insulator in Fig. 4, could be clearer in our manuscript.
The dI/dV spectra for the MOF at the hBN/Cu(111) moiré wire region in Fig. 4d show an electronic gap, with a clear peak (sharper than the near-Fermi band features in Fig. 3b, with a maximum indicated by the purple circles) at positive bias voltage for large tip-sample distances, and at negative bias voltage for small tip-sample distances. These spectra also show a subtler band edge (indicated by the red squares, similar to the near-Fermi band features in Fig. 3b; see updated Supplementary Section S8 for information on the determination of band edges) at a bias voltage of sign opposite to that of the sharp peak, i.e., at negative bias voltage for large tip-sample distances and at positive bias voltage for small tip-sample distances. For intermediate tip-sample distances, these spectra are gapless, with a significantly larger Fermi-level dI/dV signal (see Fig. 4f). From these observations, we infer that the MOF is an insulator (trivial or Mott) for large and small tip-sample distances (i.e., dI/dV spectra with a gap at the Fermi level), and a metal for intermediate tip-sample distances (i.e., dI/dV spectra with no gap at the Fermi level).
The bias voltage position of the sharp peak (purple circles) increases linearly with respect to tip-sample distance, whereas the bias voltage position of the subtler band edge decreases nonlinearly with tip-sample distance. Notably, Eqs. 1, 2 of the main text (now on p. 10), corresponding to the double-barrier tunnelling junction (DBTJ) model, provide very good fits for the bias voltage positions of both the sharp peak and the subtle band edge as a function of tip-sample distance (black curves in Fig. 4e). This provides compelling evidence that the sharp peaks (purple circles) are associated with charging of an intrinsic MOF electronic state lying at the edge (red squares) of a fully populated (large tip-sample distances) or completely empty (small tip-sample distances) band (see cartoon schematics in Fig. 4a-c and now Supplementary Fig. S26, and previous Refs. 33, 43, 45, now Refs. 33, 46, 48).

Now, dI/dV spectra for the MOF at a pore region of the hBN/Cu(111) moiré pattern show a ~200 meV electronic energy gap at the Fermi level (Fig. 2). These spectra resemble the spectral function of the 2D kagome MOF in the Mott insulating phase (Fig. 1e), calculated via DMFT with a chemical potential that is consistent with the DFT-predicted occupation of the near-Fermi kagome bands for the MOF on hBN/Cu(111) (Fig. 1d, Supplementary Fig. S1). At an adjacent moiré wire region, the local work function increases by ~0.2 eV for the specific periodicity of the MOF/hBN/Cu(111) domain considered (Fig. 3c; see new Refs. 29-31). Accordingly, the near-Fermi electronic states of the MOF at this moiré wire region are shifted upwards in energy in comparison to the near-Fermi electronic states of the MOF at the moiré pore region. The dI/dV spectra in Fig. 3b for this moiré wire region (for the specific tunnelling parameters used) show no gap at the Fermi level, with a significant non-zero Fermi-level dI/dV signal (new Fig. 3e; Supplementary Fig. S19), indicative of a metallic phase. These experimental dI/dV spectra are consistent with the spectral function of the MOF calculated via DMFT for a chemical potential that is reduced (in comparison with the DMFT calculations for the moiré pore region); such a DMFT spectral function exhibits a significant magnitude and no gap at the Fermi level, indicating a metallic phase, resulting from the depopulation of the MOF bands (Fig. 3d and Supplementary Fig. S2).
From these observations we infer that: (i) the MOF at the moiré pore region is in a Mott insulating phase, and (ii) the MOF at the adjacent moiré wire region, for the specific tunnelling parameters used (i.e., bias voltage, tip-sample distance) in Fig. 3b, is in a metallic phase (as the result of the depopulation of the MOF near-Fermi electronic states due to the increase in local work function).
Now, let us consider the MOF at such a moiré wire region, which is in the metallic phase (i.e., for specific tunnelling parameters). The work function of the STM tip is larger than that of the sample (Supplementary Section S19). When the tip-sample distance is reduced, the double-barrier tunnelling junction (DBTJ) leads to an upward energy shift of the MOF electronic states [with respect to the Cu(111)
Fermi level]: the MOF electronic states become further depopulated (Supplementary Fig. S26). That is, when the dI/dV spectra in Fig. 4d transition from gapless (metallic) to gapped (insulator) as the tip-sample distance decreases, we infer that the MOF electronic states associated with the MOF kagome bands become empty, with the Fermi level lying below the bottom of the three kagome bands (see DFT band structure over a wide energy range in new Supplementary Fig. S1). From this, we associate the gapped dI/dV spectra at the top of Fig. 4d (for small tip-sample distances) to a trivial insulating phase of the MOF.
Let us again consider the MOF in the metallic phase at such a moiré wire region (i.e., gapless spectra in Fig. 4d
at intermediate tip-sample distances). When the tip-sample distance is now increased, the DBTJ leads to a downward energy shift of MOF electronic states [with respect to the Cu(111)
Fermi level], which become further populated (Supplementary Fig. S26). That is, when the dI/dV spectra in Fig. 4d transition from gapless (metallic) to gapped (insulator) as the tip-sample distance increases, we infer that the population of the MOF electronic states associated with the MOF kagome bands also increases in turn, reaching a threshold that opens the Mott gap, with the Fermi level lying in such Mott gap. This population of MOF states (here driven by the DBTJ effect and the increase in tip-sample distance) is analogous to the population of MOF states at the adjacent moiré pore region (due to the smaller local work function at such a moiré pore region; Fig. 3c). From this, we associate the gapped dI/dV spectra at the bottom of Fig. 4d (for large tip-sample distances) to the Mott insulating phase of the MOF. This inference of a Mott gap at the moiré wire region for large tip-sample distances is supported by DMFT. Indeed, when the chemical potentials used in the DMFT calculations of Fig. 3d are all offset upwards by 45 meV (i.e., leading to further population of the MOF electronic states, mimicking the effect of a tip-sample distance increase), the metallic spectral functions associated with the moiré wire region (with no gap at the Fermi level) all become Mott gapped (see Supplementary Fig. S4).
We have now revised the main text and Supplementary Section S19, including details on how the purple circles and red squares in Fig. 4 were assigned, and on how we associated the observed energy gaps to either a trivial or Mott insulator.
No details are given as to whether the energy levels shown in Figure S12 are a cartoon or the result of a calculation. This detail is important to understand how the authors identify the gap character.
We thank Reviewer 2 for this comment. The schematics in Fig. 4a-c and in previous Supplementary Fig. S12 (now Supplementary Fig. S26) are qualitative, cartoon illustrations of the MOF energy level shifts that result from tip-sample distance variations and the double-barrier tunnelling junction (DBTJ) effect.
We have now clarified this in the captions of Fig. 4a-c, methods section, and Supplementary Fig. S12 (now Supplementary Fig. S26).
Reviewer 3
In this work, the authors reported the experimental synthesis and characterization of a single-layer 2D DCA3Cu2 MOF on a wide bandgap BN substrate, which is further shown to host a robust Mott insulating phase and can achieve a Mott metal-insulator transition using electrostatic control. The experimental observations are quantitatively consistent with theoretical predictions (both DMFT calculations and the DBTJ model). Direct experimental measurements on a Mott metal-insulator transition in 2D MOFs remain elusive. One of the challenges lies in the experimental synthesis of large single-crystal MOF samples. It is nice to see that the authors have succeeded in making one such sample and successfully demonstrated the Mott metal-insulator transitions induced via either template or tip.
We thank Reviewer 3 for their comment and careful reading of our manuscript.

However, it is important for the authors to address a question that arises from the examination of the large-scale samples, as depicted in Figures 1a and S4. One can clearly see structural defects, such as vacancies and grain boundaries. As these defects may have a profound influence on the electronic properties of the MOF, it would be good for the authors to have some discussion about the potential impact of these bulk defects. This work represents a noteworthy contribution to the research field of 2D MOFs, shedding light on elucidating the Mott metal-insulator transition in MOFs. I would recommend its publication in Nature Communications after the authors address the aforementioned concerns.

We thank Reviewer 3 for this valid suggestion, which has helped improve our manuscript. We have added a comment in the main text and a new Supplementary Section S11 focussed on the electronic properties of the MOF at defect sites (e.g., vacancies, domain grain boundaries) and at boundaries of 2D MOF domains. At these sites, dI/dV spectra show remnants of the Hubbard bands, with a weaker upper Hubbard band and an additional peak at ~0.6 V [18]. The influence of these defects on the MOF electronic properties is very local, however: a short distance away (~one MOF unit cell) from one of such defect sites or domain boundaries, the local MOF electronic properties are identical to those within a defect-free MOF bulk region (new Supplementary Figs. S14-16). We therefore claim that the presence of these defects does not alter the main message of our manuscript.

I would like to thank the authors for their efforts in answering my questions, but I am afraid that the answers have not been convincing enough.
In the paper the authors say that they have produced a Mott insulator in a MOF layer. Experimentally, they observe a gap at the Fermi level; depending on the area of the sample (moiré) in which they make the measurements, the gap moves in energy following the expected values for the surface potential (not measured), and in some areas the presence of the gap depends on the tip-sample position. The assignment of the gap as a Mott gap is based only on the DMFT calculations. The authors, in their response to the referees, acknowledge that the DMFT calculations are not capable of reproducing some of the experimental measurements due to limitations in the calculations. Some of the features not explained by the calculations are very prominent in the experiments. Due to these limitations, it seems to me that it is very risky to base the main finding of the article on a theoretical calculation that only reproduces the experimental data partially and in some areas of the sample.
According to the authors, "the DMFT calculations assume a uniform chemical potential for an infinite system, omitting effects of locality; this assumption is reasonable for the Mott insulating phase (i.e. localized states)". I think this assumption is wrong: the Mott insulator occurs because itinerant electrons are localized by electronic correlations, so the calculation cannot be local, although the result is that the conduction electrons end up localized and prevented from moving.
Finally, I don't understand why the authors use the existence of a DBTJ in some cases and not in others. The MOF is deposited on an insulator on Cu(111), and the MOF presents a gap at the Fermi level, according to the authors a Mott gap; therefore I do not understand why in some cases it is considered that a DBTJ exists and in other cases this is completely ignored without explanation.
Due to all the above, I cannot support the publication of the manuscript.
Reviewer #3 (Remarks to the Author): The authors have incorporated the suggested comments and made necessary modifications to the manuscript. Now, I recommend its publication.
In the paper the authors say that they have produced a Mott insulator in a MOF layer. Experimentally, they observe a gap at the Fermi level; depending on the area of the sample (moiré) in which they make the measurements, the gap moves in energy following the expected values for the surface potential (not measured), and in some areas the presence of the gap depends on the tip-sample position. The assignment of the gap as a Mott gap is based only on the DMFT calculations. The authors, in their response to the referees, acknowledge that the DMFT calculations are not capable of reproducing some of the experimental measurements due to limitations in the calculations. Some of the features not explained by the calculations are very prominent in the experiments. Due to these limitations, it seems to me that it is very risky to base the main finding of the article on a theoretical calculation that only reproduces the experimental data partially and in some areas of the sample.

It is well established that the kagome lattice can host a Mott insulating phase at half-filling if there is significant on-site Coulomb repulsion U (see Ref. 42 in main text). Furthermore, previous theoretical literature supports the existence of a large U and the emergence of a Mott insulating phase in the specific DCA3Cu2 MOF that we study here (see Refs. 27, 36 in main text). Additionally, evidence of localised electrons as a result of a large U has already been demonstrated experimentally for this specific 2D MOF (Ref. 25). In this context, the claim of a Mott insulating phase is not controversial, especially given the strong agreement between experiment and dynamical mean-field theory (DMFT) calculations. DMFT is a well-established method for capturing effects of electronic correlations, faithfully describing the Mott metal-insulator transition [1-3] (see main text).
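For reference, the minimal model behind these statements is the single-band Hubbard model on the kagome lattice, written here in standard notation as a reading aid (this is our shorthand, not a formula quoted from the manuscript); the hopping t and on-site repulsion U quoted elsewhere in this response (t = 0.05 eV, U = 0.65 eV) are parameters of this type of model:

\[
H = -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
  - \mu \sum_{i,\sigma} n_{i\sigma},
\]

where the kagome sites i correspond, in this system, to the DCA-derived orbitals hosting the near-Fermi bands; at half-filling and for U sufficiently large compared with the bandwidth, this model yields a Mott insulating ground state.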
The agreement between experimental dI/dV spectra for the MOF at the moiré pore region and the DMFT spectral functions for the MOF in the Mott insulating phase is excellent, with quantitatively consistent spectral features, including: (i) spectral line shape with lower (LHB) and upper (UHB) Hubbard bands and a ~200 meV gap at the Fermi level, and (ii) energy modulation of the LHB and UHB due to electrostatic potential variations (in the experiment due to the local work function variation given by the hBN/Cu(111) moiré pattern, and in DMFT due to the variation of the chemical potential; Fig. 3b, d, f, g in main text). This energy modulation of the LHB and UHB is perfectly consistent with the moiré local work function variation, regardless of the hBN/Cu(111) moiré pattern periodicity (Supplementary Section S15). This moiré local work function variation of hBN/Cu(111), and its effect on the energy level alignment of atomic and molecular adsorbates, is very well established (Refs. 44-47).
This quantitative agreement provides compelling evidence that the DMFT calculations capture the fundamental electronic properties of the 2D DCA3Cu2 MOF at the hBN/Cu(111) moiré pore regions, and that the energy gap Eg = ~200 meV observed experimentally at the Fermi level for the MOF at these moiré pore regions can be attributed to a Mott insulating gap. As stated above, a Mott insulating phase for the kagome lattice and for the specific DCA3Cu2 MOF studied here is well supported by previous literature (Refs. 25, 27, 36, 42 in main text).
The DMFT calculations also provide an explanation for the increase in Fermi-level dI/dV signal and absence of an energy gap at the Fermi level for the MOF at the moiré wire regions (for an intermediate tip-sample distance range), where the local work function is larger, and hence the population of the MOF electronic states can be reduced, in comparison with the moiré pore regions. Indeed, the DMFT calculations (main text Fig. 3d, Supplementary Fig. S2b) show that a reduction in the population of the MOF electronic states (i.e., reduction of the chemical potential) leads to a transition from the Mott insulating phase to a metallic phase, with an increase in Fermi-level spectral function and absence of a gap at the Fermi level. That is, the increase in Fermi-level dI/dV signal and absence of an energy gap at the Fermi level is indicative of a MOF metallic phase at the moiré wire regions (for a specific intermediate tip-sample distance range), with electronic states that can be more delocalised than those of the MOF in the Mott insulating phase at the moiré pore regions.
We acknowledge that the effects of the long-range moiré electrostatic potential and of the finite double-barrier tunnel junction (DBTJ) cross section on these arguably delocalised metallic MOF states at moiré wire regions could result in dI/dV features that are not captured by the DMFT calculations (performed for a perfectly periodic, infinite, flat system). Further dI/dV features not captured by DMFT (e.g., dI/dV peaks at positive bias voltage in Figs. 3b, 4d of main text and Supplementary Fig. S24) can also be explained by the susceptibility of MOF electronic states at (and in proximity of) the moiré wire regions to tip-induced charging due to the DBTJ effect. It is important to note that the DBTJ effect does manifest itself also in the moiré pore regions, similar to the moiré wire regions, also leading to energy shifts of the LHB maximum (LHBM) and UHB minimum (UHBM) as the tip-sample distance changes (Supplementary Sections 20, 21). However, given the smaller local work function in comparison to moiré wire regions (Fig. 3c in main text), the MOF energy gap at these moiré pore regions is centred with respect to the Fermi level (Fig. 3b, d in main text), with a significant energy difference between the Fermi level and the LHBM. This makes the Mott insulating phase at the moiré pore regions robust to energy level shifts given by tip-sample distance changes (within the range of tip-sample distances considered in our study), without tip-induced charging. These factors provide a plausible explanation of why experimental dI/dV spectra and DMFT spectral functions show some differences for the moiré wire regions.
It is important to note that calculations based on DMFT (or even on other theoretical formalisms that capture electron-electron interactions and many-body physics less accurately, e.g., DFT+U) on systems with large unit cells [such as the moiré unit cell of our
DCA3Cu2/hBN/Cu(111) system studied here] are computationally challenging, if not intractable. Such calculations become even more complex or unfeasible if tip-induced effects are taken into account. Our work provides a tractable approach in which electronic correlations and many-body effects are accounted for reliably, with excellent agreement between experiment and theory for moiré pore regions, and good qualitative agreement for the near-Fermi spectral features at the moiré wire regions. We assert that these aspects provide sufficient evidence for: (i) a Mott insulating phase for the MOF at the moiré pore region, regardless of tip-sample distance, (ii) a Mott insulating phase for the MOF at the moiré wire region for large tip-sample distances (i.e., where the DBTJ effect is negligible), and (iii) a metallic phase at the moiré wire region for a specific reduced tip-sample distance range (i.e., where the DBTJ effect leads to a reduction in the population of MOF electronic states). This is the main message of our manuscript. Our study is not trying to say anything further on the nature of the quantum ground state other than classifying it as metallic or insulating. A quantitative reproduction based on DMFT of our experimental dI/dV spectra for all experimental variables (e.g., wide energy range including energies far away from the Fermi level; all locations with respect to the hBN/Cu(111) moiré pattern; wide range of tip-sample distances) is beyond the scope of our work.
We have now updated our manuscript to clarify these points (lines 222-225 on p. 9, and lines 307-317 on p. 12-13 in the main text).

According to the authors, "the DMFT calculations assume a uniform chemical potential for an infinite system, omitting effects of locality; this assumption is reasonable for the Mott insulating phase (i.e. localized states)". I think this assumption is wrong: the Mott insulator occurs because itinerant electrons are localized by electronic correlations, so the calculation cannot be local, although the result is that the conduction electrons end up localized and prevented from moving.
We acknowledge that this statement could be clearer. Dynamical mean-field theory (DMFT) is a well-established method, widely regarded as the gold standard, for understanding Mott metal-insulator transitions. It is not a local method. It fully describes both the Mott insulating and metallic phases, and is responsible for the fundamental understanding of strongly correlated metals [1-3].
Let us give a little more technical detail of how DMFT works. Effects of electronic correlations can be accounted for via the self-energy, which in general is nonlocal (i.e., depends on wavevector k), but also intractable to calculate in almost all cases. DMFT calculations assume a self-energy which is local (i.e., independent of k): this makes the calculation of the self-energy feasible. The resulting interacting Green's function is still nonlocal, with local (on-site) correlation effects being fully accounted for (and nonlocal correlation effects being ignored). DMFT calculations do not omit the itinerant nature of electrons; DMFT calculations can still result in metallic phases with itinerant electrons. It is well established that DMFT with such a local approximation of the self-energy captures electronic correlations explicitly, and describes the Mott metal-insulator transition faithfully (see Refs. 38-41 in main text).
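To make the local-self-energy approximation of the previous paragraph concrete, in standard (textbook) notation DMFT evaluates

\[
G(\mathbf{k}, i\omega_n) = \left[ i\omega_n + \mu - \varepsilon_{\mathbf{k}} - \Sigma(i\omega_n) \right]^{-1},
\qquad
G_{\mathrm{loc}}(i\omega_n) = \frac{1}{N_k} \sum_{\mathbf{k}} G(\mathbf{k}, i\omega_n),
\]

i.e., the self-energy Σ is taken to be independent of k (local), while the lattice Green's function retains its full k-dependence through the dispersion ε_k; Σ(iω_n) is determined self-consistently from an auxiliary impurity problem whose bath reproduces G_loc. This notation is included here only as a reading aid and is not copied from the manuscript.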
Our DMFT calculations assume an infinite, perfectly crystalline, defect-free 2D DCA3Cu2 MOF with a uniform chemical potential EF. Indeed, as stated above, DMFT calculations on systems with large unit cells, such as the long-range moiré unit cell of DCA3Cu2 on hBN/Cu(111), are computationally challenging, if not intractable. Approximating the potential at each point in space as being uniform is similar in spirit to (but much more accurate than) the local density approximation (LDA) in DFT, where the charge density is locally assumed to be homogeneous. Nevertheless, the LDA can still describe inhomogeneous systems.

Now, for EF = 0.25 to 0.5 eV in Supplementary Fig. S2b (corresponding to half-filling of the 2D kagome system), our DMFT calculations of the MOF spectral function indicate a Mott insulating phase, with a Mott gap Eg = ~200 meV (for an on-site Coulomb repulsion energy U = 0.65 eV; see also the k-resolved spectral function for EF = 0.4 eV in Supplementary Fig. S5). Because this Mott phase is stable over this range of chemical potentials, it is robust to significant variations of the chemical potential, up to 0.25 eV. Furthermore, in this Mott insulating phase, electronic states are localised at the kagome sites, confined within areas that are small in length compared to the distance between nearest-neighbour kagome sites [4].

In our experiments, the periodicity of the hBN/Cu(111) moiré domains considered (λ > 5 nm) is significantly larger than the distance between nearest-neighbour kagome sites (~1 nm). This moiré pattern imposes on the MOF a periodic modulation of the local work function, with a peak-to-peak modulation amplitude of ~0.2 eV for a modulation periodicity λ ≈ 12.5 nm (see Fig. 3 in main text). The amplitude of this local work function modulation becomes smaller with decreasing λ (Supplementary Fig. S22). That is, the MOF is exposed to a periodic modulation of the local electrostatic potential, which varies 'slowly' across the molecular kagome lattice. So, if the MOF is in the Mott insulating phase, the effect of such a long-range electrostatic modulation on the localised electronic states is to shift the energy of these localised states accordingly: as long as the electrostatic modulation amplitude does not reach a critical value for the transition to the metallic phase, there is no other dramatic qualitative effect on such localised electronic states. This is consistent with the DMFT-calculated spectral functions in Supplementary Fig. S2b, where the LHB and UHB shift in energy when EF is varied between 0.25 and 0.5 eV, without other significant qualitative changes. This is also consistent with the experimental dI/dV spectra for the moiré pore regions in Fig. 3b of the main text, where the energy of the LHB and UHB is modulated following the moiré variation in local work function.
In other words, the Mott insulating phase is insensitive to the modulation of the local electrostatic potential, as long as the amplitude of this modulation remains below the threshold for the transition to the metallic phase, and as long as the periodicity of this modulation is larger than the distance between nearest-neighbour kagome sites. Importantly, this means that the DMFT calculations for an infinite system in the Mott insulating phase capture the experimental phenomena observed locally at the moiré pore (regardless of tip-sample distance) and wire (for large tip-sample distances) regions.
We have now clarified this by updating lines 639-671 (p. 26, 27) in the Methods section of the main text.
Finally, I don't understand why the authors use the existence of a DBTJ in some cases and not in others. The MOF is deposited on an insulator on Cu(111), and the MOF presents a gap at the Fermi level, according to the authors a Mott gap; therefore I do not understand why in some cases it is considered that a DBTJ exists and in other cases this is completely ignored without explanation.
We have revised our manuscript to more clearly explain the effect of the DBTJ on all our experimental measurements (lines 290-295 on p. 12, and lines 697-703 on p. 29 in main text).
Briefly, the DBTJ effect is intrinsic to and manifests itself in all STM measurements of the DCA3Cu2 MOF on hBN/Cu(111) system, with energy level shifts due to tip-sample distance variations regardless of the location on the sample (i.e., both at moiré wire and pore regions).
We argue that the DBTJ affects the MOF dI/dV spectra more strongly at the moiré wire regions than at the pore regions due to the larger local work function (Fig. 3c), which leads to a MOF LHBM which is closer to the Fermi level, with the MOF electronic states prone to depopulating and the LHBM energy level susceptible to charging as the tip-sample distance is reduced. This depopulation of the MOF electronic states results in the transition from the Mott insulating phase (with an energy gap at the Fermi level) to the metallic phase (with no gap at the Fermi level and an increase in Fermi-level dI/dV signal). This is stated explicitly on lines 274-297 (p. 11-12) of the main text.
As shown in Supplementary Sections S20 and S21, the DBTJ effect also manifests itself at the moiré pore regions, with energy level shifts of the LHBM and UHBM as a function of tip-sample distance, similar to the moiré wire regions, and consistent with the DBTJ model. Due to the smaller local work function at these moiré pore regions (Fig. 3c), the Fermi level lies close to the centre of the MOF energy gap, further from the LHBM in comparison to the moiré wire regions (Fig. 3b, d). That is, the LHBM and UHBM at the moiré pore regions shift in energy as the tip-sample distance is reduced, yet these energy shifts do not lead to a depopulation of MOF electronic states or to a Mott insulator-to-metal transition (for the range of tip-sample distances considered). This is fully consistent with the DMFT calculations of MOF spectral functions in Supplementary Figs. S2b and S4, which show a Mott insulating phase for chemical potentials EF between ~0.25 and ~0.5 eV. This means that the Mott insulating phase can be robust to energy level shifts significantly larger than the energy level shifts due to the DBTJ effect (for the range of tip-sample distances considered; see Supplementary Section S4).
In other words, the DBTJ effect manifests itself at both moiré wire and pore regions, but given the local work function difference between these two types of regions, only at the wire regions are the electronic properties of the MOF dramatically altered (i.e., transition from Mott insulator to metallic phase) as the result of such an effect (within the range of tip-sample distances considered in our study).
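As an illustration of why the DBTJ shifts the MOF levels with tip-sample distance, a simple series-capacitor (parallel-plate) estimate, which is only a qualitative sketch and not necessarily identical to Eqs. (1) and (2) of the main text, is

\[
\Delta E(V_b, z) \approx e\,\bigl(V_b + V_{\mathrm{CPD}}\bigr)\,\frac{C_{\mathrm{vac}}(z)}{C_{\mathrm{vac}}(z) + C_{\mathrm{hBN}}},
\qquad
C_{\mathrm{vac}}(z) \propto \frac{1}{z},
\]

where ΔE is the shift of a MOF level relative to the Cu(111) Fermi level, V_CPD accounts for the tip-sample work function difference, and C_vac and C_hBN are the capacitances of the vacuum gap and the hBN barrier. Retracting the tip reduces C_vac and hence the fraction of the junction voltage dropping across the hBN, which is qualitatively why both the band-edge and charging-peak positions in Fig. 4d, e move with ∆z.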
DFT band structures of MOF over a wide energy range. a, Free-standing DCA3Cu2 MOF. b, DCA3Cu2 MOF on hBN/Cu(111). Blue circles: projections onto MOF states.

k-resolved spectral function for free-standing DCA3Cu2 MOF, calculated by DMFT (U = 0.65 eV, t = 0.05 eV, EF = 0.4 eV, corresponding to half-filling).

viii) The authors should provide STS performed on a larger voltage range (for instance from -2 V to +2 V) to see what happens to the additional kagome bands away from the Fermi level. Is there any strong molecular HOMO/LUMO or 2D-MOF valence/conduction band feature at lower/higher voltages, as already observed in ref. 33 and in A. Kumar et al., Nano Letters 18, 5596 (2018)? The latter reference should be added since the DCA3Co2 2D-MOF was grown on the rather good decoupling graphene layer on Ir(111), whereby a 2D-MOF band structure signature was already observed.

STS measurements over a broader energy window. Orange curve: DCA lobe site of MOF within a pore region of the hBN/Cu(111) moiré pattern. Grey curve: bare hBN/Cu(111) reference spectrum. Spectra normalised and offset for clarity. Orange curve acquired in two parts: setpoint of Vb = −1 V, It = 100 pA for data between −1 V and −500 mV; setpoint of Vb = −500 mV, It = 100 pA for the remaining bias range. Grey curve setpoint: Vb = −2 V, It = 100 pA.
… a pore region of the hBN/Cu(111) moiré pattern. a, STM image of DCA3Cu2 MOF on hBN/Cu(111) (Vb = −1 V, It = 10 pA). b, STS measurements performed at positions corresponding to coloured markers in (a). Grey curve: reference spectrum acquired upon bare hBN/Cu(111). Curves offset for clarity. Dashed lines indicate position of dI/dV = 0. Tip height stabilised 150 pm further away from the surface with respect to a setpoint of Vb = 10 mV, It = 10 pA.

dI/dV spectra at different moiré pore regions within hBN/Cu(111) domain with period λ ≈ 12.5 nm. a, Cu sites. b, DCA lobe sites. All acquisition sites close to centre of moiré pore regions. Spectra normalised and offset for clarity. Pore 1 setpoint: Vb = −500 mV, It = 500 pA. Pores 2, 3 setpoints: 190 pm further from the surface with respect to setpoint of Vb = 10 mV, It = 10 pA. Pores 4, 5 setpoint: 225 pm further from the surface with respect to setpoint of Vb = 10 mV, It = 10 pA. Pore 6 setpoint: 150 pm further from the surface with respect to setpoint of Vb = 10 mV, It = 10 pA.

STS measurements at the edge of a DCA3Cu2 MOF domain. a, STM image of the edge of a MOF domain (Vb = −1 V, It = 10 pA). b, STS measurements at Cu sites (circle markers) and DCA lobe sites (triangle markers) both at the edge of the MOF domain (black) and within the MOF domain (red). Curves offset for clarity. Setpoints: Vb = −500 mV, It = 100 pA.

STS measurements at a DCA3Cu2 MOF domain boundary. a, STM image showing a MOF domain boundary (Vb = −1 V, It = 10 pA). b, STS measurements at DCA lobe sites both at the MOF domain boundary (black) and within the MOF domain (red). Curves offset for clarity. Setpoints: Vb = −500 mV, It = 100 pA.

STS measurements at Cu vacancy defect within a DCA3Cu2 MOF domain. a, STM image showing a Cu vacancy defect (Vb = −1 V, It = 10 pA). b, STS measurements at Cu sites in proximity of Cu vacancy (green and orange), and at Cu vacancy (blue). Curves offset for clarity. Setpoints: 135 pm further away from the surface with respect to a setpoint of Vb = 10 mV, It = 10 pA.

Tip-induced gating at a Cu site of the MOF within a wire region of the hBN/Cu(111) moiré pattern. a, dI/dV spectra at MOF Cu site, for different ∆z + z0 (z0 given by STM setpoint Vb = 10 mV, It = 10 pA). Purple circles (red squares): MOF charging peak (intrinsic electronic state at MOF band edge, respectively). Spectra normalised and offset for clarity. b, Vcharge [purple circles in (a)] and Vstate [red squares in (a)] as a function of ∆z. Black solid lines: global fits to Eqs. (1) and (2) in main text. c, dI/dV signal at Fermi level (Vb = 0) as a function of ∆z, from (a). Increased dI/dV (Vb = 0) indicates metallic phase. Inset: STM image with position where dI/dV (∆z) were performed indicated by blue circle (Vb = −1 V, It = 10 pA).

dI/dV spectroscopy for Cu site of MOF within a pore region of hBN/Cu(111) moiré pattern. a, dI/dV spectra for different tip-sample distances ∆z + z0 (z0 represents tip-sample distance for STM setpoint of Vb = 10 mV, It = 10 pA) at a Cu site [location indicated by red circle in (c)]. Red squares (circles) indicate LHBM (UHBM, respectively). b, LHBM and UHBM as a function of ∆z, from (a). Black dashed curves: fits based on Eqs. (S7) and (S8). c, STM image of DCA3Cu2 on hBN/Cu(111). Red circle indicates position where spectra shown in (a) were acquired (Vb = −1 V, It = 10 pA).

xv) Figure S10. It is not clear at all that the charging rings are increasing their perimeter around the DCA molecule (see ref. 31). It looks just the same intensity at the Cu positions.
Fig. S23. dI/dV maps of DCA3Cu2/hBN/Cu(111): charging ring. a, STM image of DCA3Cu2 at a wire region of hBN/Cu(111) moiré pattern (Vb = −1 V, It = 10 pA). b-m, dI/dV maps of region in (a), at indicated bias voltages Vb, obtained via numerical derivative of pixel-by-pixel I(V) curves. At each pixel, the tip-sample distance was stabilised 300 pm further away from the surface relative to a setpoint of Vb = 10 mV, It = 10 pA, before I(V) acquisition. Scale bars: 1 nm.
Charging features at a pore-wire boundary of the moiré pattern. a, STM image of MOF/hBN/Cu(111) showing two pore regions and one wire region of the hBN/Cu(111) moiré pattern (Vb = −1 V, It = 10 pA). b, dI/dV spectra for different tip-sample changes (∆z), acquired at MOF Cu site, as indicated in (a). Spectra normalised and offset for clarity. Setpoints range between 250 pm further from surface (bottom curve) and 105 pm further from surface (top curve) with respect to a setpoint of Vb = 10 mV, It = 10 pA.
The authors have convincingly answered all my questions. The work is more clearly presented and well supported with new additional experimental results and theoretical simulations. I therefore recommend it for publication in Nature Communications. Minor typos: Line 175: Fig. S14 → Fig. S19 or Section S14; Line 201: Fig. 3c → 3b.

Reviewer #2 (Remarks to the Author):
Freezing shortens the lifetime of DNA molecules under tension
DNA samples are commonly frozen for storage. However, freezing can compromise the integrity of DNA molecules. Considering the wide applications of DNA molecules in nanotechnology, changes to DNA integrity at the molecular level may cause undesirable outcomes. However, the effects of freezing on DNA integrity have not been fully explored. To investigate the impact of freezing on DNA integrity, samples of frozen and non-frozen bacteriophage lambda DNA were studied using optical tweezers. Tension (5–35 pN) was applied to DNA molecules to mimic mechanical interactions between DNA and other biomolecules. The integrity of the DNA molecules was evaluated by measuring the time taken for single DNA molecules to break under tension. Mean lifetimes were determined by maximum likelihood estimates and variances were obtained through bootstrapping simulations. Under 5 pN of force, the mean lifetime of frozen samples is 44.3 min with a 95% confidence interval (CI) of 36.7–53.6 min, while the mean lifetime of non-frozen samples is 133.2 min (95% CI: 97.8–190.1 min). Under 15 pN of force, the mean lifetimes are 10.8 min (95% CI: 7.6–12.6 min) for frozen samples and 78.5 min (95% CI: 58.1–108.9 min) for non-frozen samples. The lifetimes of frozen DNA molecules are significantly reduced, implying that freezing compromises DNA integrity. Moreover, we found that the reduced DNA structural integrity cannot be restored using a regular ligation process. These results indicate that freezing can alter the structural integrity of the DNA molecules.
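To illustrate the statistical procedure mentioned above (maximum likelihood estimation of the mean lifetime with bootstrap confidence intervals), a minimal Python sketch is given below. It assumes an exponential lifetime model with right-censored observations and uses made-up example data; it is not the authors' analysis code, and the model choice is an assumption made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

def exponential_mle(lifetimes, censored):
    """MLE of the mean lifetime for exponentially distributed breaking times.

    lifetimes : observation times in minutes
    censored  : boolean array, True if the molecule had not yet broken
                when the observation ended (right-censored)
    For the exponential model the MLE is total observed time / number of breaks.
    """
    lifetimes = np.asarray(lifetimes, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    n_breaks = np.count_nonzero(~censored)
    return lifetimes.sum() / n_breaks

def bootstrap_ci(lifetimes, censored, n_boot=10000, level=0.95):
    """Percentile bootstrap confidence interval for the mean lifetime."""
    lifetimes = np.asarray(lifetimes, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    idx = np.arange(lifetimes.size)
    estimates = []
    for _ in range(n_boot):
        sample = rng.choice(idx, size=idx.size, replace=True)
        if np.count_nonzero(~censored[sample]) == 0:
            continue  # resample contained no observed breaks
        estimates.append(exponential_mle(lifetimes[sample], censored[sample]))
    lo, hi = np.percentile(estimates, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical example: breaking times (min) of ten molecules under tension,
# two of which survived to the end of the measurement window.
times = [12.0, 30.5, 7.2, 55.1, 21.9, 40.3, 9.8, 65.0, 80.0, 80.0]
cens = [False, False, False, False, False, False, False, False, True, True]
mean_lifetime = exponential_mle(times, cens)
ci_low, ci_high = bootstrap_ci(times, cens)
print(f"mean lifetime {mean_lifetime:.1f} min, 95% CI [{ci_low:.1f}, {ci_high:.1f}] min")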
Introduction
DNA molecules have recently been widely used in various fields such as drug delivery, therapeutic systems, molecular imaging, and biomolecule detection due to their high mechanical flexibility and unique assembly specificity [1,2]. For example, aptamers, short oligonucleotides, have been utilized to enhance the therapeutic efficiency of cancer treatments [3,4]. During reagent preparation, to maintain DNA quality and reduce DNA damage induced by free radicals in aqueous environments, DNA samples are usually stored at −20°C [5]. In ionizing radiation studies involving the use of long half-life isotopes, mixtures of DNA samples and isotopes have typically been frozen for up to 30 days at −70°C for 125I compounds to accumulate sufficient isotope decays [6,7]. Because of the common usage of freezing protocols, various engineering developments and scientific studies will benefit from understanding the effect of freezing DNA on its integrity.
The structural integrity of DNA plays a critical role in different nucleic acid nanotechnology applications. For example, site-specific DNA strands can be assembled into three-dimensional nanostructures, which can serve as the building blocks of advanced nanodevices [8]. The structural stability of nucleotide probes can also influence hybridization efficiency for molecular detection. Various factors, such as ionic strength, can influence the integrity of molecular structures, and several technologies have been developed to probe the integrity of polynucleotides [9,10]. However, probing the integrity of DNA at the single-molecule level is still challenging for most current technologies.
Double-strand breaks (DSBs) are often formed by closely spaced nicks. One method of detecting nicks, particularly if they are closely spaced, is to stretch the DNA molecules in a low ionic strength buffer. When DNA is stretched, sections containing clustered nicks denature locally and induce DSBs [24]. The integrity of DNA molecules is also influenced by the buffer being used. The ionic strength plays a major role in the stability of double-stranded DNA (dsDNA). Because DNA backbones are highly negatively charged in aqueous solutions, low salt concentrations result in a low degree of charge shielding of the backbones, thus making dsDNA less stable [25]. When dsDNA is subjected to external tension, low ionic strength buffers facilitate peeling and force-induced melting near the nicks [26][27][28]. In addition, various surfactants, such as Tween 80, are widely used for macromolecule assembly and nanostructure fabrication [29][30][31]. The effect of such surfactants on DNA integrity is not clear.
Although plasmid DNA samples can easily be prepared in house to ensure superior quality control, lambda DNA samples are usually sourced commercially. Lambda DNA (48,502 bp), which is several times larger than typical plasmids, is prone to chemical and mechanical damage during purification and storage. Some samples are frozen after purification and then thawed in the lab before being used in assays; although this freeze/thaw cycle does not affect most experiments, it can result in nicks that can skew the results of some experiments, especially if the DNA molecules are to be subjected to tension [32,33].
In this study, we investigated the effects of freezing on DNA integrity at the molecular level by measuring the sustaining times of DNA molecules through the use of dual-beam optical tweezers. We analyzed the difference between frozen and non-frozen samples, as well as the difference between two batches of non-frozen samples.
Bacteriophage lambda DNA handling
Bacteriophage lambda DNA samples were purchased from New England Biolabs (NEB, N3011S). The standard catalog item was frozen at −20°C. The non-frozen samples were special order and shipped at 4°C. Upon arrival, the frozen sample was thawed and stored at 4°C. All lambda DNA samples were stored in 100 mM Tris, 0.5 M NaCl, and 50 mM EDTA (pH 7.5) at 4°C. Samples older than 1 year were discarded.
Overhangs of lambda DNA (14 μg) were annealed with complementary oligos carrying a single biotin at the 3′ end for 40 min in 50 μl of TE buffer (10 mM Tris, 1 mM EDTA, pH 8.0). Unbound oligos were washed away twice with 50 mM Tris (pH 7.5) using an Amicon Ultra centrifugal filter (Millipore, UFC510096) at 8100 × g for 15 min at 4°C. All samples were then treated with 400 units of T4 DNA ligase (NEB, M0202S) to repair nicks at 16°C for 2.5 h in 40–50 μl of 1X T4 DNA ligase buffer (50 mM Tris-HCl, 10 mM MgCl2, 10 mM DTT, and 1 mM ATP, pH 7.5) in PCR tubes. T4 DNA ligase was inactivated with an additional 6 μl of 0.5 M EDTA (pH 8.0) to bring the final concentration to 53–65 mM, and the samples were washed with TE buffer twice with the centrifugal filter at 8100 × g for 15 min at 4°C. The final products were stored in 100 mM Tris, 0.5 M NaCl, and 10 mM EDTA (pH 7.5) at 4°C and used within 2 weeks.
Optical tweezers setup
The instrument setup was described by Yang et al. [34]. In short, the force-measuring dual-beam laser tweezers comprised one fixed trap and one movable trap, both formed by 1064-nm laser beams. A quadrant photodiode (QPD) was used to measure the position of the bead in the fixed trap from an 830-nm defocused laser beam that was superimposed onto that trap. Trap stiffness was calculated from the Brownian motion of the bead in the trap (Fig. 1a).
Labeled lambda DNA was incubated with streptavidin-coated beads (Spherotech, SVP-15-5) for approximately 1 h before being sealed in a liquid chamber containing TE buffer (10 mM Tris, 1 mM EDTA, 53 mM NaCl, and 0.3 mg/ml casein, pH 8.0) at 23°C, with or without 4.6% Tween 80 (Sigma-Aldrich, P5188). Two beads were trapped, with a single DNA molecule suspended in between (Fig. 1a). The DNA molecule was then stretched to the desired tension by moving the movable trap. The force was deduced from the trap stiffness and the average position of the bead in the fixed trap.
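The trap calibration and force read-out described above can be illustrated with a short sketch. This is a minimal example assuming the common equipartition-theorem calibration (stiffness from the variance of the bead's thermal position fluctuations) and a Hookean trap; the function names, temperature, and numbers are hypothetical stand-ins for the QPD-derived position signal used in the actual instrument.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_k=296.15):
    """Estimate trap stiffness k (N/m) from the bead's Brownian position
    fluctuations via the equipartition theorem: k = kB*T / var(x)."""
    return KB * temperature_k / np.var(positions_m)

def tension_from_displacement(stiffness, mean_displacement_m):
    """Tension on the DNA tether for a Hookean trap: F = k * <dx>."""
    return stiffness * mean_displacement_m

# Synthetic stand-in for the QPD position trace: a bead jittering ~10 nm (std)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 10e-9, size=100_000)          # metres
k = trap_stiffness_equipartition(x)               # ~0.04 pN/nm
f = tension_from_displacement(k, 120e-9)          # bead held 120 nm off-centre
print(f"stiffness = {k * 1e3:.3f} pN/nm, tension = {f * 1e12:.1f} pN")
```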
Sustaining time was defined as the time required for a single DNA molecule to break under tension. In this type of measurement, a drop to zero in the force measurement indicated that the DNA molecule had broken (Fig. 1b). The cutoff time for measuring the DNA sustaining time was set to 60 min in this study.
Data analysis
The probability that a DNA molecule survives to time t was derived as follows [35,36]:

S(t) = e^{-t/\tau},

where τ is the mean lifetime of DNA. This survival probability was estimated by

\hat{S}(t) = n(t)/N,

where n(t) is the number of DNA molecules not broken by time t, and N is the total number of molecules tested. The maximum likelihood estimate (MLE) of τ for the exponential probability distribution with censoring was given by

\hat{\tau} = \frac{\sum_{i=1}^{n} x_i + (N - n) T_w}{n},

where x_i is the sustaining time of the i-th broken DNA molecule, T_w is the period of the measurement window (= 60 min in this study), and n is the number of DNA molecules broken during the measurement period. The sampling distribution of \hat{\tau} could be approximated by bootstrapping as follows: we generated 10,000 bootstrap samples, each of size N, from the exponential distribution with τ = \hat{\tau}. For each of these samples, the maximum likelihood estimate \hat{\tau}_b of τ for the exponential distribution with censoring was calculated. The significance of differences among the τ's for the various experimental conditions could be evaluated by their 95% confidence intervals (CI).
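A compact numerical sketch of this estimation procedure is given below. It assumes the censored-exponential MLE and parametric bootstrap exactly as written above; the sample sizes, break times, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np

def mle_lifetime(break_times, n_total, window=60.0):
    """MLE of the exponential mean lifetime with right-censoring at `window`:
    tau_hat = (sum of observed break times + (N - n) * T_w) / n."""
    break_times = np.asarray(break_times, dtype=float)
    n_broken = len(break_times)
    return (break_times.sum() + (n_total - n_broken) * window) / n_broken

def bootstrap_ci(tau_hat, n_total, window=60.0, n_boot=10_000, alpha=0.05, seed=1):
    """Parametric bootstrap: draw N exponential lifetimes with mean tau_hat,
    censor at the measurement window, recompute the MLE, and return a
    percentile confidence interval."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        t = rng.exponential(tau_hat, size=n_total)
        broken = t[t < window]
        if len(broken) == 0:                 # degenerate resample, skip
            continue
        est = (broken.sum() + (n_total - len(broken)) * window) / len(broken)
        estimates.append(est)
    return tuple(np.quantile(estimates, [alpha / 2, 1 - alpha / 2]))

# Hypothetical example: 50 molecules tested, only breaks within 60 min observed
rng = np.random.default_rng(0)
all_lifetimes = rng.exponential(45.0, size=50)
observed = all_lifetimes[all_lifetimes < 60.0]
tau_hat = mle_lifetime(observed, n_total=50)
print(tau_hat, bootstrap_ci(tau_hat, n_total=50))
```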
Comparison of frozen and non-frozen samples
To evaluate the effects of the freezing protocol, frozen and non-frozen samples were compared in low-force experiments. The sustaining times of the frozen and non-frozen lambda DNA molecules were measured under tensile forces of 5 and 15 pN in low salt buffer. The mean DNA lifetime τ̂ was calculated according to Eq. (3). We evaluated the variance of the lifetime under the different experimental conditions using bootstrapping simulations. The results for frozen and non-frozen DNA molecules are shown in Table 1.
Survival probability plots were generated by plotting survival probabilities versus time, see Fig. 2d. The distribution of the frozen DNA samples is distinct from both non-frozen samples under the same tension. Although the means of the two non-frozen samples are different, their 95% confidence intervals overlap. Our data indicate that there is a significant difference between frozen and non-frozen samples, and no significant difference between samples stored in TE buffer and TE with Tween 80. The bootstrap simulation results are summarized in Table 1. Only 29% of the frozen DNA molecules lasted more than 60 min under the 5 pN force, whereas 62% of the non-frozen samples in TE buffer lasted more than 60 min under this force.
We studied the effects of surfactants on DNA integrity by using non-frozen lambda DNA with additional Tween 80 (4.6%) in the liquid chamber. Compared with a sample without Tween 80, the lifetime of the non-frozen samples was slightly shortened to around 105.5 min (95% CI: 79.1-145.7 min), and the number of molecules that survived past 60 min in the sample with Tween 80 dropped slightly from 62% to 55%. Despite the 21% reduction in the DNA lifetime in the presence of Tween 80, the shortened lifetime was still more than two times longer than that of the frozen samples in TE buffer.
To further investigate the mechanical strength of the DNA molecules, we increased the tension exerted on the molecules. We observed the differences in lifetime between the frozen and non-frozen samples under a force of 15 pN. All the frozen samples in TE buffer broke within 60 min and 47% of the non-frozen samples in TE buffer survived over 60 min. The addition of Tween 80 slightly reduced the amount of samples that survived past the 60 min mark to 42%. The lifetime of the frozen samples was 10.8 min (95% CI: 7.6-12.6 min), considerably shorter than that of the non-frozen samples (78.5 min, 95% CI: 58.1-108.9 min).
Increasing the mechanical force shortened the lifetimes of the molecules, an observation that is consistent with those reported in the literature [37,38]. Because the non-frozen samples subjected to the 5 and 15 pN force experiments were derived from the same batch of lambda DNA, the differences in their lifetimes should be directly related to the different tensile forces.

The histograms of the sustaining time are illustrated in Fig. 3a and b. All data within a histogram were derived from the same batch. Histograms of the frozen samples are presented in blue and those of the non-frozen samples are presented in red. For easy comparison, the numbers of DNA breakage events were replaced by the percentages of DNA molecules that were broken during the first 60 min of the experiments. Figure 3a presents a comparison of the distributions of the sustaining times of the frozen and non-frozen DNA samples at 5 pN. From 0 to 30 min, more frozen DNA molecules were broken, compared with the non-frozen molecules. A lower percentage of frozen DNA molecules exhibited breakage between 40 and 60 min, presumably because most of the frozen molecules had already broken before this period. By contrast, the non-frozen samples exhibited a more even distribution across the 60-min observation, except for the first 10 min, during which only 1% broke. The differences in the sustaining time histograms between the two sample groups are shown in Fig. 3c. The greatest difference was observed within the first 10 min, decreasing as the sustaining time increases, and finally reaching a negative level after 40 min.
Similar differences were also observed between the frozen and non-frozen samples at 15 pN. Compared with the percentage observed at 5 pN, a greater percentage (55 vs. 22%) of the frozen DNA molecules broke within the first 10 min (Fig. 3b). After 20 min, most of the frozen molecules had already broken. For the non-frozen samples, more molecules were broken within 10 min compared with the results observed at 5 pN for samples from the same batch; however, approximately three-fourths of the molecules survived beyond 20 min. The differences between the two histograms are illustrated in Fig. 3d. Similar to the differences in Fig. 3c, the differences decreased as the sustaining time increased. After 30 min, most of the frozen molecules had already broken, thus resulting in fewer molecules to be observed.
The lifetimes of the frozen lambda molecules were observed to be significantly shorter than those of the non-frozen samples at 5 and 15 pN. Our data show that the freezing process, even without repeated freeze/thaw cycles, made a significant impact on DNA integrity when tested in a tensile force range common in biologically relevant processes.
Comparison of non-frozen samples from different batches
The sustaining times of non-frozen DNA samples from two batches purchased from the same vendor were measured at 25 and 35 pN to ensure that most, if not all, molecules would break before the cutoff time ( Table 2). All samples were processed in exactly the same manner from their arrival in the lab to their loading in liquid chambers; moreover, all buffers used were the same. A single exponential decay probability function with MLE lifetime do not fit data sets well, as shown by green lines in Fig. 4a and b. The red lines in Fig. 4a and b show that the sum of two exponential decay functions is a much better fit for the data, which implies that more than one mechanism may contribute to the observed DNA breakage. Because all DNA molecules tested broke within the measurement window of 60 min for both batches, it is not necessary to apply the MLE method. Sample lifetimes were calculated by fitting the survival probability plots to the sum of two exponential decay functions.
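The double-exponential fit described above can be sketched as follows. This is a minimal illustration assuming SciPy's curve_fit and an empirical survival curve built as n(t)/N; the population fractions, lifetimes, and helper names are hypothetical, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, f_short, tau_short, tau_long):
    """S(t) = f_short*exp(-t/tau_short) + (1 - f_short)*exp(-t/tau_long)."""
    return f_short * np.exp(-t / tau_short) + (1.0 - f_short) * np.exp(-t / tau_long)

def fit_survival(break_times, n_total, t_max=60.0):
    """Build the empirical survival curve S_hat(t) = n(t)/N from the observed
    break times and fit the sum of two exponential decays to it."""
    break_times = np.asarray(break_times, dtype=float)
    t_grid = np.linspace(0.0, t_max, 121)
    n_censored = n_total - len(break_times)
    survival = np.array([(break_times > t).sum() + n_censored for t in t_grid]) / n_total
    p0 = (0.3, 3.0, 30.0)                               # guess: 30% fast population
    bounds = ([0.0, 0.1, 0.1], [1.0, t_max, 10 * t_max])
    popt, _ = curve_fit(double_exponential, t_grid, survival, p0=p0, bounds=bounds)
    return popt  # (fraction in short-lived group, tau_short, tau_long)

# Hypothetical mixture: 26% short-lived (~2 min) and 74% long-lived (~25 min) molecules
rng = np.random.default_rng(2)
lifetimes = np.where(rng.random(100) < 0.26,
                     rng.exponential(2.0, 100), rng.exponential(25.0, 100))
print(fit_survival(lifetimes[lifetimes < 60.0], n_total=100))
```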
Most of the molecules from both batches that were tested at the relatively high tensile forces of 25 and 35 pN broke within 60 min. The samples subjected to the 25 and 35 pN forces exhibited two discrete populations, a shorter lifetime group and a longer lifetime group (Table 2: lifetimes of non-frozen DNA samples from different batches in TE buffer with Tween 80). Overall, under the same tension, DNA in batch 2 had a longer lifetime than did that in batch 1, indicating that it was of higher quality in terms of structural integrity. For batch 1, at 25 pN, 26% of the tested molecules were in the shorter lifetime group; the remaining molecules were in the longer lifetime group. When the tensile force increased to 35 pN, the sizes of the two populations evened out. For batch 2, although two populations were also observed, the profile was different from that observed for batch 1 at both forces. At 25 pN, a lower percentage of the tested molecules belonged to the shorter lifetime group (22 vs. 26%). At 35 pN, more than half of the molecules belonged to the shorter lifetime group.
The histogram of the sustaining time shows the distributions of the populations within each sample. The histograms of the sustaining times of non-frozen lambda DNA molecules from the two batches are illustrated in Fig. 5a and b. All data within a histogram were derived from a single batch. The histograms of the samples from batch 1 are presented in blue and those of the samples from batch 2 are presented in red. For easy comparison, the numbers of DNA breakages were replaced by the percentages of DNA molecules that were broken during the first 60 min of the experiments. Figure 5a presents a comparison of the distributions of the sustaining times of the non-frozen DNA samples at 25 pN, and Fig. 5c illustrates the difference between the two histograms. From 0 to 10 min, more DNA molecules broke in batch 1 than in batch 2. From 10 to 40 min, a lower percentage of DNA molecules from batch 1 broke, presumably because many of the batch 1 molecules had already broken within 40 min. From 40 to 60 min, the number of molecules observable from both samples was extremely low, so little can be concluded for this interval. The corresponding histograms at 35 pN are shown in Fig. 5b. The difference in the sustaining time histograms between the two sample groups at 35 pN is presented in Fig. 5d. The greatest difference was observed within the first 10 min. Beyond that time, the size of the sample was too small for us to draw clear conclusions.
Discussion and conclusions
The considerable difference in lifetime between the frozen and non-frozen DNA samples at a low force could have a few underlying causes. We speculate that the most likely cause is the higher number of nicks in the frozen sample, which consequently makes it more likely to have a pair of closely spaced nicks on opposite strands. In low ionic strength buffer, a pulling force is conducive to breathing/peeling in dsDNA [39,40] and accelerates the breaking process near closely spaced nicks. Our data indicate that the frozen DNA specimens had lower mechanical strength levels, which is consistent with previous findings for DNA at low temperatures [10].

Fig. 5 caption: The x-axis represents the sustaining time, and the y-axis represents the percentage of all DNA molecules tested in TE buffer with 4.6% Tween 80. Blue histograms denote samples from batch 1, and red histograms denote samples from batch 2 in a and b. a Of 100 molecules tested from batch 1 at a tensile force of 24.6 ± 1.9 pN, 6% lasted over 60 min. Of 45 molecules tested from batch 2 at a tensile force of 24.9 ± 1.0 pN, 9% lasted over 60 min. b Of 100 molecules tested from batch 1 at a tensile force of 34.9 ± 1.8 pN, only one lasted over 60 min. All 25 molecules tested from batch 2 at a tensile force of 34.7 ± 1.7 pN were broken within 60 min. c Differences in the percentages of broken DNA between the two batches of non-frozen DNA samples at a tensile force of 25 pN. d Differences in the percentages of broken DNA between the two batches of non-frozen DNA at a tensile force of 35 pN.
DNA ligase is frequently used in laboratories to facilitate the formation of phosphodiester bonds between adjacent DNA bases, and ligation is expected to enhance DNA integrity. In our experiments, T4 DNA ligase was applied to repair the nicks of both the frozen and non-frozen samples as part of the biotinylation procedure, and this ligation procedure is expected to repair the nicks along the DNA structure.
The ligation condition used in this study, 16°C for 2.5 h, is shorter than the recommended ligation time in the manufacturer's protocol, and a more optimized ligation procedure may produce frozen samples with longer lifetimes. On the other hand, we cannot ignore the possibility that ligation procedures may not be able to fully repair the damage caused by freezing DNA samples. Therefore, if the intended experiments rely on DNA integrity, avoiding samples that have been frozen is worthwhile.
Non-frozen samples offer higher DNA integrity than frozen samples do, but long DNA molecules from different batches still show different survivorship profiles under tension. In contrast to the lower tensile forces (5–15 pN), two discrete populations with different decay lifetimes were observed under higher tensile forces (25–35 pN). These results suggest that more than one factor may contribute to the DSBs, and we speculate that various DSB mechanisms, such as bubble migration and strand peeling [28,41], could contribute to the observed DSBs in our experiments. It is also possible that the rupture of single biotin-streptavidin bonds between DNA and beads [42] contributes to the additional population. More experiments are needed to further explore the potential mechanisms of DNA breakage under higher tensile force. Because of batch-to-batch differences, care should be taken to perform all experiments with samples from one batch.
We used dual-beam optical tweezers to evaluate the integrity of DNA molecules by stretching single molecules in low ionic strength buffers. Our results demonstrate that common freezing protocols can reduce DNA integrity at the molecular level. When a moderate tensile force (< 20 pN) was applied to mimic the mechanical interactions between enzymes and DNA molecules, the lifetimes of frozen DNA molecules decreased dramatically. Considering the increasing applications of DNA molecules in numerous fields, our findings are expected to aid developments in sample preparation and storage procedures in DNA nanotechnology. | 4,981.2 | 2017-09-08T00:00:00.000 | [
"Biology",
"Physics"
] |
Forecasting Natural Gas Consumption in the US Power Sector by a Randomly Optimized Fractional Grey System Model
Natural gas is one of the main energy resources for electricity generation. Reliable forecasting is vital to making sensible policies. A randomly optimized fractional grey system model is developed in this work to forecast the natural gas consumption in the power sector of the United States. The nonhomogeneous grey model with fractional-order accumulation is introduced along with discussions of other existing grey models. A random search optimization scheme is then introduced to optimize the nonlinear parameter of the grey model. The complete forecasting scheme is built based on the rolling mechanism. The case study is executed based on the updated data set of natural gas consumption of the power sector in the United States. The comparison of results is analyzed for different step sizes, different grey system models, and benchmark models. They all show that the proposed method has significant advantages over the other existing methods, which indicates that the proposed method has high potential in short-term forecasting for natural gas consumption of the power sector in the United States.
Introduction
Electricity facilitates the development of the national economy and promotes the progress of industrial society in the present age. Electricity, as a high-performance clean energy carrier, has one shortcoming: its sources of generation are extremely diverse. Among the many ways to produce electricity, natural gas is the best choice as a clean fuel, being better than coal combustion in terms of pollution and more convenient than nuclear energy in resource acquisition [1]. As the world's largest industrial country, among the primary energy sources used by the United States to produce electricity in 2020, natural gas accounted for 38%, coal for 27%, nuclear energy for 20%, and traditional hydropower for 12% [2]. With the closure of many coal plants and nuclear power plants in the United States, natural gas has become the primary electricity production source in the United States [3]. Therefore, it is of great significance to study natural gas consumption in the US power sector. Among the early natural gas prediction methods, the Hubbert model is one of the earliest established tools [4], and it has been proved to achieve a pleasing effect in the prediction of fossil fuels [5]. Jiang et al. took China's policies as the driving factor to establish MARKAL, an economic optimization model for predicting natural gas consumption, and applied it to the energy forecast of three major regions in China [6]. Li et al. used the system dynamics model to predict natural gas consumption [7]. Szoplik built an artificial neural network to predict natural gas consumption, considering many factors that may influence natural gas consumption, such as calendar and weather, and obtained effective results [8]. A recent method that combines weather forecasting with artificial intelligence to predict short-term gas consumption has also been developed [9]. Svoboda et al. established a time series prediction method based on machine learning to study natural gas consumption [10]. In Wang et al.'s work, the multiperiod Hubbert model and the rolling grey model were used to forecast and evaluate natural gas consumption, respectively [11]. As early as 2012, the work of Soldo indicated that the Hubbert model and the grey forecasting model had become the main tools for predicting gas consumption [12]. With the grey model, natural gas consumption prediction as a time series has achieved satisfactory results [13,14].
Grey prediction technology is an essential branch of grey system theory proposed by Professor Deng [15]. Because it provides a feasible and effective method to deal with uncertainty, the grey forecasting model is often used in research in energy, environment, industry, economy, and other fields [16][17][18][19]. Besides, compared with other prediction models, the grey model is better suited to small-sample experiments. Therefore, the grey model is often used for short-term prediction and provides corresponding decisions to deal with future trends according to the obtained forecasting results. Grey prediction technology is widely used in energy prediction. Qian and Sui designed a discrete grey model that can adapt to any periodic time series and applied it to renewable energy systems [20]. Huang et al. constructed a multivariate interval grey model and further applied it to the prediction of clean energy with the method of fractional connotation prediction [21]. Zhao and Lifeng proposed an adjacent cumulative discrete grey model to improve the utilization rate of new data and demonstrated its effectiveness on non-renewable energy [22]. The grey prediction model has thus become mature and feasible in energy applications. However, most studies do not include applications in which the data characteristics change substantially.
In the development of the grey model, to solve this problem, Wu et al. proposed a new accumulation method, replacing the first-order accumulation with fractional-order accumulation, which eliminated the randomness of the original data series [23]. A large body of literature shows that the model can obtain better prediction performance when the original data are processed by fractional-order accumulation [24,25]. With the introduction of new information priority accumulation, the grey model has more choices for processing the original data [26]. However, with the introduction of nonlinear parameters, approximating the required parameters of the model has become a new problem.
Many scholars adopt the random search algorithm to solve this problem. Bergstra and Bengio applied the random search algorithm to tune model hyperparameters and verified the simplicity and effectiveness of random search; compared with other search methods, applying random search to parameters can quickly and efficiently find equally good or even better models [27]. The random search algorithm has also shown its advantages in various other fields [28,29].
Based on the literature review, this paper uses random search to optimize the fractional nonlinear parameter in the nonhomogeneous grey model and designs an application to natural gas consumption in the US power sector, which uses the rolling forecast mechanism to produce the forecasting results. The rest of this paper is organized as follows. Section 2 presents the theory and concept of the nonlinear grey model that needs to be optimized. In Section 3, the concept of the random search algorithm for optimizing nonlinear parameters is given. The rolling forecast mechanism and the case study of forecasting natural gas consumption in the US power sector are presented in Section 4, and the conclusions are given in Section 5.
The Fractional Nonhomogeneous Grey Model and Related Models
This section first presents the construction of the fractional nonhomogeneous grey model (FNGM), of which the fractional order is the parameter to be optimized [23]. Then a brief description of other related models is presented; these are used to compare the prediction performance of the models in the case study.
The Fractional Nonhomogeneous Grey Model.
The raw data sequence is X^{(0)} = (x^{(0)}(1), x^{(0)}(2), ..., x^{(0)}(n)), and its fractional-order accumulation generation sequence is X^{(r)} = (x^{(r)}(1), x^{(r)}(2), ..., x^{(r)}(n)), where r is the fractional parameter and

x^{(r)}(k) = \sum_{i=1}^{k} \binom{k-i+r-1}{k-i} x^{(0)}(i), \quad k = 1, 2, ..., n.    (1)

The first-order differential equation of the FNGM is

\frac{dx^{(r)}(t)}{dt} + \alpha x^{(r)}(t) = \beta t,    (2)

where α is the grey development coefficient and βk is the grey action quantity. The discrete differential equation of (2) is

x^{(r)}(k+1) - x^{(r)}(k) + \alpha z^{(r)}(k) = \beta (k+1), \quad k = 1, 2, ..., n-1,    (3)

where z^{(r)}(k) = (x^{(r)}(k) + x^{(r)}(k+1))/2 is the sequence mean generated from consecutive neighbors of x^{(r)}(k). Set

\Theta = \begin{bmatrix} -z^{(r)}(1) & 2 \\ -z^{(r)}(2) & 3 \\ \vdots & \vdots \\ -z^{(r)}(n-1) & n \end{bmatrix}, \qquad \varsigma = \begin{bmatrix} x^{(r)}(2) - x^{(r)}(1) \\ x^{(r)}(3) - x^{(r)}(2) \\ \vdots \\ x^{(r)}(n) - x^{(r)}(n-1) \end{bmatrix}.    (4)

Then the least squares estimation of the FNGM satisfies

[\hat{\alpha}, \hat{\beta}]^{T} = (\Theta^{T} \Theta)^{-1} \Theta^{T} \varsigma.    (5)

The solution of the first-order differential equation (2), with the initial condition x^{(r)}(1) = x^{(0)}(1), is

\hat{x}^{(r)}(k) = \left( x^{(0)}(1) - \frac{\beta}{\alpha} + \frac{\beta}{\alpha^{2}} \right) e^{-\alpha (k-1)} + \frac{\beta}{\alpha} k - \frac{\beta}{\alpha^{2}}.    (6)

The forecasting results of the FNGM are obtained according to the inverse accumulation operation:

\hat{x}^{(0)}(k) = \sum_{j=0}^{k-1} (-1)^{j} \binom{r}{j} \hat{x}^{(r)}(k-j).    (7)
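A minimal numerical sketch of the FNGM defined by Eqs. (1)–(7) is given below. It assumes the discretization and initial condition written above (other FNGM variants in the literature differ slightly in these details), and all function names and sample numbers are hypothetical.

```python
import numpy as np

def gen_binom(a, j):
    """Generalized binomial coefficient C(a, j) = a(a-1)...(a-j+1)/j!."""
    out = 1.0
    for m in range(j):
        out *= (a - m) / (m + 1)
    return out

def frac_accumulate(x, r):
    """r-order accumulation, Eq. (1)."""
    n = len(x)
    return np.array([sum(gen_binom(k - i + r - 1, k - i) * x[i] for i in range(k + 1))
                     for k in range(n)])

def frac_restore(xr, r):
    """Inverse accumulation, Eq. (7): x(k) = sum_j (-1)^j C(r, j) xr(k - j)."""
    n = len(xr)
    return np.array([sum((-1) ** j * gen_binom(r, j) * xr[k - j] for j in range(k + 1))
                     for k in range(n)])

def fngm_fit_forecast(x0, r, horizon):
    """Fit the FNGM (Eqs. (2)-(6)) on x0 and forecast `horizon` further points."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    xr = frac_accumulate(x0, r)
    z = 0.5 * (xr[:-1] + xr[1:])                         # background values, Eq. (3)
    Theta = np.column_stack([-z, np.arange(2, n + 1)])   # design matrix, Eq. (4)
    varsigma = xr[1:] - xr[:-1]
    alpha, beta = np.linalg.lstsq(Theta, varsigma, rcond=None)[0]   # Eq. (5)
    k = np.arange(1, n + horizon + 1)
    xr_hat = ((x0[0] - beta / alpha + beta / alpha ** 2) * np.exp(-alpha * (k - 1))
              + beta / alpha * k - beta / alpha ** 2)    # time response, Eq. (6)
    return frac_restore(xr_hat, r)

# Hypothetical usage: fit on ten points, forecast two ahead
x0 = [26.4, 24.9, 27.8, 31.5, 36.9, 34.1, 30.2, 27.5, 26.0, 28.8]
print(fngm_fit_forecast(x0, r=0.4, horizon=2)[-2:])
```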
Relationship between the Fractional Nonhomogeneous Grey Model and Other Existing Grey Models.
Several transformations of the FNGM are given to compare the model forecasting performance. When the discrete differential equation (3) of the FNGM is changed to

x^{(r)}(k+1) - x^{(r)}(k) + \alpha z^{(r)}(k) = \beta,

the FNGM degenerates to the basic fractional grey model (FGM) [23]. By a differencing operation, the FGM can be rewritten as

x^{(r)}(k+1) = \beta_{1} x^{(r)}(k) + \beta_{2},

which is the fractional discrete grey model (FDGM) [30]. The equation

x^{(r)}(k+1) = \beta_{1} x^{(r)}(k) + \beta_{2} k + \beta_{3}

is called the fractional nonhomogeneous discrete grey model (FNDGM) [31]. The FNDGM will also be used for comparisons. When the fractional parameter r = 1, the fractional-order accumulation reduces to the first-order accumulation, which is defined by

x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i),

and with it, the above four models yield the grey model (GM), the nonhomogeneous grey model (NGM), the discrete grey model (DGM), and the nonhomogeneous discrete grey model (NDGM) with the first-order accumulation [23]. When the new information priority accumulation is used to replace the first-order accumulation to process the original sequence, the new information priority grey model (NIPGM), the new information priority nonhomogeneous grey model (NIPNGM), the new information priority discrete grey model (NIPDGM), and the new information priority nonhomogeneous discrete grey model (NIPNDGM) can be obtained [26].
In the following content, we will compare the performances of the models in the same case study with the same evaluation metrics.
Parameter Optimization Based on Random Search
After the fractional-order accumulation operator is selected, how to set the fractional-order parameter of the model becomes vital to making accurate forecasts. The simplicity and global optimality of random search make it competitive in parameter optimization. The following part of this section introduces the main steps of random search for parameter optimization of grey models.
Data Set Division.
Set the raw data set as X = {x(1), x(2), ..., x(n)}. Firstly, the data set is divided into two parts, a modelling subset and a prediction subset, denoted as X_model and X_test, respectively, where X_model is the subset used to establish the model and X_test is a test set that evaluates the final performance of the model and does not participate in the establishment of the model. Secondly, the modelling subset is further divided into a training subset X_train and a validation subset X_valid. The training subset X_train is used to estimate model parameters. The validation subset X_valid is used to test the out-of-sample accuracy of the model, which aims to improve the generality of the model. The flowchart of this process is shown in Figure 1.
Optimization Problem Structure.
Taking the nonhomogeneous grey model with fractional-order accumulation as an example, the fractional order r in the FNGM is the parameter that needs to be optimized; r determines the way the original data are processed. The objective is to reach the minimum average absolute error on the validation set X_valid with respect to r, with which the FNGM can obtain excellent prediction performance. Therefore, the optimization problem of the fractional order r can be written as

\min_{r \in S} \; \frac{1}{|X_{valid}|} \sum_{x(j) \in X_{valid}} \left| \hat{x}(j) - x(j) \right|,    (13)

where x̂(j) denotes the FNGM forecast of x(j) and S is the search interval of r.
The Randomized Parameter Optimization.
For the nonlinear programming problem expressed in (13), traditional mathematical methods are usually difficult to use. Intelligent computing has become the mainstream of the current era, and the method of a random search for optimized parameters can solve this problem with low time consumption.
The random search algorithm is based on random sampling of the parameter space: it generates uniformly distributed random numbers in the search interval, calculates the objective function value for each sample, and keeps the sampling points with better objective values. The approximate optimal solution of the optimization problem can be obtained within a limited number of iterations.
This paper uses a random search algorithm to search for the optimal fractional order r of the FNGM. The algorithm is summarized in Algorithm 1.
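As a concrete illustration of Algorithm 1, a minimal Python sketch of the random search over r is given below. It reuses the hypothetical fngm_fit_forecast helper from the earlier FNGM sketch and uses the validation-set MAE as the objective; the names, bounds, and numbers are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def random_search_r(x_train, x_valid, n_iter=500, bounds=(0.01, 1.0), seed=0):
    """Random search for the fractional order r: sample r uniformly in `bounds`,
    fit the FNGM on x_train, forecast the validation window, and keep the r
    with the smallest validation MAE (Algorithm 1, steps (5)-(13))."""
    rng = np.random.default_rng(seed)
    best_r, best_mae = None, np.inf
    for _ in range(n_iter):
        r = rng.uniform(*bounds)
        forecast = fngm_fit_forecast(np.asarray(x_train, dtype=float), r,
                                     horizon=len(x_valid))
        err = mae(x_valid, forecast[-len(x_valid):])
        if err < best_mae:            # NaN/inf errors from bad draws never pass
            best_r, best_mae = r, err
    return best_r, best_mae

# Hypothetical usage with a 10/2 train/validation split of one rolling window
window = [26.4, 24.9, 27.8, 31.5, 36.9, 34.1, 30.2, 27.5, 26.0, 28.8, 33.3, 35.1]
r_opt, valid_mae = random_search_r(window[:10], window[10:])
print(r_opt, valid_mae)
```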
Complexity Analysis.
The numbers of training set samples, validation set samples, and algorithm iterations are denoted n_train, n_valid, and n_iter, respectively. The process of obtaining the optimal model is divided into five parts, analyzed in the following paragraphs.
Fractional-Order Accumulation.
For the particular case r = 1, there are no binomial coefficients in the accumulation; in the general case, computing x^{(r)}(k) requires k multiplications by the binomial coefficients. So, the time complexity T_1(n) of the fractional-order accumulation is T_1(n) = O(n_train^2).
Least Squares.
For (5), the operation Θ^T Θ involves a matrix of shape 2 × 2; it needs 4(n_train − 1) multiplications. The inverse (Θ^T Θ)^{-1} requires a constant number of multiplications, independent of n_train. The operation Θ^T ς multiplies matrices of shapes 2 × n_train and n_train × 1, respectively, taking 2(n_train − 1) multiplications; similarly, multiplying (Θ^T Θ)^{-1} by Θ^T ς needs 4 multiplications. So, the complexity of the least squares step is the sum of these multiplication counts, which is linear in n_train, and the time complexity is T_2(n) = O(n_train).
Time Response Function.
Consider the number of multiplications in (6); the time response function is evaluated once per time point, so its time complexity T_3(n) is linear in the number of evaluated points, T_3(n) = O(n_train + n_valid).
Fractional-Order Inverse Accumulation.
The time complexity T_4(n) of the inverse accumulation operation is similar to that of the fractional-order accumulation, and it can be expressed as T_4(n) = O((n_train + n_valid)^2).
Random Search Algorithm.
Every iteration includes one construction of the model, and the algorithm executes a cyclic process. So, the total time complexity T(n) of obtaining the optimal model is

T(n) = n_iter [ T_1(n) + T_2(n) + T_3(n) + T_4(n) ].    (19)

According to (19), the total time complexity of obtaining the optimal model is related to n_train, n_valid, and n_iter, but in our small-sample time series forecasting work, n_train and n_valid are much smaller than n_iter, so the time complexity of the entire work is mainly determined by n_iter.
Case Study
In this section, we use the data set of natural gas consumption in the US power sector to verify the FNGM optimized by the random search algorithm. In this case, we compare the results obtained by the models mentioned in Section 2 and the prediction method given in Section 4.2. In the first subsection of this part, several indicators for evaluating model performance are given to facilitate the measurement of prediction accuracy between models. The forecasting results are discussed in the final subsections.
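The indicators referred to throughout the results are the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). Their standard definitions, which the comparisons below are assumed to follow, are

\mathrm{MAE} = \frac{1}{m}\sum_{j=1}^{m}\left|\hat{x}(j)-x(j)\right|, \qquad
\mathrm{MAPE} = \frac{1}{m}\sum_{j=1}^{m}\left|\frac{\hat{x}(j)-x(j)}{x(j)}\right| \times 100\%, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{j=1}^{m}\left(\hat{x}(j)-x(j)\right)^{2}},

where x(j) are the observed values, x̂(j) the corresponding forecasts, and m the number of evaluated points.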
Forecasting Method.
44 months of data on natural gas consumption in the US power sector (from Jan 2017 to Aug 2020) are collected. The results of the analysis of the data are shown in Figure 2, and it can be seen that the data present a clear quarterly trend. Consumption is highest in the autumn period of each year, and it shows an upward trend year by year. Under the influence of this seasonal pattern, the traditional direct modelling and forecasting method obviously cannot achieve good results. In our work, we use a rolling forecasting method for the time series data [33]. The specific process is shown in Figure 3. Such methods are widely used in the research of various forecasting models.
To illustrate the forecasting method represented by this figure, we label the data set of natural gas consumption in the US power sector as X = {x(1), x(2), ..., x(n)}, n = 44, set τ = 12 as the length of the rolling window, and divide X into n − τ data subsets: the first subset is T_1 = {x(1), x(2), ..., x(τ)}, the second subset is T_2 = {x(2), x(3), ..., x(τ + 1)}, and the last subset is T_{n−τ} = {x(n − τ), x(n − τ + 1), ..., x(n − 1)}. In each forecasting step, the first ten data points of a subset form the training subset, and the last two data points form the validation subset. For each subset T_i (i = 1, 2, ..., n − τ), the time series nodes following the subset in the original data set X are predicted by the randomly optimized model. The first-step forecasts of the subsets together form the one-step forecasting result of the rolling forecast, the second-step forecasts of the subsets correspond to the two-step prediction result, and so on. According to the above description, the forecasting result of step ζ is composed of n − τ + 1 − ζ predicted values from the subsets, with model parameters adjusted by random search.
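A schematic Python sketch of this rolling multi-step scheme is shown below. It reuses the hypothetical fngm_fit_forecast and random_search_r helpers from the earlier sketches; the 12-point window, the 10/2 split, the synthetic monthly series, and the choice to produce the final forecast from the full window are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rolling_forecasts(series, window=12, train_len=10, max_step=3):
    """Slide a window of length `window` over the series; in each window tune r
    by random search on a train/validation split, then forecast up to `max_step`
    points beyond the window. Forecasts are grouped by step size."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    results = {step: [] for step in range(1, max_step + 1)}
    for start in range(n - window):
        sub = series[start:start + window]
        r_opt, _ = random_search_r(sub[:train_len], sub[train_len:])
        forecast = fngm_fit_forecast(sub, r_opt, horizon=max_step)[-max_step:]
        for step in range(1, max_step + 1):
            target = start + window + step - 1          # index being forecast
            if target < n:
                results[step].append((target, forecast[step - 1]))
    return results

# Hypothetical usage on a synthetic 44-month seasonal series
rng = np.random.default_rng(3)
months = np.arange(44)
consumption = 900 + 250 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 30, 44)
preds = rolling_forecasts(consumption)
print(len(preds[1]), "one-step and", len(preds[3]), "three-step forecasts")
```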
With this forecasting method, the overall prediction scheme of this work is shown in Figure 4.
Forecasting Results of FNGM with Different Steps.
In this section, the forecasting results of the first three steps of the FNGM after random search and parameter tuning are used for comparative analysis, as shown in Table 2. The MAE, MAPE, and RMSE of the one-step forecasting are 24.45, 2.58%, and 35.26, and all of them are smaller than those of the other step sizes. It is worth noting that the metric values of one-step forecasting are smaller than those of two-step forecasting, and the metric values of two-step forecasting are smaller than those of three-step forecasting. We compared the forecasting results under different step sizes in Table 2, and detailed plots are shown in Figure 5, which illustrate that the forecasted curves move farther from the original data as the step size increases. It can be concluded that the shorter the step size, the higher the prediction accuracy obtained in the rolling prediction mechanism.

Algorithm 1: The process of optimizing parameters by the random search algorithm.
(1) Input the original data sequence X, the model mdl whose optimal parameter is to be searched, the parameter optimization interval S, and the number of iterations n_iteration.
(2) Divide the data set into two parts: training set X_train and validation set X_valid.
(3) Define the objective function as the mean absolute error on the validation set X_valid (Eq. (13)).
(4) Initialize the objective function judgment value MAE_min.
(5) for i = 1; i ≤ n_iteration; i = i + 1 do
(6) For mdl, randomly select a set of parameter values value_params with uniform distribution in the interval S.
(7) Pass the training set X_train into the model to train mdl and predict the time series nodes where the validation set X_valid is located, obtaining the forecast result X̂.
(8) Calculate the MAE of the forecast result X̂ against the validation set X_valid as MAE_validation.
(9) if (MAE_validation < MAE_min) do
(10) params = value_params
(11) Update the judgment value MAE_min = MAE_validation
(12) end
(13) end
(14) return params
(15) Output the optimal parameter params.
Forecasting Results in Comparison with Other Grey System Models.
In order to further evaluate the accuracy of the rolling forecast of the FNGM with random search, we selected the remaining 11 models mentioned in Section 2 for comparative analysis. It should be mentioned that the GM, NGM, DGM, and NDGM are used in the rolling forecasting without optimized nonlinear parameters.
Among these 12 forecasting models, the prediction performances of the FNGM, FGM, FDGM, NIPGM, and NIPDGM are better than the others. The forecasting results of these five models are shown in Figure 6, which includes the prediction comparison for three different step sizes. It can be clearly observed that the forecasting results of the FNGM are better than those of the other four models at every step size.
To visually show the prediction performance of the rolling prediction mechanism after random search, the evaluation metrics mentioned in Section 4.2 are used to quantify the effectiveness of the forecasting results of the models. Table 3 shows the evaluation metrics of all models for the first three steps of prediction. In the one-step forecasting, all the metrics of the FNGM are smaller than the other models' one-step results; the MAPE values of several models, such as the FGM, FDGM, and NIPDGM, are comparatively small, but they are still more than three times that of the FNGM. The prediction results of the FNGM at the other steps are also reliable. In particular, the three-step performance of the FNGM is better than the one-step performance of the other models.
Considering the prediction results obtained with the different accumulations, among the forecasting results of the NGM under the three accumulations, the FNGM achieves the best prediction effect, while the prediction accuracy of the NGM and NIPNGM is not good enough. This illustrates that fractional-order accumulation forecasting models are suitable for the application of natural gas consumption in the US power sector. In addition, the results in Table 3 indicate that the longer the step size, the worse the prediction accuracy.
Forecasting Results in Comparison with Different Benchmarking Models.
To further validate the performance of the fractional nonhomogeneous grey model in the application of natural gas consumption, the autoregressive model (AR) [34] and artificial neural networks (ANN) [35] are selected as benchmarking models to compare with the FNGM. The metrics are calculated from the forecasting results illustrated in Figure 7. The MAE, MAPE, and RMSE of the AR model are 839.91, 90.23%, and 1172.76; they are much worse than those of the FNGM, which shows that the AR model is not appropriate for this application. Although the ANN achieves satisfactory prediction results, the FNGM still retains its superiority in short-term prediction.
4.6. Brief Summary. According to the above results, first of all, the FNGM is more complex than the other fractional-order accumulation models because it is nonhomogeneous with fractional-order accumulation. Such properties make the FNGM nonlinear and more flexible. The flexibility of the model is reflected in the more general form of the FNGM described in Section 2.2. These improvements give the model a stronger ability to describe complex data.
Regarding the generalization performance of the model, the in-sample cross-validation improves the fitting performance, so effective prediction results can be obtained. The other grey system models may not have as good a formulation or may not adopt as good an optimization algorithm, and the benchmarking models may not use in-sample cross-validation to improve their performance.
To sum up, the model used in this paper is more flexible and has stronger nonlinear properties; the use of data division and a validation set gives it stronger generalization performance; and random optimization enables the FNGM to obtain well-tuned nonlinear parameters. According to the application results, the FNGM has the best prediction performance on the data set of natural gas consumed by the US power sector. This implies that the research in this paper can be extended to similar natural gas energy prediction problems.
Conclusions
This paper uses a random search algorithm to estimate the parameters of the forecasting model and applies the rolling forecasting modelling mechanism to achieve accurate forecasting of time series data. First, we transform the parameter optimization problem into a nonlinear programming problem by constructing a nonlinear objective function reflecting the performance of the proposed model on the validation subset. Secondly, the rolling forecast modelling mechanism is used to forecast the natural gas consumption of the US power sector. By comparison with the other eleven forecasting models, the results show that the step size influences the forecasting accuracy, and the accuracy becomes lower with more forecasting steps. The FNGM obtained by random search has excellent performance in forecasting natural gas consumption, which illustrates that the FNGM can be used as a reliable tool for studying clean energy consumption.
Limitations of this work should also be mentioned. First, the random optimization only has weak convergence, and thus the preconditions (such as initial points and bounds for the variables) may require more prior knowledge. However, such limitations widely exist in heuristic algorithms. Second, the structure of the fractional nonhomogeneous grey system model is not complicated enough, which may limit its flexibility in dealing with more complex time series.
Data Availability
All the data sets are available in the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science",
"Engineering"
] |
Antiviral Effect of Ribonuclease from Bacillus pumilus against Phytopathogenic RNA Viruses
Background: Viruses can cause different diseases in plants. To prevent viral infections, plants are treated with chemical compounds and antiviral agents. Chemical antiviral agents usually have narrow specificity, which limits their wide application. An alternative antiviral strategy is associated with the use of microbial enzymes, which are less toxic and are readily decomposed without accumulation of harmful substances. The aim of this work is to study the effect of Bacillus pumilus ribonuclease on various phytopathogenic viruses, with a specific focus on the ability of the enzyme to eliminate them from plant explants in vitro. Materials and methods: Extracellular ribonuclease of B. pumilus is tested as an antiviral agent. To study the antiviral effect of the RNase depending on concentration and the time of application, several plant-virus model systems are used. Virus detection is conducted by serological testing and RT-PCR. Results: Bacillus pumilus ribonuclease possesses antiviral activity against the plant RNA viruses RCMV (red clover mottle virus), PVX (potato virus X), and AMV (alfalfa mosaic virus). The maximum inhibitory effect against actively replicating viruses is observed when plants are treated with the enzyme at a concentration of 100 μg/ml prior to infection. In the case of local necrosis, ribonuclease at a concentration of 1 μg/ml completely inhibits the development of RCMV on bean plants. The enzyme is able to penetrate plants and inhibit the development of viral infection; the inhibitory effect on untreated surfaces is on average about 20% lower. It is also found that B. pumilus ribonuclease protects apical explants of sprouts of potato tubers from PVM and PVS viruses. Conclusion: B. pumilus ribonuclease possesses antiviral activity against plant RNA viruses and produces virus-free plants in apical meristem culture.
Introduction
Viruses, along with bacteria and fungi, can cause different diseases in plants. Viral infections lead to plant damage and a significant drop in crop yield, and eventually affect the quality of food products. The most significant economic losses due to viral infections are linked with potatoes, fruits and berries, as well as decorative flowers and perennial grasses.
Viral diseases in plants are difficult to treat during the vegetative period of growth. The most effective antiviral strategy relies on using healthy planting material, obtaining resistant plant varieties, and employing the techniques of in vitro culturing in combination with chemo- and thermotherapy and the implementation of phytosanitary norms [1].
Currently, cultures of apical meristem are most frequently used for the production of virus-free plants. The process of obtaining healthy plants from meristems is laborious and time-consuming. Because of the weak capacity of meristems for regeneration and the low overall yield of healthy plants, it is necessary to isolate multiple meristem explants.
Plants are treated with chemical compounds and antiviral agents such as ribavirin, interferon, benomyl, thiouracil, aziridines, and others to prevent viral infections [2]-[4]. However, chemical antiviral agents usually have narrow specificity and can be toxic to plants, animals, and humans, which limits their wide application.
The emerging alternative antiviral strategy is associated with the use of microbial enzymes. They are less toxic and are readily decomposed without accumulation of harmful substances. It is known that some enzymes capable of breaking down nucleic acids have antiviral activity; high resistance to pathogens has been observed in transgenic plants expressing the bacterial nuclease of Serratia marcescens [5]. Bacillus cereus ZH14 was found to produce a new type of antiviral ribonuclease, which is secreted into the medium and is active against tobacco mosaic virus [6] [7]. Recent studies demonstrate the ability of artificial ribonucleases (aRNases, small organic RNA-cleaving compounds) to inactivate RNA viruses via the synergetic effect of viral RNA cleavage and disruption of the viral envelope [8]. Taken together, these data allow us to explore the possibility of using targeted bacterial enzymes to inhibit plant pathogenic viruses.
The objective of this work is to study the effect of B. pumilus ribonuclease on various phytopathogenic viruses with specific focus on the ability of enzyme to eliminate them from plant explants in vitro.
Materials and Methods
Extracellular ribonuclease of B. pumilus 7P/3-19 (RNase Bp, 12.3 kDa, EC 3.1.27.2) was tested as an antiviral agent. This endonuclease is thermostable, with a specific activity of 1.3 × 10^6 U/mg protein. The enzyme was produced and purified at the Riga Pharmaceutical Factory according to the previously described technique [9]. RNase activity was determined from the RNA hydrolysis products soluble in 4% HClO4 supplemented with 12% uranyl acetate [9]. One unit of activity was defined as the amount of enzyme necessary to increase the OD260 by 1 optical unit per 1 ml of enzyme solution in 1 h.
To study the antiviral effect of RNase Bp several plant-virus model systems were used (Table 1).
Plants were grown under constant controlled growth conditions at 20°C ± 2°C and 60%–80% relative humidity, with a daily photoperiodic cycle of 16 h light (9000 lux irradiation power) and 8 h dark, in pots with perlite, except tobacco plants, which were grown in soil. 40–45-day-old tobacco plants of cultivar Samsun were used for alfalfa mosaic virus (AMV) infection with the sap of infected plants. AMV-infected leaves were disintegrated by French press in 0.025 M phosphate buffer, pH 8.0 (100 g leaves per 50 ml buffer). The sap was applied to the leaves of healthy plants in the presence of an abrasive (silicon carbide). After 14 days, two fully developed leaves of tobacco were collected and analyzed for the presence of virus.
10-day-old pea plants were used for red clover mottle virus (RCMV) infection, which was performed with the sap of infected plants. At 10 days post-infection, the plants were collected and the sap was serologically tested for the presence of viral antigen. Five pots of peas with 5–6 plants each were used for each experiment. In beans, RCMV caused severe damage: leaf surface tissue was destroyed and areas of local necrosis were formed.
The sap of infected plants was prepared to infect the leaves of tobacco plants cultivar Samsun with potato virus X (PVX).Leaves were disintegrated by French Press; resulting sap was diluted with distilled water (1:1) and used for the infection of 40 -45 days old tobacco plants.
A serological test was used to determine the RCMV as described in [10]. Determination of AMV was performed similarly to RCMV. Viral presence was determined on day 4–5 post infection. Leaves of the second growth were analyzed on day 10–12 after infection.
PVX in tobacco leaves was detected by an agglutination reaction with rabbit antibodies to the virus. The gamma-globulin fraction of rabbit serum was extracted twice by precipitation with a 20% aqueous solution of polyethylene glycol (M.W. 6 kDa), followed by dialysis against 0.01 M K-phosphate buffer (0.1 M NaCl), pH 7.4 to 7.8. The sap was incubated for 10 min at 42°C and centrifuged for 20 min at 5000 rpm (K-23 centrifuge, Beckman). The supernatant was serially two-fold diluted, and the solution of antibodies to potato virus X in 0.025 M phosphate buffer, pH 8.0, was added. The agglutination reaction was monitored under the microscope. The average dilution at which the agglutination reaction took place was determined from five independent experiments.
We studied the dose-dependent effect of RNase Bp on RCMV replication. On the 14th day of growth, bean primary leaves were treated with RNase Bp (in the range of concentrations 1–100 μg/ml), followed by treatment with carborundum, and then inoculated with virus (40 μg/ml). For each enzyme concentration tested, at least 10 primary bean leaves were used. Untreated leaves, or the left side of the leaf when the right side was treated with RNase, were used as a control. At 4–5 days post-infection, the number of local necrosis loci was determined and statistical analysis was performed.
To study the effect of RNase depending on the time of application, plants were treated with the enzyme (100 μg/ml) each day starting from 4 days prior to infection at 24-hour intervals, at the time of infection, and 24 h after viral infection. In a separate experiment, seeds were treated with RNase by soaking in RNase solution (10–100 μg/ml) for 2 hours before planting.
To study enzyme transport in infected plants, leaves of the first, second, or third tier of 10-day-old pea plants (5 pots with 5–6 plants each for every tier) were treated with RNase Bp (100 μg/ml) for 24 hours prior to infection. Only the leaves of the 2nd tier were infected with virus in all variants. At 10 days post-infection, sap was collected and analyzed for the presence of RCMV.
The initial diagnosis of potato viruses was performed by ELISA using Adgen kits (Neogen Europe, Scotland) according to the manufacturer's instructions. Apical explants were isolated from the sprouts of potato tubers No. 3-23-2 (interbreeding 91.29/2 × Ausoniya) infected with potato virus M (PVM) and potato virus S (PVS). Dissection was done with a needle under a binocular microscope at 24× magnification. Plant tissue containing the apical meristem, the cone, and 4 primordial leaves was isolated. Samples were sterilized with 0.1% mercuric chloride solution and then washed three times with sterile distilled water. Explants were maintained on MS medium [11] supplemented with RNase Bp at different concentrations; the enzyme was introduced into the medium through a Millipore filter. Plants were grown in vitro at 24°C–25°C with a 16-hour photoperiod (illuminance 2000 lux). Virus detection was conducted by reverse transcription PCR (RT-PCR) after three cycles of microclonal proliferation of the regenerated plants. RNA was isolated from plants with the "Ribo-sorb" kit (Inter Lab Service, Russia). For RT-PCR, reagents from Sib Enzyme Ltd. (Russia) were used. The reaction was performed in 25 μl containing 2.5 μl of 20 mM dNTP solution, 1.25 μl of 10× RT buffer, 1.25 μl of 10× PCR buffer, 0.2 μl of 5 U M-MuLV reverse transcriptase, 0.2 μl of 5 U Taq DNA polymerase, 0.5 μl of 50 μM sense and antisense primers (for PVM or PVS), and 2 μl of the total RNA sample. For PVM detection, the following oligonucleotide primers were used: forward 5'-gccacatcygaggacatgat-3', reverse 5'-gtgagctcsggaccattcat-3'; for PVS detection: forward 5'-gaggctatgctggagcagag-3', reverse 5'-aatctcagcgccaagcatcc-3' [12]. The amplification reaction was performed on a Mastercycler Gradient (Eppendorf, Germany) under the following conditions: 1 cycle: 37°C, 60 min; 1 cycle: 94°C, 5 min; 42 cycles: 94°C, 1 min, 60°C, 1 min, 72°C, 1 min; 1 cycle: 72°C, 5 min; 4°C, storage. RT-PCR products were analyzed by electrophoresis in 2% agarose gels in 1× TBE buffer with ethidium bromide at a concentration of 0.5 μg/ml. The sizes of the amplified fragments (PVM, 524 bp; PVS, 738 bp) were estimated by mobility comparison with DNA molecular size markers (Sib Enzyme Ltd, Russia).
Statistical analysis was performed using the software package SPSS 12.0.Standard deviation (σ) was calculated and the results were considered significant when σ ≤ 10%.
The Effect of RNase Bp on Pea Plants Infected by Plant RNA-Virus RCMV
Plants were treated with the bacterial enzyme at 1–4 days prior to infection, at the time of infection (enzyme solution and virus were applied simultaneously), and one day post infection. The enzyme was used in the range of concentrations from 10 to 1000 μg/ml (Figure 1); in a separate experiment, seeds were treated with RNase at the same concentrations (Figure 2).
It has been experimentally established that pretreatment of pea plants with the enzyme solution at different concentrations and at different times before infection with RCMV significantly (P ≤ 0.05) suppressed viral spread. Inhibition increased as the interval between plant pre-treatment with the bacterial RNase and viral infection decreased, and reached the maximum value (75%–93%) for all concentrations of the enzyme when RNase treatment was performed one day prior to infection. When RNase was applied simultaneously with the virus (time 0), the inhibition decreased to 20%–78% depending on the concentration of the enzyme. Treatment of plants with the enzyme after infection with viruses was found to be less efficient (0%–25% inhibition) compared with plants pre-treated with the enzyme. In the range of enzyme concentrations tested, the maximum inhibition corresponded to a concentration of 100 μg/ml. Increasing the enzyme concentration to 1000 μg/ml did not lead to increased inhibition of viral infection. Therefore, an RNase concentration of 100 μg/ml was chosen for further experiments.
Pre-treatment of pea seeds with bacterial RNase resulted in 24% - 62% inhibition of viral spread; the maximum inhibitory effect (65%) was observed when seeds were treated with RNase at 100 ug/ml (Figure 2).
Thus, the data obtained for RCMV indicate that the bacterial enzyme is effective when applied prior to viral infection and can potentially be used as a prophylactic antiviral agent. It should be emphasized that pre-treatment of seeds is simple and feasible in terms of agricultural practice and also contributes to the development of plant resistance to viral infection.
Mechanisms of Penetration and Transport of RNase B. pumilus in Plants
Next, the mechanisms of penetration and transport of RNase Bp in plants infected with RCMV were addressed.
To study the relationship between the enzyme application site and the distribution of the antiviral effect in pea plants, RNase Bp (100 ug/ml) was applied to the leaves of one of the three tiers of pea plants one day prior to infection. One day later, only the leaves of the second tier were infected. The content of viruses was determined in the sap of each variant (see Table 2).
Viral infection was suppressed in all three experimental groups: in the leaves of the second tier by 90%, and in the leaves of the first and third tiers by 70% and 64%, respectively (Table 2); i.e., the maximum inhibition was observed for the tier whose leaves were treated both with the enzyme and with the virus. The data obtained indicate that bacillary RNase is able to penetrate plant tissue and be transported both upwards and downwards. The difference in inhibition of RCMV spread after treatment of pea plants with the enzyme a day prior to infection could be due to different transport speeds through phloem and xylem and dilution of the enzyme in plant tissue, as well as its partial inactivation over several days by non-specific plant inhibitors or by plant proteases.
Inhibitory Effect and Duration of Enzyme Action Depending on the Site of RNase Application on Various Parts of Plants
We studied the duration of RNase antiviral action after infection. The maximal concentration of RCMV in infected pea plants was observed on the 15th day post infection. The effect of the enzyme was studied on the 10th and 21st days after viral infection (Table 3). The data showed that RNase is active for a long time. On day 10, virus was serologically detected at a dilution of 1.2 (average value) compared to a dilution of 3.5 for the control plants (without enzyme treatment). On day 21 post infection, virus was detected in RNase-treated plants at a dilution of 1.8 and in the control plants at 7.0. It is possible that the difference between treated and control groups on day 21 post infection is due to continuous reinfection of plants with virus. These data indicate that RNase Bp has a pronounced effect on viral spread during the initial phase of infection, but an inhibitory effect of the enzyme was also observed in secondary infected leaves (on the 21st day). *RNase Bp (100 ug/ml) was applied to the leaves of one of the three tiers of pea plants one day prior to infection; only the leaves of the second tier were infected. **The content of viruses in control plants, which were infected but not treated with the enzyme, was taken as 100%.
The Effect of Rnase B. Pumilus on Necrosis Formation in Bean Plants
The effect of RNase Bp on necrosis formation in bean plants (a model for localized infection of plants with virus), depending on the concentration of enzyme, its site of application, and the duration of plant treatment, was studied. The right halves of fully developed primary leaves of beans were treated with RNase Bp at concentrations of 1 - 1000 ug/ml four days, two days, and 1 hour prior to infection, at the time of infection, and one hour and 1 day post infection. The left halves of the leaves were treated with distilled water. Both sides of these leaves and control leaves on other plants were infected with RCMV. The number of developed necrotic loci was determined after 5 days on both sides of the leaves (Figure 3). The right side of the leaf was treated with the enzyme in all experimental groups, and complete suppression of viral infection was observed when plants were treated with the enzyme prior to and at the time of infection (Time 0). When leaves were treated with RNase Bp (1000 ug/ml) 1 hour post infection with RCMV, the number of necrotic loci was reduced by 57%. No change in the number of necrotic loci was detected after treatment with RNase Bp (1000 ug/ml) one day post infection. It was found that RNase Bp at a concentration of 1 ug/ml completely prevented RCMV-mediated necrotic loci formation; RNase Bp treatment was most effective when the enzyme was introduced 1 hour prior to infection or at the time of infection. A higher concentration of the enzyme (10 ug/ml) prevented necrotic loci formation after treatments performed four days, two days, and one hour prior to infection. Therefore, RNase Bp protected leaves from necrotic loci formation most efficiently when treatment took place either prior to viral infection or during the early stages of plant infection by phytopathogenic viruses. Necrotic loci formation was also suppressed on the left side of the leaf, which was infected with the virus but not treated with the enzyme. Most likely, this is a result of RNase Bp transport inside the plant. The structural features of RNase Bp can contribute to the spread of the enzyme inside the plant, suggesting that RNase translocation occurs by spontaneous inversion through the membrane [13]. This does not exclude other mechanisms of plant defence that are activated by the RNase.
Treatment of the right side of the leaf with the lower concentration (1 ug/ml) of the enzyme inhibited necrosis formation on the left side by 28% - 51% (treatment 2 - 4 days prior to infection). Application of the enzyme 1 h before infection resulted in complete prevention of necrotic loci formation on the left side of the leaves. Importantly, when the right half of the leaf was treated with increasing concentrations of the enzyme (1, 100, and 1000 ug/ml) at the time of infection, the suppression of necrotic loci formation on the left side of the leaf increased correspondingly (52%, 70% and 100%, respectively). Thus, RNase Bp is able to prevent necrotic loci formation on bean plant leaves even in the case of uneven enzyme distribution during application.
The Effect of RNase Bp on the PVX Viral Spread in Tobacco Plants
To study the effect of RNase Bp on the PVX virus in tobacco plants, plant leaves were treated with the enzyme (100 ug/ml) at 2 days and 30 min prior to infection (Figure 4).
The content of viruses was determined in the sap from plant leaves 5 days after infection. In both variants, almost complete suppression of viral infection (93%) was observed. To compare the antiviral effect of RNase Bp with other RNases, we used pancreatic RNase at the concentration that provides the same catalytic activity as the bacillary RNase. The inhibitory effect of RNase A was 92% - 94% (30 min and 2 days prior to infection, respectively).
On day 13 we determined the virus content in secondary infected tobacco leaves, which were not inoculated at the time of infection, so the presence of PVX in them was due to viral spread. Viral spread in these leaves was inhibited by 88% after RNase Bp treatment and by 86% after treatment with RNase A. Thus, both enzymes were able to prevent PVX spread in primary and secondary infected tobacco leaves, confirming that RNases could be used to prevent the spread of RNA-viruses.
The Effect of RNase BP on AMV Virus
To study the effect of RNase Bp on AMV virus, tobacco leaves were treated with the enzyme 2 days prior to infection (the enzyme was applied 3 times a day) (Figure 5). RNase treatment inhibited viral infection by 60% on day 6 after the infection and by 41% on day 13 (in secondary infected leaves); no inhibition was observed on day 21. Treatment of tobacco leaves with a higher concentration of enzyme (1000 ug/ml) did not increase the efficiency of inhibition. Thus, the inhibitory effect of RNase Bp against AMV is weaker than against the other RNA-containing viruses, so the efficiency of inhibition depends on the virus type. Overall, the bacterial RNase showed antiviral activity against RCMV, PVX and AMV viruses. The maximal inhibitory effect against systemically replicating viruses was achieved when the enzyme was applied 1 - 2 days before infection at an enzyme concentration of 100 ug/ml; thus, RNase Bp treatment is most effective at the initial stages of viral plant infection. RCMV infection was suppressed by 91%, and PVX and AMV infections were suppressed by 93% and 60%, respectively. It is possible that the differences in the inhibitory effect of RNase Bp against the various viruses are due to peculiarities of the viruses' structure. RNase Bp at a concentration of 1 ug/ml completely inhibited necrotic loci formation in hypersensitive RCMV-infected bean plants. The reason why systemic infection caused by RCMV was inhibited by RNase at a concentration of 100 ug/ml, while necrosis formation caused by the same virus was prevented at a concentration of 1 ug/ml, may be related to the nature of the two types of viral disease. According to our data, the enzyme was transported in plants both upwards and downwards and prevented the spread of viral infection. Note that the maximum antiviral effect was observed when the infected plant surface was treated with the enzyme; the effect was less pronounced for plant surfaces left untreated. In RCMV-infected peas, the antiviral effect of RNase Bp lasted for 3 weeks. It is important to mention that guanyl-specific RNase Bp is a compact globular protein with a high content of hydrophobic amino acids, which ensures its high thermal stability over a wide pH range of 2.5 - 11.0 [14].
Regeneration of Potato Plants from Apical Explants of Tuber Sprouts
Potato tubers already infected with potato virus S (PVS) and potato virus M (PVM) were used for this study; ELISA testing had previously shown that all the experimental tubers were mix-infected with PVS and PVM. Apical meristem explants were isolated from tuber sprouts and cultivated for 6 - 8 months on a nutrient medium containing RNase Bp until plants were raised. Pathogens were then tested by RT-PCR. In this way the effect of RNase Bp treatment on virus elimination during the in vitro regeneration of potato plants from apical explants of tuber sprouts was assessed. The results are shown in Table 4.
Supplementation of the culture medium with RNase Bp promoted elimination of viruses. According to our data, PVS-free plants were obtained with all concentrations of the enzyme; the number of PVS-free plants was 1% - 37% higher compared to the control group. The yield of PVM-free plants was 6% - 9% higher compared to the control (completely infected explants). An inverse relationship between the RNase Bp content of the nutrient medium and the number of uninfected samples obtained was established. The maximum yield of PVS-free plants was observed at the enzyme concentration of 1 ug/ml. Elimination of PVM from potato plants occurred only when RNase Bp was present in the nutrient medium at concentrations of 1 and 100 ug/ml, and plants completely free of both PVS and PVM were obtained only at the concentration of 1 ug/ml; at the enzyme concentration of 100 ug/ml this result was not achieved. Thus, we concluded that the optimal treatment for removal of viral pathogens in potato apical meristem culture is RNase Bp at a concentration of 1 ug/ml. Both PVS and PVM belong to the genus Carlavirus, whose members are known to be difficult to eliminate. The relatively low efficiency of PVM elimination may be associated with a deeper presence of viral particles in the apical meristem tissue: according to electron microscopy data, PVM is localized in the zone of 70 - 80 microns and PVS at 90 - 95 microns [15]. Such localization of PVM makes it difficult to produce virus-free forms. We found an effective antiviral effect at low enzyme concentrations, at which the conditions for penetration and transport of RNase Bp in plant tissue are apparently optimal.
In work published by others, the stimulating effect of several bacterial enzymes on the development of potato plants in vitro has been described. For example, application of the endonuclease of Serratia marcescens intensifies regeneration of potato plants from apical explants and inhibits proliferation of DNA- and RNA-containing phytoviruses [5]. Antiviral action of endogenous plant ribonucleases, localized on the surface of plant cells in the apoplast, has also been reported [16]. It is known that virus inactivation occurs through depolymerisation of nucleic acids by nucleases when nascent viral DNA or RNA is released from the protein coat. According to our data, RNase Bp shows antiviral activity at the initial stages of plant infection by viruses. One of the ways viruses penetrate eukaryotic cells is pinocytosis [17]; the mechanism of antiviral action of RNases may include the destruction of viral RNA by ribonucleases during this process, resulting in inactivation of immature but infectious virions. It is known that the inhibitory effect is associated with the formation of a virus-ribonuclease complex followed by cleavage of the viral RNA by the enzyme [18]. We hypothesize that RNase Bp has a similar mechanism of action.
Conclusion
In conclusion, the data obtained for different plant RNA-viruses indicate that the bacterial enzyme is effective when applied prior to viral infection and can potentially be used as a prophylactic antiviral agent. It is important to note that pre-treatment of seeds is simple in agricultural practice and can be used to protect plants against viruses. Furthermore, B. pumilus ribonuclease intensifies regeneration of potato plants from apical explants and inhibits proliferation of RNA-containing phytoviruses. This confirms that RNases can be used to prevent the spread of RNA-viruses. The work was performed within the framework of scientific activities (Project No. 14-83 0211/02.11.10083.001) in accordance with the Russian Government Program of Competitive Growth of Kazan Federal University.
Figure 1 .
Figure 1. The effect of RNase Bp on RCMV spread in pea plants depending on the time of enzyme treatment. 0: viral infection and enzyme treatment were conducted simultaneously; −4, −3, −2, −1: number of days passed prior to infection; +1: number of days after infection. Viral spread was evaluated two days after infection. Viral spread in the control plants, infected with RCMV but not treated with the enzyme, was taken as 100%.
Figure 2 .
Figure 2. Resistance to RCMV of pea plants germinated from seeds treated with RNase Bp at different concentrations. The content of viruses in control plants, which were not treated with the enzyme, was taken as 100%. Asterisk indicates P ≤ 0.05.
Figure 3 .
Figure 3. The inhibitory effect of RNase Bp on RCMV infection depending on the duration of treatment and the site of application (1: inhibition of virus on the right side of the leaf, treated with RNase Bp; 2: inhibition of virus on the left side of the leaf, treated with distilled water) and the concentration of enzyme (a: 1 ug/ml; b: 10 ug/ml; c: 100 ug/ml; d: 1000 ug/ml).
Figure 4 .
Figure 4. Antiviral effect of RNase Bp (1) (100 ug/ml) and pancreatic RNase (2) (300 ug/ml) on PVX virus spread in tobacco plants.A-enzyme treatment was performed 2 days prior to infection with PVX, virus content was determined 5 days after the infection; B-enzyme treatment was carried out 30 minutes before the infection with PVX, virus content was determined 5 days after the infection; C-enzyme treatment was carried out 30 minutes before the infection with PVX, virus content was determined 13 days after the infection.
Figure 5 .
Figure 5. Inhibitory effect of RNase Bp on AMV virus, replicating systemically in tobacco plants: 1-the plants treated with the enzyme in the concentration of 100 ug/ml; 2-the plants treated with the enzyme in the concentration of 1000 ug/ml.
Table 2 .
Effect of RNase Bp (100 ug/ml) on the spread of RCMV virus in systemic infection of pea plants after the treatment of leaves of different tiers with the enzyme. *
Table 3 .
Duration of antiviral action of RNase Bp against RCMV in systemic infection of pea plants.
*The content of viruses in control infected plants not treated with the enzyme was taken as 100%.
Table 4 .
The effect of RNase Bp on the process of potato viruses' elimination. | 6,567.4 | 2015-11-13T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Bioethical Implications of Globalization: An International Consortium Project of the European Commission
The BIG project looks at some of the ethical concerns surrounding globalization and health.
The term "globalization" was popularized by Marshall McLuhan in War and Peace in the Global Village. In the book, McLuhan described how the global media shaped current events surrounding the Vietnam War [1] and also predicted how modern information and communication technologies would accelerate world progress through trade and knowledge development. Globalization now refers to a broad range of issues regarding the movement of goods and services through trade liberalization, and the movement of people through migration.
Much has also been written on the global effects of environmental degradation, population growth, and economic disparities. In addition, the pace of scientific development has accelerated, with both negative and positive implications for global health. Concerns for national health transcend borders, with a need for shared human security and an enhanced role for international cooperation and development [2]. These issues have significant bioethical implications, and thus a renewed academic focus on the ethical dimensions of public health is needed.
Future developments in science and health policy also require a firm grounding in bioethical principles. These core principles include beneficence; nonmaleficence (to do no harm); respect for persons and human dignity (autonomy); and attention to equity and social justice. According to the World Health Organization [3], global ethical approaches should (1) monitor and update ethical norms for research, as necessary; (2) anticipate ethical implications of advances in science and technology for health; (3) apply internationally accepted codes of ethics; (4) ensure that agreed standards guide future work on the human genome; and (5) ensure that quality in health systems and services is assessed and promoted.
The Bioethical Implications of Globalization Project
The Bioethical Implications of Globalization (BIG) Project is a 42-month dialogue funded by the European Commission that involves a series of expert panel discussions on four specific globalization and health subject areas: (1) mobility of people; (2) technological globalization; (3) liberalization of trade; and (4) new global health threats (bioterrorism). In addition, BIG includes a multiple-round Delphi Process (Box 1) to solicit input on these issues from a broad, interdisciplinary audience.
The project's purpose is both to raise short-term, practical considerations about globalization and health and
Box 1. The Delphi Method
Delphi is a group communication technique designed to obtain opinions from a panel of selected experts on specific issues through questionnaires to be completed within a specified time. The experts are contacted individually and do not know the other group participants or their opinions; the aim is to submit all participants to the same conditions. Participants do not meet personally, thereby avoiding undue influence. The questionnaire rounds are repeated a number of times until a convergence of the opinions of all group members is obtained. The process ends with analysis of the answers and formulation of the final report.
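As a purely illustrative aid, the iterative structure described in Box 1 can be sketched as a small simulation in which numeric "opinions" are fed back and revised each round; the convergence tolerance, the 0.3 adjustment factor, and the opinion values below are arbitrary assumptions, not part of the BIG Project's actual procedure.

```python
# Toy model of the Delphi loop: panelists answer independently, the anonymized
# group summary is fed back, and rounds repeat until opinions cluster.
from statistics import median, pstdev

def delphi_round(opinions, group_median):
    # each panelist moves part-way toward the fed-back group median (toy assumption)
    return [o + 0.3 * (group_median - o) for o in opinions]

def run_delphi(opinions, max_rounds=10, tolerance=0.5):
    for round_no in range(1, max_rounds + 1):
        m = median(opinions)
        if pstdev(opinions) <= tolerance:          # convergence: opinions cluster tightly
            return round_no, m
        opinions = delphi_round(opinions, m)       # next questionnaire round
    return max_rounds, median(opinions)

rounds, consensus = run_delphi([2.0, 5.0, 7.0, 9.0])   # hypothetical initial opinions
print(f"converged after {rounds} rounds around {consensus:.1f}")
```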
Elias Mossialos, Govin Permanand
to provide a longer-term, strategic perspective on the four selected public health-related issues. The final conclusions will be presented to a high-level meeting of European Union (EU) policy makers in 2006; these conclusions may then inform future research directions and stimulate additional critical thinking about globalization and its bioethical implications for health policy. This article presents preliminary results from the BIG Project.
Mobility of People and Consequences for Health Systems
Mobility results from the increasing ease of domestic and international travel as well as from instantaneous access to information through the Internet and other electronic resources. Mobility may involve the pursuit of a better quality of life, development of markets for traded goods and services, return of resources to home countries, and improvement of professional and business networks.
However, migration may also affect psychological and physical health as a result of conflict, famine, poverty, and the insufficient cultural or economic integration of migrants within their new home society. It may contribute to the spread of infectious diseases across borders (Figure 1). The recent epidemic of SARS was a classic example of an infectious disease propagated through the movement of people across borders; it required attention from the original site to control migration (quarantine) as well as vigilance by secondary sites to protect their populations (Figure 2) [4]. For these and other reasons, the International Organization for Migration is increasingly concerned with migratory patterns and their health consequences in a globalizing world (for an illustration of the emerging conflict of ideas, see http://www.iom.int and http://www.noborder.org/iom/index.php).
Cross-border health commerce is related to mobility. In Europe, this commerce is likely to increase as the EU enlarges to include Eastern and Central European nations. Such commerce may include the movement of health providers from East to West as well as "medical tourism" in pursuit of less costly or more accessible high-quality health care. In addition, international trade in illegal health products and inconsistent regulatory and safety standards for exports may threaten public health, especially in unregulated pharmaceutical markets.
Ethical concerns may also result from the vast growth in international tourist travel. Such travel now accounts for a twelfth of world trade, supporting an economy the size of a middle-income country [5]. Tourism may provide substantial economic benefits to many developing countries, and it may improve cultural understanding among travelers. However, these benefits require an ethical concern for the environment and for persons employed in the tourist industry.
The rights of nations to protect against infectious disease and unsafe medical practices, as well as the rights of human beings displaced by war, trafficking, and economic and cultural disruption, are critical concerns for health policy makers. Poverty and social disparities are key factors in the growth of global migration. Therefore, it is timely to consider whether mobility is a human right, and whether those who migrate have rights to health care in their new country. These questions should be considered by health policy makers within the ethical contexts of individual autonomy, social justice, nonmaleficence, and beneficence.
Technological Globalization and Health: Information Technology and Genomics
Technology drives globalization and in turn is driven by globalization. However, there is considerable ambiguity as to the value of technological globalization, especially for health in low-income countries. The "digital divide" may be important in improving health or income disparities as the electronic revolution provides scientists and health workers in both the developed and developing world with unprecedented access to information. Much could be done to reduce information inequities for the developing world through collective international action, but new global governance mechanisms may be needed to achieve this goal for information technology [6].
Interestingly, the Internet is a structural necessity for financial and corporate globalization, but the same technology is used by nongovernmental organizations, political groups, and cultural movements to support grassroots social justice and human rights campaigns against these globalizing corporations. Neither side in this struggle would advocate limitations to the expansion of Internet technology, but both sides need to consider the bioethical implications of increased information access.
On the other hand, the ethical issues surrounding genomics (with both environmental and human concerns) are quite ambiguous. While there may be significant benefits to identifying genetically beneficial products or genetic determinants of disease, there are also concerns about altering natural environments and about collecting routine genetic information from general populations [7].
For example, some experts assert that genetically modified (GM) crops will significantly increase crop yields without requiring any additional farmland, thus preserving valuable rain forests and animal habitats. Critics counter, however, that GM agriculture can tie farmers to proprietary seed, herbicides, and pesticides. Farmers are not allowed to trade or save GM seed from one harvest to the next, and "terminator technology" (producing grains that are genetically modified so that they cannot be used to generate new crops) is under development. (See http://www.globalissues.org/EnvIssues/GEFood/Terminator.asp for more information on this technology.) Thus, ethical considerations of distributive justice and beneficence must be considered in the debate about the global applicability of GM crops.
For the pharmaceutical and health care industries, genetic testing could provide information about the shape of future markets and the possible tailoring of specific pharmacotherapy to genomic susceptibility. For governments, genetic testing may provide predictive information on a population basis that could aid future health care planning. Genetic information might also be similarly used by the insurance industry, but the identification of genetically "high-risk" individuals would likely interfere with their autonomy, in that they may not be able to purchase health insurance. For example, the Apolipoprotein E test may indicate that an individual has two copies of one form (allele e4) of the gene that leads to Alzheimer disease [9]. Could this information be used by insurance companies or possible employers to deny insurability, despite no current adverse health effects?
In the post-genomic era there is potential to both reduce and increase health inequities, but much will depend on how ethical issues are addressed. If interventions to increase the life span for those with access to high quality health care must compete with expensive investments in genetic research on infectious diseases (which affect the poor most of all), health inequalities may be amplified between those with access and those without access to health care. If research participants or patients in low-income countries have unequal access to information, they may not be properly informed about genetic testing and the counseling needed if adverse genetic information is found. Population-based genomic research may characterize groups of people in such a way that encourages discrimination. Such research may also lead to disputes about ownership of genetic resources from participant populations. Health professionals must have a solid grounding in bioethical issues as they make clinical decisions based on genetic information. However, health policy makers and global governance structures must also be accountable for the potential adverse consequences such decisions might engender.
One may ask: will genomic science really help developing nations? To what extent can benefits be shared? Will pharmaceutical and biotechnology companies invest in poor countries if they can make more money working on therapeutics for high-income countries? Thus, concern for the bioethical issues of social justice and beneficence arises. Genomics has the potential to be a global public good, but there is considerable uncertainty as to its bioethical justification in all cultures [10].
Liberalization of Trade and its Effects on Health
In general, globalization helps liberalize trade through removal of import restrictions and tariffs, through removal of restrictions on trade in services, and through linkage of trade sanctions to the protection of intellectual property rights. All these activities may have an impact on population health.
Defenders of trade liberalization claim that this process is one of the most effective means of increasing a country's wealth and, by extension, population health. Even if this were always true, there may be specific policies that have particularly detrimental effects on health (such as opening markets to trade in manufactured tobacco products). Further, there may be an ethical argument based on social justice against some trade liberalization policies. If, for example, trade liberalization between rich and poor countries produces proportionally more wealth in rich countries compared with poor countries, this may suggest a socially unjust result of liberalization; poor countries' economies may not grow as fast as rich countries' economies in this situation.
The relationship between wealth and health is actually somewhat controversial: the so-called Preston curves demonstrate a dramatic relationship between health and economic prosperity up to about a Purchasing Power Parity of US$3,000 per capita per year [11]. However, there are cheap, cost-effective approaches to population health (such as vaccination, clean water, and sewage disposal) that may not be affected by the increase in Purchasing Power Parity. These approaches were relatively more important than economic development per se in early 20th-century interventions in developed countries, and they are likely to be more important for influencing health among developing country populations today than simple economic growth. On the other hand, high-intensity technological improvement rather than economic growth may be more important to health in rich countries compared with developing countries.
The concern for intellectual property rights in trade has been an extraordinarily contentious issue in recent years. Newer drugs that are effective against diseases in resource-poor but highly impacted countries, such as antiretroviral drugs against HIV, have been prohibitively expensive in these countries, in part because of patent protections. With the Trade-Related Intellectual Property agreement, patent protection became linked to trade policy; if countries in need of cheaper essential drugs did not conform to patent rules, trade retaliation from exporting countries might ensue. However, restrictions on poor countries' responses to legitimate public health emergencies may be unethical on the basis of distributive justice, nonmaleficence, and beneficence. Exceptions for public health emergencies (such as HIV/AIDS) under the Trade-Related Intellectual Property agreement include the right to compulsory licensing (local companies produce patented medicines in exchange for a royalty payment to the patent holder) or parallel importing (importing patented drugs sold more cheaply elsewhere) that will make essential medicines more available to highly impacted countries without fear of trade retaliation from the originating country [12].
The General Agreement on Trade in Services is a relatively new treaty that covers trade in health services [13]. The agreement has been severely criticized by some, who claim that it increases privatization of health care services and undermines public health care systems. However, given its ambiguities, the actual impact of the agreement on the health sector will be largely determined by the way in which the agreement is further specified in multinational commitments [14]. Social justice, equity, beneficence, and nonmaleficence will all come into play in the implementation of this treaty.
New Global Health Threats, Focusing on Bioterrorism
Concerns for security against biological weapons have recently arisen among both poor and wealthy nations. Some, however, question the enormous sums now being spent to address the perceived threats due to bioterrorism even without strong evidence for actual threats. Even without such evidence, global bioethical principles at least suggest the need for a framework for consideration of distributive justice in this arena.
For example, should a nation with a limited supply of a vaccine against weaponized smallpox offer its stockpiles to a neighboring country that is under direct attack? This case is complicated by the fact that the infection could spread to its own territory. In the case of widespread biological attacks, which global governing agency, country, or other entity would be responsible for global resource allocation? Clearly, risks from bioweapons are trans-border, but resources may be unevenly and inequitably distributed, requiring a bioethically based policy determination on a global basis [15].
A further concern with respect to biomedical research is the issue of dual-use technology development for health benefits as well as for possible bioweapons. Governments must balance the secrecy necessary for security with the need for disclosure of information that is essential for research and development in health. It is very difficult to sequester new knowledge that might be applied to building biological weapons without simultaneously impeding research on defense against those bioweapons and on other beneficial biomedical advances. Most BIG Project scientists agree that the benefits of releasing scientific information in general outweigh the risk of its misuse. However, the scientific community needs to consider whether new codes of conduct are necessary or whether existing governance is sufficient to support a bioethical approach to research on possible dual-use technologies.
Conclusion
Global bioethical challenges require careful theoretical deliberation and practical considerations for international health policies [16]. The BIG Project seeks to guide these processes in four selected areas of interest to the EU, so that the project results may be helpful to policy makers at local, national, and international levels.
The BIG Project has found that bioethical principles are important in considerations of migration, trade, information technology, genomics, and bioweapons threats. Globalization in these arenas is neither a right nor a wrong process, but it demands careful consideration of bioethical principles including social justice, beneficence, nonmaleficence, and individual autonomy. These concerns may not be immediately obvious to health policy makers, and thus the BIG Project results may help clarify the larger goals and purposes of bioethically based health policy development within the EU and elsewhere. More information about the BIG Project can be found at http://www.bigproject.org/project.htm.
"Medicine",
"Philosophy",
"Political Science"
] |
Adaptive Lag Synchronization of Lorenz Chaotic System with Uncertain Parameters *
The paper discusses lag synchronization of the Lorenz chaotic system with three uncertain parameters. Based on the adaptive technique, lag synchronization of the Lorenz chaotic system is achieved by designing a novel nonlinear controller; furthermore, parameter identification is realized simultaneously. A sufficient condition is given and proved theoretically by Lyapunov stability theory and LaSalle's invariance principle. Finally, numerical simulations are provided to show the effectiveness and feasibility of the proposed method.
Introduction
Since the original work on chaos synchronization by Pecora and Carroll [1] in drive-response systems, chaos synchronization has attracted much attention due to its potential applications in many practical engineering fields, such as secure communication [2], information processing [3], image encryption [4], and so on. In the past two decades, many schemes for chaos synchronization have been proposed, including the linear and nonlinear feedback approach [5,6], the adaptive technique [6], the backstepping method [7], the impulsive control method [8], etc. At present, researchers are concentrating on the following types of synchronization phenomena: complete synchronization [9], generalized synchronization [10], phase synchronization [11], lag synchronization [12], dislocated synchronization [13] and so on.
In lag synchronization, the corresponding state vectors of the response system follow those of the drive system with a time delay. Recently, several studies have been devoted to lag synchronization of chaotic systems. In Reference [14], lag synchronization of the Rössler system and Chua circuit was investigated via a scalar signal. Li et al. [15] applied a nonlinear observer to lag synchronization of the hyperchaotic Rössler system and the hyperchaotic Matsumoto-Chua-Kobayashi (MCK) circuit. Zhang et al. [16] studied the same problem for the hyperchaotic Lü system. In these designs the controller depends on the considered dynamical system, and the methods apply to systems with known parameters. In real physical systems and experimental situations, however, chaotic systems may have uncertain parameters, so a systematic design process for lag synchronization of chaotic systems with uncertain parameters is important.
In this paper, we investigate lag synchronization of the Lorenz chaotic system with uncertain parameters. Based on the adaptive technique, a novel controller and parameter adaptive laws are designed such that parameter identification is realized and lag synchronization of the Lorenz chaotic system is achieved simultaneously. A theoretical proof and numerical simulations are given to demonstrate the effectiveness and feasibility of the proposed method.
Problem Formulation
The Lorenz chaotic system [17] was proposed in 1963; the nonlinear differential equations describing it are

dx1(t)/dt = a (x2(t) - x1(t)),
dx2(t)/dt = c x1(t) - x1(t) x3(t) - x2(t),     (1)
dx3(t)/dt = x1(t) x2(t) - b x3(t).

Considering the drive system (1), the response system is the controlled Lorenz chaotic system
dy1(t)/dt = a_s (y2(t) - y1(t)) + u1(t),
dy2(t)/dt = c_s y1(t) - y1(t) y3(t) - y2(t) + u2(t),     (2)
dy3(t)/dt = y1(t) y2(t) - b_s y3(t) + u3(t),
where a_s, b_s, c_s in (2) are unknown parameters which need to be identified in the response system, and u(t) = (u1(t), u2(t), u3(t))^T is the controller which should be designed such that the two systems can be lag synchronized. Define the lag synchronization errors
e1(t) = y1(t) - x1(t - τ),  e2(t) = y2(t) - x2(t - τ),  e3(t) = y3(t) - x3(t - τ),     (3)
where τ > 0 is the time delay of the error dynamical system. Therefore, the goal of parameter identification and lag synchronization is to find an appropriate controller and parameter adaptive laws for a_s, b_s, c_s such that the synchronization errors satisfy

lim_{t→∞} e_i(t) = 0, i = 1, 2, 3,     (4)

and the parameter estimates converge to the true values,

a_s → a, b_s → b, c_s → c as t → ∞.     (5)

Remark 1. When τ > 0, lag synchronization appears. When τ < 0, anticipated synchronization appears. More generally, complete synchronization appears when τ = 0.
Remark 2. For anticipated synchronization and complete synchronization, the discussion is similar to the method given in this paper.
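For concreteness, the drive system (1), the controlled response system (2), and the lag error (3) as reconstructed above can be coded directly; a minimal sketch follows. The controller u(t) is left as an input, the classical parameter values are used only as defaults, and the history-buffer handling and all helper names are illustrative.

```python
# Minimal sketch of the drive system (1), response system (2), and lag error (3).
import numpy as np

def drive(x, a=10.0, b=8.0 / 3.0, c=28.0):
    x1, x2, x3 = x
    return np.array([a * (x2 - x1),
                     c * x1 - x1 * x3 - x2,
                     x1 * x2 - b * x3])

def response(y, u, a_s, b_s, c_s):
    y1, y2, y3 = y
    return np.array([a_s * (y2 - y1) + u[0],
                     c_s * y1 - y1 * y3 - y2 + u[1],
                     y1 * y2 - b_s * y3 + u[2]])

def lag_error(y_now, x_history, tau, dt):
    """e(t) = y(t) - x(t - tau), with x(t - tau) read from a stored drive trajectory."""
    k = int(round(tau / dt))          # number of stored integration steps back
    return y_now - x_history[-1 - k]
```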
Adaptive Lag Synchronization of Lorenz Chaotic System
In this section, based upon the nonlinear adaptive feedback control technique, a systematic design process for parameter identification and lag synchronization of the Lorenz chaotic system, in the situation where the response system has unknown parameters, is provided.
According to systems (1) and (2), and differentiating the errors (3), we obtain the error dynamical system

de1(t)/dt = a_s (y2 - y1) - a (x2(t - τ) - x1(t - τ)) + u1(t),
de2(t)/dt = c_s y1 - y1 y3 - y2 - [c x1(t - τ) - x1(t - τ) x3(t - τ) - x2(t - τ)] + u2(t),     (6)
de3(t)/dt = y1 y2 - b_s y3 - [x1(t - τ) x2(t - τ) - b x3(t - τ)] + u3(t).

Obviously, lag synchronization of systems (1) and (2) appears if the error dynamical system (6) has an asymptotically stable equilibrium point e(t) = 0, where e(t) = (e1(t), e2(t), e3(t))^T.
Then we get the following theorem.
Theorem. Assume that the Lorenz chaotic system (1) drives the controlled Lorenz chaotic system (2), and take the controller (7) together with the parameter adaptive laws (8). Then systems (1) and (2) realize lag synchronization and the unknown parameters are identified, i.e., Equations (4) and (5) are achieved.
Proof. Under the controller (7), Equation (6) can be converted to the closed-loop error form (9). Consider the Lyapunov function candidate

V = (1/2) [ e1(t)^2 + e2(t)^2 + e3(t)^2 + (a_s - a)^2 + (b_s - b)^2 + (c_s - c)^2 ].

Obviously, V is a positive definite function. Taking its time derivative along the trajectories of Equations (8) and (9) leads to

dV/dt = - a e1(t)^2 - e2(t)^2 - b e3(t)^2 = - e^T P e,

where P = diag(a, 1, b) is positive definite. It is obvious that dV/dt = 0 if and only if e1 = e2 = e3 = 0, namely the set

M = { (e1(t), e2(t), e3(t), a_s, b_s, c_s) : e1 = e2 = e3 = 0, a_s = a, b_s = b, c_s = c }

is the largest invariant set contained in {dV/dt = 0} for Equation (9). So, according to LaSalle's invariance principle [18], starting from arbitrary initial values of Equation (9), the trajectory converges asymptotically to the set M, i.e., e1(t), e2(t), e3(t) → 0 and a_s → a, b_s → b, c_s → c as t → ∞. This indicates that lag synchronization of the Lorenz chaotic system is achieved and the unknown parameters a_s, b_s, c_s can be successfully identified by using the controller (7) and the parameter adaptive laws (8). The proof is now complete.
Remark 3. Using our adaptive synchronization method, we can not only achieve synchronization but also identify the system parameters. The values of the parameters a, b, c of the drive system (1) should be confined to the range for which the system has a chaotic attractor.
Remark 4. Although this process focuses on the Lorenz chaotic system, the systematic design procedure could be used for many other complex dynamical systems with uncertain parameters.
Numerical Simulations
In order to verify the effectiveness and feasibility of the proposed method, we give some numerical simulations of the lag synchronization and parameter identification between systems (1) and (2). In the numerical simulations, all the differential equations are solved by using the fourth-order Runge-Kutta method.
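A closed-loop sketch of such a simulation is given below. Because the paper's exact controller (7) and adaptive laws (8) are not recoverable from this copy, the sketch uses one standard Lyapunov-based choice (a nonlinearity-cancelling controller with feedback gains k_i and gradient-type parameter estimators) purely for illustration; the gains, step size, delay, horizon, and all initial values are likewise illustrative assumptions.

```python
# Adaptive lag synchronization of a drive/response Lorenz pair, integrated with RK4.
import numpy as np

a, b, c = 10.0, 8.0 / 3.0, 28.0          # true (drive) parameters
dt, tau, T = 0.001, 0.2, 20.0            # step, lag, horizon (illustrative)
k = np.array([5.0, 5.0, 5.0])            # feedback gains (illustrative)
m = int(round(tau / dt))                 # lag expressed in integration steps

def f_drive(x):
    return np.array([a * (x[1] - x[0]),
                     c * x[0] - x[0] * x[2] - x[1],
                     x[0] * x[1] - b * x[2]])

def rk4(f, s):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1); k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# pre-integrate the drive trajectory so x(t - tau) can be looked up later
n = int(T / dt)
X = np.empty((n + 1, 3)); X[0] = np.array([1.0, 2.0, 3.0])      # illustrative initial state
for i in range(n):
    X[i + 1] = rk4(f_drive, X[i])

def f_closed(s, xt):                     # s = (y1, y2, y3, a_hat, b_hat, c_hat)
    y, ah, bh, ch = s[:3], s[3], s[4], s[5]
    e = y - xt                           # lag error against x(t - tau)
    u = np.array([-ah * (y[1] - y[0]) + ah * (xt[1] - xt[0]) - k[0] * e[0],
                  -ch * y[0] + y[0] * y[2] + y[1] + ch * xt[0] - xt[0] * xt[2] - xt[1] - k[1] * e[1],
                  -y[0] * y[1] + bh * y[2] + xt[0] * xt[1] - bh * xt[2] - k[2] * e[2]])
    dy = np.array([ah * (y[1] - y[0]) + u[0],
                   ch * y[0] - y[0] * y[2] - y[1] + u[1],
                   y[0] * y[1] - bh * y[2] + u[2]])
    dtheta = np.array([-(xt[1] - xt[0]) * e[0],   # gradient-type update for a_hat
                        xt[2] * e[2],             # update for b_hat
                       -xt[0] * e[1]])            # update for c_hat
    return np.concatenate([dy, dtheta])

s = np.array([-3.0, 4.0, 5.0, 1.0, 1.0, 1.0])     # response state and initial estimates (illustrative)
for i in range(m, n):                             # start once x(t - tau) is available
    xt = X[i - m]                                 # lagged drive state (held constant over the step)
    s = rk4(lambda z: f_closed(z, xt), s)

print("final errors   :", s[:3] - X[n - m])
print("final estimates:", s[3:], "(true:", (a, b, c), ")")
```

In this sketch the lag errors decay toward zero, and because the chaotic drive trajectory is persistently exciting, the parameter estimates drift toward the true values a = 10, b = 8/3, c = 28.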
For these numerical simulations, fixed initial states of the drive system and the response system were assumed.
Conclusion
This paper investigates adaptive lag synchronization for the classical Lorenz chaotic system with the response system parameters unknown. Based on Lyapunov stability theory and LaSalle's invariance principle, the controller and parameter adaptive laws are given to achieve lag synchronization and parameter identification simultaneously. Finally, numerical simulations are provided to demonstrate the effectiveness of the scheme proposed in this work.
*Supported by the National Natural Science Foundation of China (Grant Nos. 61164020 and 61004101), the Natural Science Foundation of Guangxi, China (Grant No. 2011GXNSFA018147), and the project of Guangxi Key Laboratory of Spatial Information and Geomatics (Grant No. Gui1103108-24).
System (1) has a chaotic attractor when a = 10, b = 8/3, c = 28. The phase portrait is shown in Figure 1.
The system exhibits a chaotic attractor. The simulation results are shown in Figures 2-4. Figures 2 and 3 display the lag synchronization state variables and the error responses of systems (1) and (2), respectively.
Figure 4
shows the identification results of the unknown parameters a_s, b_s, c_s.
"Physics"
] |
IMMUNE RECONSTITUTION IN HIV-1 INFECTED PATIENTS TREATED FOR TWO YEARS WITH HIGHLY ACTIVE ANTIRETROVIRAL THERAPY
The aim of this paper was to evaluate the immune reconstitution of HIV-1 patients subjected to highly active antiretroviral therapy (HAART) for two years or more according to CD45RA and CD45RO cell count; determination of IL-2, IFN-γ, IL-4, IL-10 and TNF-α serum levels; CD4 T and CD8 T lymphocyte count; and plasma viral load (VL) determination. For this purpose, a cross sectional study was carried out in the Tropical Diseases Area, Botucatu School of Medicine, São Paulo State University, UNESP, Botucatu, São Paulo, Brazil. Between June 2001 and April 2002, 37 HIV-1 infected patients were evaluated, 13 with treatment indication but untreated (G1), 9 subjected to HAART for 5-7 months (G2), and 15 treated for two years or more (G3); both treated groups used medication regularly and without failure. Forty-nine normal individuals were studied as controls (GC-1 and GC-2). There was a tendency (p<0.10) for the predominance of two nucleoside reverse transcriptase inhibitors (NRTI) associated with one non-nucleoside reverse transcriptase inhibitor (NNRTI) regimen in G2; and two NRTI associated with a protease inhibitor (PI) in G3. Statistical differences between groups were seen for CD45RA (G1<[G3=GC-2]; p<0.05) and CD45RO (G1[G2=G3]; p<0.001), TNF-α serum determination ([G1>G3; G2=intermediate]>GC-1; p<0.001), IL-2 (G1<[G2=G3=GC-1]; p<0.01), IFN-γ ([G1=GC1]<[GC-2=G3]; p<0.001), IL-4 and IL-10 ([G1=G2=G3]>GC-1; p<0.001), serum cytokine profiles, with a higher proportion of subtype 2 in G1 and mature subtype 0 in G2 and G3 (p<0.005). There was no statistical difference for CD8 T lymphocyte counts (G1=G2=G3; p<0.50). Consistency was seen between positive correlations of profile 1 definer cytokines (IL-2 and IFN-γ), CD45RA and CD45RO cells, and CD4 T lymphocyte counts and between positive correlations of profile 2 definer cytokines (IL-4 and IL-10) with TNF-α, and VL. The negative correlations were also consistent as they expressed the inverse of the positives. The variables with the highest number of correlations were IL-2, IFN-γ, and VL, followed by CD45RA and CD45RO cells, and IL-10. The variables with the lowest number of correlations were CD4 T and CD8 T lymphocytes. The results express the partial but important immune reconstitution in HIV-1 infected individuals with the interference of HAART and the importance of cytokines especially IL-2 and IFN-γ, and CD45RA and CD45RO cells as surrogate markers of this reconstitution.
INTRODUCTION
The introduction of highly active antiretroviral therapy (HAART) was a major milestone in the attempt to recuperate the immune systems of HIV-1 infected individuals. Despite this recuperation being slow, partial, and variable, the suppression of HIV-1 replication has permitted a higher level of control over associated diseases, with a consequent decrease in mortality and improvement in quality of life (10). The main surrogate markers of natural history and therapeutic efficacy currently used in HIV infected patient follow-up are clinical condition, CD4+ T lymphocyte count, and plasma viral load (VL) determination (21,23). However, after VL drops below detectable levels, it does not allow us to consistently express the behavior of immune response reconstitution. Also, CD4+ T lymphocyte count can show irregular behavior without significant elevation even in individuals with major clinical improvement (4), and it does not demonstrate whether this elevation expresses real immune system reconstitution through the production of naive cells, which carry CD45RA surface molecules, or just the recirculation of memory cells, also known as CD45RO cells (10,13,20). In addition, CD4+ T lymphocyte count does not characterize the quality of the immunomodulatory substances these cells produce.
PARTICIPANTS
Between June 2001 and April 2002, 37 HIV-1 patients, on HAART or not, were studied. They were treated at the Special Outpatient Clinic and Infirmary of Tropical Diseases, Botucatu School of Medicine, UNESP. Twenty-five were male and twelve female, aged between 18 and 62 years (mean = 37.6 years).
The assay detection limit varied from 3 to 5 pg/ml, depending on the cytokine.
Statistical Analysis
Group comparisons were made with the non-parametric Kruskal-Wallis test and analysis of variance for entirely randomized experiments (ANOVA-ERE). The χ2 test was used to compare serum cytokine profile proportions in the three HIV-1 groups and PI proportions in the two HIV-1 groups under treatment; when a proportion was zero, the Yates correction was applied. For the χ2 test, statistical significance was defined as p < 0.05, corresponding to a χ2 value equal to or higher than 3.84 for one degree of freedom and 5.99 for two degrees of freedom. In all other analyses, statistical significance was set at p < 0.05. For correlations between pairs of variables, the Spearman coefficient of rank correlation was calculated with significance levels α = 0.05 and α = 0.01 (30).
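As a rough modern equivalent of the SPSS-based analysis described above, the same three families of tests can be run with SciPy; the sketch below uses hypothetical placeholder values, since the study's raw data are not reproduced here.

```python
# Kruskal-Wallis across groups, chi-square with Yates correction, and Spearman
# rank correlation; all numeric inputs are hypothetical placeholders.
import numpy as np
from scipy import stats

g1 = [210, 180, 250, 190]            # e.g. CD4+ counts, group G1 (hypothetical)
g2 = [280, 310, 260, 300]            # group G2 (hypothetical)
g3 = [320, 350, 310, 400]            # group G3 (hypothetical)

h, p_kw = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")

# 2x2 contingency table of cytokine profile vs. group (hypothetical counts);
# correction=True applies the Yates continuity correction for one degree of freedom
table = np.array([[7, 2],
                  [1, 8]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table, correction=True)
print(f"Chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")

il2 = [1.2, 3.4, 2.8, 4.1, 3.9]      # hypothetical serum IL-2 values
cd45ra = [150, 420, 380, 500, 470]   # hypothetical CD45RA counts
rho, p_sp = stats.spearmanr(il2, cd45ra)
print(f"Spearman: rho = {rho:.2f}, p = {p_sp:.3f}")
```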
This study was approved by the Research Ethics Committee of Botucatu School of Medicine -UNESP.
RESULTS
Table 3 shows the characterization of the groups; the mean age of the infected groups was slightly higher than that of the controls. The prevailing mode of transmission in all infected groups was heterosexual.
Groups were not homogeneous with respect to ARV treatment regimens, since treatment was the parameter used to define them. G1 patients had not yet been treated; six G2 patients received two NRTI associated with one NNRTI, and three G2 patients two NRTI associated with one PI; 11 G3 patients received two NRTI associated with one PI, and four G3 patients two NRTI and one NNRTI. Although there was no significant difference between the groups under treatment, PI tended to predominate in G3 (χ2(1) = 3.70; p < 0.10). At the time of sample collection, G1 had the highest number of individuals (six) with AIDS-defining diseases; there were also 2 oligosymptomatic and 5 asymptomatic patients. In G2, asymptomatic patients predominated; there were no oligosymptomatic patients, and only one individual with AIDS, who showed a paradoxical reaction to tuberculosis. All G3 patients were asymptomatic.
Analyzing data from Table 4, a statistical difference was seen between G1, and G3 and GC-2, for CD45RA cell count, with patients in G1 showing lower counts than those in G3 and GC-2, which were similar; and between G1, G3, and GC-2 for CD45RO cell count, with G1 showing lower counts than GC-2, and GC-2 patients showing lower counts than G3.
Table 4 shows patient characterization in relation to serum cytokine level determination.
There was a progressive TNF-α serum level decrease from G1 to G3, and all infected groups showed higher TNF-α than GC-1. A statistical difference was seen between G1 and G2, G3, and GC-1 for IL-2, with serum levels in G1 being less elevated than in groups G2, G3, and GC-1, which had no statistical difference between them. There was no difference in IFN-γ serum levels between G1 and GC-1, but they were less elevated than in G2 and G3, which were not statistically different either. There was no statistical difference between G1, G2 and G3 for IL-10 and IL-4 serum levels; however, these were higher than in GC-1 individuals. A statistical difference was seen for serum cytokine profile, with predominance of profile 2 in G1 and pronounced predominance of mature profile 0 in G2 and G3 (χ2(2) = 13.65; p < 0.005). In G3, only mature profile 0 was seen. No patient in the HIV-1 groups showed profile 1. According to the data in Table 4, there was a statistical difference between groups for CD4+ T lymphocyte count, with a progressive increase from G1 to G3; this did not occur with CD8+ T lymphocyte count.
In G1, all individuals had detectable VL; in G2 there were seven patients with undetectable and two with detectable VL; in G3, nine were undetectable and six were greater than 80 copies/ml. A statistical difference was seen between groups, with G1 being statistically greater than both G2 and G3, which had no difference between them (Table 4).
G1: HIV-1 infected individuals before the start of HAART.
G2: HIV-1 infected individuals under HAART for between five and seven months.
G3: HIV-1 infected individuals under HAART for more than 24 months. GC-1 and GC-2: normal control individuals.
DISCUSSION
For a better evaluation of the qualitative aspects of the immune system behavior in patients on HAART, this paper used three other markers, including serum cytokine determination and CD45RA and CD45RO cell counts.
In relation to patient homogeneity, the sex distribution, age, and transmission mechanism are in agreement with the most recent epidemiological profile of HIV-1 infected individuals in Brazil (18).
Predominance of infected individuals with AIDS defining diseases among untreated patients is strong evidence for the benefit of the treatment (11).
CD45RA cell count increased significantly with treatment in patients treated for at least 2 years, with predominance of PI; levels reached the same values as in control group individuals, which did not occur with CD4+ cells. According to the literature, as treatment goes on, there is a progressive increase in the number of naive CD4+ T lymphocytes (10). These findings suggest that CD45RA was more significant than CD4+ count and VL in the evaluation of immune reconstitution; they perhaps suggest a partially preserved capacity for thymus regeneration (4,10). There was a significant increase in CD45RO cells in this study for patients treated for over two years, and the mean value in this group was higher than that of controls; this is in agreement with Mezzaroma et al. (20), Haase (10), and Powderly et al. (24). This count, like that of CD45RA, was more sensitive than the CD4+ count. Aukrust et al. (2) have reported data agreeing with this study, showing that plasma and in vitro TNF-α levels fall with treatment but not to normal levels; these authors (2) associate the persistent activation of TNF-α system components with therapeutic, virological, and immunological failure.
Serum IL-2 increased in the two under-treatment groups as treatment went on, reaching levels statistically equal to those of normal individuals; in absolute values, the G3 mean was higher than that of the control group. This increase indicates a recuperation of the infected individual's immune system. Kaufmann et al. (13) report an increase in IL-2 secretion and Imami et al. (12) an elevation in specific RNA expression by RT-PCR in the first weeks of treatment. Weiss et al. (29) found increased levels of IL-2-producing CD4+ T cells; in most studied patients, IL-2 production capacity under stimulation was similar to that of seronegative control individuals.
In the pretreatment phase, IFN-γ levels were statistically equal to those of normal individuals, increasing significantly with treatment, which is in agreement with Imami et al. (12). This increase may be explained by the maintenance of large quantities of CD8+ T lymphocytes, which are the main IFN-γ producers (8). IL-4 and IL-10 were not statistically different between infected groups as treatment went on, but were always higher than in controls; however, in absolute values, they were higher in the pretreatment group than in the other infected groups, which agrees with Imami et al. (12).
Characterization of serum cytokine profiles in patients from the three study groups showed none with profile 1 (Th-1). Most studied patients were considered as having mature profile 0 (Th-0) or profile 2 (Th-2). Other authors (14,25) have also reported a very small number of profile 1 patients, agreeing with this study; these authors (14,25), however, did not mention profile 0 in their findings.
When only patients under treatment are considered, absolute serum cytokine median values show that patients treated for over 2 years had higher IFN-γ and lower IL-4. These variations, although small, are important because they may be interpreted as an evolution of the mature profile 0 related to the duration of treatment and the predominance of PI.
There was a progressive increase in CD4+ T lymphocyte counts in patients on HAART for over 2 years, with PI predominance; the median passed 300 cells/mm3 by a small margin. Therefore, in agreement with Haase (10), the increase in CD4+ T lymphocyte count with treatment was progressive but partial.
There was no difference in CD8+ T lymphocyte count behavior with treatment; this count therefore did not offer any support for the evaluation of immune recuperation. The best performance was in the group of patients treated for over 2 years, and the analysis of all parameters used perhaps suggests that their immune reconstitution may be attributed to longer treatment time and the participation of PIs. Hence there is still a need for further studies of these variables with more patients and longer treatment duration.
Autran et al. (3) and Kaufmann et al. (13) found similar results. HIV-1 VL showed a marked decrease with treatment duration, with some individuals remaining with VL at detectable levels (>80 copies/ml); these results agree with the literature (3,20). The relationships between pairs of variables showed coherence of the positive correlations between profile 1 defining cytokines (IL-2 and IFN-γ), CD45RA and CD45RO cells, and CD4+ T lymphocyte count, as well as of the associations between them. They also showed coherence of the correlations between profile 2 defining cytokines (IL-4 and IL-10), TNF-α, and VL, as well as of the associations between them. The negative correlations were also coherent, as they express the reverse of the positive associations. The variables with the highest number of positive or negative correlations were IL-2, IFN-γ, and VL, followed by CD45RA and CD45RO cells, and IL-10. The variables with the lowest number of correlations were CD4+ T and CD8+ T lymphocytes. These results show the importance of cytokines, especially IL-2 and IFN-γ, VL, and CD45RA and CD45RO cells as surrogate markers of HIV-1 infection under HAART. In the literature (21), however, most authors consider CD4+ T lymphocyte count and VL as the markers of natural history, biological activity, and therapeutic efficacy in HIV-1 individuals.
(Table 2). All 19 GC-2 individuals were subjected to CD45RA and CD45RO cell counts.
Table 5
… T lymphocytes and CD45RA cells. There was a significantly strong positive correlation (α=0.01) between TNF-α and IL-10; TNF-α and VL; IL-2 and IFN-γ; IL-2 and CD45RA cells; IL-10 and IL-4; CD4+ T lymphocytes and CD45RA cells; CD4+ T lymphocytes and CD45RO cells; and CD45RA and CD45RO cells. Analysis of the table also shows that there was a significantly negative correlation (α=0.05) between IL-10 and CD45RA cells; IL-4 and CD45RO cells; VL and CD45RA cells; and VL and CD45RO cells. A significantly strong negative correlation was seen (α=0.01) between TNF-α and IL-2; TNF-α and IFN-γ; IL-2 and IL-10; IL-2 and IL-4; IL-2 and VL; IFN-γ and IL-10; IFN-γ and IL-4; IFN-γ and VL; and CD4+ T lymphocytes and VL. The highest number of significant correlations was for IL-2, IFN-γ, and VL, with seven correlations each; this was followed by IL-10 and by CD45RA and CD45RO cells with six, and TNF-α …
Table 5: Linear correlation between pairs of variables (TNF-α, IL-2, IFN-γ, IL-10, IL-4, CD4+ T and CD8+ T lymphocytes, plasma viral load (VL), and CD45RA and CD45RO cell counts) for six G1, three G2, and eight G3 individuals, all with HIV-1, sick or not. *U = below detection limit (<80 copies/ml). †I = intermediate.
… CD4+ count and VL values as indicators of immune reconstitution of patients on HAART. Another result of this study, which in a way reinforces these statements, was the strongly positive correlation between CD4+ T lymphocytes and CD45RA and CD45RO cells, at the same time that there was a weak negative correlation between both CD45RA and CD45RO cells and VL. Serum TNF-α values decreased as treatment went on, with lower levels in the over-two-years-treatment group, with PI predominance. This decrease, even though at levels about four times higher than in controls, seems to suggest compatibility with clinical improvement, since all patients in this group were asymptomatic. Ledru et al. (15) have reported that the decrease in TNF-α levels is compatible with the decrease of viral replication and apoptosis, since this cytokine induces these phenomena (1,15). Lew et al. (16) have reported a decrease in the number of TNF-α-producing T cells in the early weeks of HAART. Kaufmann et al. (13) showed a decrease in TNF-α secretion with the same treatment. Aukrust et al. (2) have reported data agreeing with this study, … predominance of Th-2 (53.1%) and Th-0 (38.7%) profiles in their patients. Although the study group formation was different, the results agreed. Other authors … Meira et al. (19) called attention to the … | 3,956.4 | 2006-04-26T00:00:00.000 | [
"Biology",
"Medicine"
] |
Using Stock-Flow Diagrams to Visualize Theranostic Approaches to Solid Tumors in Personalized Nanomedicine
Personalized nanomedicine has rapidly evolved over the past decade to tailor the diagnosis and treatment of several diseases to the individual characteristics of each patient. In oncology, iron oxide nano-biomaterials (NBMs) have become a promising biomedical product in targeted drug delivery as well as in magnetic resonance imaging (MRI) as a contrast agent and magnetic hyperthermia. The combination of diagnosis and therapy in a single nano-enabled product (so-called theranostic agent) in the personalized nanomedicine has been investigated so far mostly in terms of local events, causes-effects, and mutual relationships. However, this approach could fail in capturing the overall complexity of a system, whereas systemic approaches can be used to study the organization of phenomena in terms of dynamic configurations, independent of the nature, type, or spatial and temporal scale of the elements of the system. In medicine, complex descriptions of diseases and their evolution are daily assessed in clinical settings, which can be thus considered as complex systems exhibiting self-organizing and non-linear features, to be investigated through the identification of dynamic feedback-driven behaviors. In this study, a Systems Thinking (ST) approach is proposed to represent the complexity of the theranostic modalities in the context of the personalized nanomedicine through the setting up of a stock-flow diagram. Specifically, the interconnections between the administration of magnetite NBMs for diagnosis and therapy of tumors are fully identified, emphasizing the role of the feedback loops. The presented approach has revealed its suitability for further application in the medical field. In particular, the obtained stock-flow diagram can be adapted for improving the future knowledge of complex systems in personalized nanomedicine as well as in other nanosafety areas.
INTRODUCTION
In recent years, the use of nano-biomaterials (NBMs) has led to great improvements in several biomedical applications such as diagnostic, therapeutic, and regenerative medicine (Wang et al., 2018). In particular, iron oxide NBMs have been used in a large variety of biomedical applications such as diagnostics, imaging, hyperthermia, magnetic separation, cell proliferation, photodynamic therapy, tissue repair, and drug delivery (Bruschi and de Toledo, 2019; Dadfar et al., 2019), thanks to their suitable structural, colloidal, and magnetic properties as well as their negligible toxic effects (Ghazanfari et al., 2016; Aires et al., 2017).
In the oncologic context, magnetite (Fe 3 O 4 ) NBMs can be used as contrast agent in magnetic resonance imaging (MRI) for diagnosis purposes, while in therapeutic nanomedicine they can be accumulated in cancer cells through the enhanced permeability and retention effect (Nuzhina et al., 2019), then generating heat upon the application of an alternate magnetic field (MF) in hyperthermia treatments (Vallabani and Singh, 2018). The combination of therapeutic and diagnostic capabilities using a single nano-based biomedical product, the so-called nanotheranostic (Theek et al., 2014), addresses the administration of Fe 3 O 4 NBMs to (i) obtain in vivo imaging of the tumor site, (ii) treat the tumor site after the target drug delivery, and (iii) induce cancer cell death by hyperthermia.
However, as some nanotheranostic agents are currently in Phase I and Phase II clinical trials (Singh et al., 2020; Verry et al., 2020), there is an urgent need not only to investigate the safety profile of nanotheranostics in both early and advanced phases of clinical trials (Singh et al., 2020) but also to understand how these innovative products can be personalized, considering interindividual variability in therapy selection, treatment planning, objective response monitoring, and follow-up therapy planning based on the specific characteristics of the tumor tissue (Ryu et al., 2014; Keek et al., 2018; Degrauwe et al., 2019). Indeed, as exhaustively explained in Bielekova et al. (2014), systems biology principles represent a unique opportunity to predict complex diseases in comparatively small cohorts of patients through the identification of functional networks at the organism/patient level. Moreover, as health systems are self-organizing and tightly linked, constantly changing and governed by positive or negative feedbacks (WHO, 2009), there is a need to identify and represent their complexity from a holistic perspective.
The Systems Thinking (ST) approach shifts the attention from the study of local events, in terms of causes, effects, and mutual relationships, to the study of the systemic patterns from which they emerge, describing the change in the hierarchical feedback structure that gives access to the operational configurations of the system as a whole. In the ST context, analytical tools based on stock-and-flow representations have been developed since the 1970s by Jay Forrester at the Massachusetts Institute of Technology, initially focused mainly on social systems (Forrester, 1971). Afterward, ST approaches have found application in several other fields, as reported, for example, for business (Sterman, 2000), energy and sustainability (Higgins, 2015; Kutty et al., 2020), ecology (Assaraf and Orion, 2005), biogeochemistry (Haraldsson and Sverdrup, 2013), communication (Gonella et al., 2020), and medicine (Romano et al., 2021). Nevertheless, the use of the ST approach in nanomedicine-related studies is still lacking.
As a matter of fact, nanotheranostic systems have started demonstrating their efficacy in diagnosis but lack therapeutic competence and vice versa (Alshehri et al., 2021). In particular, even when diagnostic and therapeutic protocols are separately well-established, mutual interference between them may arise when used together. For example, diagnostic and therapeutic procedures that operate at even slightly different temporal and spatial scales may compromise the effectiveness of the theranostic procedure. For this reason, the adoption of a ST approach can help in the identification and the choice of the best temporal and spatial scales for the whole theranostic protocol. Furthermore, the systemic overall reaction to the change of external parameters or driving forces (e.g., administration of other drugs) might be addressed through the analytical stock-flow representation.
In this work, a ST stock-flow diagram is developed for the first time to represent the complexity of the use of Fe 3 O 4 NBMs as a theranostic agent in solid tumors based on the personalized nanomedicine perspective. In particular, the magnetite case study consists of (1) a magnetic core of Fe 3 O 4 NBMs coated with polyethylene glycol (PEG) and poly co-glycolic acid (PLGA), (2) a sustained released anticancer drug, and (3) immune system (IS) cell loading. This product can be classified as Advanced Therapy Medical Products (ATMPs), which constitute a class of innovative pharmaceuticals based on emerging cellular and molecular biotechnologies for somatic cell therapy [Regulation (EC) No 1394/2007] and patient-specific products.
System Thinking Approach and Its Elements
As extensively described in Odum and Odum (2000), the comprehensive ST approach includes the development of three scales of modeling: 1. Structural graphic model, in which the fundamental structure that determines the system dynamics is diagrammed in terms of stocks, flows, and processes. 2. Analytical model, in which formal relationships are established between the system's components, allowing to define a set of differential equations able to describe the systemic behavior even for situations difficult to observe experimentally. 3. Computational model, which transforms the set of interconnected differential equations into a simulator, studying how the system dynamics is affected by a change in external parameters, driving forces, perturbations, or, in the case of disease systems, the application of specific therapies.
In this work, we present for the first time a structural graphic model for the representation of theranostic modalities of a nano-based biomedical product. The development of this model is the first step toward the other two, which will become possible when clinical data on the administration of magnetite NBMs containing an anticancer drug used as a theranostic agent start to become available. In the following sections, an introduction to the structural ST diagram and its application to the investigated case study is presented, to guide the reader through the final diagram. The structural stock-flow representation of the ST-based approach is set up following this procedure: 1. Identification of a set of stocks. 2. Choice of a proper boundary. 3. Identification of the flows connecting the stocks, also with the external environment. 4. Identification of the processes occurring within the system. Figure 1 shows the main symbols used in stock-flow diagrams based on the energy language (Odum and Odum, 2000), where shields indicate the stocks, line arrows the flows, and solid arrows the processes that are always activated or controlled by a stock inside or outside the system (i.e., arrow coming from outside the system), and the smooth gray rectangle the system boundary. Owing to the second law of thermodynamics, energy is partially lost in any physical process, and this is represented by the flow going down to the earth symbol (heat sink).
The Stocks
Stocks are elements represented by an extensive variable (i.e., material, energy, and information). A stock changes over time only through the action of flows (i.e., inflows and/or outflows), and may therefore act as a delay, buffer, or shock absorber for the system (Sterman, 2000). The stock contents must be countable extensive state variables Q_i, i = 1, 2, ..., n, which constitute an n-tuple of numbers that at any time represents the state of the system. The choice of the set of variables depends on the hierarchical level of the desired description, as well as on the overall purpose of the study.
Stocks must be chosen respecting some requirements: 1. The number of stocks must be as low as possible while still describing the state of the system for the prescribed purposes. 2. It must be possible to describe any relevant macroscopic behavior in terms of stock interactions. 3. Any system change (whether detectable from the outside or not) must correspond to a change in the n-tuple of state variables. 4. Stocks should be measurable, or at least a set of plausible values at a certain time should be conceivable, in order to study their evolution.
A stock may represent either a physically located set of a variable, or a virtual set of elements that play a specific role in the system dynamics, even without having the corresponding location in the real space.
The choice of the stocks relevant for the case at issue is a fundamental step in the stock-flows approach. When clinical data are available, a value at t 0 must be assigned to each stock in order to make a quantitative analysis. The determination of these initial values can be performed by either directly measuring them or by determining a "plausibility interval." In this latter case, a sensitivity analysis is performed to validate the model testing the system response within the selected interval of values.
The Boundary
A proper choice and definition of the systemic boundary is an important task since the boundary defines the objective of the systemic study depending on the main inflows and the system outputs. In ST, the boundary is an abstract element, possibly extended in both space and time, and has the main role of isolating the elements which are necessary to give an exhaustive description of the dynamics of the system at the chosen level of the study and to focus on the relationships between the internal elements (Brown, 2004). The choice of the boundary will reflect also the hierarchical level of the feedbacks that will depend on the time-span of the diagram description.
The Flows
Stock values may change only through inflows and/or outflows, represented by arrows entering or exiting stocks and expressed as dQ/dt; in a stationary state of the system, stock values are constant.
In biological systems, flows can be flows of matter (or energy), which constitute the mechanism by which a stock value may change in time, and flows of information, responsible for the control action exerted by stocks on the processes that in turn control the flows. The network of control flows is a fundamental aspect of ST diagramming since its action is responsible for feedback and causal loop formation at different time scales (Haraldsson, 2004). The pattern of feedbacks is therefore the feature that defines the system dynamics. For example, the unregulated proliferation of tumor cells (TC) in the human body can be represented by a reinforced feedback, as shown in Figure 2, with TC growing and multiplying in an uncontrolled manner (Cooper, 2000). An increase of TC in the stock will then determine an exponential proliferation of TC at time t_1. Flows are described by phenomenological coefficients that represent how much of the contribution from one or more stocks will be effective in their interaction on the process. Therefore, these coefficients represent the dynamics of the system and point out the interconnection network between its operational elements. In fact, it is important to underline that the ST approach is not interested in representing the physical mechanisms of feedback controls, but in drawing the interactions among them. A detailed description of the conceptual basis of the quantitative setting up of stock-flow diagrams may be found, for example, in Odum and Odum (2000), where the counter-intuitive aspects of the approach are examined in many different systems.
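As a minimal illustration of how a reinforcing feedback on a stock yields exponential growth (and of what the computational modeling scale mentioned earlier could look like at its simplest), the following Python sketch integrates the single-stock system dTC/dt = k*TC with Euler steps. The stock name, coefficient value, and time step are illustrative assumptions, not quantities taken from the study.

```python
# Minimal stock-flow sketch: one stock (tumor cells, TC) with a reinforcing
# feedback loop, i.e. the inflow is controlled by the stock itself: dTC/dt = k * TC.
# All parameter values below are illustrative assumptions.

def simulate_reinforcing_loop(tc0=1.0e3, k=0.05, dt=0.1, t_end=100.0):
    """Euler integration of dTC/dt = k * TC (uncontrolled proliferation)."""
    times, stocks = [0.0], [tc0]
    tc, t = tc0, 0.0
    while t < t_end:
        inflow = k * tc          # flow controlled by the stock itself (feedback)
        tc += inflow * dt        # the stock changes only through its flows
        t += dt
        times.append(t)
        stocks.append(tc)
    return times, stocks

if __name__ == "__main__":
    times, stocks = simulate_reinforcing_loop()
    # Exponential growth: the stock roughly follows TC(t) = TC0 * exp(k * t).
    print(f"TC at t=0: {stocks[0]:.0f}, TC at t=100: {stocks[-1]:.0f}")
```

The same skeleton extends naturally to several coupled stocks, which is what the analytical and computational scales of the ST approach would require once clinical data become available.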
The Processes
Processes represent the interactions between the stocks and determine the dynamics of the system. Processes are capable of altering a flow, either quantitatively or qualitatively, by the action of one or more system elements. Since the system state is a collection of stock values, and the only way to change the value of a stock is by acting on its in/outflows, processes are located along the flow lines. In general, the location of a process in the diagram does not correspond to any physical location in real space. Moreover, a process must be activated by another driver, as flows of information or matter control the occurring processes and thus the value and/or nature of the flows.
System Thinking Diagram of Theranostic Approach Combined With Personalized Nanomedicine of Solid Tumors Using Magnetite NBMs
Stocks, flows, and processes were selected based on information collected from the literature on the personalized nanomedicine and theranostics modalities of iron oxide NBMs. The descriptions of stocks and processes selected for the diagram on theranostic approach, combined with personalized nanomedicine of solid tumors using magnetite NBMs, are reported in Table 1. All the stocks are countable variables. Immune system (IS) and the bloodstream (BS) are regarded as systems since their action involves different variables which are not essential for the overall description of the system at issue. The MF and MRI are regarded as sources of energy and represented by a circle.
System Thinking Diagram of Theranostic Approach Combined With Personalized Nanomedicine of Solid Tumors Using Magnetite NBMs
In Figure 3, the final diagram representing the theranostic approach combined with personalized nanomedicine of solid tumors using magnetite NBMs coated with PEG and PLGA is presented. In red flows of mass, in green flows of energy, and yellow flows of information are represented, where dashed lines indicate the controls exerted by the stocks on the processes. Nano-biomaterials (J1) and anticancer drug (J2) are the main inflows of the diagram. A specific quantity of NBMs (J4) and drug (J3) are intravenously administered together with immune cells previously sampled from the patient's blood (J5). In healthy people, the IS plays important roles in controlling the growth of malignant cells while in cancer patients can even facilitate the growth of TC (Le et al., 2019). For this reason, a quantity of immune cells needs to be carefully sampled (J6) to provide an efficient uptake process. The uptake process transforms the injected medicinal product (J16) into one outflow represented by the ATMP bioavailable in the blood (J17), which is controlled and activated by both the IS (J7) and the BS (J8). Inflows of the BS and IS (J13 and J14) are coming from outside of the system. However, if the ATMP has low efficacy, the presence of the ATMP in the blood can activate the proliferation of reactive oxygen species (ROS) (J11) which may cause the activation of the proliferation of TC (Aggarwal et al., 2019). The stock of TC is formed by an inflow (J15) and a feedback loop (J12), which represents the uncontrolled proliferation of TC (as also represented in Figure 2). The accumulation of the ATMP (J18) at the tumor site (J10) is based on drawing it to the tumor site by using an external MF (Revia and Zhang, 2016) (J37) in the BS, that activates the targeting/activation process (J9). Depending on the efficacy of this ATMP, a small quantity of this product may undergo the clearance process by the reticuloendothelial system without reaching the tumor site (J19) (Yu and Zheng, 2015) and not all the ATMP at the tumor site may go through the hyperthermia or bioimaging processes (J23).
During the formation of the stock of ATMP and TC (J20), the release of anticancer drug at the tumor site is represented by the small gray box (drug delivery). Then, under alternating MF (J38), magnetite NBMs on the tumor site (J22) can transform the electromagnetic energy into heat (hyperthermia process) causing localized heating of the TC (J24) and thus triggering the commitment to apoptosis of cancer cells (Goya et al., 2008;Jagtap et al., 2020) and their death (J29). The apoptosis process of TC can be generated not only as a consequence of heating of tumor site but also activated by ROS production (Hou et al., 2014) (J25). Indeed, in the diagram, the commitment to apoptosis of TC is activated by the flow of ROS (J26). However, as some ROS can diffuse freely across cell membranes, they can mediate toxic effects far from the site of ROS production (J28) (Slimen et al., 2014), also activating the proliferation of TC (J27) (Aggarwal et al., 2019).
The formation of the stock of ATMP and TC (J21) also permits imaging of the tumor site and real-time treatment monitoring of therapeutic drug delivery using MRI (J39), thereby adjusting treatment methods (Revia and Zhang, 2016). Indeed, a flow of information is generated from the bioimaging process (J30), which constitutes, together with the medical knowledge of healthcare personnel (J32), the main inflow of the stock of information. All the collected information is then used (i) to activate the hyperthermia process by setting the alternating MF properly (J35), depending on the morphological properties of the tumor tissues, (ii) to activate the following bioimaging process (J36), and (iii) to set the MF during the activation process of ATMP on TC (J34). Moreover, all the knowledge and considerations related to inter- and intra-patient variability are then used to define the quantity of the drug to be injected to increase the efficacy of the following therapy (Comte et al., 2020) (J33). Information collected during theranostic activities will also be used outside the system for further research (J31).
Feedback Loops
In the ST diagram, five reinforced feedbacks were identified. The first is represented by the proliferation of the TC as explained in section "The Flows, " while the other four are related to the personalized nanomedicine concept. As underlined in Figure 4, the bioimaging process permits the generation of a flow of information related to the morphological characteristics of the tumor site. This flow creates a feedback loop of information necessary to tune the MRI operation itself.
The same flow of information coming from bioimaging process is useful also to tune a subsequent administration of the ATMP depending on the morphological characteristics of the tumor site (Figure 5).
Moreover, information collected during the bioimaging is extremely useful also during the targeting of the magnetic NBMs to the tumor site through the application of a specific MF, as underlined in Figure 6.
During the treatment of the TC in the hyperthermia process, high levels of ROS are produced by the increased metabolic activity and mitochondrial dysfunction (Liou and Storz, 2010), which can lead to the proliferation of TC (Figure 7). This feedback loop represents the theranostic activities of this ATMP. Indeed, the correct administration of this product permits the identification of the tumor site as well as treating it minimizing the proliferation of TC.
DISCUSSION
This study represents the first application of the ST theory in personalized nanomedicine. More specifically, the ST approach has been considered to study the interconnection between diagnosis and therapy of solid tumors using a single nano-enabled biomedical product (so called nanotheranostics) (Theek et al., 2014) through the development of a stock-flow diagram.
The investigated nano-product is a not yet commercialized dispersion of Fe 3 O 4 NBMs coated with PEG and PLGA containing an anticancer drug which may be classified as ATMP. In the near future, dynamics related to the application of such innovative medicinal products will need to be carefully investigated in order to define the most suitable and effective procedure for the selection of a patient-specific therapy.
During the past, several methods have been developed to quantify biological networks, for example, flux balance analysis (FBA) (Lee et al., 2006), metabolic flux analysis (Lagziel et al., 2019), and quantitative systems pharmacology (QSP) (Wang et al., 2020;Balti et al., 2021;Chelliah et al., 2021). However, the resulting dynamic systems are not sufficiently comprehensive for generating a large-scale model.
For this reason, in the current study, an ST top-down approach has been followed to represent the self-organized system, through which the global dynamics of the systemic patterns may be obtained using an analytical representation of the stocks, flows, and processes at different systemic time scales. For the development of the presented diagram, no specific tools were used for the identification of flows and stocks. Indeed, the ST approach permits the development of several diagrams representing the same complex system at different levels of hierarchy.
The structure of the presented ST diagram forces the system toward a limited set of possible configurations at the selected level of complexity, from which important feedback loops emerge, as (i) how the personalized nanomedicine can help in the diagnosis and treatment of tumor sites, (ii) what could affect the proliferation of TC, and (iii) how the obtained information can help in the choice of subsequent treatments and/or diagnosis.
The strength of the presented ST diagram is its ability to clearly communicate the network of feedbacks. Indeed, the interconnections between stocks and flows may therefore shed some new light on how to manage the complexity of a disease, since a correct identification of the accessible dynamical patterns may allow finding the proper leverage points for intervening (Meadows, 2008), especially in the oncologic context. The whole complexity of anticancer nanomedicine was also suggested by Sun et al. (2020) where authors underline the need to carefully evaluate the efficacy of nanoenabled anticancer drugs considering the tumor heterogeneity from a systemic point of view as otherwise, benefits could not outweigh adverse effects. Moreover, the ST diagram demonstrates the differences between personalized medicine and traditional medicine by the stock and flow of information. Indeed, the reinforced feedback of flows of information from the diagnosis to the therapy is the representation of the novelty of the personalized nanomedicine, which offers the opportunity to enhance the efficacy of a drug using patient-specific knowledge.
Missing clinical data in the literature on the coefficients needed for the quantification of the selected stocks, flows, and processes did not yet permit simulation of the dynamic behavior of the system, as analytical and computational models require the use of specific inventories of clinical data (Romano et al., 2021). However, the presented ST diagram allows us to investigate system configurations in response to external driving forces, with the aim of understanding and designing even safer personalized nanomedicine.
CONCLUSION
This paper introduced the ST approach in nanomedicine and presented one of its applications to a real case study under preclinical stage: a dispersion of Fe 3 O 4 NBMs coated with PEG and PLGA containing a generic anticancer drug. The proposed diagram successfully investigated the complexity of the administration of this nanotheranostic agent by adopting a personalized medicine approach.
The development of a stock-flow diagram allowed us to identify interconnections between the diagnosis and therapy of solid tumors after the administration of a nano-based contrast agent. The identification of these interconnections has revealed important leverage points and reinforcing feedbacks, which permit the representation of the complexity of the investigated system.
The application of the ST theory in nanomedicine using a case study that is still under preclinical stage offers the opportunity to focus on the novelty of the approach rather than the case study itself, focusing on the potential of a complementary approach to set up theranostic procedures.
Besides the use of stock-flow diagrams to point out the tight relationship between the diagnostic and therapeutic aspects after the administration of a theranostic agent, the presented approach offers some particular development prospects. In particular, physiological responses and their feedbacks during theranostic procedures are expected to depend on the relative time scales of the processes in the system. Indeed, the optimization of diagnostic and therapeutic actions may require trade-offs in the operational planning of the administration of nano-based theranostic agents.
In this sense, the possibility of developing a quantitative simulator based on the stock-flow diagram makes the approach suitable for personalization to specific agents and specific patients once clinical data on the administration of nanotheranostic agents become available.
Finally, it is also worth mentioning that the presented approach can support a suitable standardization of the description and communication of theranostic activities, potentially useful both in their development and in their dissemination.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
VC: conceptualization and writing -original draft preparation. FG: methodology, writing -review, and editing. AR: investigation and supervision. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by the Ca' Foscari University of Venice and Università degli Studi di Catania. | 5,943 | 2021-07-22T00:00:00.000 | [
"Medicine",
"Engineering"
] |
A Novel Antimicrobial–Phytochemical Conjugate With Antimicrobial Activity Against Streptococcus uberis, Enterococcus faecium, and Enterococcus faecalis
Antimicrobial resistance is one of the major threats to human and animal health. An effective strategy to reduce and/or delay antimicrobial resistance is to use combination therapies. Research in our laboratory has been focused on combination therapies of antimicrobials and phytochemicals and development of antimicrobial–phytochemical conjugates. In this study, we report the synthesis and antimicrobial activity of a novel sulfamethoxazole–gallic acid conjugate compound (Hybrid 1). Hybrid 1 not only showed much stronger activity than sulfamethoxazole towards Streptococcus uberis 19436, Enterococcus faecium 700221, and Enterococcus faecalis 29212, which were purchased from American Type Culture Collection (ATCC), but also exhibited a promising antimicrobial effect against two E. faecalis clinical isolates, one of which was multidrug-resistant. Further studies are warranted to establish the in vivo antimicrobial activity for Hybrid 1 and develop more potent sulfamethoxazole–gallic acid-based antimicrobial conjugates using hybrid 1 as a lead compound.
INTRODUCTION
Antimicrobial resistance, a key cause of morbidity and mortality, has been emerging as one of the main threats to human and animal health (Laxminarayan et al., 2013; Laxminarayan et al., 2016). Overconsumption and misuse of antimicrobials are the primary drivers of antimicrobial resistance (Steinke and Davey, 2001; Goossens et al., 2005; Malhotra-Kumar et al., 2007). Recently, Klein et al. reported the antibiotic consumption in 76 countries between 2010 and 2015 and projected a future increase in global antibiotic consumption (Klein et al., 2018). Furthermore, overuse and/or improper use of antimicrobials has caused a significant increase of multidrug-resistant microbes, which have become an urgent issue facing the medical sciences and might even impose a potential pandemic catastrophe (The Lancet, 2014; World Health Organization, 2018). Therefore, reducing usage and dosage is essential in preventing the development of antimicrobial resistance.
Streptococcus spp., which are usually divided into α-hemolytic streptococci and β-hemolytic streptococci, are a genus of Gram-positive cocci. They are among the most frequent causes of infections in humans and animals (Patterson, 1996). Enterococcus spp., formerly known as group D streptococci, were classified as a different genus in 1984 (Schleifer and Kilpper-Bälz, 1984). They can be found everywhere, including the intestines and feces of birds, animals, and humans, and are important pathogens. For example, Enterococcus faecalis is one of the major bacteria in hospital-acquired infections (HAIs), bovine mastitis, and enterococcosis in poultry. Both streptococci (Haenni et al., 2018) and enterococci (Miller et al., 2014) develop antimicrobial resistance rapidly, which, in turn, not only imposes a serious risk to human and animal health but also causes huge economic losses. Although antimicrobial combination therapies are normally used to reduce antimicrobial resistance, many bacteria have developed resistance to such therapies (i.e., multidrug resistance) (Nikaido, 2009; Laxminarayan et al., 2013).
Our previous studies have shown that some phytochemicals possess antimicrobial activities and exhibit synergistic/additive effects with antimicrobials (Jayaraman et al., 2010; Jayaraman et al., 2011; Rajamanickam et al., 2019a; Rajamanickam et al., 2019b). Therefore, co-administration of phytochemicals can significantly reduce the dosage and usage of antimicrobials. Phytochemicals are naturally occurring secondary metabolites in plants. They are relatively safe to use and do not leave toxic residues. However, the biodistribution and metabolism profiles of phytochemicals may not coincide with those of antimicrobials, which calls into question whether antimicrobial-phytochemical combination therapies really achieve optimal therapeutic efficacy. To overcome this obstacle, we adopted an in silico approach to design novel antimicrobial-phytochemical conjugate compounds (i.e., conjugating the active parts of antimicrobials and phytochemicals based on computer-aided molecular simulations) (Jayaraman et al., 2013). In the current study, we report the synthesis of a sulfamethoxazole-gallic acid conjugate (Hybrid 1) and the evaluation of its antimicrobial activity against three bacterial strains purchased from the American Type Culture Collection (ATCC) and two E. faecalis clinical isolates (designated as isolates 1 and 2) from a Saskatchewan poultry farm. The three ATCC strains are Streptococcus uberis 19436, Enterococcus faecium 700221, and E. faecalis 29212, and the E. faecalis clinical isolate 2 was a multidrug-resistant strain.
METHODS
Bacterial Strains, Culture Media, and Chemicals
S. uberis 19436, E. faecium 700221, and E. faecalis 29212 were purchased from ATCC (Manassas, VA, USA). E. faecalis clinical isolate 1 and E. faecalis clinical isolate 2 (multidrug-resistant) were collected from a Saskatchewan poultry farm. Cell culture media for these bacterial strains were purchased from Cedarlane Canada (Burlington, ON, Canada). Gallic acid was purchased from ThermoFisher Scientific (Ottawa, ON, Canada). Sulfamethoxazole and other chemicals were purchased from Sigma-Aldrich Canada (Oakville, ON, Canada).
Synthesis of Compound Hybrid 1
Synthesis procedure of compound Hybrid 1 is illustrated in Figure 1. Compound 2 was synthesized by adding thionyl chloride (4.19 g, 35.29 mmol) into a suspension solution of gallic acid (compound 1, 5 g, 29.41 mmol dissolved in 50 ml methanol) at 0°C. The reaction mixture was under stirring at room temperature for 5 h. Upon completion of the reaction, compound 2 was obtained as a white solid by vacuum evaporation and drying. Compound 3 was synthesized by adding triethylamine (6.6 g, 65.22 mmol) and acetic anhydride (6.5 g, 63.67 mmol) into a suspension solution of compound 2 (2 g, 21.72 mmol dissolved in 30 ml dichloromethane) at 0°C. The reaction mixture was under stirring at room temperature for 2 h. Upon completion of the reaction (confirmed by thin-layer chromatography), the reaction mixture was diluted with water and extracted by dichloromethane thrice. The collected organic phase was dried over anhydrous Na 2 SO 4 , concentrated, and purified by column chromatography (12% ethyl acetate in hexane) to obtain compound 3 as a white solid (5.0 g, 16.12 mmol, 74% yield). The structure of compound 3 was confirmed by 1 H NMR (500 MHz, CDCl3): δ 7.80 (s, 2H), 3.90 (s, 3H), 2.30 (s, 3H), 2.29 (s, 3H), and LC-MS (electrospray ionization, ESI): [M+H] + , 332.96. Compound 6 was synthesized by adding dibromoethane (compound 5, 741 mg, 3.94 mmol) and potassium carbonate (682 mg, 4.94 mmol) into a solution of sulfamethoxazole (compound 5, 500 mg, 1.97 mmol) in 12 ml dimethylformamide at 0°C. The reaction mixture was under stirring at room temperature for 1 h. Upon completion of the reaction (confirmed by thin-layer chromatography), the reaction mixture was extracted with ethyl acetate twice. The collected organic phase was dried over anhydrous Na 2 SO 4 , concentrated, and purified by column chromatography (9% ethyl acetate in hexane) to obtain compound 6 as a colorless oil (450 mg, 1.25 mmol, 63% yield). LC-MS (ESI) identified [M+H] + of 360.01. Compound 7 was synthesized by slowly adding potassium carbonate (230 mg, 1.66 mmol) into a solution of compounds 6 (300 mg, 0.83 mmol) and 3 (284 mg, 0.91 mmol) in 10 ml dimethylformamide at 0°C. The reaction mixture was under stirring at room temperature for 1 h and subsequently at 70°C for 4 h. Upon completion of the reaction, the reaction mixture was extracted with ethyl acetate twice. The collected organic phase was dried over anhydrous Na 2 SO 4 , concentrated, and purified by column chromatography (60% ethyl acetate in hexane) to obtain compound 7 as a colorless oil (100 mg, 0.21 mmol, 26% yield). LC-MS (ESI) identified [M+H] + of 464.04. Finally, compound Hybrid 1 was synthesized by slowly adding 1.0 M NaOH solution into a solution of compound 7 (90 mg, 0.19 mmol) in 10 ml NaOH (1.0 M) under stirring at 0°C. The reaction mixture was continuously under stirring at room temperature for 0.5 h with reaction progress monitored by thin-layer chromatography. Upon completion of the reaction, the reaction mixture was extracted with ethyl acetate twice. The collected aqueous phase was neutralized with sodium hydrogen sulfate and then extracted with ethyl acetate twice. The organic phase was dried over anhydrous Na 2 SO 4 , concentrated, and purified by column chromatography (80% ethyl acetate in hexane) to obtain compound Hybrid 1 as a white solid (70 mg, 0.15 mmol, 82% yield, purity of 98.4% based on LC-MS).
Determination of MIC
In this study, the minimum inhibitory concentration (MIC) refers to the lowest concentration of an antimicrobial agent (Hybrid 1 or sulfamethoxazole) that inhibits the visible growth of a microorganism (equivalent to ~85% growth inhibition based on OD655 measurement using a Bio-Rad iMark Microplate Reader) after 18-24 h of incubation. The protocol used to determine the MIC of Hybrid 1 towards the bacterial strains has been published previously (Rajamanickam et al., 2019a). The concentration of Hybrid 1 ranged between 9.38 and 1,200 µg/ml towards S. uberis 19436, E. faecium 700221, and E. faecalis 29212, using sulfamethoxazole as a control (concentration range, 9.38-1,200 µg/ml), whereas the concentration of Hybrid 1 ranged between 15.62 and 2,000 µg/ml towards the two E. faecalis clinical isolates. The treatment time was 18-24 h.
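To make the MIC read-out concrete, the following Python sketch applies the cut-off described above (the lowest concentration giving roughly ≥85% growth inhibition relative to the untreated control, based on OD655) to a dilution series. The helper function name and all data values are hypothetical, not the measured data from this study.

```python
# Hypothetical sketch of the MIC read-out: the MIC is taken as the lowest
# concentration whose growth inhibition (relative to the untreated control,
# based on OD655) reaches the ~85% threshold described in the text.

def mic_from_od(concentrations_ug_ml, od655, od655_untreated, threshold=0.85):
    """Return the lowest concentration with inhibition >= threshold, or None."""
    candidates = []
    for conc, od in zip(concentrations_ug_ml, od655):
        inhibition = 1.0 - od / od655_untreated
        if inhibition >= threshold:
            candidates.append(conc)
    return min(candidates) if candidates else None

if __name__ == "__main__":
    # Illustrative two-fold dilution series (µg/ml) and made-up OD655 readings.
    concs = [9.38, 18.75, 37.5, 75, 150, 300, 600, 1200]
    ods = [0.62, 0.60, 0.55, 0.48, 0.35, 0.21, 0.12, 0.07]
    print("MIC (µg/ml):", mic_from_od(concs, ods, od655_untreated=0.65))
```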
Statistical Analysis
All experiments were performed in triplicate, and statistical analyses were performed using GraphPad Prism 5.0 statistical software (GraphPad Software, La Jolla, CA, USA). The experimental data were analyzed by one-way ANOVA with post hoc Tukey's multiple comparison test, with significance set at p ≤ 0.05 (*p ≤ 0.05, **p ≤ 0.01). The correlation coefficient (r) was calculated using Pearson's correlation method.
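For readers without access to GraphPad Prism, a roughly equivalent analysis (one-way ANOVA with Tukey's post hoc test and a Pearson correlation) can be sketched with open-source Python tools as below. The group values and the concentration/inhibition series are placeholders, and scipy and statsmodels are assumed to be installed; this is only an illustrative alternative, not the workflow used in the study.

```python
# Sketch of the statistical workflow described above using open-source tools
# (an alternative to GraphPad Prism). Data values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Triplicate OD-based growth values for three hypothetical treatment groups.
control = np.array([0.65, 0.63, 0.66])
sulfa   = np.array([0.52, 0.55, 0.50])
hybrid1 = np.array([0.10, 0.12, 0.09])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, sulfa, hybrid1)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's post hoc multiple comparison test (alpha = 0.05).
values = np.concatenate([control, sulfa, hybrid1])
groups = ["control"] * 3 + ["sulfa"] * 3 + ["hybrid1"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Pearson correlation coefficient (r) between concentration and inhibition.
conc = np.array([9.38, 18.75, 37.5, 75, 150, 300])
inhibition = np.array([0.05, 0.12, 0.25, 0.41, 0.63, 0.85])
r, p_corr = stats.pearsonr(conc, inhibition)
print(f"Pearson r = {r:.3f}, p = {p_corr:.4g}")
```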
RESULTS AND DISCUSSION
The synthesis procedure of the novel sulfamethoxazole-gallic acid conjugate Hybrid 1 is shown in Figure 1. The chemical structure of Hybrid 1 is presented in Supplementary Figure S1. The antimicrobial activity of compound Hybrid 1, with sulfamethoxazole as a control, was first evaluated towards the three ATCC strains (S. uberis 19436, E. faecium 700221, and E. faecalis 29212) using a protocol developed in our laboratory (Rajamanickam et al., 2019a). S. uberis is the most common bacterial mastitis-causing pathogen in lactating cows worldwide (Leigh, 1999; Valentiny et al., 2015), although Staphylococcus aureus is probably the most common pathogenic bacterium in Canadian dairy farms (Olde Riekerink et al., 2008). As shown in Figure 2A, sulfamethoxazole did not inhibit the growth of S. uberis 19436, which is consistent with previous reports that S. uberis isolates from dairy cows with mastitis are highly resistant to sulfamethoxazole (Phuektes et al., 2001; McDougall et al., 2014). However, S. uberis 19436 responded to Hybrid 1, with the MIC of Hybrid 1 measured at 1,200 µg/ml. E. faecium and E. faecalis are the most common Enterococcus spp. in not only hospital-acquired infections but also enterococcal infections on dairy and poultry farms. For example, Tyson et al. recently assessed the prevalence and antimicrobial resistance of enterococci isolated from retail meats in the United States between 2002 and 2014 and found that >90% of meats are contaminated with enterococci (Tyson et al., 2017).
FIGURE 1 | Synthesis procedure of the novel sulfamethoxazole-gallic acid conjugate Hybrid 1.
Sulfamethoxazole gave a non-concentration-dependent
inhibition of 25-40% on the growth of E. faecium 700221 and of 10-20% on the growth of E. faecalis 29212, respectively (Figures 2B, C). However, Hybrid 1 exhibited a much stronger antimicrobial activity than sulfamethoxazole, with an MIC of 1,200 µg/ml for both Enterococcus strains. Furthermore, we examined the antimicrobial activity of Hybrid 1 towards two E. faecalis clinical isolates collected from a Saskatchewan poultry farm (Figure 3). Hybrid 1 gave a ~60% inhibition of the growth of E. faecalis clinical isolate 1 and a 70% inhibition of the growth of the multidrug-resistant E. faecalis clinical isolate 2. These promising in vitro results indicate that Hybrid 1 can serve as a good lead compound for developing new conjugates (i.e., chemical analogues of Hybrid 1) that possess more potent antimicrobial activities, although the high MICs might limit its potential clinical usage. Further studies, such as a mouse air pouch model, a calf infection model, and pharmacokinetic evaluation, are warranted to establish the in vivo antimicrobial activity and structure-activity relationship (SAR) for Hybrid 1 and its chemical analogues.
CONCLUSION
FIGURE 2 | Antimicrobial activity of sulfamethoxazole and Hybrid 1 towards Streptococcus uberis 19436 (A), Enterococcus faecium 700221 (B), and Enterococcus faecalis 29212 (C) using a protocol developed in our laboratory (Rajamanickam et al., 2019a). The concentrations of both sulfamethoxazole and Hybrid 1 range from 9.38 to 1,200 µg/ml. The experiment was carried out in triplicate and the data were analyzed with one-way ANOVA with post hoc Tukey's multiple comparison test using GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA). The significance was set at p ≤ 0.05 (*p ≤ 0.05, **p ≤ 0.01).
In this study, we designed and synthesized a novel antibiotic-phytochemical conjugate compound, Hybrid 1, and showed that
it has potent antimicrobial activity towards not only three ATCC strains (S. uberis 19436, E. faecium 700221, and E. faecalis 29212) but also two E. faecalis clinical isolates. The current study suggests that co-administration of antimicrobials and phytochemicals and the development of antimicrobial-phytochemical conjugates may be a valid and promising strategy for tackling antimicrobial resistance in bacteria.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding authors.
AUTHOR CONTRIBUTIONS
This work was designed by JY and MS and carried out by KR. The manuscript was written by JY and MS and approved for publication by all authors. | 2,977.8 | 2019-11-28T00:00:00.000 | [
"Medicine",
"Chemistry",
"Environmental Science"
] |
Curcumin Promotes KLF5 Proteasome Degradation through Downregulating YAP/TAZ in Bladder Cancer Cells
KLF5 (Krüppel-like factor 5) plays critical roles in normal and cancer cell proliferation through modulating cell cycle progression. In this study, we demonstrated that curcumin targeted KLF5 by promoting its proteasome degradation, but not by inhibiting its transcription in bladder cancer cells. We also demonstrated that lentivirus-based knockdown of KLF5 inhibited cancer cell growth, while over-expression of a Flag-tagged KLF5 could partially reverse the effects of curcumin on cell growth and cyclin D1 expression. Furthermore, we found that curcumin could down-regulate the expression of Hippo pathway effectors, YAP and TAZ, which have been reported to protect KLF5 protein from degradation. Indeed, knockdown of YAP by small interfering RNA caused the attenuation of KLF5 protein, but not KLF5 mRNA, which was reversed by co-incubation with proteasome inhibitor. A xenograft assay in nude mice finally proved the potent inhibitory effects of curcumin on tumor growth and the pro-proliferative YAP/TAZ/KLF5/cyclin D1 axis. Thus, our data indicates that curcumin promotes KLF5 proteasome-dependent degradation through targeting YAP/TAZ in bladder cancer cells and also suggests the therapeutic potential of curcumin in the treatment of bladder cancer.
Introduction
In 2014, 74,690 new bladder cancer cases and 15,580 cancer deaths are estimated to occur in the United States. Among men, it is the fourth most common cancer and is the eighth leading cause of cancer death [1]. Generally, non-muscle invasive superficial bladder cancers are treated by transurethral resection following intravesical chemotherapy or immunotherapy, but the recurrence rate is still high, and some cases progress to a higher grade [2]. Thus, efforts to uncover the molecular mechanisms of bladder cancer progression and to develop novel agents to target relevant molecules or pathways can help patients to achieve better therapeutic efficacy.
KLF5 (Krüppel-like factor 5), a member of the Krüppel-like factor (KLF) family, has been shown to play important roles in the development of several types of human cancers by modulating the transcription of its target genes [3,4]. Deletion of KLF5 from the developing bladder urothelium blocked epithelial cell differentiation and impaired bladder morphogenesis and function in mice [5]. Moreover, exogenous KLF5 expression increased cell cycle transition and up-regulated cyclin D1 in TSU-Pr1 human bladder cancer cells [6]. These findings suggest a pro-oncogenic role of KLF5 in bladder cancer. On the other hand, post-transcriptional modifications, especially ubiquitination of KLF5 protein, can greatly affect its functional display. Several E3 ubiquitin ligases, including WWP1, FBW7 and SMURF2, promote ubiquitination and degradation of KLF5 [7][8][9]. Additionally, YAP and TAZ, two effectors of the Hippo tumor suppressor pathway, can inhibit WWP1-KLF5 protein interaction and stabilize KLF5 [10,11]. Therefore, as an important growth-promoting gene, KLF5 could be a candidate target for bladder cancer treatment, and modulating its degradation will be an efficient approach to inhibit KLF5.
Curcumin, a hydrophobic polyphenol derived from turmeric (Curcuma longa), is one of the most studied plant-derived natural products. In various human cancers, curcumin has shown anti-proliferation, apoptosis induction, chemoprevention/sensitivity, anti-angiogenesis and anti-invasion/metastasis properties [12][13][14]. In bladder cancer, curcumin also has shown a promising anticancer activity [15,16]. Although inhibition of NF-κB-dependent genes is one of the most predominant effects of curcumin, its potential targets have been expanded to a wide range of other pathways and molecules [12]. KLF5 has been shown to play critical roles in proliferation and tumorigenesis in several cancer types, including bladder cancer [3,6]. Therefore, we proposed a hypothesis that KLF5 could be an important target of curcumin in bladder cancer cells.
In the present study, using in vitro and in vivo assays, we determined whether KLF5 was a target of curcumin and whether KLF5 played a role in the anti-proliferative function of curcumin. Mechanistically, we further investigated the effects of curcumin on the expression of KLF5-related E3 ubiquitin ligases and YAP/TAZ. We also examined whether KLF5 expression was affected by YAP knockdown. Moreover, we determined whether curcumin inhibited the growth of bladder cancer in a xenograft mouse model.
Curcumin Down-Regulated KLF5 Protein Expression in a Dose- and Time-Dependent Manner in 5637 and WH Bladder Cancer Cells
Curcumin inhibited the cell viability of 5637 and WH human bladder cancer cells in a dose-dependent manner after 48 h of treatment, as determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay (Figure 1A). Through western blot analysis, we also found that KLF5 protein expression decreased with increasing curcumin concentration (0-30 μM) or prolonged treatment (0-24 h) in both cell lines (Figure 1B). To further determine whether transcriptional inhibition of KLF5 was involved, we performed a real-time qPCR assay to analyze KLF5 mRNA expression and found that, with curcumin treatment, the mRNA level of KLF5 did not decrease significantly, which was not consistent with the decrease at the protein level (Figure 1C). These results indicated that curcumin could decrease KLF5 protein expression via post-transcriptional regulation. Results were presented as mean ± SD from three independent experiments.
Curcumin Promoted Proteasome-Dependent Degradation of KLF5 Protein
We further investigated whether the protein stability of KLF5 was decreased by curcumin. Indeed, pretreating 5637 cells with the proteasome inhibitor MG132 abolished the down-regulation of KLF5 protein after curcumin treatment (Figure 2A), which suggested that curcumin promotes proteasome-dependent degradation of KLF5. Next, we used a cycloheximide (CHX) chase assay to examine whether the half-life of KLF5 protein was affected by curcumin treatment. Unlike in the DMSO control group, curcumin pretreatment accelerated KLF5 protein degradation in the presence of CHX (Figure 2B). After being normalized to GAPDH, the results were plotted as the relative KLF5 levels compared with those at the zero time of CHX treatment (Figure 2C). The half-life value of KLF5 was calculated by nonlinear regression analysis using GraphPad Prism software (GraphPad, San Diego, CA, USA). The putative half-life of KLF5 decreased from 1.121 h (95% confidence interval (CI), 0.942 to 1.384) to 0.585 h (95% CI, 0.521 to 0.667) (Figure 2D). Therefore, our data demonstrated that curcumin could promote proteasome-dependent degradation of KLF5 protein.
Figure 2. (A) 5637 and WH cells were pretreated with 10 μM MG132 for one hour before the treatment with curcumin. Twelve hours later, the whole-cell lysates were harvested, and KLF5 protein was analyzed by western blot; (B) KLF5 protein attenuation in the cycloheximide (CHX) chase assay was examined after DMSO or curcumin treatment in 5637 cells; (C) KLF5 protein levels were quantified using Image Lab software and were normalized to GAPDH; and (D) KLF5 half-life values were calculated by nonlinear regression analysis with GraphPad Prism software. The results represent three independent experiments with similar results. Error bars show the 95% confidence intervals. * p < 0.05.
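The half-life estimate quoted above comes from a nonlinear regression in GraphPad Prism; an equivalent one-phase exponential decay fit can be sketched in Python with scipy, as below. The time points and normalized band intensities are placeholder values chosen only for illustration, not the measured data from this study.

```python
# Sketch of a one-phase exponential decay fit to CHX-chase data to estimate a
# protein half-life (the study used GraphPad Prism; scipy performs the same kind
# of nonlinear regression). Band intensities below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, k):
    """Relative KLF5 level normalized to 1.0 at t = 0: y = exp(-k * t)."""
    return np.exp(-k * t)

# Chase time points (hours) and GAPDH-normalized KLF5 levels (relative to t = 0).
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
y_dmso = np.array([1.00, 0.75, 0.55, 0.30, 0.09])
y_curc = np.array([1.00, 0.55, 0.30, 0.09, 0.01])

for label, y in [("DMSO", y_dmso), ("curcumin", y_curc)]:
    (k,), _ = curve_fit(one_phase_decay, t, y, p0=[0.5])
    half_life = np.log(2) / k
    print(f"{label}: k = {k:.3f} /h, half-life = {half_life:.2f} h")
```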
KLF5 Mediated the Anti-Proliferative Effect of Curcumin
To further clarify whether accelerated degradation of KLF5 might play roles in the anti-proliferative effect of curcumin, we modified the expression level of KLF5 and determined the changes of its well-known target gene cyclin D1 expression and cell growth. Firstly, our results showed that both curcumin treatment and lentivirus-shKLF5 inhibited the expression of cyclin D1 protein ( Figure 3A,B). Moreover, stable established KLF5 knockdown 5637/KOKLF5 cells consistently displayed a reduced growth rate compared with scramble control 5637/KOSC cells ( Figure 3C). Because the KLF5 with Flag tagged at the N-terminal is more stable than KLF5 alone [17], we transfected 5637 cells with a Flag-tagged KLF5-expressing plasmid and its control vector. KLF5 and cyclin D1 expression in the Flag-KLF5 group were not higher than the control group in the normal condition. However, after curcumin treatment, which rapidly decreased endogenous KLF5 protein, the appearance of Flag-KLF5 induced a higher level of cyclin D1 ( Figure 3D). Consistently, Flag-KLF5-transfected cells also showed higher cell viability compared with the control cells after treatment with 15 μΜ curcumin for 24 h ( Figure 3E). These results indicated that a more stable form of Flag-KLF5 protein could partially reverse curcumin-induced growth arrest and cyclin D1 suppression. Therefore, downregulation of KLF5 may be essential for curcumin to inhibit bladder cancer cell growth.
Curcumin Down-Regulated YAP and TAZ Expression
Mechanistically, we assumed that curcumin might regulate the expression of E3 ubiquitin ligases and thereby mediate KLF5 ubiquitination and degradation, but real-time qPCR assays showed that there were no significant changes in several well-identified KLF5-targeting E3 ubiquitin ligases, including WW domain containing E3 ubiquitin protein ligase 1 (WWP1), F-box and WD repeat domain containing 7 (FBW7), and SMAD specific E3 ubiquitin protein ligase 2 (SMURF2) (Supplementary Figure S1). Interestingly, both mRNA and protein levels of two Hippo pathway effectors, YAP and TAZ, which had been reported to competitively antagonize WWP1's binding with KLF5, were significantly down-regulated by curcumin in a dose-dependent manner (Figure 4A,B). Furthermore, the downstream targets of YAP and TAZ, including integrin beta 2 (ITGB2), AXL receptor tyrosine kinase (AXL), cyclin-dependent kinase 6 (CDK6), and cysteine-rich angiogenic inducer 61 (CYR61), were also down-regulated after curcumin treatment (Figure 4C). These results revealed that curcumin decreased YAP and TAZ gene expression, which could lead to the decrease of KLF5 protein stability. Results were presented as the mean ± SD from three independent experiments. * p < 0.05.
YAP Played Critical Roles in KLF5 Protein Stability in 5637 Cells
The important roles of YAP and TAZ in maintaining KLF5 protein stability have been shown in other systems, but whether the same mechanism exists in bladder cancer remains to be determined. In 5637 cells, we used small interfering RNA (siRNA) to knockdown YAP and then to study its effects on KLF5 and cyclin D1 expression, as well as cell proliferation. After transfection, two differently-designed YAP-specific siRNAs effectively impaired YAP mRNA compared with the negative control group ( Figure 5A). The expression of TAZ and KLF5 mRNAs was not affected, which not only showed the specificity of the designed siRNAs, but also meant that transcriptional suppression was not involved in YAP-induced KLF5 protein attenuation ( Figure 5C, DMSO group). However, treatment with proteasome inhibitor MG132 rescued the KLF5 protein instability caused by YAP down-regulation ( Figure 5C, MG132 group). Consistent with KLF5 expression, cyclin D1 expression at both the mRNA and protein level was also affected by YAP knockdown (Figure 5A,C). The proliferation rates of 5637 bladder cancer cells were decreased after transfection, as determined by the MTT assay ( Figure 5B). These data demonstrated the important role of YAP in maintaining KLF5 protein stability and also suggested that the pro-proliferative YAP/TAZ/KLF5/cyclin D1 axis might be an attractive target in bladder cancer.
Curcumin Inhibited Subcutaneous Tumor Growth and the YAP/TAZ/KLF5/Cyclin D1 Axis in Vivo
Next, we sought to confirm the inhibitory effects of curcumin on tumor growth and the YAP/KLF5/cyclin D1 axis in a nude mouse xenograft model. 5637 bladder cancer cells (1 × 10^6) were transplanted into the right flank of nude mice. Seven days later, vehicle or curcumin was injected intraperitoneally for the next three weeks. Compared with the vehicle group, curcumin inhibited tumor growth at the end of the assay (Figure 6A). Both the weight and volume of tumors in the curcumin-treated group were significantly reduced (Figure 6B,C). Consistently, proliferating cell nuclear antigen (PCNA) staining in xenograft tissues was also markedly reduced after curcumin treatment (Figure 6D). Importantly, curcumin treatment also potently suppressed YAP/TAZ, KLF5 and cyclin D1 expression, in agreement with our in vitro results. These results further support that curcumin suppresses tumor growth and the YAP/TAZ/KLF5/cyclin D1 axis in bladder cancer in a xenograft nude mouse model.
Discussion
Although KLF5 inhibits cell growth or promotes cell apoptosis in prostate cancer and esophageal squamous cell cancer [18,19], it has been reported that KLF5 plays an oncogenic role in other cancers. For example, KLF5 has been identified as a therapeutic target in colorectal cancer, and a small molecule, compound CID 5951923, which specifically inhibits KLF5 expression, has been developed through high throughput screening [20]. Exogenous KLF5 promotes TSU-pr1 bladder cancer cell growth in vitro and in vivo, which is consistent with our results that knockdown of KLF5 leads to the down-regulation of cyclin D1 and a decrease of the cell proliferation of 5637 bladder cancer cells (Figure 3). These results suggest that KLF5 could also be a target in the treatment of bladder cancer.
Like other crucial transcription factors, such as p53 and c-MYC, KLF5 is rapidly turned over; its protein half-life is about 1.5 h, as determined by pulse-chase assays [3,4]. In the present study, we found that the level of KLF5 protein, but not of KLF5 mRNA, was significantly reduced by curcumin, indicating an underlying post-transcriptional regulatory mechanism. In 5637 and WH cells pretreated with the proteasome inhibitor MG132, the down-regulation of KLF5 protein by curcumin was blocked; moreover, the cycloheximide chase assay showed that the half-life of KLF5 protein in 5637 cells was significantly shortened after curcumin treatment (Figure 2). These data demonstrate that KLF5 is a new candidate target of curcumin and that curcumin may activate the proteasome pathway to impair KLF5 protein stability. Since KLF5 has been identified as an oncogenic molecule in intestinal cancer and estrogen receptor (ER)-negative breast cancers [4], curcumin or its derivatives may also be applied to these cancers to manipulate KLF5 levels.
In this study, we found that curcumin decreased the proliferation of both 5637 and WH bladder cancer cells and that KLF5 was down-regulated by curcumin (Figure 1). Furthermore, by transfecting KLF5 shRNA or a stabilized KLF5 fused with a Flag tag at its N-terminus [17], we found that KLF5 could antagonize the inhibitory effect of curcumin on cell growth (Figure 3D,E). This demonstrates that down-regulation of KLF5 might be essential for curcumin to inhibit bladder cancer cell growth.
Cyclin D1, a key regulator of cell cycle progression, is frequently amplified and over-expressed in cancers [21]. Our results showed that curcumin inhibited the expression of cyclin D1 in both 5637 and WH cell lines, which is consistent with previous reports [22]. Cyclin D1 is a typical downstream target of KLF5 [6,23]. Curcumin treatment or knockdown of KLF5 by lentivirus-shKLF5 down-regulated cyclin D1 expression and decreased cancer cell growth. Moreover, over-expression of Flag-tagged KLF5 partially reversed curcumin-induced cyclin D1 down-regulation and growth inhibition in 5637 cells. These results indicated a potential role of KLF5 in curcumin-induced suppression of bladder cancer proliferation.
The decrease in the levels of numerous proteins (e.g., Sp1, ErbB2 and cyclin E1) after curcumin treatment has been reported [16,24,25], but the underlying mechanism has not been well elucidated. Previous studies showed that curcumin directly inhibits proteasome activity in vitro and in vivo [26,27], which results in the accumulation of certain ubiquitinated proteins, such as IκB-α and Bax. In this study, we sought to uncover the mechanism behind the accelerated KLF5 protein attenuation caused by curcumin. The mRNA levels of WWP1, FBW7 and SMURF2, which have been reported to interact with KLF5 and mediate its ubiquitination, were not significantly up-regulated by curcumin in 5637 and WH bladder cancer cells (Supplementary Figure S1). On the other hand, because KLF5 protein stability can also be increased by the Hippo pathway effectors YAP/TAZ [10,11], we investigated whether curcumin regulates KLF5 through YAP/TAZ modulation. Both real-time qPCR and western blot analyses indicated that YAP/TAZ were down-regulated by curcumin treatment (Figure 4A,B). Furthermore, several downstream targets of YAP/TAZ were also down-regulated after curcumin treatment (Figure 4C). Our results confirm that YAP/TAZ are down-regulated by curcumin, and because they stabilize KLF5 in cancer cells, the decrease of YAP/TAZ may lead to the degradation of KLF5. Although the requirement of YAP and TAZ for maintaining KLF5 protein stability has been reported in breast cancer [10,11], whether the same mechanism functions in bladder cancer remained to be proven. Two different siRNAs designed to target YAP not only knocked down YAP effectively, but also suppressed the growth rate of 5637 cells and down-regulated KLF5 protein (but not mRNA) and its target gene cyclin D1. Moreover, the proteasome inhibitor MG132 reversed this KLF5 protein attenuation (Figure 5). Thus, our study connects YAP and KLF5 in bladder cancer and reveals the pro-proliferative YAP/TAZ/KLF5/cyclin D1 axis. Importantly, the in vivo study further confirmed the potent inhibitory effects of curcumin on tumor growth and on this axis (Figure 6). In conclusion, we found that KLF5 promotes the growth of bladder cancer cells and that curcumin suppresses the KLF5 protein level in a proteasome-dependent way. Mechanistically, YAP is essential for the stability of KLF5, and curcumin inhibits KLF5 at least partially through suppressing the expression of YAP/TAZ. Thus, the YAP/TAZ/KLF5/cyclin D1 axis is important for the growth of bladder cancer, and it is a potential therapeutic target for curcumin and other chemicals in cancer treatment.
Cell Culture and Reagents
Human bladder cancer 5637 and WH cells were cultured in RPMI-1640 (5637) or DMEM (WH) medium supplemented with 10% fetal bovine serum at 37 °C in a humidified atmosphere containing 5% CO2.
Curcumin and proteasome inhibitor MG132 were obtained from Sigma-Aldrich (St. Louis, MO, USA). Polybrene and the protein synthesis inhibitor, CHX, were purchased from Biyuntian (Shanghai, China). These reagents were dissolved in DMSO and stored at −20 °C. The final concentration of DMSO for all treatments (including controls) was maintained at less than 0.5%.
Cycloheximide (CHX) Chase Assay
5637 cells were treated with DMSO or 15 μM curcumin for 2 h, washed with PBS 3 times, and re-fed with medium containing 10 μg/mL CHX. Whole-cell lysates were harvested at the indicated time points and subsequently subjected to western blot analysis.
Plasmids, Lentivirus Preparation, siRNA and Transfection
PLKO.1 lentiviral vectors encoding short hairpin RNA (shRNA) targeting human KLF5 or scramble shRNA (SC), as the control, were constructed by GenPharma (Shanghai, China). The target sequence of the selected shRNA is sh710: GGTTACCTTACAGTATCAACA. To generate lentivirus, PAX2, VSV-G and the plasmids described above were co-transfected into 293T cells using the Lipofectamine 2000 reagent (Invitrogen, CA, USA) according to the manufacturer's protocol. After 72 h, the supernatants were harvested and used to infect 5637 and WH cells in the presence of 8 μg/mL polybrene. The stable KLF5 knockdown (KO) 5637 cell line and its control were selected and designated as 5637/KOKLF5 and 5637/KOSC. The pcDNA3.1-based vector and Flag-KLF5 over-expression plasmids used to transfect 5637 cells were described previously [30]. The siRNAs targeting YAP and non-specific control (NC) were purchased from RiboBio (RiboBio Co., Ltd., Guangzhou, China). The specific sequences of the YAP gene for siRNA to target were: GCGTAGCCAGTTACCAACA (#1) and CAGTGGCACCTATCACTCT (#2). For mRNA detection, total RNAs were isolated 48 h after transfection. For western blot, cells were harvested 72 h after transfection. For the analysis of cell viability, transfected cells were trypsinized and seeded at 3500-5000 cells per well in a 96-well plate after 24 h.
Real-Time Quantitative PCR (qPCR)
Total RNA from cells was isolated with TRIzol reagent (Life Technologies, Rockville, MD, USA) and reverse-transcribed to cDNA using the PrimeScript™ RT reagent kit (Takara, Dalian, China). The cDNA was analyzed on a CFX96 real-time PCR system (Bio-Rad, Hercules, CA, USA) with SYBR Green PCR Master Mix (Takara, Dalian, China) to determine the transcriptional expression of specific genes (see Table 1 for primer sequences). GAPDH was used for normalization. Relative gene expression was calculated by the 2^−ΔΔCt method.
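As a minimal illustration of the 2^−ΔΔCt calculation referred to above, the sketch below computes the fold change of a target gene against a reference gene; the Ct values and sample labels are hypothetical, not data from this study.

```python
# Minimal sketch of the 2^-ddCt calculation (illustrative Ct values only).

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene (e.g., KLF5) relative to a reference
    gene (e.g., GAPDH) using the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt of treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt of control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: a fold change < 1 indicates down-regulation.
print(relative_expression(26.1, 18.0, 25.0, 18.1))
```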
Tumor Xenograft Model
Methods to establish the 5637 bladder cancer cell xenograft model were reported previously [31]. Briefly, 1 × 10^6 5637 cells in 100 μL of serum-free RPMI-1640 medium mixed with matrigel (1:1 v/v) were injected subcutaneously into the right flank of 5-week-old male BALB/c nude mice. After 7 days, tumor-bearing mice were randomly divided into vehicle- and curcumin-treated groups (6 mice per group). In the curcumin-treated group, curcumin in phosphate-buffered saline (PBS) was injected intraperitoneally at 200 mg/kg/day for 3 weeks; this dosage was proven safe and effective in previous mouse studies [32]. The vehicle group received PBS only. At the end of the experiment, mice were sacrificed, and tumors were weighed and measured to calculate tumor volume (formula: largest diameter × smallest diameter × smallest diameter × 0.5236). Tumors were then fixed in 4% paraformaldehyde and embedded in paraffin for subsequent histological examination. Animal care and protocols were approved by the Institutional Animal Care and Use Committee of Xi'an Jiaotong University, permit number SCXK2014-0155 (5 March 2014).
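A one-line sketch of the tumor volume formula quoted above (a modified ellipsoid approximation); the caliper readings in the example are hypothetical.

```python
# Tumor volume from two caliper measurements, as quoted above:
# largest diameter x smallest diameter^2 x 0.5236 (modified ellipsoid formula).
def tumor_volume(largest_mm, smallest_mm):
    """Volume in mm^3 from two caliper measurements in mm."""
    return largest_mm * smallest_mm * smallest_mm * 0.5236

print(tumor_volume(10.0, 6.0))  # ~188.5 mm^3 for a 10 mm x 6 mm tumor
```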
Statistical Analysis
GraphPad Prism version 5.0 (GraphPad, San Diego, CA, USA) was used to analyze differences between two groups (Student's t-test) and to perform nonlinear regression analysis. A p-value less than 0.05 was considered statistically significant.
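The study used GraphPad Prism for the two-group comparisons; the sketch below shows an equivalent unpaired Student's t-test in Python, with hypothetical tumor weights standing in for the actual measurements.

```python
# Equivalent two-group comparison in Python (illustrative data, not the study's).
from scipy import stats

vehicle = [1.02, 0.95, 1.10, 0.98, 1.05, 0.93]   # hypothetical tumor weights (g)
curcumin = [0.61, 0.55, 0.70, 0.66, 0.58, 0.63]

t_stat, p_value = stats.ttest_ind(vehicle, curcumin)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")     # p < 0.05 -> significant
```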
Conclusions
Our in vitro and in vivo results demonstrate that curcumin inhibits the growth of bladder cancer cells at least partially through inhibiting the Hippo pathway effectors YAP/TAZ, which accelerates the degradation of KLF5 protein and subsequently causes the downregulation of cyclin D1. The study also suggests the potential use of curcumin to target the YAP/TAZ/KLF5/cyclin D1 axis in bladder cancer treatment.
"Biology",
"Chemistry"
] |
Gas-Water-Rock Interactions and Implications for Geoenvironmental Issues
Water and gas, as the two most common fluids and primary geologic forces, are crucial components in various geological processes. Gas-water-rock interactions play indispensable roles in the evolution of geoenvironmental issues. For example, the accurate prediction of groundwater flow and contaminant transport requires a profound understanding of the physicochemical processes that occur among liquid, solid, and gas phases [1]. At present, more attention should be paid to gas-water-rock interactions related to the transport and retention of toxic contaminants, such as heavy metals and organic contaminants, in aquifers and vadose zones, owing to the release of such contaminants from intensive human activities [2]. In addition, gas-water-rock reactions may change the stress field, the groundwater seepage field, and the properties of rocks and soils, which can subsequently lead to slope instability and landslide hazards [3]. A rapidly sliding rock mass may also trigger waves or disturb the atmosphere, generating air blasts that facilitate its transport; both effects aggravate the hazard.
Contents of the Special Issue
In the paper "Sources Identification of Nitrogen Using Major Ions and Isotopic Tracers in Shenyang, China," H. Huang et al. used multiproxy analysis in their research (stable isotope analyses in combination with chemical and hydrogeological data of the study area) to investigate the interactions between surface water and groundwater, as well as to identify the source of nitrogen contamination in groundwater in Shenyang City, China.δ 18 O water and δ 2 H water were used to determine the amount of surface water that was discharged into groundwater, while δ 18 O nitrate and δ 15 N nitrate were employed to determine the sources of nitrate and ammonium in groundwater, which are the main contaminants in the study area.According to the results, the reducing environment in groundwater may result from the prevailing iron and manganese, occurring from the weathering of minerals and rocks, which prevents the ammonium being oxidized into nitrate.The ratios of the recharge from the Hun River into groundwater were also identified.Multiproxy analysis also indicated that human activities, such as manure and sewage discharge, are the prevailing source of nitrogen in the waters.
In the paper "A Statistical Constitutive Model considering Deterioration for Brittle Rocks under a Coupled Thermal-Mechanical Condition," M. Gao et al. investigated constitutive behaviors of rocks under thermal-mechanical coupling conditions.A statistical damage constitutive model was firstly established on the basis of Weibull's distribution, by considering the thermal effects and crack initiation strength.Then, the parameters of the model were determined and expressed according to the characteristics of the stressstrain curve.Finally, the model was verified by conventional triaxial experiments of granite under thermal-mechanical actions (25 MPa, 40 °C-60 °C).The results show a relatively good coincidence between experimental curves and theoretical curves in the case studies.The validity of the model was therefore confirmed.
In the paper "Integration of an Iterative Update of Sparse Geologic Dictionaries with ES-MDA for History Matching of Channelized Reservoirs," S. Kim et al. proposed to couple an iterative sparse coding in a transformed space with an ensemble smoother with multiple data assimilation (ES-MDA) for dealing with the non-Gaussian problem.In this approach, discrete cosine transform (DCT) is followed by the repetition of K-singular value composition (K-SVD) for constructing sparse geologic dictionaries that preserve geological features of the channelized reservoir.Two channelized gas reservoirs were used to validate the proposed algorithm and the results show that the integration of DCT and iterative K-SVD improves the matching performance of gas rate, water rate, bottom-hole pressure, and channel properties with geological plausibility.
In the paper "The Monitoring-Based Analysis on Deformation-Controlling Factors and Slope Stability of Reservoir Landslide: Hongyanzi Landslide in the Southwest of China," B. Han et al. conducted a comprehensive analysis to improve the understanding on the deformation characteristics and controlling factors of the Hongyanzi landslide in the Southwest of China.The results indicated that significant deformation occurred during the drawdown period; otherwise, the landslide remained stable.The major reason of the reservoir landslide deformation was the generation of seepage water pressure caused by the rapidly growing water-level difference between inside and outside of the slope.The influences of precipitation and earthquake were insignificant.
In the paper "Hydrochemical Characteristics and Formation of the Madeng Hot Spring in Yunnan, China," Z. Ren et al. investigated the hydrochemical characteristics and formation of the Madeng hot spring.Through field data collection and studies, the temperature of the hot spring is 42.1 °C.The spring water has a pH value of 6.41, TDS of 3.98 g/L, F contents of 3.08 mg/L, and H 2 SiO 3 of 35.6 mg/L.Stable hydrogen and oxygen isotopes indicate that the hot water is of meteoric origin.Groundwater is recharged from the infiltration of precipitation in the mountain regions, undergoes a deep circulation, obtains heat from the heat flow, flows upward along fractures, and emerges as an upflow spring through the Quaternary sand and gravel in the central low-lying river valley.
In the paper "Fluid Geochemistry of Fault Zone Hydrothermal System in the Yidun-Litang Area, Eastern Tibetan Plateau Geothermal Belt," Y. Hou et al. investigated the chemical and isotopic compositions of thermal water in an underexploited geothermal belt in the eastern Tibetan Plateau.By analyzing water samples from 24 hot springs, mostly taken from locations in fault zones, it was revealed that the water chemical types of the hot springs are mainly Na-HCO 3 -type water.Besides, water-rock interaction and cation exchange and mixture are the dominant hydrogeochemical processes in the hydrothermal evolution.According to the results, the hydrothermal systems are recharged by the meteoric water and are heated by the different deep, thermally and topographically driven convection heat along faults undergoing subsurface boiling before going back to the surface.
In the paper "Investigation on the Relationship between Wellhead Injection Pressure and Injection Rate for Practical Injection Control in CO 2 Geological Storage Projects," B. Bai et al. proposed the complete constraint conditions of wellbore injection and used it to investigate the relationship between wellhead injection pressure and injection rate.The results show that these two parameters were mutually constrained.For a certain injection project, the allowable wellhead injection pressure and injection rate separately formed a continuous interval.A change of one parameter within its allowable interval could also change the other, both forming a closed region.Thus, controlling the wellhead injection parameters in this closed region could simultaneously ensure the effectiveness and the safety of injection.
In the paper "Numerical Investigation into the Evolution of Groundwater Flow and Solute Transport in the Eastern Qaidam Basin since the Last Glacial Period," Q. Hao et al. utilized TOUGHREACT to perform a reactive solute transport simulation and considered the influence of watersoluble components on the fluid density in the aridsemiarid Qaidam basin in the northeastern Tibetan Plateau since the last glacial period.A three-level nested groundwater flow system was developed in the study area.Based on the simulation results, there are significant differences in the flow ranges and velocities of the different groundwater flow systems.The seepage velocity of the local water flow system is significantly higher than that of the intermediate and regional water flow systems.Since the last glacial period, the groundwater in the eastern part of the Qaidam Basin has experienced solute concentration and enrichment.The distributions of the groundwater flow system and solutes have been greatly affected by climate variations in different geological periods.The groundwater in the discharge region is currently in the stage of carbonate precipitation and is far from gypsum and halite precipitation.The findings in this study are useful for sustainable utilization of local groundwater resources and for coping with climate change.
In the paper "Characterization of Microscopic Pore Structures of Rock Salt through Mercury Injection and Nitrogen Absorption Tests," J. Chen et al. collected rock salt samples from the Yunying salt mine of Hubei province in China and implemented high pressure mercury injection, ratecontrolled mercury penetration, and nitrogen absorption tests with them.The pore size distribution was evaluated based on fractal analysis.The results showed that the pore size of rock salt varied from 0.01 to 300 μm with a major concentration of pore sizes smaller than 1.00 μm.The pore's radiuses were mainly distributed within a range between 15 and 50 nm.The research further revealed that the pore channel size of rock salt was randomly distributed, but the distribution of pore throat radius fitted very well with fractal law.By analysis of permeability, it was found that the maximum and medium radiuses of the pore throat had significant impacts while porosity was not apparently related to the permeability of rock salt.The higher the fractal dimension, the higher impacts on the permeability of the small throat was detected and the lower influence on the permeability of the big throat was exhibited.Therefore, the small throat determined majorly the permeability of rock salt.
In the paper "Interaction between Vetiver Grass Roots and Completely Decomposed Volcanic Tuff under Rainfall Infiltration Conditions," L. Xu et al. identified and clarified the influence of vetiver grass roots on soil properties and slope stability through planting vetiver grass at the Kadoorie Farm in Hong Kong and leaving it to grow without artificial maintenance.Under the natural conditions of Hong Kong, growth of the vetiver grass roots can reach 1.1 m in depth after one-and-a-half year from planting.The percentage of grain size which is less than 0.075 mm in rooted soil is more than that of the nonrooted soil.The rooted soil of high finer grain content has a relatively small permeability.As a result, the increase in water content is therefore smaller than that of nonrooted soil in the same rainfall conditions.Shear box test reveals that the vetiver grass roots significantly increased the peak cohesion of the soil.The combined effects of grass roots on hydrological responses and shearing strength significantly stabilized the slope in local rainfall conditions.
In the paper "A Measured Method for In Situ Viscosity of Fluid in Porous Media by Nuclear Magnetic Resonance," Z. Yang et al. established a method for determining the in situ viscosity of fluids in porous media and tested the in situ viscosity spectra of water in tight cores under different displacement conditions.The results show that the in situ viscosity distribution of water in porous media was inhomogeneous, and it was not a constant but was related to the distance between water and rock walls.If the distance was small enough, the viscosity would increase rapidly and be greater than the bulk viscosity.
In the paper "Stability Analysis of Partially Submerged Landslide with the Consideration of the Relationship between Porewater Pressure and Seepage Force," Y. Wang et al. presented a modified mathematical expression for the stability analysis of a partially submerged landslide, based on the relationship between porewater pressures and buoyancy acting on the underwater zone of a partially submerged landslide, and the relationship between porewater pressures, seepage force, and buoyancy acting on the partially submerged zone.The resultant porewater pressures acting on the underwater slice equaled the buoyancy, and the porewater pressures acting on the partially submerged slice were equivalent to the seepage force and the buoyancy.The result showed that there were two equivalent approaches for considering the effect of water on landslide stability in the limit equilibrium method.One was based on total unit weight and porewater pressures, and the other was in terms of the buoyant weight and the seepage force.The study provided a good opportunity for simplifying the complex boundary porewater pressures in limit equilibrium analysis for the stability of the partially submerged landslide.
In the paper "CO 2 Leakage-Induced Contamination in Shallow Potable Aquifer and Associated Health Risk Assessment," C. Y. Kim et al. focused on the risk assessment of CO 2 leakage in a shallow aquifer.2D reactive transport models were developed and used to simulate the groundwater contamination.The results show that the movement of leaked CO 2 was mainly governed by local flow fields within the shallow aquifer.The dissolution of aquifer minerals and increased permeabilities of the aquifer are caused by the induced low-pH plume.The distribution of the total arsenic plume was similar to the one for the arsenopyrite dissolution.Authors conclude that the shape of the arsenic plume impacts the human health risk.
In the paper "Hydrochemical Characteristics and Evolution of Geothermal Fluids in the Chabu High-Temperature Geothermal System, Southern Tibet," X. Wang et al. presented reasonable reservoir temperatures and cooling processes of subsurface geothermal fluids in the Chabu high-temperature geothermal system in Southern Tibet.It investigated the hydrochemical characteristics of a geothermal spring by analyzing 36 geothermal spring samples, and combining this analysis with cluster analysis of multivariate statistical analysis to reveal the cooling processes of subsurface geothermal fluids.According to the results, the geothermal waters of the research area are generally mixed with the shallow cooler waters from the reservoirs.The major cooling processes of the subsurface geothermal fluids gradually transform from adiabatic boiling to conduction from the central part to the peripheral belt.
In the paper "Impact of Redox Condition on Fractionation and Bioaccessibility of Arsenic in Arsenic-Contaminated Soils Remediated by Iron Amendments: A Long-Term Experiment," Q. Zhang et al. focused on the water-soil interactions related to the transport and retention of heavy metal(loid)s such as arsenic in soils.It investigated the effect of redox condition on arsenic fractions and bioaccessibility in arsenic-contaminated soils remediated by iron grit.Specifically, it investigated arsenic fractions in soils under the anoxic condition and aerobic condition before or after the addition of iron grit.According to the results, the labile fractions of As in soils decreased significantly after the addition of iron grit, while the unlabile fractions of As increased rapidly, and the bioaccessibility of As was negligible after 180 d incubation.More labile fractions of As in iron-amended soils were transformed into less mobilizable or unlabile fractions with the contact time.The increase of crystallization of Fe oxides, decomposition of organic matter, molecular diffusion, and the occlusion within Fe-(hydr)oxides cocontrolled the transformation of As fractions in iron-amended soils under different redox conditions.
In the paper "Effects of Dissolved Organic Matter on Sorption of Oxytetracycline to Sediments," Z. Wang et al. investigated the effect of two representative dissolved organic matters (DOMs) on the adsorption of oxytetracycline (OTC) to three typical sediments (first terrace sediment, river floodplain sediment, and riverbed sediment).Two typical DOMs were derived from corrupt plants (PDOM) and chicken manure (MDOM).Elemental analysis and threedimensional fluorescence were deployed to elucidate the mechanism of the effect of DOM on the adsorption of OTC to sediments.The samples subjected for testing were collected from the Weihe River, Northwest China.According to the results, the humus-like DOM can promote the adsorption of OTC while the protein-like DOM can inhibit the adsorption of OTC to sediments, which is determined by the aromaticity, hydrophilicity, and polarity of the DOMs.
In the paper "A Regional Scale Investigation on Groundwater Arsenic in Different Types of Aquifers in the Pearl 3 Geofluids River Delta, China," Q. Hou et al. focused on groundwater arsenic and other hydrochemical compositions in various aquifers in the Pearl River Delta.It investigated the source and driving forces of arsenic in different types of aquifers in the Pearl River Delta.Specifically, 399 groundwater samples were collected from various aquifers in the Pearl River Delta, 20 chemical compositions of groundwater samples were analyzed, and the relationship between arsenic and other hydrochemical compositions was evaluated by the principal component analysis (PCA).According to the results, about 9.4% and 2.3% of the samples with high concentrations (>0.01 mg/L) of arsenic were in granular and fissured aquifers, respectively, but no samples with a high concentration of arsenic were in karst aquifers.The source and mobilization of groundwater arsenic in granular aquifers are likely controlled by the following mechanism: organic matter in marine strata was mineralized and this provided electrons for electron acceptors, resulting in the release of NH 4 + and I − and the reduction of Fe/Mn and NO 3 − , which was accompanied with the mobilization of arsenic from sediments into groundwater.
In the paper "A Coupled One-Dimensional Numerical Simulation of the Land Subsidence Process in a Multilayer Aquifer System due to Hydraulic Head Variation in the Pumped Layer," Y. Wang et al. focused on a case study of land subsidence modeling in China.A numerical model of a coupled one-dimensional multilayer aquifer system is developed.The results show that the pressure head in layers does not rise immediately after pumping ceases.Also, the results show that there is a transition period between land subsidence and rebound.In this transition period, land could continue to subside while the head in the pumped layer starts to recover. | 4,002.6 | 2018-10-08T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Towards a complete next-to-logarithmic description of forward exclusive diffractive dijet electroproduction at HERA: real corrections
We studied the $ep\rightarrow ep+2jets$ diffractive cross section in the ZEUS phase space. Neglecting the $t$-channel momentum in the Born and gluon dipole impact factors, we calculated the corresponding contributions to the cross section differential in $\beta=\frac{Q^{2}}{Q^{2}+M_{2jets}^{2}}$ and in the angle $\phi$ between the leptonic and hadronic planes. The gluon dipole contribution was obtained in the exclusive $k_{t}$-algorithm with the exclusive cut $y_{cut}=0.15$ in the small $y_{cut}$ approximation. In the collinear approximation we canceled singularities between real and virtual contributions to the $q\bar{q}$ dipole configuration, keeping the exact $y_{cut}$ dependence. We used the Golec-Biernat - W\"usthoff (GBW) parametrization for the dipole matrix element and linearized the double dipole contributions. The results give roughly $\frac{1}{2}$ of the observed cross section for small $\beta$ and coincide with it for large $\beta.$
Introduction
One of the main outcomes of the HERA research program is the evidence for and detailed study of diffractive processes. Indeed, almost 10% of the $\gamma^* p \to$ hadrons deep inelastic scattering (DIS) events were shown to contain a rapidity gap in the detectors between the proton remnants $Y$ and the hadrons $X$ coming from the fragmentation region of the initial virtual photon, i.e. the process was shown to look like $\gamma^* p \to X\,Y$. These diffractive deep inelastic scattering (DDIS) events were revealed and extensively studied by the H1 and ZEUS collaborations [1][2][3][4][5][6][7][8]. The existence of a rapidity gap between the diffractive state $X$ and the proton remnants, with vacuum quantum numbers in the $t$-channel, is a natural place for a Pomeron-like description. Two types of approaches have been developed.
First, based on the existence of a hard scale (the photon virtuality $Q^2$ for DIS), a collinear QCD factorization theorem was derived [9] and applied successfully to diffractive processes. For inclusive diffraction, this theorem is usually applied with so-called resolved Pomeron models, where one introduces distributions of partons inside the Pomeron, similar to the usual parton distribution functions of the proton in DIS, convoluted with hard matrix elements. In the framework of collinear factorization, diffractive dijet photoproduction was calculated in [10] and [11] at NLO in pQCD, where the authors observed collinear factorization breaking. To describe the data it was necessary to introduce a model for the suppression factor, or gap survival probability. They demonstrated that a global suppression factor, or a model depending on the light cone momentum fraction and the flavour of the interacting parton, describes the HERA data. Inclusive dijet photoproduction was also studied in this framework and was shown to be very sensitive to the details of nuclear PDFs in Pb-Pb ultraperipheral collisions at LHC kinematics [12], [13].
Second, at very high energies it is natural to view the process as the coupling of a Pomeron to the diffractive state $X$ of invariant mass $M$. In the rest of this paper, we generically call such descriptions high-energy factorization pictures. In the DDIS case, for low values of $M^2$, $X$ can be modeled by a $q\bar q$ pair, while for larger values of $M^2$ the cross section with an additional produced gluon, i.e. $X = q\bar q g$, is enhanced. A good description of HERA data for diffraction was achieved in such a model [14], in which the Pomeron was described by a two-gluon exchange.
In the present paper, we study in detail the cross section for exclusive dijet electroproduction in diffraction, as was recently reported by ZEUS [15]. A first theoretical study of such processes within a high-energy factorization picture was performed in [16], in a leading order (LO) approximation in which the dijet was made of a qq pair.
Our aim is to make a description of the same process, relying now on our complete next-to-leading order (NLO) description of the direct coupling of the Pomeron to the diffractive X state, obtained in refs. [17,18], and further extended to the case of a light vector meson in ref. [19]. In our approach, the Pomeron is understood as a color singlet QCD shockwave, in the spirit of Balitsky's high energy operator expansion [20][21][22][23] or in its color glass condensate formulation [24][25][26][27][28][29][30][31][32].
The exclusive diffractive production of a dijet will be a key process for small-$x$ physics at the Electron-Ion Collider (EIC). Indeed, it was proven to probe the dipole Wigner distribution [33]. Several recent studies have been performed in order to build precise target matrix elements for EIC phenomenology [34][35][36] and for ultraperipheral collisions at the LHC [37]. The gluon Wigner distributions probed by our process can describe a cold nuclear origin for elliptic anisotropies, as studied for dilute-dense collisions [38,39]. Finally, the (subeikonal) target spin asymmetry for dijet production was proven to give direct access to the gluon orbital angular momentum in the target [40,41]. In this paper, we are interested in building accurate descriptions of the final state via a jet algorithm, to be combined later with the target matrix elements in the aforementioned studies for future precise EIC predictions. We present explicit formulas for the Born $ep \to ep' + 2jets$ cross section in the HERA kinematics. We argue that for the Born production mechanism, the HERA selection cuts for diffractive DIS [15] severely reduce contributions from jets in the aligned configuration, since the simultaneous restrictions $p_{\perp jet} > 2$ GeV and $M_{2jets} > 5$ GeV forbid a jet with a very small longitudinal momentum fraction of the photon. As is known [42], the aligned jets give the dominant contribution to the cross section, which in the presently studied kinematics is cut off. Thanks to these cuts, the typical transverse energy scale in the Born jet impact factor is greater than the $t$-channel transverse momentum scale set by the saturation scale $Q_s$ determined by the proton matrix element. As a result, we can expand the impact factor in the $t$-channel transverse momentum and take the integrals for the $\gamma p$ cross section analytically. This naturally gives the leading power $\sim Q_s^4 \sim W^{2\lambda}$ behavior of the cross section (where $1+\lambda$ is the pomeron intercept), unlike the $\sim Q_s^2 \sim W^{\lambda}$ behavior of the aligned jets [42], which describe large dipoles and saturation. We call this procedure the "small $Q_s$" or "BFKL-like" approximation.
Next, we study the real radiative corrections. According to the exclusive $k_t$ jet algorithm [43,44] used in the ZEUS data analysis [15], these corrections come from the $\sim\sqrt{y_{cut}}$-wide border of the Dalitz plot (see figure 2), with $y_{cut}=0.15$ being the algorithm parameter. One can symmetrically divide this area into 3 subareas with predominantly $q-(\bar q g)$, $\bar q-(qg)$, and $g-(q\bar q)$ jets, where one of the jets is made of $\bar q g$, $qg$, or $q\bar q$ correspondingly. At large $M_{2jets}$ the third region gives an enhanced contribution, since in such kinematics a subdiagram with a $t$-channel gluon has large $s = M_{2jets}^2$. Most of the real production matrix elements were calculated in ref. [18] in arbitrary kinematics. We have obtained here the remaining ones and we present them in appendix B. The real production matrix elements have soft and collinear divergencies in the first two regions, while the contribution of the third region is finite. Integrating the singular parts over the first two regions, we cancel the singularities with the singular contribution of the virtual part from ref. [18]. As a result, we have the contribution of soft and collinear gluons to the 2-jet cross section in the $k_t$ algorithm. Since the divergent contributions factorize as the Born cross section times a collinearly singular factor, the validity criteria of the small $Q_s$ approximation for such contributions are the same as for the Born cross section. Therefore we used this approximation to take the inner integrals in the $\gamma p$ cross section. The average value of this correction is about 10%. However, we noticed that the small $y_{cut}$ expansion of this contribution is very inaccurate, since the $\ln^2 y_{cut}$, $\ln y_{cut}$, and constant contributions together are of the order of the next term $\sim\sqrt{y_{cut}} = 0.39$, which is the true expansion parameter. Although we calculated this contribution exactly in $y_{cut}$, all other (nonsingular) contributions are $\sim\sqrt{y_{cut}}$ geometrically. Therefore this term alone cannot be a good approximation; instead one can regard it as a subtraction term for a future full numerical calculation. Among the nonsingular contributions there are ones with the gluon emitted before the shockwave. Suppose for definiteness that it was emitted from the quark and we consider the second, $\bar q-(qg)$ region. In such a contribution the invariant mass of the $qg$ pair is small, $\sim\sqrt{y_{cut}}$, and the only hard scale in the quark propagator between the photon and the gluon vertices comes from the $t$-channel. It means that one cannot neglect the $t$-channel momentum in the impact factor, i.e. the small $Q_s$ approximation is inapplicable. In other words, this correction is very sensitive to $Q_s$. In general, one can say that if one experimentally restricts from below both the mass of the dijet system and the transverse momentum of the jet, so that the aligned jets are cut off from the Born cross section, the radiative corrections will greatly depend on saturation effects. For a generic, roughly symmetric, dijet configuration in the third region, with roughly $\frac{1}{2}$ of the photon's longitudinal momentum taken by the gluon and roughly $\frac{1}{4}$ taken by the quark and by the antiquark, the typical transverse energy scale in the impact factor is determined by the same parameters as in the Born one: the photon virtuality $Q$, $M_{2jets}$, and the experimental cut $p_{\perp jet\,min}$. Therefore one can also try calculating the contribution of such gluon dipoles in the small $Q_s$ approximation.
The validity of this approximation for the configuration in which the $(q\bar q)$ jet itself has the aligned structure is, however, not justified, since then the quark's or antiquark's fraction of the longitudinal momentum of the pair becomes a new small parameter. Such a situation happens in the corners of the third, $g-(q\bar q)$ area of the Dalitz plot, since in these corners the invariant mass of $qg$ or $\bar q g$ becomes small and we return to the situation discussed in the previous paragraph.
In any case, in this paper we have calculated the contribution of all real radiative corrections from the third, $g-(q\bar q)$ area of the Dalitz plot, i.e. the gluon dipole contribution, in the small $Q_s$ approximation, expanding the impact factors in the $t$-channel momenta. The error of our result comes from the corners of the phase space discussed above, and its numerical value will be judged from a comparison of our result with a future full numerical calculation; this difference will be related to saturation effects. This paper is organized as follows. The second part discusses the kinematics and presents the LO computation of the cross section, including its leptonic part in section 2.1, the hadronic part in section 2.2, the HERA acceptance in section 2.3, the small $Q_s$ approximation in section 2.4, and the analysis of the result in section 2.5. The third part discusses the NLO real corrections, including the $k_t$-jet algorithm in section 3.1, the $q-(\bar q g)$ and $\bar q-(qg)$ dipoles in section 3.2, and the $g-(q\bar q)$ dipole in section 3.3. The conclusion summarizes the paper. Appendix A contains a discussion of aligned vs. symmetric jet contributions to the Born cross section. Appendix B presents the dipole - double dipole interference impact factors for the real correction. Appendix C discusses the overall normalization and the matching to non-perturbative distributions in the Golec-Biernat Wüsthoff formulation of DDIS.
Leptonic part
We will use hereafter the light cone vectors $n_1$ and $n_2$, in terms of which the light-cone components of any vector $p$ are defined. The DIS kinematic variables are defined as usual, where $p_0$, $k$, $k'$ and $q$ are the proton, initial electron, final electron and photon momenta, and we integrated out the azimuthal angle of the scattered electron w.r.t. the initial electron via overall rotational invariance. The cross section for diffractive dijet production is expressed through the $\gamma^*$-proton cross section, obtained from the $\gamma^*$-proton scattering amplitude $M^\mu$. The photon polarization vectors are built with $n^\mu = \varepsilon^{\mu\nu\alpha\beta} p_{q\,\nu}\, q_\alpha\, p_{0\,\beta}$ and $\vec p_q \equiv p_{q\perp}$; they obey the identity $e^x_\mu e^{x*}_\nu + e^y_\mu e^{y*}_\nu = e^0_\mu e^{0*}_\nu - g_{\mu\nu} + \frac{q_\mu q_\nu}{q^2}$ (2.12). Hereafter, we label the polarizations using Latin indices, while Greek letters are used for Lorentz indices. Our light-cone frame is the frame where the photon and the proton are back-to-back, and the $z$ axis is along the direction of the photon momentum. The photoproduction cross section [18] was calculated in this frame.
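Since the displayed definitions did not survive extraction, the sketch below evaluates the standard DIS invariants ($Q^2$, inelasticity $y$, $W^2$, $x_{Bj}$) from four-momenta for HERA-like beams; the conventions are the textbook ones and the scattered-electron momentum is illustrative, so this does not reproduce the paper's specific light-cone decomposition.

```python
# Standard DIS invariants from four-momenta (E, px, py, pz), metric (+,-,-,-).
import numpy as np

def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def dis_variables(p0, k, k_prime):
    q = np.array(k) - np.array(k_prime)        # virtual photon momentum
    Q2 = -dot(q, q)
    y = dot(p0, q) / dot(p0, k)                # inelasticity
    W2 = dot(np.array(p0) + q, np.array(p0) + q)
    x_bj = Q2 / (2.0 * dot(p0, q))
    return Q2, y, W2, x_bj

# HERA-like beams (GeV), masses neglected; scattered electron is illustrative.
proton = [920.0, 0.0, 0.0, 920.0]
electron = [27.5, 0.0, 0.0, -27.5]
scattered = [20.0, 3.0, 0.0, -19.8]
print(dis_variables(proton, electron, scattered))
```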
Hadronic part
The density matrix for the cross section in our frame was obtained in (5.21-23) of ref. [18].
To get the proper normalization we have to multiply all cross sections in ref. [18] by $\frac{1}{2(2\pi)^4}$, as discussed in appendix C. The LO cross sections in our frame, the total transverse cross section, and the resulting convolution of the electron tensor with the photon cross section then follow.
(2.27)
Here $\phi$ is the angle between the quark's and the electron's transverse momenta in our frame. Experimentally, $\phi$ is the angle between the jet and the electron, and the jet may come from the antiquark, in which case the angle between the quark and the electron is $\pi-\phi$. Therefore one measures the sum of the cross sections with the quark-electron angle equal to $\phi$ and to $\pi-\phi$. In this sum the interference term $\sigma^i_{0TL}$ vanishes, $\sigma_{0LL}$ and $\sigma_{0TT}$ are doubled, and the angle runs from 0 to $\pi$. Hence, from here on we omit the $\sigma^i_{0TL}$ contribution, understand $\phi$ as the angle between the jet and the electron, $\phi\in[0,\pi]$, and double $\sigma_{0LL}$ and $\sigma_{0TT}$.
Next, we have to substitute a model for the hadronic matrix elements $F$. We will use the Golec-Biernat-Wüsthoff (GBW) [45] parametrization, which was formulated in coordinate space. To get the proper normalization we Fourier transform (2.23) and compare it with Eq. (4.48) in ref. [42]; this fixes the GBW parametrization of the forward dipole matrix element in our normalization. Here $x_P$ describes the fraction of the incident momentum lost by the proton, or carried by the pomeron, and we neglect the $t$-channel exchanged momentum. In the above model one takes 3 active flavours. The nonforward matrix element can be written entirely in impact parameter space; one can take a simple model [46] in which the $b$-dependence factorizes into a Gaussian proton profile, and we will need this function in momentum space. Here $\phi$ is the relative angle between the jet and leptonic planes. It is useful to introduce the Bjorken variable $\beta$ normalized to the pomeron momentum. Neglecting the $t$-channel exchanged momentum (experimentally, $t$ could not be measured in the ZEUS analysis, but was presumably rather small), we will use the simplified expression and, denoting $\bar\beta = 1-\beta$, obtain the differential cross section in $x$, $\beta$ and $\phi$.
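For orientation, the sketch below evaluates the GBW dipole cross section $\sigma(r,x)=\sigma_0\,[1-\exp(-r^2 Q_s^2(x)/4)]$ with $Q_s^2(x)=(x_0/x)^\lambda\,\mathrm{GeV}^2$; the parameter values are the commonly quoted 3-flavour GBW fit and may differ from the normalization actually used in this paper.

```python
# GBW dipole cross section with commonly quoted 3-flavour fit parameters.
import numpy as np

SIGMA_0_MB = 23.0      # ~23 mb in the original GBW fit
LAMBDA = 0.288
X_0 = 3.0e-4

def qs2(x):
    """Saturation scale squared in GeV^2."""
    return (X_0 / x) ** LAMBDA

def dipole_cross_section_mb(r_gev_inv, x):
    """Dipole-proton cross section (mb) for dipole size r in GeV^-1."""
    return SIGMA_0_MB * (1.0 - np.exp(-r_gev_inv**2 * qs2(x) / 4.0))

# Saturation scale at a typical diffractive x ~ 1e-3:
print(np.sqrt(qs2(1e-3)))           # Q_s in GeV, well below Q, M_2jets > 5 GeV
print(dipole_cross_section_mb(1.0, 1e-3))
```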
Experimental cuts
We will now consider the experimental set-up of the ZEUS collaboration. The HERA kinematics is such that $E_{e^-} = 27.5$ GeV and $E_p = 920$ GeV, i.e. $\sqrt{s} = 318$ GeV. The phase space covered by the ZEUS collaboration reads [15] $W_{min} = 90~\mathrm{GeV} < W < W_{max} = 250~\mathrm{GeV}$, $Q_{min} = 5~\mathrm{GeV} < Q$. (2.51) Hence, using eq. (2.34), one obtains a constraint, and for fixed $\beta$, using eq. (2.48), a careful study shows that
(2.55)
On the other hand, eq. (2.54) leads to a complementary bound. The inelasticity restriction reads $y_{min} = 0.1 < y < y_{max} = 0.65$, i.e. $y_{min}\,s < Q^2 + W^2 < y_{max}\,s$. (2.57) Eqs. (2.54) and (2.57) thus result in constraints on $Q^2$, eq. (2.58). One should note that the condition controlling the minimum in eq. (2.58), obtained using the expression of $\beta_{max}$, see eq. (2.55), is not satisfied for the experimental values of ZEUS, so one can simplify the constraints (2.58) on $Q^2$. Similarly, eqs. (2.54) and (2.57) result in constraints on $W^2$. Additionally, there is a restriction on the transverse momentum of the jet. In the $t = 0$ limit, i.e. $\tau = 0$, we have from eq. (2.44) $p = |\vec p_q| = |\vec p_{\bar q}| = \sqrt{x\bar x}\,M$. Thus, the constraint (2.64) leads to restrictions on $x$. There is one more experimental cut imposed in ref. [15]: the restriction on the jet rapidity, $\eta_{max} = 2$, where the rapidity is defined in the detector frame with the $z$ axis along the proton and electron velocities, in the proton beam direction. One can rewrite this cut as a cut on $x_{min}$ as well. Indeed, one can transform momenta from the proton-photon frame (2.16-2.18) to the detector frame (eq. (2.70) for any vector $l$). In the detector frame this condition fixes $p_e^+$, or $\lambda$, the remaining parameter representing the freedom of $z$-boosts in the $\gamma$-proton frame. Then the rapidity of $p_{q\,Det}$ follows, where we changed the sign to take into account the propagation along the negative $z$ direction (the $z$ axis in the ZEUS frame and in our frame are opposite). Obviously, this constraint should be fulfilled for both the quark and the antiquark jets, i.e. eq. (2.75) with $x \to \bar x$. A careful inspection then shows that these two constraints determine the minimal value of $x$, with the additional constraint that $\bar x_{min} < \frac{1}{2}$. However, as we will show later, numerically this rapidity restriction is negligible; therefore we will include it only in the discussion of the final result.
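A hedged sketch of the phase-space selection quoted in this section is given below; the kinematic relations are the approximate ones used in the text (masses and $t$ neglected, $y \simeq (Q^2+W^2)/s$), the cut values are those listed above, and only the upper rapidity bound $\eta_{max}=2$ is implemented.

```python
# ZEUS-like phase-space filter, using the cuts listed in this section.
S = 4 * 27.5 * 920.0                      # ~ (318 GeV)^2, HERA c.m. energy squared

def passes_zeus_cuts(Q2, W, M2jets, pT_jet, eta_jets):
    """Q2 in GeV^2; W, M2jets, pT_jet in GeV; eta_jets: iterable of jet rapidities."""
    y = (Q2 + W**2) / S                   # inelasticity, masses neglected
    beta = Q2 / (Q2 + M2jets**2)          # beta of the event (used for binning)
    passed = (90.0 < W < 250.0 and Q2 > 5.0**2 and 0.1 < y < 0.65
              and M2jets > 5.0 and pT_jet > 2.0
              and all(eta < 2.0 for eta in eta_jets))   # eta_max = 2 as quoted
    return passed, beta

print(passes_zeus_cuts(Q2=30.0, W=120.0, M2jets=8.0, pT_jet=3.5, eta_jets=[0.4, -1.1]))
```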
Finally, one has to calculate the remaining integrals. The $t$-channel integrals can be simplified, eq. (2.87); these integrals will be calculated numerically.
BFKL-like approximation
In our kinematics the saturation scale is much lower than all other scales. It means that neglecting $p^2$ in the denominator in (2.87) gives an error within the quoted precision (2.90), so one can neglect the $t$-channel momentum in the integrals and calculate them analytically, eqs. (2.91)-(2.92). In this approximation the $ep$ cross section (2.78) follows, and the integral w.r.t. $x$ can then be performed analytically. The results integrated w.r.t. $\phi \in [0, \pi]$ are shown in figure 1. As one can see, the approximation errors are smaller than the experimental ones.
Analysis of the LO result
Following ref. [42], we rewrite (2.94) in terms of the diffractive structure functions $F^D$, defined in the standard way in terms of the Bjorken variable. In the small-$\beta$ ($M^2 \gg Q^2$) region the resulting behavior contradicts the known one [42], $x_P \tilde F$, where we introduced $\tilde F$ to distinguish those structure functions from our result. First, let us emphasize that our transverse structure function $F_T^{D(3)}$ is correctly proportional to $\bar\beta$. Indeed, since the final $q\bar q$ pair has opposite helicities, it carries angular momentum as orbital momentum and its wave function scales like $p_\perp \sim M$; therefore it should vanish at $M = 0$, i.e. at $\beta = 1$.
Moreover, the current experimental setup does not let us probe the leading-twist contribution to the transverse cross section. In other words, the experimental cuts kill the leading-twist aligned jets, which come from the saturation region. As a result we are left with the subleading-twist perturbative BFKL-like ($\sigma \sim s^{2\lambda}$) behavior (2.94). One can also see that the experiment only probes the subleading-twist contribution from fig. 6d in ref. [15], where the peak of the $p_\perp$ distribution is cut off. The longitudinal structure function is subleading to the transverse one in twist (2.102). The whole $0 < x < 1$ range contributes to it. Therefore the experimental cuts only change the $\beta$-dependence of the result.
$k_t$ jet algorithm in the c.m.f.
The distance $d_{ij}$ between two particles according to the $k_t$ algorithm [43] is built from $E_{i,j}$ and $\theta_{ij}$, the particles' energies and the relative angle between them in the c.m.f. Two particles belong to one jet if $d_{ij} < y_{cut}$; in our case $y_{cut} = 0.15$ [15]. One introduces variables normalized so that they sum to unity over $i = q, \bar q, g$; in our variables, using $\bar x_q + \bar x_{\bar q} + \bar x_g = 1$ (3.9), one obtains the corresponding relations. In the c.m.f. we also have $x_q + x_{\bar q} + z = 1$, $\vec p_g + \vec p_q + \vec p_{\bar q} = 0$. The division of the Dalitz plot (with the soft-gluon corner, the gluon-collinear-to-antiquark edge, the line $1-y_0$, and the points $(1,1)$, $A$, $B$, $C$, $(1,0)$ of figure 2) into regions is arbitrary; we found the tessellation depicted there convenient for integration.
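The explicit $d_{ij}$ formula did not survive extraction; the sketch below implements a commonly used exclusive $k_t$ (Durham-type) distance consistent with the description above, built from the particles' energies and relative angle in the $\gamma^* p$ c.m.f. and normalized to the total invariant mass squared. The exact convention of ref. [43] and of the ZEUS analysis may differ in detail.

```python
# Durham-type exclusive k_t distance; particles with d_ij < y_cut are merged.
import numpy as np

def durham_distance(p_i, p_j, M2_total):
    """p_i, p_j: (E, px, py, pz) in the c.m.f.; M2_total: total invariant mass^2."""
    E_i, E_j = p_i[0], p_j[0]
    cos_theta = np.dot(p_i[1:], p_j[1:]) / (
        np.linalg.norm(p_i[1:]) * np.linalg.norm(p_j[1:]))
    return 2.0 * min(E_i**2, E_j**2) * (1.0 - cos_theta) / M2_total

Y_CUT = 0.15
p1 = np.array([3.0, 0.0, 1.0, 2.8])
p2 = np.array([2.5, 0.3, 0.9, 2.3])
print(durham_distance(p1, p2, M2_total=64.0) < Y_CUT)   # True -> merged into one jet
```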
The integral over the area covered by regions 1-4 in figure 2 gives the contribution of configurations where the antiquark and the gluon form one jet, i.e. when the gluon and the antiquark are almost collinear to each other; the other jet is then formed by the quark, eq. (3.13). The cross section for $q\bar q g$ production has a contribution $d\sigma_3$ with 2 dipole operators, a contribution $d\sigma_4$ with a dipole operator and a double dipole operator, and a contribution $d\sigma_5$ with 2 double dipole operators (see (6.5-6.8) in ref. [18]). Here $d\sigma_3$ describes final state interaction and contains collinear and soft singularities, while $d\sigma_4$ and $d\sigma_5$ are finite. Collinear singularities lie at $x_q = 1$ and $x_{\bar q} = 1$, and the soft one is in the corner $x_q = x_{\bar q} = 1$ in figure 2. In this paper we will work only with the singular part of $d\sigma_3$ (see (7.8) of ref. [18] and eq. (3.16)) and the collinear factor $n_j$ (see (7.9) in [18]). Here we modified the integration area in $n_j$ according to the $k_t$ jet algorithm, whereas in ref. [18] we used the cone algorithm. After integration we get $n_j + n_{\bar j}$ in terms of logarithms and the function
$$
\begin{aligned}
w(y_{cut}) ={}& 2\,\mathrm{Li}_2\!\left(-\frac{y_0}{2y_{cut}}\right) - \mathrm{Li}_2\!\left(\frac{y_0^2}{4y_{cut}}\right) + 2\,\mathrm{Li}_2\!\left(\frac{1-y_0}{1-y_{cut}}\right) + \mathrm{Li}_2(y_{cut}) \\
&+ \frac{y_0^2 + 2y_0 - 3y_{cut}^2 + 6y_{cut} - 3}{2}\,\ln y_{cut} + 2\ln(1-y_0) - \frac{y_0^2 + 2y_0 + 3}{2}\,\ln\frac{y_0}{2} \\
&+ \frac{y_{cut}^2 + 2y_{cut} - y_0(y_0+2)}{2}\,\ln(y_0 - y_{cut}) + \frac{3 - y_{cut}^2 - 2y_{cut}}{2}\,\ln(1-y_{cut}) \\
&+ \frac{6y_{cut}^3 + y_{cut}^2(y_0 - 20) + 2y_{cut}(y_0^2 + 7y_0 + 16) + y_0(y_0^2 + 10y_0 + 14)}{4(2y_{cut} + y_0)}.
\end{aligned}
$$
This result cancels the soft and collinear singularities in the virtual part, and we get the result replacing (7.24) in ref. [18], together with its small-$y_{cut}$ approximation. The remaining contributions of $d\sigma_3$, $d\sigma_4$, and $d\sigma_5$ are suppressed in $y_{cut}$. Therefore the contribution of the soft and collinear gluons to the cross section after cancellation of divergencies with the virtual part follows. It means that the leading-in-$y_{cut}$ contribution is numerically of the same order as the $O(\sqrt{y_{cut}})$ corrections. But corrections of this order come from all other contributions to the cross section, i.e. the remaining part of $d\sigma_3$, $d\sigma_4$, and $d\sigma_5$ integrated over the whole 3-jet area (regions 1-6). Therefore the result for $S_R$ alone cannot be a good approximation; it is important rather as a subtraction term for a future full numerical calculation. Nevertheless, using eq. (2.93), the $x$ integral is doable analytically, see eq. (2.94). The result is given in figure 3. One may notice a sharp corner of the graph at $\beta = 0.5$. It is related to the change of the functional dependence on $\beta$ in the limits of the $Q$ and $W$ integrations of the cross section at $\beta = 0.5$, which is a consequence of the HERA cuts.
Quark+antiquark in one jet
The integral over the area covered by regions 1-4 in figure 4 is $\sim\sqrt{y_{cut}}$. These regions cover the configurations with a collinear quark-antiquark pair. However, this contribution may be enhanced in the large produced mass $M$ limit thanks to the $t$-channel gluon in the impact factor. In this picture collinear $q\bar q$ configurations cover regions 1-4, which follows from (3.7-3.10). The cross section for $q\bar q g$ production has a contribution $d\sigma_3$ with 2 dipole operators, a contribution $d\sigma_4$ with a dipole operator and a double dipole operator, and a contribution $d\sigma_5$ with 2 double dipole operators (see (6.5-6.8) in ref. [18]); see appendix C for the proper normalization, $d\sigma^{(q\bar q g)} = d\sigma_3 + d\sigma_4 + d\sigma_5$ (3.31). Since the photon in the initial state can appear with different polarizations, the various cross sections are labeled accordingly: the dipole $\times$ dipole contribution, the dipole $\times$ double dipole contribution, and the double dipole $\times$ double dipole contribution to the 3-jet cross section. Here the impact factors are given in ref. [18] and in Appendix B, whereas the hadronic matrix elements are given by eq. (5.3) of ref. [18]. After changing variables, the hadronic matrix elements can be written as in (5.2-5.8) of ref. [18]. As a first approximation one may neglect the nonlinear term, which defines the linearized matrix element of eq. (3.43). Intrinsically this assumes the large $N_c$ approximation, so we will neglect 1 in $N_c^2 - 1$. One then integrates w.r.t. $\vec p$ and substitutes the result. First, one has to integrate these expressions over the area covered by regions 1-4 in the Dalitz plot (fig. 4), written in terms of the plot variables. Since the impact factor is symmetric w.r.t. the $q \leftrightarrow \bar q$ interchange, one can rewrite the integral accordingly. The impact factors are not singular as $\Delta_g \equiv |\vec\Delta_g| \to 0$ and $\vec\Delta_g^2 \sim \bar x_g$. Therefore, to get the leading contribution in $\sqrt{y_{cut}}$, one can put $\vec\Delta_g = 0$ in them and integrate w.r.t. $\vec\Delta_g$. Next, we will work in the small $Q_s$ approximation as we did for the LO impact factor (see eqs. (2.88-2.92)). It means that, after integrating out the delta-functions and calculating the derivatives, one keeps the $t$-channel momenta only in the exponents and neglects their absolute values everywhere else. Then the exponential integrals are calculated straightforwardly, e.g. $\int_0^{+\infty} dp^2\, e^{-R_0^2 p^2} = \frac{1}{R_0^2}$. As a result one obtains the corresponding cross sections. We demonstrate this procedure on the example of the longitudinal photon contribution to $\sigma_5$. The impact factor for longitudinal photon $\times$ longitudinal photon was calculated in ref. [18] (B.1), eq. (3.56). As was outlined above, using the small $Q_s$ and small $y_{cut}$ approximations, one can take the $t$-channel integrals; then one integrates over regions 1-4 via eq. (3.51) and w.r.t. $x_j$ according to eq. (3.45).
Keeping only the leading contribution in y_cut, one gets the corresponding result. The product of the transverse photon × transverse photon impact factors was calculated in ref. [18], see eq. (B.16). The integration in this case is similar to the previous one, albeit with more cumbersome expressions; therefore we do not present the intermediate results and give only the final answer. The longitudinal photon × longitudinal photon impact factor Φ_3^+(p_1⊥, p_2⊥) Φ_3^+(p'_1⊥, p'_2⊥)* was calculated in eqs. (B.2-4) and the transverse one in eqs. (B.17-19) in ref. [18]. The remaining cross section dσ_4JI contains Φ_4(p_1⊥, p_2⊥, p_3⊥) Φ_3(p'_1⊥, p'_2⊥)*. We present these convolutions in Appendix B. Integrating them according to the guidelines discussed above, we obtain the corresponding contributions. To get the distribution in β one has to integrate this equation w.r.t. φ from 0 to π because the jets are treated as identical. The results are in figures 5, 6, and 7. As one can see, the interference term dσ_4T is negative, which significantly diminishes the leading power asymptotics of dσ_5T. In addition, the large N_c approximation decreases dσ_5T by ∼10%, since we expand in N_c. On the other hand, the dependence on the rapidity cut (2.75-2.76) is very weak.
Conclusion
This paper discussed the exclusive diffractive dijet electroproduction with the HERA selection cuts [15]. We started from the analytic formulas of ref. [18] for the fully differential Born cross section and its real correction with dipole × dipole and double dipole × double dipole configurations. In addition, in appendix B we calculated the remaining interference real production impact factor with the dipole × double dipole configuration. We used the GBW parametrization for the dipole matrix element between the proton states and the large N_c approximation for the double dipole matrix elements. We constructed the differential ep → ep + 2jets cross section in β = Q^2/(Q^2 + M^2_2jets) and in the angle φ between the leptonic and hadronic planes within the HERA acceptance. We argued that the HERA selection rules [15] suppress the aligned jet contribution to the Born cross section, which is indicative of saturation. These cuts allowed us to neglect the t-channel momentum in the Born impact factor and integrate the γp cross section analytically. The result is in eq. (2.94).
Next, we cancelled the singularities from soft and collinear gluons between the real and virtual corrections in the collinear approximation by integrating the singular contributions over the q − (q̄g) and q̄ − (qg) areas in the Dalitz plot of fig. 2 within the k_t jet algorithm. As for the Born cross section, the resulting correction was analytically integrated in the small Q_s approximation in eq. (3.27). It gives ∼10% of the Born result.
Finally, we integrated all real corrections in the small Q_s and small y_cut approximations over the g − (qq̄) area of the Dalitz plot. We noted that, firstly, the small Q_s approximation works for the Born term, for the collinearly enhanced radiative corrections to the qq̄ dipole configuration, and for the generic gluon dipole configuration, since the HERA cuts Q, M_2jets > 5 GeV and p_⊥min > 2 GeV effectively restrict jets with very small longitudinal momentum fraction x. It means that the typical hard scale in the impact factor is of order of M^2_2jets, Q^2, M^2_2jets x, Q^2 x, p^2_⊥min, and multiplication with x here cannot make it smaller than Q_s^2. So we can expand the impact factor in Q_s. However, for the Born term, the region x < Q_s^2 / max(Q^2, M^2_2jets) is the aligned jet region indicative of saturation. Secondly, this approximation fails for other corrections to the qq̄ dipole configuration, since Q_s may be the largest scale in the impact factor in them. It also fails for the gluon dipole configuration when the qq̄ pair forming one of the jets is in the aligned jet configuration itself, since the longitudinal momentum fraction of q or q̄ may be the small parameter making the impact factor scales smaller than Q_s. Thirdly, we nevertheless calculated the gluon dipole contribution in the small Q_s approximation, neglecting that it may be incorrect in the aforementioned corners of the phase space. Therefore a comparison of our answer to the full numerical result will show how important these contributions are. This is left for future studies.
We also noted that the corrections in y_cut may be significant since the real expansion parameter is √y_cut = √0.15 ≃ 0.39. Therefore the O(√y_cut) corrections to the qq̄ dipole configuration which we did not calculate may give sizable corrections. However, we expect these corrections, as well as the nonsingular virtual corrections, to be peaked at moderate β, as is the Born term.
eq. (2.87), where we approximated e^{−R_0^2 p^2} ≃ θ(R_0^{−2} − p^2). Then we get the known behavior of eq. (2.102), which is easier to observe in the coordinate space. In the large β region Q^2 R_0^2 ≫ 1, Q^2 ≫ 1/R_0^2 ≫ M^2, the longitudinal cross section, neglecting logarithms, is given by (A.12), and this dominant contribution comes from the whole region in x.
In the small β region Q^2 R_0^2 ≫ 1, the longitudinal cross section, again neglecting logarithms, is given by (A.20); this contribution comes from the whole region in x. In the same small β region Q^2 R_0^2 ≫ 1, the transverse cross section takes an analogous form.
C. Normalization
In this appendix we discuss the overall normalization of the cross section and the relation of our matrix elements F defined in (5.2-8) of ref. [18] to the GBW dipole cross section. The density matrix for the LO cross section in our frame (5.21-23) was obtained in ref. [18]. To get the proper normalization we have to multiply all cross sections in ref. [18] by 1/(2(2π)^4). Indeed, the factor 1/2 comes from the normalization of A_3 in eq. (5.11) of ref. [18].
The in and out proton states are normalized there in a nonstandard way. Since the S-matrix does not depend on the state normalization, A_3 is two times bigger than the standard amplitude. As a result, the cross section should have an extra 1/4 to compensate for it, i.e. the corresponding factor should have appeared in (5.1) of ref. [18]. The same correction must be done in eq. (6.1) of ref. [18]. The 2π power must also be corrected in the overall factor of eq. (5.11) of ref. [18]. Indeed, the amplitude A_3 is exactly the matrix element (3.1) of ref. [18] after removing (2π)^4 δ^(4)(p_γ + p_0 − p_q − p_q̄ − p'_0). In this matrix element the transverse and (−) delta functions appear together with (2π)^2 and 2π, as in eqs. (5.7-8) and eqs. (5.2-3) of ref. [18], correspondingly. Only the (+) delta function is without 2π in eq. (3.1). Therefore we must have an extra 2π in the denominator in A_3 in addition to 1/(2π)^{D−3} from eq. (3.1) of ref. [18]. This gives us the aforementioned substitution. The same misprint occurs in eq. (6.4) of ref. [18]. After these corrections we get eqs. (2.22-2.25).
Next, we have to substitute a model for the hadronic matrix elements F. We will use the Golec-Biernat–Wüsthoff (GBW) [45] parametrization, which was formulated in the coordinate space. To get the proper normalization we Fourier transform eq. (2.23) and compare it with eq. (4.48) in ref. [42], using 1/(l^2 + a^2) = ∫ d^2r K_0(ar)/(2π) e^{−i l·r} and F(k) = ∫ d^2r e^{−i k·r} F(r). Comparing it with eq. (4.48) in ref. [42], the GBW parametrization of the forward dipole matrix element in our normalization follows. One can check the consistency of this normalization by deriving the inclusive γ*p cross section with the same matrix elements. Using the propagators in the shockwave background (2.19-20) from ref. [18], one gets the γ*p → γ*p amplitude; extracting the dependence on the overall momentum transfer, using the optical theorem, and comparing this result to eqs. (3.7-9) in ref. [42], we get the same result (C.7) for F as before. | 8,523.2 | 2019-05-17T00:00:00.000 | [
"Physics"
] |
Crustal deformation, active tectonics and seismic potential in the Sicily Channel (Central Mediterranean), along the Nubia–Eurasia plate boundary
Based on multidisciplinary data, including seismological and geodetic observations, as well as seismic reflection profiles and gravity maps, we analysed the pattern of crustal deformation and active tectonics in the Sicily Channel, a key observation point to unravel the complex interaction between two major plates, Nubia and Eurasia, in the Mediterranean Sea. Our data highlight the presence of an active ~220-km-long complex lithospheric fault system (here named the Lampedusa-Sciacca Shear Zone), approximately oriented N–S, crossing the study area with left-lateral strike-slip deformation, active volcanism and high heat flow. We suggest that this shear zone represents the most active tectonic domain in the area, while the NW–SE elongated rifting pattern, considered the first-order tectonic feature, appears currently inactive and sealed by undeformed recent (Lower Pleistocene?) deposits. Estimates of the seismological and geodetic moment-rates, 6.58 × 10^15 Nm/year and 7.24 × 10^17 Nm/year, respectively, suggest that seismicity accounts for only ~0.9% of crustal deformation, while the anomalous thermal state and the low thickness of the crust would significantly inhibit frictional sliding in favour of creep and aseismic deformation. We therefore conclude that a significant amount of the estimated crustal deformation-rate occurs aseismically, opening new scenarios for seismic risk assessment in the region.
Results
Seismology. Data from instrumental seismicity catalogues (see "Data and Methods" section) highlight the main seismic features of the Sicily Channel region, which, despite the presence of active faults and its composite geodetic deformation, is characterized by low-to-moderate seismic activity 27,32 in comparison with that observed (Fig. 1a) in nearby regions such as Sicily 28 and North Africa 8,9,29, located along the Nubia-Eurasia convergent plate boundary. In detail, in the Sicily Channel, instrumental seismicity is scant and mainly concentrated along the above-mentioned N-S-oriented belt 26,27,32,33 (Fig. 2a). Conversely, in southern Sicily, seismicity is mainly clustered in the Hyblean plateau and the Belice area, and marks the presence of active tectonic structures which were the site of historical earthquakes (Fig. 2a). Historical catalogues document that, for the whole region (Fig. 2a), large seismic events (M ≥ 6.5) have taken place since 1125 (https://www.emidius.eu/SHEEC/).
Our statistical evaluation of the deformation-rate budget for the Sicily Channel was focused on the area delimited by the blue polygon in Fig. 2a, which was chosen in relation to the distribution of continuous GPS stations. In this area, the SHEEC catalogue reports the occurrence of moderate earthquakes (M > 4.5) only since 1578 (Fig. 2b), close to Pantelleria and along the south-western Sicily coast (Fig. 2a). The seismic moment-rate was calculated according to Eq. 1 (see "Data and Methods" section), which implies a seismic moment-rate estimate dependent on the largest magnitude value (M_x) in the area. The simplest method of calculating M_x is to take the largest earthquake reported in the seismic catalogue and add 0.5 35. The largest earthquake striking our study area (Fig. 2a) took place in 1740, with an estimated magnitude of 5.2 (https://www.emidius.eu/SHEEC/). Therefore, we assumed a value of 5.7 as a maximum potential magnitude. The definition of M_x is a critical aspect for a robust seismic moment-rate estimate. By using the MMAX toolbox 36, we performed some statistical estimates of M_x under different circumstances (completeness and temporal length of the catalogue, magnitude distribution and uncertainties, number of earthquakes, etc.), by adopting a wide spectrum of statistical procedures. The statistical estimations of M_x were made by considering all historical and instrumental earthquakes with M ≥ 4.0. The obtained magnitude values range in the interval 5.22 (± 0.47) − 5.52 (± 0.44). The assumed potential magnitude of 5.7 is at the upper boundary of these estimates, and was therefore considered a suitable estimate for the investigated region.
Under these assumptions and considering Eq. 1, our seismic moment-rate estimate for the study area is 6.58 × 10^15 Nm/year (Table 1).
Geodetic data. GPS observations acquired in the 2001.0-2018.0 time-interval from continuous stations located around the Sicily Channel and southern Sicily have been analysed to describe the current crustal deformation in the study area. Estimated GPS velocities, referred to a Nubia-fixed reference frame 37, and associated uncertainties (at the 95% level of confidence) are reported in Fig. 3 (blue arrows; red and yellow arrows indicate the greatest extensional and contractional strain-rates, respectively, estimated for the area defined by the blue polygon; maps compiled using the Generic Mapping Tool, version 5 14, and edited with Inkscape, version 1, https://inkscape.org). Within this frame, the stations at Lampedusa (LAMP) and Pantelleria (PZIN) and those along the SW Sicilian onshore move eastward, with rates ranging between ~3.8 and 2.1 mm/year. The strain-rate field also suggests that the western sector of our study area (Fig. 2a) is dominated by a prevailing contractional field, with ε_hmin axes having a WNW-ESE orientation between Pantelleria and SW Sicily, and a NW-SE attitude between Pantelleria and Lampedusa. Conversely, the eastern sector is characterized by a strike-slip deformation field, with ε_Hmax and ε_hmin axes aligned to the NE-SW and to the NW-SE direction, respectively (Fig. 3). Assuming a value of 13 km 30 as the average seismogenic thickness H_s, and according to Eq. 4 (see "Data and Methods" section), we estimated a geodetic moment-rate of 7.24 × 10^17 Nm/year for the investigated area (Table 1).

Seismic reflection profiles. Seismic reflection profiles crossing the NW-SE tectonic depressions do not show evidence of recent tectonic activity (Fig. 4). Conversely, SPARKER profiles show evidence of incipient activity along the N-S shear zone, depicting a diffuse and complex pattern of transtensional and transpressional deformations, affecting the sedimentary sequence up to the seafloor (Fig. 5). Sediments in the depocenters can be subdivided into two seismo-stratigraphic units, separated by a major unconformity (H1 in Fig. 4). The recent sediment layer of the upper unit appears relatively undeformed and shows only local evidence of sub-vertical faulting, never affecting the seafloor (Fig. 4). Based on the available seismostratigraphic constraints, H1 might be correlated to a major Early Pliocene tectonic event, when fault-dominated extension shifted to a magma-assisted rifting without a strong tectonic component 20. On the other hand, chronostratigraphic well logs in the Gela basin (i.e., the Palma well, whose data are reported in the ViDEPI project) suggest that reflector H1 observed in our seismic reflection profiles might correspond to a stratigraphic hiatus dated to the lower Pleistocene. However, we note that in the depocenters H1 is located about 1 s two-way-time below the seafloor, roughly corresponding to about 1 km of depth.
Considering a constant sedimentation rate of 1 mm/year, as deduced for the uppermost sedimentary sequence by radiometric dating 46, this level might be dated back to about 1 Ma. This estimate, although very rough, agrees with the chrono-stratigraphic and biostratigraphic reconstructions carried out in the Gela basin, where the more recent depositional sequence boundaries were dated to 1.4 Ma (Early Pleistocene) and 0.8 Ma, during a peak of the regression 43,47. More detailed age constraints, which would allow us to distinguish between different scenarios, are not available to date, but we suggest that H1 could represent the end of the fault-guided rifting processes responsible for the development of the main tectonic depressions. Therefore, H1 should mark an abrupt change in the stress regime of the Sicily Channel region. Indeed, such an estimated age corresponds to a change in the Mediterranean geodynamics occurring as a consequence of a reduction of ca. 55% of the Nubia-Eurasia convergence rate 18.
Gravity maps. Seismic reflection data available for the study area give insights into the shallow structural development, but suffer from significant limitations. In fact, they are not homogeneously distributed and, in general, not oriented perpendicularly to the features under observation. For these reasons, mapping tectonic structures, especially those with an important strike-slip component, is challenging. Moreover, penetration of the seismic signal is relatively shallow, in general less than 1 km in the sedimentary sequence, thus seismic profiles are not able to image deformations affecting the acoustic basement. To overcome these problems, and in the attempt to gather structural information at the scale of the seismological data, we carried out integrated analyses of gravity maps (Fig. 6a,b) compiled using the 29.1 release of satellite-derived data 48, publicly available at https://topex.ucsd.edu/WWW_html/mar_grav.html. To compute the Bouguer correction, we adopted a Fast Fourier Transform approach 49, employing bathymetric data from the EmodNet repository (https://www.emodnet.eu). The "gravfft" module of the Generic Mapping Tool software package 50 was used for this purpose, considering densities of 1035 kg/m^3 and 2700 kg/m^3 for water and crust, respectively. The free-air gravity map of Fig. 6a highlights the presence of negative anomalies centred on the deep tectonic depressions, which show NNW-SSE oriented axes. The map also highlights the presence of a major N-S transverse boundary displacing left-laterally the major crustal features, which appears more evident in the Bouguer anomaly map (Fig. 6b).
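For orientation, the Bouguer reduction applied here can be mimicked with a much cruder infinite-slab correction than the spectral gravfft computation actually used; the short Python sketch below is illustrative only, and the grid names, the 2πGΔρh slab formula and the synthetic inputs are assumptions rather than the study's workflow.

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1035.0   # kg/m^3, density adopted in this study
RHO_CRUST = 2700.0   # kg/m^3, density adopted in this study

def simple_bouguer(free_air_mgal, water_depth_m):
    """Crude Bouguer anomaly: replace the water column by crustal rock with an
    infinite-slab correction 2*pi*G*(rho_crust - rho_water)*h, returned in mGal."""
    drho = RHO_CRUST - RHO_WATER
    slab_si = 2.0 * np.pi * G * drho * water_depth_m   # m/s^2
    return free_air_mgal + slab_si * 1.0e5             # 1 mGal = 1e-5 m/s^2

# toy usage on synthetic grids (a 1.5 km water column adds ~105 mGal)
fa = np.zeros((4, 4))
depth = np.full((4, 4), 1500.0)
print(simple_bouguer(fa, depth).mean())
```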
Robustness of geodetic and seismic moment-rate estimations.
Geodetic and seismic moment-rate estimates are affected by some physical uncertainties. For instance, geodetic measurements should sample a time-interval long enough to: (i) minimize the effect of velocity uncertainties; and (ii) adequately sample both the seismic and aseismic spectrum, as well as long-term deformation transients. Moreover, factors such as station density, network geometry and the smoothing parameters chosen for strain-rate estimates also affect the resulting geodetic moment-rates. Seismic moment-rate estimates are commonly affected by the completeness (i.e. all the earthquakes above a given magnitude should be fully reported) and the temporal length of seismic catalogues. Indeed, a relatively short time-interval (100-300 years) may not be representative of typical seismic cycles in a given region. To be considered robust, seismic moment-rate estimates performed using data from seismic catalogues require shorter average earthquake recurrence intervals than the catalogue duration 4. On the other hand, instrumental 50-100 year-long catalogues are the most common source of data used worldwide in probabilistic seismic hazard analysis, under the assumption that such a time span would be adequate to derive earthquake return periods over timescales of 500-5000 years 3.
Considering the above-mentioned factors, we performed some tests to assess the robustness of our estimates. First, we calculated additional strain-rate fields by simply varying the size of the computational grid (from 0.05° to 1.0°; see Supplementary Information). Results highlight that, as the grid size increases, the smoothing pattern and the number of local artefacts decrease (Fig. S1). Moreover, the moment-rate estimates, in the interval 1.13 × 10^18 − 4.31 × 10^17 Nm/year, decrease as the computational grid size increases (Fig. 7a), their estimation being related to the largest value of strain-rate in the investigated region (Eq. 4). However, even considering the smallest value, the difference between geodetic and seismic moment-rates remains too large, as seismicity accounts for only 1.4% of the geodetic deformation. We performed additional estimations varying the seismogenic thickness H_s in the 9-13 km interval 30. Results of this last test (Fig. S4 in the Supplementary material) highlight that the geodetic moment-rate decreases with decreasing H_s. Estimated values range in the interval 2.98 × 10^17 − 1.13 × 10^18 Nm/year. Considering again the smallest value, seismic deformation accounts for only ~2% of the geodetic one.
Regarding the seismic moment-rate, our seismic catalogue is temporally short with respect to the estimated return period for a wide area encompassing the investigated one 30,52, so it might not be complete. To test this eventuality, as the a and b parameters (Eq. 3) are well constrained, we performed some additional tests by simply varying M_x in the 4.6-7.5 interval, where the lower value is the maximum magnitude reported in our instrumental catalogue, and the greater one represents the largest magnitude reported in the historical catalogue for the surrounding regions (the 1693 M7.5 earthquake striking the Hyblean Plateau; https://www.emidius.eu/SHEEC/). Results of this test (Fig. 7b) highlight that the seismic moment-rate increases with increasing M_x. Estimated values range in the interval 2.51 × 10^15 − 3.18 × 10^16 Nm/year. Even considering the largest value, the difference between the seismic moment-rate and the geodetic one remains large (seismicity accounts for only 4.4% of the geodetic deformation).
The performed tests clearly indicate that the observation of a significant mismatch between geodetic and seismic moment-rate is reliable.
Discussion
Combining GPS observations and earthquake catalogues, we performed a statistical evaluation of the deformation-rate budget for the Sicily Channel area, which suggests that crustal seismicity accounts only for ~ 0.9% of the cumulated deformation. This implies a seismic moment deficit possibly covered by a portion of aseismic deformation (ongoing unloading by creep and other plastic processes) or by ongoing strain not yet released by seismicity (elastic storage).
Analyses of seismic reflection profiles carried out in this study, together with those reported in the literature, have enabled us to provide an updated picture of the current geo-structural setting of the Sicily Channel. We note that the NW-SE aligned tectonic depocentres are presently inactive, while seismic reflection profiles show evidence of active deformation along the N-S shear zone, as highlighted by gravity, seismological and geodetic observations. Along this corridor, tectonic activity is marked by seafloor scarps, displaced seismic reflectors and chaotic sediment facies (Fig. 5). Moreover, it corresponds to an area where ongoing magmatic activity (e.g., subaerial and submerged volcanic edifices) has been described by several authors [11][12][13], and where aligned patterns of magnetic anomalies suggest the widespread presence of magmatic bodies at different depths 11,53. The integrated analysis of the available morpho-bathymetric data, seismic reflection profiles and gravity maps thus allows identifying a first-order structural feature, whose location, geometry and inferred kinematics are in good agreement with the spatial distribution of recent volcanism and seismicity, therefore supporting the presence of a sub-vertical lithospheric shear zone favouring magma ascent 26. Although the nature of this shear zone is poorly established, its deep roots and orientation, inferred from gravity data, suggest that it might represent part of inherited Mesozoic discontinuities that cut the basement and formed along the rifted passive margin of the Tethys ocean 54,55. These structural discontinuities acted as normal or transtensional faults until the Late Miocene, when they underwent transpressional reactivation 39,56-59, but their recent activity seems to accommodate the reorganization of the Nubia-Eurasia convergence along this segment of the plate boundary.
The observed N-S shear zone connects northward with two previously described fault systems: (i) the Capo Granitola fault to the west; and (ii) the Sciacca fault system to the east. The former is made up of a sub-vertical master fault with a few related splays, and extends for ca. 50 km with a N-S orientation from the offshore area of Capo Granitola to the volcanic area of the Graham Bank (Fig. 1b). The Capo Granitola fault does not show clear evidence of present-day tectonic activity 20,42,45, while the Sciacca fault, forming a positive flower structure, shows deformations reaching the sea-floor 42.
Seismic reflection profiles analysed in this study do not show clear evidence of compressive deformation along the southern segment of the shear zone (Fig. 5). Older compressional features, such as folds and structural sediment undulations, call for a recent transtensional reactivation, as also suggested by the downthrow of seismic reflectors along high-angle structures (Fig. 5). This implies that the wide N-S shear zone is characterized by a complex pattern, including variable fault kinematics depending on the relative orientation between pre-existing discontinuities and the present-day stress field. Compressional strain seems to prevail in the northern part of the shear zone 45,59, while transtension characterizes a wide region to the south of the rifting depressions. This reconstruction, however, needs to be targeted by further analysis of geophysical data, which should combine high-resolution and deep-penetration seismic images to investigate the deep tectonic control on the shallow structural development. The observed deformations define a ~220-km-long, complex, highly segmented lithospheric fault system that extends from Lampedusa to the SW Sicily offshore, shows prevailing left-lateral kinematics, and is here named the Lampedusa-Sciacca shear zone (LSSZ). Northward, the inland belt of Sicily is divided into western and eastern belts by a NNW-striking diffuse deformation zone. Along this diffuse deformation zone, oblique thrusting associated with clockwise rotations and wrench motions led to differential shortening during the Neogene accretion of the Sicilian-Maghrebian thrust belt, resulting in higher rates in the eastern belt compared to the westernmost one 55,60. Along this diffuse deformation zone, an advective transfer through buried deep extensional faults linked to the mantle has been inferred on the basis of the occurrence of rising gas and hot waters enriched in mantle elements 61, similarly to what has been detected along major transcurrent/transform domains 62,63. Moreover, recent tomographic studies imaged the presence of a deep discontinuity extending at least down to 30 km depth 26, related also to a strong variation of the Moho depth, from 34 to 36 km below the eastern sector to less than 30 km below the western sector 55. The same tectonic setting, with a lithospheric connection between the lower plate mantle and upper plate structures, was described in the adjacent Ionian Sea, where a series of transverse/transtensional faults deeply fragmenting the convergent plate boundary trigger lower plate serpentinite diapirism 64. Furthermore, this diffuse deformation zone, together with the LSSZ, might represent the current shallow expression of an inherited Mesozoic lithospheric discontinuity, formed along the rifted passive margin of the Tethys ocean. Such a discontinuity was involved, in the last few Ma (Upper Neogene), in the Nubia-Eurasia convergence process, as suggested by the narrow indentation of the external front of the Sicilian-Maghrebian thrust belt (Fig. 1b).
Evidence of recent tectonic activity has been identified on top of the Madrepore Bank and the Malta High (Fig. 1b), in Late Quaternary deposits 43. All these results clearly highlight the presence of several faults showing significant traces of activity during the Holocene, mainly along the LSSZ. Based on their deduced surface length (from 10 up to 50 km), these faults would be capable of generating earthquakes with magnitude values up to 7.2 65. Scaling relations between fault length and maximum earthquake are widely used in regions where there are no historical data, but a number of issues arise when the fault planes are not exposed at the surface (i.e. buried and/or offshore faults), so that their geometry is constrained from regional seismic reflection profiles or from earthquake sequences, and their length may be poorly constrained. Besides these problems, our deduced magnitude values agree well with those estimated for the Sicily Channel area by adopting other scaling relations 66.
If crustal seismicity accounts for only ~ 0.9% of the cumulated deformation, in a region affected by active faulting capable of generating earthquakes with large magnitudes (M > 7), we need to re-evaluate the conditions for a reliable seismic hazard assessment to address the following questions: (i) is the lack of large earthquakes related to a longer return period than the observation time-span? (ii) will the excess of deformation be released through major impending earthquakes?
A return period of ca. 1000 years for M = 7.5 earthquakes has been estimated for a wide area including both the Hyblean Plateau and the Sicily Channel 52. The moment-rate difference (Table 1) can be expressed in terms of the "missing" earthquake necessary to match the geodetic moment-rate. We therefore estimate that, in the investigated crustal volume, an M = 5.8 earthquake is necessary each year to match the moment-rate difference. Alternatively, such a moment-rate difference can be filled by an M = 7.0 earthquake every 50 years or by an M = 7.5 earthquake every 310 years. This last estimate is ~3.2 times smaller than the return period reported in the literature 52. Although both instrumental and historical seismic catalogues correspond to a random sampling of the long-term seismicity pattern over the seismic cycle, it would be unrealistic to associate the moment-rate discrepancy only with a "missing" part of the earthquake catalogue. Indeed, the available historical and instrumental catalogues suggest a scenario where a small portion of this moment-rate difference could be compensated by minor to moderate earthquakes. This agrees with the structural setting of the LSSZ as highlighted by seismic reflection data, where a composite kinematic pattern (i.e., transtension and transpression) would imply segmentation of the tectonic features. If the excess of deformation is compensated by aseismic strain across creeping faults, could it be related to the crustal rheology?
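The arithmetic behind these equivalent-earthquake figures follows directly from the standard moment-magnitude relation log10 M0 = 1.5 Mw + 9.1; the sketch below reproduces it with the moment-rates quoted in Table 1 and is illustrative only (the rounded inputs give ≈55 and ≈312 years rather than the quoted ~50 and ~310).

```python
import numpy as np

def moment_from_magnitude(mw):
    """Scalar seismic moment (Nm) from moment magnitude: log10 M0 = 1.5*Mw + 9.1."""
    return 10.0 ** (1.5 * mw + 9.1)

def magnitude_from_moment(m0):
    return (np.log10(m0) - 9.1) / 1.5

geodetic_rate = 7.24e17   # Nm/year, Table 1
seismic_rate = 6.58e15    # Nm/year, Table 1
deficit = geodetic_rate - seismic_rate

# single yearly event that would close the deficit
print(round(magnitude_from_moment(deficit), 1))            # ~5.8

# recurrence interval (years) of one event of magnitude Mw closing the deficit
for mw in (7.0, 7.5):
    print(mw, round(moment_from_magnitude(mw) / deficit))   # ~55 and ~312
```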
This hypothesis is supported by several observations, such as the anomalous temperature structure, the presence of magmatic activity and the low crustal thickness (Fig. 6c). Surface heat flow measurements carried out in the last decades (http://www.datapages.com/gis-map-publishing-program/gis-open-files/global-framework/global-heat-flow-database) show values ranging from 50 to 100 mW m−2 in the Malta trough and in the Gela basin, while values up to 135 mW m−2 are observed in the Linosa and Pantelleria troughs and on the Adventure Bank (Fig. 6c). In addition, subaerial and submerged Plio-Pleistocene volcanoes [11][12][13] are mainly located along the LSSZ. At crustal depth (15-25 km) these volcanic edifices correspond to some low P-wave velocity bodies 32. Moreover, the estimated crustal thickness is ~21 km in the central part of the Sicily Channel, with increasing values up to 32 km both northward and southward 51. All this evidence lends credit to a more ductile rheology across the LSSZ, which would significantly inhibit frictional sliding and favour creep and aseismic deformation. Indeed, the LSSZ allows weak material to ascend into the intraplate shear zone and eventually migrate laterally to form strong lateral heterogeneities, both in composition and in mechanical strength. Based on these observations, we favour the hypothesis that a considerable amount of the estimated crustal deformation-rate budget occurs aseismically, at least in the northern sector of the Sicily Channel area. Moreover, under the higher confinement that exists deeper in the Earth's lithosphere, brittle-like (i.e. sudden, localized) failure may occur, as testified by the occasional observation of deep earthquakes along the LSSZ (Fig. 2a).
Conclusions
A multidisciplinary analysis of geodetic, seismological and seismic reflection data provides an updated picture of the active tectonic features in the northern Sicily Channel region, a key area recording the Nubia-Eurasia plate interaction in the Mediterranean region. All the analysed data, integrated with reconstructions and observations from the literature, point to the presence of a N-S tectonic lineament we named the Lampedusa-Sciacca shear zone (LSSZ), which represents the most active tectonic domain in the study area; seismicity accounts for only ~0.9% of crustal deformation, as deduced by comparing geodetic and seismological moment-rate budgets. Our preferred scenario, supported by collateral evidence such as incipient magmatism, high heat-flow and the reduced crustal thickness, points to a relatively ductile rheology of the crust, suggesting an aseismic restoration of this deficit. This implies a thorough re-evaluation of the seismic hazard in this region, where only a small portion of the inferred deformation would be compensated by minor to moderate future earthquakes.
Data and methods
Seismological data. We collected a catalogue of instrumental seismicity taking into account all data records reported in on-line bulletins (http://www.isc.ac.uk/iscbulletin/search/catalogue/; http://iside.rm.ingv.it). For the study area (Fig. 2a), we selected 1780 earthquakes covering the time interval 1966-2018, with magnitudes between 1.5 and 5.5. Hypocentres collected from the ISC bulletin span the 1966-1984 time interval. For the earthquakes of this period (~3% of the whole collected dataset), the bulletin does not provide uncertainties of the location parameters, except for a few records, for which the mean error on the horizontal coordinates is ~12 km. Records coming from the other bulletin (http://iside.rm.ingv.it) cover the period 1985-2018, and refer to earthquakes mainly recorded by the seismic network managed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). Uncertainties of the hypocenter locations are, on average, 3, 6 and 2 km for the longitude, latitude and depth coordinates, respectively. Nevertheless, numerous locations are reported with fixed focal depth, so they may suffer from greater uncertainties. Available historical seismic catalogues report, for the area in Fig. 2a, the occurrence of large earthquakes (M ≥ 6.5) since 1125 (https://www.emidius.eu/SHEEC/). The accuracy of these catalogues is not uniform and the epicentral location of some historical earthquakes may be uncertain, mainly due to the presence of wide sea areas and the sparsely populated region. This is the case for several earthquakes which are clustered close to the main towns and villages, clearly reflecting the distribution of populated areas along the southern Sicilian coast and Pantelleria island, where the shocks could be felt 27.
The seismic moment-rate (Ṁ_seis) has been calculated following ref. 34 (Eq. 1), where φ is a correction for the magnitude (M)-moment (M_seis) relation, M_x is the magnitude value of the largest earthquake that could occur within the investigated region, c and d (with values 1.5 and 9.1 for M_seis in Nm, respectively) are the coefficients of the relation 67 log M_seis = cM + d (Eq. 2), while a and b are the coefficients of the earthquake frequency-magnitude distribution 68 log N(M) = a − bM (Eq. 3), with N(M) the annual cumulative number of earthquakes having magnitude equal to or greater than M. The earthquake frequency-magnitude distribution breaks down at the value of M_c (magnitude of completeness), which theoretically defines the lowest magnitude at which 100% of the earthquakes in a space-time volume are detected 69. To estimate the coefficients of the earthquake frequency-magnitude distribution, we defined a sub-catalogue by extracting from our instrumental catalogue only the earthquakes falling within the area outlined by the blue polygon in Fig. 2a. Such a sub-catalogue (468 events) covers the 1968-2018 interval with magnitude values between 1.5 and 4.6 (Fig. 2c). Finally, we calculated the a, b and M_c values for this sub-catalogue by using a maximum likelihood estimation technique 70, obtaining values of 3.85 (± 0.11), 1.12 (± 0.08) and 2.8 (± 0.2), respectively (uncertainties at the 95% level of confidence; Fig. 2d and Table 1). Earthquake magnitudes in our data refer to different scales: M_l (local magnitude), M_d (duration magnitude), m_b (body wave magnitude) and M_s (surface wave magnitude). Ideally these magnitudes should be converted into moment magnitude (M_w), which should be used as the standard scale for the moment-rate calculation, given the limitations of the other magnitude scales. Due to the lack of a regional relationship between the different scales, here we converted all earthquake magnitudes directly into scalar moments by using the above generalized relation. Finally, assuming φ = 1.71 (which reflects an average error of 0.3 on catalogue magnitudes 34) and M_x = 5.7, we estimated a seismic moment-rate of 6.48 × 10^15 Nm/year (see Table 1 for details on all parameters).
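Two standard ingredients of this calculation can be illustrated compactly: a maximum-likelihood (Aki-type) b-value estimate for the frequency-magnitude distribution and the magnitude-to-moment conversion with c = 1.5 and d = 9.1. The Python sketch below uses a synthetic, unbinned toy catalogue; it does not reproduce the full moment-rate formula of Eq. 1 (the φ correction and the M_x truncation are omitted), and the specific maximum-likelihood estimator of ref. 70 may differ in detail.

```python
import numpy as np

def aki_b_value(mags, m_c, dm=0.0):
    """Maximum-likelihood b-value (Aki-type) for magnitudes >= m_c, with an
    optional half-bin correction dm for catalogues binned in magnitude."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def moment_nm(mag, c=1.5, d=9.1):
    """Scalar moment in Nm from magnitude via log10(M0) = c*M + d."""
    return 10.0 ** (c * mag + d)

# toy catalogue: 468 events drawn from a Gutenberg-Richter law with b = 1.1
rng = np.random.default_rng(0)
m_c, b_true = 2.8, 1.1
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=468)

print(round(aki_b_value(mags, m_c), 2))   # close to the simulated b of 1.1
print(f"{moment_nm(5.7):.2e}")            # moment of the assumed M_x = 5.7 event
```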
Geodetic data. Raw GPS observations were reduced to loosely constrained daily solutions by using the GAMIT/GLOBK software packages 71 . The analysed dataset consists of data from up to 50 GPS stations (with more than 2.5 years of observations) belonging to various networks developed in the last two decades for crustal deformation studies and commercial applications (e.g., mapping and cadastral purposes). Estimated GPS velocities refer to a Nubian-fixed reference frame 37 .
In order to estimate the geodetic strain-rate, in a first step we derived a continuous velocity gradient over the study area on a regular 0.25° × 0.25° grid (with nodes not coinciding with any GPS station) using a "spline in tension" function 72, with the horizontal velocity field and the associated uncertainties as input. The tension is controlled by a factor T, where T = 0 leads to a minimum curvature (natural bicubic spline), while T = 1 allows for maxima and minima only at observation points 73. We set T = 0.5 because this value is optimal for minimizing the short-wavelength noise 74. Lastly, we computed the average strain-rate tensor as the derivative of the velocities at the centre of each cell (Fig. 3). We also estimated the geodetic moment-rate Ṁ_geod, which is defined as in Eq. 4 75, with μ the shear modulus of the rocks (taken here as 3·10^10 N/m^2; Table 1), H_s (seismogenic thickness) and A (surface area; 4.1·10^10 m^2 in our case study) defining the seismogenic volume over which strain accumulates and its elastic part is released as earthquakes, ε_Hmax and ε_hmin the principal geodetic horizontal strain-rates, and Max a function returning the largest of its arguments. In order to assess the robustness of our moment-rate estimations, we performed some additional computations of the strain-rate field by simply varying the size of the computational grid from 0.05° to 1.0° (see Supplementary Information).
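Since the text describes Eq. 4 only through its ingredients, the sketch below assumes the common Ward-type scalar form 2μH_sA·Max(|ε_Hmax|, |ε_hmin|, |ε_Hmax + ε_hmin|); the strain-rate value used in the example is chosen only to reproduce the order of magnitude quoted in Table 1.

```python
MU = 3.0e10      # shear modulus, N/m^2 (Table 1)
H_S = 13.0e3     # seismogenic thickness, m
AREA = 4.1e10    # surface area of the polygon, m^2

def geodetic_moment_rate(eps_hmax, eps_hmin, mu=MU, h_s=H_S, area=AREA):
    """Geodetic moment-rate (Nm/yr) from principal horizontal strain-rates (1/yr),
    assuming the Ward-type form 2*mu*Hs*A*Max(...) as a stand-in for Eq. 4."""
    m = max(abs(eps_hmax), abs(eps_hmin), abs(eps_hmax + eps_hmin))
    return 2.0 * mu * h_s * area * m

# a strain-rate of ~2.3e-8 /yr (~23 nanostrain/yr) reproduces the ~7e17 Nm/yr order
print(f"{geodetic_moment_rate(2.3e-8, -1.0e-8):.2e}")
```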
Marine geophysical data. We analysed a set of SPARKER seismic reflection profiles acquired by the Istituto di Geologia Marina (now Istituto di Scienze Marine; ISMAR) in the Sicily Channel during the 1970s. These data, available only as hard copies, have been digitized, processed and geo-referenced using the open-source software Seisprho 76. The seismic source was a 30 kJ Teledyne system, and the receiver was a single-channel streamer with an active section of 50 m. The shot interval was 4-8 s, corresponding to a horizontal spacing of about 12-24 m. Among the available seismic profiles (Figs. 4 and 5), we selected those crossing the area characterized by higher instrumental seismicity. | 6,938.4 | 2020-12-01T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Interactive Medical Image Segmentation using PDE Control of Active Contours
Segmentation of injured or unusual anatomic structures in medical imagery is a problem that has continued to elude fully automated solutions. In this paper, the goal of easy-to-use and consistent interactive segmentation is transformed into a control synthesis problem. A nominal level set PDE is assumed to be given; this open-loop system achieves correct segmentation under ideal conditions, but does not agree with a human expert's ideal boundary for real image data. Perturbing the state and dynamics of a level set PDE via the accumulated user input and an observer-like system leads to desirable closed-loop behavior. The input structure is designed such that a user can stabilize the boundary in some desired state without needing to understand any mathematical parameters. Effectiveness of the technique is illustrated with applications to the challenging segmentations of a patellar tendon in MR and a shattered femur in CT.
I. Introduction
Magnetic Resonance Imaging (MRI) [1] and X-Ray Computed Tomography (CT) [2] yield three-dimensional volumetric images which are viewed by a medical professional for diagnosis, treatment planning, or population studies [3], [4]. Typically, only a particular anatomic region or organ is of interest and must be segmented. Segmentation is the task of identifying and localizing salient structures in the image volume. Since there is an abundance of raw image data that is not analyzed due to the infeasible time and cost of manual segmentation, automated methods for segmentation are the subject of much recent medical computing literature [5]- [7]. However, a human expert's ability to combine observed image data with prior anatomical knowledge to accurately perform segmentation is unmatched by computer algorithms. There is substantial mistrust both from patients and doctors towards fully automatic algorithms. Recognizing this, there has been a recent drive towards interactive segmentation [8]- [10]. Interactive approaches use a data-driven automatic algorithm to process a majority of the volume. As the automatic segmentation runs and displays the current state, a human user can influence the algorithm's behavior to more closely align with an expected result. Fig-1 shows an example of the segmentation corrections a human would make. Ideally, an interactive system should enable a user to create excellent segmentation results with a minimal amount of time and effort.
Interactive medical image segmentation employs software tools such as Seg3D [11] and 3D-Slicer [12] for applying algorithms and visualizing the results; a human expert uses the algorithms to achieve a segmentation that is as close as possible to the ideal region boundary. Available algorithms include iterative methods with a concept of time; a partial differential equation (PDE) is used to model the space-time relationship between image data and segmentation boundaries [13]- [16]. Given user-specified parameters and an initial state, incremental modifications are made to the segmentation and shown to the user [17]- [20]. It is not clear to a user whether it is possible for their ideal region boundary to be a steady-state. In practice, they must stop the algorithm at some time t_f when the segmentation is reasonably accurate, then apply smoothing and manual corrections to the boundary. An alternative approach is to formulate segmentation as a time-independent problem; in [21]- [23], user input acts as a constraint in finite-dimensional nonlinear optimization problems. It is not known a priori whether the user's ideal region boundary is a feasible solution for some collection of constraints, while a changing number of user input constraints can affect the computational complexity. Furthermore, the time-independent formulation can lead to large changes in the segmentation output when new user input is received. Other classes of algorithms such as graph-cuts [19], [24], [25] are also effective for automated segmentation; we consider only level-set PDE algorithms due to their theoretical compatibility with methods in the PDE control literature.
Level-set methods define a region boundary implicitly as the zero level-set of a function ϕ(x, t) with domain Ω ⊂ ℝ^n. Temporal changes of the region boundary are described by a partial differential equation (PDE) as a consequence of the implicit representation. Typically, the PDE arises as the gradient flow that minimizes some meaningful functional of ϕ and the image data I(x). From a controls perspective, the image-dependent PDE is an open-loop system. We present a framework for interactive segmentation using feedback augmentation of a level set PDE system; the results and theory are a substantial extension of the preliminary version [26]. This paper is motivated by the following observation: when human users influence the level-set evolution, they have in mind a desired reference state and are trying to apply control to an image-dependent PDE system.
In the literature on control of PDE systems, two characteristics of problems are the domain of input actuation and the available measurement (pointwise throughout a domain, boundary-value only, or as an integral over space). Control through region boundaries is of paramount interest; [27] characterizes the stabilizing controls and admissible boundary conditions for a class of unstable reaction-diffusion (R-D) systems. Similarly, [28] explicitly computes invariant regions for coupled R-D systems. Stabilization of the viscous Burgers equation using boundary actuation is achieved in [29], [30] by first designing a feedback law using u(x, t) over all x, then deriving a u-observer that uses only boundary measurements. In [31], the inviscid equation is stabilized with a boundary input u(0, t) from an admissible set of controls that admits a weak solution to the initial-boundary-value problem. A common theme in PDE control for setting input and gain values are scalar functions w(t) defined as functionals of the state u(x, t) [32]- [34]; e.g., w(t) = ∫_Ω k(x, t)u(x, t)dx. Such inputs appear in this paper as well, with image dependence entering via a term analogous to k(x, t). The model used in this paper uses actuation and measurement within a neighborhood of the time-varying segmentation boundary.
Contribution
Using a PDE formulation guarantees that the computational complexity is fixed and that the segmentation result changes continuously over time. By incorporating control-theoretic tools, it is shown that the steady state segmentation can be driven to an ideal reference boundary. Input from the human user indicating locations where the current state does not match the desired reference state is processed and used as feedback in the PDE. An approach similar to backstepping [33], [35] is used; first, stability of a labeling error functional is shown under the assumption of a known reference state. Second, an auxiliary observer-like system that reacts to user input is formulated. The net coupled system is shown to have bounded error when a sufficient amount of user input has been accumulated. To the best of our knowledge, this is the first approach to interactive level set segmentation with input from the user used in feedback to guarantee stabilization about a reference boundary.
Organization
The remainder of this paper is organized as follows.
Image segmentation based on the narrow-band level-set method is reviewed in Section II. Following an approach similar to back-stepping, a controller is proposed in Section III that stabilizes a labeling-error term, assuming exact knowledge of a reference state. An auxiliary observer-like system is designed in Section IV to process a user input signal and estimate the reference state. Application of the technique to interactive segmentation of CT and MRI volumes is demonstrated in Section V, using images that are difficult to segment with existing methods. The proposed algorithm is compared to related methods in Section VI. Section VII summarizes the results and applicability of the approach. Key components of the final closed-loop system and corresponding paper sections are visualized in Fig-2.
II. Level Sets and Automated Segmentation
The class of open-loop systems considered in this work are PDE-based segmentation algorithms using the popular level set method, reviewed in Section II-A. Section II-B describes the limitations of open-loop segmentation, thus motivating the feedback control model of interactive image segmentation in Section III-A.
A. Review of Level Set Methods
Level set methods represent time-varying region boundaries in a computationally straightforward manner [36], [37]. Define Ω ⊂ ℝ^n to be the spatial domain and x a coordinate in Ω. Labeling assignments are represented with an implicit function ϕ(x, t) : Ω × [0, t) → ℝ. Boundaries between regions of interest are represented as level sets where ϕ(x, t) = C. Propagation of ϕ over time is defined by ϕ_t = −〈∇ϕ, f〉 for some vector field f that is a function of image data, of ϕ, and of spatial derivatives of ϕ. In this paper, ϕ(x, t) > 0 denotes the interior of a segmented region and the (outward) normal vector N along a level set of ϕ is given by the normalized gradient of ϕ. Assuming the gradient and normal of ϕ exist, the general form of a level set PDE is given in Eq-1. In "variational level set methods" [38]- [40], the F in Eq-1 is constructed to regulate some quantity of the form of Eq-2. As pointed out in [41], many image segmentation applications use an artificial time parameter t, which arises solely due to an iterative minimization of Eq-2. In this paper, however, t corresponds to a physical time since the human user watches a time-varying visualization of ϕ(x, t) to decide where and when to provide input.
It is usually desirable to have |ϕ_t| > 0 only in a neighborhood of the moving zero level set [42]- [44]. This narrowband restriction is used in the image processing community for several reasons. First, efficient numerical techniques such as the "sparse field method" [45] update ϕ only within the narrow-band region. Dimensions of 3D medical image volumes are on the order of 512^3; real-time performance on desktop computers is attainable only by restricting computations to a subset of Ω. Second, algorithms typically seek to separate Ω into statistically different regions within the I(x) image. It is sufficient to know only sign(ϕ) when labeling regions as interior and exterior; the magnitude of ϕ has little meaning in segmentation applications. Finally, if ϕ_t is nonzero for arbitrary values of |ϕ|, zero crossings can develop far from the initial ϕ = 0 level-set; a visualization of ϕ would show new "boundaries" spontaneously appearing. Users will be confused by such behavior; they expect to initialize ϕ(x, 0) and watch a moving level set.
Ω is divided into exterior and interior regions by a regularized version of the Heaviside step function, denoted by H_ε and illustrated in Fig-3. This regularized step function and its derivative δ_ε are defined in Eq-3 and Eq-4. This paper considers systems described by narrowband level-set PDEs with the general form of Eq-5. Application-specific goals, such as minimization of a functional, dictate the choice of the image-dependent G(·) in Eq-5; an example is given in the next section. A concise notation for the elliptic operator is κ(F), where F is a smooth function with non-vanishing gradient defined on x ∈ Ω.
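The specific H_ε and δ_ε of Eqs. 3-4 are not reproduced above; as a stand-in, the sketch below uses one common sin-based regularization supported on the band |ϕ| ≤ ε, which has the same qualitative shape as Fig-3.

```python
import numpy as np

def heaviside_eps(phi, eps=1.5):
    """A common smoothed Heaviside: 0 below -eps, 1 above eps, C^1 in between."""
    h = np.where(phi > eps, 1.0, 0.0)
    band = np.abs(phi) <= eps
    smooth = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(band, smooth, h)

def delta_eps(phi, eps=1.5):
    """Derivative of heaviside_eps; nonzero only on the narrow band |phi| <= eps."""
    band = np.abs(phi) <= eps
    return np.where(band, (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps), 0.0)

phi = np.linspace(-3.0, 3.0, 13)
print(np.round(heaviside_eps(phi), 3))
print(np.round(delta_eps(phi), 3))
```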
In practice, the nonlinear PDEs encountered in level-set segmentation tend to develop discontinuities. Periodic reinitialization of ϕ to a signed distance function [37], [46] mitigates these effects while preserving the ϕ = 0 level set. Re-distancing enforces the properties in Eq-6, where p_1, p_2, and κ_0 are fixed constants for a given re-distancing method. These bounds are helpful for control synthesis in Section III.
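Re-distancing can be implemented in several ways; the minimal sketch below rebuilds an approximate signed distance function with a Euclidean distance transform (assuming scipy is available), rather than the PDE-based schemes of [37], [46].

```python
import numpy as np
from scipy import ndimage

def reinitialize(phi, dx=1.0):
    """Rebuild phi as an approximate signed distance function, keeping the
    interior/exterior labeling and roughly preserving the zero level set."""
    inside = phi > 0
    d_in = ndimage.distance_transform_edt(inside, sampling=dx)    # distance to exterior
    d_out = ndimage.distance_transform_edt(~inside, sampling=dx)  # distance to interior
    return np.where(inside, d_in, -d_out)

# toy usage: a badly scaled level set is restored to |grad phi| ~ 1
y, x = np.mgrid[0:64, 0:64]
phi0 = (20.0 - np.hypot(x - 32, y - 32)) ** 3 / 400.0
phi_sdf = reinitialize(phi0)
gy, gx = np.gradient(phi_sdf)
print(np.median(np.hypot(gx, gy)))   # close to 1 away from the boundary
```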
B. Segmentation as an Open-Loop System
Automated image segmentation systems are designed under the assumption that a particular discriminative model captures distinguishing features that separate regions of interest from the rest of the image volume. Consequently, the systems often lead to erroneous segmentation results when their underlying assumptions do not hold. The term "open-loop" in the context of automatic segmentation means that the system evolves without any external input that might indicate failure of model assumptions and that the boundary is not moving towards a desired steady-state. Such systems arise from discriminative statistical models in the literature, wherein functionals are proposed that either maximize statistical differences between an object and its background or maximize similarity of the object to a template [40]. Several examples of statistical quantities used are region-based feature means [14], feature covariance [47], and n-dimensional non-parametric density estimates [48], [49].
Nevertheless, a recurring problem is that many objects of interest do not coincide with minima of a proposed functional. As an example, consider segmentation via the "mean-alignment" system [14]. The weighted means of the image I(x) in the interior (ϕ > 0) and exterior (ϕ < 0) regions are given in Eq-7. A gradient flow for the functional in Eq-8 gives the narrowband open-loop system of Eq-9.
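As a concrete stand-in for the open-loop system of Eq-9, the sketch below implements a Chan-Vese-style mean-separation flow restricted to the narrow band; the exact F of the mean-alignment model in [14] and the weighting by δ_ε are replaced by illustrative choices (a hard band mask and a curvature penalty ν).

```python
import numpy as np

def curvature(phi, tiny=1e-8):
    """kappa(phi) = div(grad phi / |grad phi|) via central differences."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def open_loop_step(phi, image, dt=0.2, eps=1.5, nu=0.2):
    """One explicit step of a mean-separation flow restricted to the narrow band."""
    inside, outside = phi > 0, phi <= 0
    c_in = image[inside].mean() if inside.any() else 0.0
    c_out = image[outside].mean() if outside.any() else 0.0
    force = (image - c_out) ** 2 - (image - c_in) ** 2  # push pixels to the better-matching mean
    band = np.abs(phi) <= eps
    return phi + dt * np.where(band, force + nu * curvature(phi), 0.0)

# toy usage: a bright disc on a dark background, initial circle slightly offset
y, x = np.mgrid[0:64, 0:64]
img = (np.hypot(x - 32, y - 32) < 12).astype(float)
phi = 8.0 - np.hypot(x - 30, y - 30)
for _ in range(50):
    phi = open_loop_step(phi, img)
```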
A. Reference State and Input Structure
Rather than having the human user give up on the PDE system and manually outline the desired region of interest, the PDE can be augmented with a user-driven control input. A control solution is sought due to limitations in the efficacy of open-loop system Eq-5 for real images. Necessary human effort in segmentation can then be kept low; the user need not apply input in locations where the open-loop system keeps ϕ in agreement with a desired segmentation.
Let ψ denote the ideal reference segmentation. A human user could manually trace the level set ψ(x) = 0 if given unlimited time. We seek to drive ϕ towards an explicit estimate of ψ, while maintaining closed-loop stability and minimizing the burden placed on the user. User-driven control effort should preserve the advantages of the narrowband formulation noted in Section II-A; therefore, an admissible control signal f(x, t) will act only on the |ϕ| ≤ ε subdomain of Ω. The closed-loop system then takes the form of Eq-10. In this section, ψ(x) is assumed known; a control is synthesized to drive ϕ such that it matches ψ. Later, in Section IV, a coupled system that estimates ψ from available user input is formulated.
B. Existence of a Regulatory Control
Define the pointwise and total labeling error as ξ and E, respectively (Eq-11). If ψ is known and available, regulation of E(t) is straightforward with the two theorems in this section. The regulatory control uses known bounds on the image-dependent G term of Eq-10; define G_M(x), Ḡ_M to be upper bounds such that Eq-12 holds for any segmentation state ϕ. Theorem III.1: Using a spatially-varying U(x), a control for Eq-10 that stabilizes the functional of Eq-11 is given by Eq-13. Given constants λ_0 > 0, λ_1 > 0, ρ ∈ (0, 1), a sufficient condition for boundedness of E is Eq-14. Furthermore, when the error ξ is large in the sense of Eq-15, the rate of convergence is bounded as in Eq-16. Proof: Re-arranging terms in E′ and integrating by parts, making use of δ_ε ∘ ϕ = 0 on ∂Ω, the Poincaré inequality in L^1 guarantees the existence of a constant r such that Eq-21 holds, where r is at most half the diameter of Ω [50]. Substituting Eq-22 into Eq-19 bounds the Lyapunov functional's time derivative (Eq-23). The case of |ξ| being large relative to ρ in an integral sense (Eq-15) also implies Eq-24. Substituting the condition on the |U(x)| magnitude from Theorem III.1 gives Eq-25, and the error rate E′ is negative semidefinite with the bound in Eq-26. Boundedness of E is established after substituting the λ proposed in Theorem III.1: E′ > 0 is only possible for E < (rḠ_M − λ_0)/λ_1, with E′ ≤ 0 otherwise. Thus, E is bounded.
Remark: A near-optimal (i.e. low) value for r can be obtained by substituting the definitions of δ_ε, H_ε (eqns. 4, 3) into the integrals of Eq-21, applying the chain rule, and comparing terms (omitted for space). In practice, r can be directly estimated via numerical evaluation of the integrals in Eq-21 and is on the order of |∇ϕ|, which is controlled by re-distancing.
IV. Auxiliary System Design
In the previous section, a stabilizing controller was developed that drives the segmentation towards a reference state relying on fixed quantities ψ and U, which are not known in practice. The user will be employed to provide the missing information by occasionally applying discrete corrections to the segmentation; these corrections are accumulated over time. From the current segmentation and the user input, an estimate of the ideal ψ must be inferred. These considerations lead to the coupled dynamical system presented in this section. A method for processing discrete input from a mouse or stylus to generate a distributed U is proposed in Section IV-A, while the accumulation of input is addressed in Section IV-B. An observer-like system is formulated in Section IV-C to compute ψ̂, the explicit estimate of the ideal state ψ.
A. User Input Processing
Raw input from the user arrives in the form of binary decisions as to whether a given location in space is correctly labeled as inside or outside the segmentation boundary. The user clicks with a mouse or stylus at discrete points in Ω and time, as illustrated in Fig-6. Define t_k, k ∈ ℕ, to be the sequence of times at which the user sees a visualization of ϕ and has an opportunity to apply input. At time t_k, they look at the labeling of ϕ(x_k, t_k) and either (a) apply a signed impulse denoting a "vote" for setting the label there or (b) do nothing because they agree with the current labeling of ϕ(x_k, t_k). Denote these sequential actions as (30) Before these inputs can be accumulated into U, they must be mapped into the space-time domain with some fixed support. Define the function h(x, t) as (31) where h_0(·) is a weight function and δ(t − t_k) is the Dirac delta function. As noted in [21], using an image-dependent metric for h_0(·) is a useful way to weight spatial distances. The examples in this paper use (32), which incorporates both the Euclidean distance from x to x_k and the similarity between the image values I(x) and I(x_k).
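Since the explicit form of Eq-32 is not reproduced above, the following sketch only illustrates the idea of an image-dependent spatial weight around a click; the Gaussian-product form, the kernel widths sigma_x and sigma_i, and the function name are assumptions rather than the paper's exact choice.

```python
import numpy as np

def spatial_intensity_weight(I, x_k, sigma_x=5.0, sigma_i=0.1):
    """Hypothetical h0 weight around a click at pixel x_k = (row, col).

    Combines Euclidean distance from x to x_k with the similarity between
    image values I(x) and I(x_k); the Gaussian factors are assumptions.
    """
    rows, cols = np.indices(I.shape)
    d2 = (rows - x_k[0]) ** 2 + (cols - x_k[1]) ** 2   # squared spatial distance
    di2 = (I - I[x_k]) ** 2                            # squared intensity difference
    return np.exp(-d2 / (2 * sigma_x ** 2)) * np.exp(-di2 / (2 * sigma_i ** 2))
```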
B. Accumulation of User Input
The label-error impulse inputs accumulate over time to define the control U. However, U must be regulated to prevent excessive input magnitudes while ensuring spatial smoothness and enabling |U| ≥ U_M to satisfy the conditions of Section III-B. An undesirable excess in U and |∇U| can occur in U_t = h(x, t) because the human user generates h(x, t) without understanding how their "vote" inputs influence the segmentation dynamics. Furthermore, when the label error is shrinking at a consistent rate but over a large area, it is expected that the human user will be impatient and apply excess input magnitude in an attempt to speed up the moving ϕ = 0 boundary.
Regulation of U is achieved using a nonlinear diffusion process together with accumulation of h(x, t): (33) Changes in U are dominated by h(x, t) for |U| ≪ U_M. As |U| grows, the diffusion coefficient, a function of (U/U_M)² − 1, gains influence. The following example illustrates the qualitative behavior of the U system.
Example Consider the two-dimensional image slice shown in Fig-7a. A simulated "user" chooses locations x k at which an update h(x, t) is applied according to Eq-31. Blue '×' and red '+' denote places where h(x, t) is negative and positive, respectively. Fig-7b shows what U would look like without nonlinear diffusion. Fig-7c shows the response of the regulated U-system from Eq-33. Comparing 7b and 7c, it is clear that the latter is smoother and satisfies U ≤ U M .
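Because the body of Eq-33 is not shown above, the update below is only a rough explicit-Euler sketch of the described behavior: accumulation of h(x, t) dominates while |U| ≪ U_M, and a diffusion term whose coefficient grows with (U/U_M)² damps further growth near the bound. The particular coefficient, unit grid spacing, and time step are assumptions, not the paper's implementation.

```python
import numpy as np

def update_U(U, h, U_M=10.0, dt=0.1):
    """One explicit time step of a regulated accumulation of user input.

    Implements U_t = h + div(c(U) * grad U) with an assumed coefficient
    c(U) = (U / U_M)**2, negligible for small |U| and dominant near U_M.
    """
    c = (U / U_M) ** 2
    gUx, gUy = np.gradient(U)                          # spatial gradients (unit spacing)
    div = np.gradient(c * gUx, axis=0) + np.gradient(c * gUy, axis=1)
    return U + dt * (h + div)
```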
C. Label-Error Estimate Dynamics
Dynamically estimating the reference state necessitates a coupled system; ϕ and the estimate of ψ evolve simultaneously. Let ψ̂(x, t) be an estimate of ψ(x) and define the error terms (34) Feedback in the ϕ system (Eq-10) will now use ξ̂, (35) where the initial ϕ_0 is specified by the user. The auxiliary ψ̂(x, t) observer-like system is driven by the accumulated user input U together with the error terms ξ̂ and e_U: (36) The total labeling error is defined by the following functionals of ξ̂ and e_U, where α is a constant parameter: (37) (38) In addition to stabilizing the sum of these functionals, the control proposed in Theorem IV.1 is designed to achieve a useful qualitative behavior. When the user is satisfied with the agreement between ϕ(x, t) and their ideal ψ(x), it is assumed that U(x) remains constant; either the user never needed to apply a correction near x or has otherwise stopped adding more inputs. In this case, ψ̂ should follow ϕ. Conversely, when U(x) grows due to persistent human input, ψ̂ is to become increasingly driven towards U irrespective of the agreement between ψ̂ and ϕ. Subsequently, ψ̂ should pull ϕ along due to the coupling term −ξ̂U² (Eq-13) in the closed-loop dynamics of ϕ.
Theorem IV.1 Let g(ξ, U, e_U) = −e_U(αU)² and consequently (39) Assume that user input has stopped (U remains constant) and Theorem III.1 is satisfied. Then, the sum V(t) of the two error functionals in Eqs-37 and 38 has a negative semidefinite time derivative: (40) Proof: Computing the time derivative V′(t) as the sum of the two functionals' derivatives, substituting for ϕ_t and ψ̂_t, and then adding Eq-43 to Eq-44 and combining the terms, the portion containing the error e_U can be conveniently factored: (45) When U and λ satisfy Theorem III.1, it follows that (47) Thus, V′(t) is negative semidefinite and V(t) is bounded.
D. Synthetic Image Example
To demonstrate the coupled dynamics, this section considers a simple segmentation scenario. Fig-8 illustrates closed-loop system behavior on the synthetic image used previously in Section II-B. In the absence of user input, ϕ(x, t) behaves like the open-loop system; Fig-8a shows the ψ̂ estimate following ϕ until they both reach steady state. With user input, the estimated ideal contour mediates between user input and the open-loop segmentation dynamics. In Fig-8b, the user starts to apply input upon noticing the ϕ = 0 boundary creeping through the bridge between the two ellipsoidal regions. Input stops and the system reaches steady state after the user is satisfied with the displayed segmentation. Comparing Fig-8a and Fig-8b, we see that regardless of user input, the closed-loop system aligns the zero level-sets of ϕ and ψ̂ at steady state; the presence of user input in Fig-8b shifts the steady state of ϕ and ψ̂. In both cases the α for Eq-39 is set to 1/U_M.
V. Application to MRI and CT Images
In this section, the feedback-augmented level-set methods are applied to two specific problems involving interactive medical image segmentation of X-ray Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) volumes. In Section V-A, a fractured piece of the femur is segmented in a CT volume. Next, the technique is applied to extract a patellar tendon in an MRI volume in Section V-B. For both applications, we first review the clinical problem. Next, an open-loop system appropriate for the image type is chosen. Finally, the closed-loop ϕ system is summarized, followed by a discussion of the segmentation results.
A. CT Segmentation with Mean-Alignment
The realignment of bone fragments after a fracture, also referred to as fracture reduction, is a crucial task during the operative treatment of complex bone fractures. Anatomically incorrect fracture reduction can result in severe post-traumatic complications. In order to avoid such problems and obtain an optimal fit between all relevant fracture fragments, the surgeon traditionally exposes the fractured bone by cutting the soft tissue envelope to access the fragments directly. Subsequent realignment of the recovered fracture fragments requires a trial and error approach, which prolongs surgery and increases the risk of complications for the patient. Therefore, there is a clear need for the development of less invasive techniques to reconstruct complex fractures. Segmentation of the image data to localize the fragments (as in Fig-10) is a key first step toward computing and planning the optimal way of realigning the fractured bone.
Bone tissue generally appears very bright in CT imagery; therefore, the segmentation of bone in CT is modeled with the mean-alignment system Eq-9. Using the control from Eq-13 leads to the closed-loop system (48) Note that a healthy bone can often be segmented in its entirety using the open-loop system alone, since the zero level-set of ϕ is naturally drawn to boundaries of bright objects.
Interactive control becomes vital, however, when segmenting bone subject to disease or injury; accurate segmentation is not possible without feedback. Starting from the open-loop steady state (Fig-11a), the user applies input along a fracture edge. After the system has segmented this first edge and is nearly at steady state, the user finds another edge along which to apply input (Fig-11b). A further refinement is made in Fig-11c that leads to the final steady-state segmentation of Fig-11d. The number of voxels actuated by the user's mouse strokes is plotted versus (scaled) time in Fig-11e. In a fully manual segmentation, each of the 16404 boundary voxels in Fig-11d would need to be marked by the user; the second y-axis on the right indicates the actuated voxels as a percentage of the fully manual effort. It is difficult for a person to accurately decide whether or not a fracture edge in a distant part of the volume is part of the same fragment. Fig-11e indicates that a substantial portion of time is spent with ϕ near a steady state while the user scrolls through slices to decide where fracture edges are located and whether these edges are part of the same fragment or another one in close proximity. In Fig-10a, two light regions are determined to be part of the fragment being segmented, while a third (in the upper left) is a separate bone fragment.
Normalized histograms of the image intensity distribution inside the segmentation boundary at steady state are shown in Fig-11f. Without feedback, the segmentation encloses a region with a highly peaked I(x) histogram. In contrast, the closed-loop system reaches steady state with a heavier-tailed intensity histogram. The distribution shift is due to the user applying input to correctly label bone near and along the jagged fragment edges. These regions are precisely where accurate segmentation matters most, since the fragment's edges are to be matched with those of other fragments during the fracture reduction task.
B. MRI Segmentation with Localized Statistics
Surgical repair of a torn anterior cruciate ligament (ACL) requires choosing a location from which to harvest a graft of sufficient length and thickness. The most common choice today is the patellar tendon (PaT). While the width and thickness of a PaT are quite predictable based on patient height and weight, the tendon length varies widely. This variability in shape continues to complicate surgery due to mismatch between the graft and drilled tunnel, especially in "anatomic reconstruction" [51]–[53], where the replacement ACL is to be oriented exactly as before the injury. Quantifying the variability of PaT shape and comparing it to other graft choices (namely the hamstring and quadriceps tendons) requires accurate segmentation in MRI volumes.
Soft tissue, including tendons and ligaments, is readily visible in MRI, unlike CT where only mineral-dense bone gives a strong response. However, images obtained by MRI have a complicated mapping between tissue type and observed intensity; segmenting soft tissue in MRI is generally more difficult than bone in CT. The distribution of intensity values in MRI arising from a particular anatomic structure will vary significantly between slices (Fig-12b) and will also overlap the distributions of other structures (Fig-12a). An effective approach for MRI segmentation is to separate regions based on spatially-varying statistics of I(x). To do so, the open-loop dynamics are chosen to use the localizing active contours of [54], which define the intensity means μ_in(x) and μ_out(x) locally as integrals over a Euclidean ball of radius r. With feedback the system becomes (49) Despite the advantages of the underlying open-loop system, segmenting a PaT remains challenging for two reasons. First, the tendon is very thin relative to its height and width, making a satisfactory choice of r in Eq-49 difficult. Second, I(x) at the insertion points of the tendon has the same local distribution as adjacent connective tissue. The human user, however, employs their anatomic knowledge to enable successful segmentation via the closed-loop system. Fig-13 shows the final result; the tendon has been segmented from its attachment on the inferior pole of the patella to the end of its insertion on the tibial tubercle. For context, the patella bone is also segmented and displayed.
Incremental progress during interactive segmentation of the tendon is shown in Fig-14 (a)–(d). Red and blue markers denote positive and negative extrema of U, respectively, while the green semi-opaque surface represents a reference segmentation known to the human expert. As the ϕ = 0 boundary evolves after initialization, a small amount of input yields the segmentation in Fig-14b. With the bulk of the tendon outlined, the user applies input to fill gaps in the vertical edges and remove the over-segmented regions around the insertion points at the patella and tibia bones (Fig-14c). Unlike the fracture scenario in Fig-11, the open-loop system applied to this tendon leads to massive "bleed-through" of the segmentation because the image distribution around the tendon insertion points is identical to that of the tendon itself. Fig-14d shows the steady state reached by the closed-loop system; user input stabilizes the segmentation at the desired reference boundary and prevents bleed-through past the insertion points on the patella and tibia. In Fig-14e, the number of voxels actuated by the user's mouse strokes is 4.7% of what would be needed to trace all of the tendon's boundary voxels manually. Comparing Fig-11e and Fig-14e, the latter has more piecewise-constant regions because the human user spends substantial time looking for anatomic markers and adjusting the displayed image contrast to decide where the tendon begins and ends.
A. Overview
In many implementations of level-set segmentation (e.g. [11], [17], [20]), the smoothing factor λ is a parameter that is set by a user. However, understanding such a parameter requires users to have more mathematics background than is typical for the medical community. Here, we set λ automatically to achieve desired behavior. The PDE control formulation here has a constant computational cost with respect to amount of user input and no abrupt changes to the segmentation, unlike in [21]- [23]. Under the proposed controller, sufficient input U(x) from the user guarantees agreement with the reference state; relaxed constraints in [22] dictate that it may be impossible for the segmentation to respect the user's inputs. Rushed use of the mouse by a human is not possible in [21], [23] because constraints are exactly enforced. In contrast, the input processing used in the current work provides leeway for the user: a small |U| will not dominate the open-loop dynamics. If needed, a large accumulation of |U| is achieved by "scribbling" repeatedly in a region.
It is emphasized that the closed-loop control formulation in this paper does not seek to replace existing curve evolution algorithms but rather to augment them. The control-theoretic approach enables a user to reach the desired segmentation at steady state; running the level set evolution for a longer time will not cause the boundary to "bleed through" or contract.
B. Quantitative Comparison with GrabCut
Two orthopedic images are used as test data to quantitatively compare the method presented in this paper with the popular GrabCut algorithm [19]. The user's goal is to segment the epiphysis and physis of the femur in Fig-15a and Fig-15b, respectively. User input via mouse click-and-drag is implemented and measured identically for both algorithms. The GrabCut implementation used here is available as part of the OpenCV library [55]. A location through which the cursor was dragged is defined as an "actuated voxel;" the extents around the cursor that mark seed regions in GrabCut are not counted towards this total. Locations in the image whose assigned label changes between background and foreground are tracked over time and are referred to as "reclassified voxels." In both implementations, total segmentation time is primarily a function of how long it takes the user to evaluate the current state and apply more corrective mouse strokes. The total number of actuated voxels needed to complete the segmentation is a robust indicator of user effort.
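As a concrete illustration of the two effort metrics just defined, a minimal sketch is given below; the array names and the assumption that cursor tracks and label maps are available as boolean/integer volumes are illustrative, not part of either implementation.

```python
import numpy as np

def count_actuated(cursor_mask):
    """Voxels the cursor was dragged through in one editing round (boolean mask)."""
    return int(cursor_mask.sum())

def count_reclassified(labels_before, labels_after):
    """Voxels whose foreground/background label changed between two iterations."""
    return int(np.count_nonzero(labels_before != labels_after))
```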
Actuated voxels after initialization are plotted in Fig-16. At termination, all of the segmentations have greater than 98% overlap with a manually created reference. For the adult epiphysis image of Fig-15a, the average final actuated voxel counts are 348 (GrabCut) and 118 (proposed). For the juvenile physis segmentation, the averages are 536 (GrabCut) and 141 (proposed). Segmenting the physis is more difficult with GrabCut due to the elongated shape, the nearly identical-looking fluid around the bone and the bimodal appearance of cortical bone above and spongy bone below the physis. A GrabCut iteration can change the segmentation dramatically; when this change is erroneous, significant corrective effort is required. In Fig-16b, we see this manifested by the large increases in actuated voxels during the first few rounds of GrabCut user input. In contrast, the proposed algorithm provides rapid continuous visual feedback for the user; small corrections are made before a large error can develop.
Predictability of how the segmentation changes in response to mouse strokes is a criterion for practical ease of use. Two scatterplots quantify the predictability in Fig-17; dynamic response is characterized in terms of the number of reclassified voxels (y-axis) and the number of newly actuated voxels (x-axis). Each mark corresponds to one iteration when new user input was applied. Linear regression lines are overlaid on the data. The two algorithms have a similar dynamic response in the epiphysis segmentation, shown in Fig-17a; correlation coefficients are 0.70 (GrabCut) and 0.90 (proposed algorithm). Two issues become apparent in Fig-17b for the juvenile physis scenario. First, the distribution of GrabCut data points is quite broad; correlation coefficients are 0.61 and 0.92 for GrabCut and the proposed algorithm, respectively. Second, many of the GrabCut data points are below the dashed green line, indicating a waste of user effort since there are more voxels actuated than reclassified. The dynamic response of GrabCut makes it hard for a user to predict how much change new mouse strokes will cause.
VII. Conclusion
This paper has presented a modeling approach that enables control-theoretic analysis and design for interactive medical image segmentation. Results shown for a synthetic image (Section IV-D) and real medical volumes (Section V) agree with theoretical expectations of system performance. Section V illustrated two qualitatively different situations: (1) gradual expansion of the boundary to bound the entire femur fragment and (2) prevention of "bleedthrough" or over-segmentation with the patellar tendon. In both situations, the user is able to drive the segmentation to a desired steady state and to do so with much less effort in terms of actuated voxels than manual segmentation. In summary, the PDE control formulation enables us to guarantee a user's ability to reach a reference segmentation state while also absolving them of the need to understand mathematical details or use precise mouse movement.
Successful use of the closed-loop algorithm by medical students motivates several future extensions. If a single image contains several objects of interest, they would need to be extracted sequentially in the current framework. Such a sequential, de-coupled approach does not address natural constraints of the geometry and involves re-editing common boundaries. A coupled formulation using an open-loop system of PDEs such as in [20], together with a vector of control inputs, would prevent region overlap and reduce the user's effort when segmenting multiple regions. Informative visualization is vital for efficient interaction, since performance of any interactive segmentation method is limited by how quickly and accurately a user can infer the segmentation state [56]–[58]. An interesting extension to the theory would consider the feedback between visualization and the creation of user input; for example, it may be desirable to confine movement of the boundary to regions that are observable from the user's viewpoint in 3D.

Figure captions:
• Segmentation by minimizing a meaningful image-dependent functional is not sufficient when the desired anatomic boundary is not actually a minimizer (left). An expert user would typically desire to make some corrections (right) that contradict the functional's minimizer.
• Block diagram of the proposed control formulation. Feedback compensates for deficiencies in automatic segmentation by exploiting the human expert's interpretation of complex imagery.
• The interior of a segmented region satisfies ϕ(x) > 0 while its exterior has ϕ(x) < 0.
• Desired segmentation and {ϕ(x, t) = 0} level-set overlaid on image I(x). A user would like to segment the "left ventricle" in this synthetic image that resembles an MRI scan of the brain. Evolving ϕ according to Eq-9 shrinks E(t) successfully (Fig-4); however, the open-loop system fails to segment the desired region.
• After initialization, the inner loop of Fig-2 updates ϕ and ψ̂. Input from a human user applies impulses at times t_k that accumulate as U(x, t). Between times t_k, the inner loop changes steady state in response to the updated U(x, t). The user stops applying input when the visualization of ϕ(x, t) is satisfactory.
• Regulating the input integration with nonlinear diffusion keeps U smooth and bounded. Diffusion occurs when inputs u_k accumulate in excess of U_M; here, U_M = 10.
• Segmentation of shattered hip bone fragments in a CT scan. The image volume is 156 × 162 × 229 voxels with a 0.7 mm grid spacing. In (a)–(d), regions of user input are shown as markers on the progressing segmentation (dark) overlaid on the user's reference boundary (light). The segmentation in Fig-11a is the steady state of the open-loop system. In Fig-11d, the segmentation agrees with the desired reference boundary due to the closed-loop system's incorporation of user input.
• Tissues within one MRI slice have overlapping intensity histograms, while a single tissue across slices has a varying histogram. Separation of regions must consider the spatially-varying image statistics.
• Figure 13. A segmentation of the patella and patellar tendon in MRI, part of a study on graft selection for anterior cruciate ligament (ACL) repair. The image volume is 512 × 512 × 224 voxels with a 0.4 mm grid spacing.
• Regions of significant user input shown together with the changing segmentation. Locations where U > 0 and U < 0 correspond to red and blue markers, respectively. The open-loop system's tendency towards "bleed-through" near the insertion points is handled in the closed-loop system by incorporating user input with negative U.
• Comparison of actuated voxels over time, after initialization. The proposed algorithm has both a lower mean actuated count and tighter clustering across repeated segmentations.
• Comparison of dynamic response to user input; data points and linear fit lines are shown. Points below the dashed green line indicate wasted user effort since more additional voxels were actuated than reclassified.

| 8,964.2 | 2013-07-24T00:00:00.000 | [ "Mathematics" ] |
N′-(3-Hydroxybenzylidene)-4-methylbenzohydrazide
The title compound, C15H14N2O2, was obtained from the reaction of 3-hydroxybenzaldehyde and 4-methylbenzohydrazide in methanol. In the molecule, the benzene rings form a dihedral angle of 2.9 (3)°. In the crystal, N—H⋯O and O—H⋯O hydrogen bonds link the molecules into layers parallel to (101). The crystal packing also exhibits π–π interactions between the aromatic rings [centroid–centroid distance = 3.686 (4) Å].
N′-(3-Hydroxybenzylidene)-4-methylbenzohydrazide
Ji-Lai Liu, Ming-Hui Sun and Jing-Jun Ma
Comment
Benzohydrazide compounds are well known for their biological activities (El-Sayed et al., 2011; Horiuchi et al., 2009). In addition, benzohydrazide compounds have also been used as versatile ligands in coordination chemistry (El-Dissouky et al., 2010; Zhang et al., 2010). As a contribution to a structural study on hydrazone compounds, we present here the crystal structure of the title compound, (I), obtained in the reaction of 3-hydroxybenzaldehyde with 4-methylbenzohydrazide in methanol.
Experimental
To a methanol solution (20 ml) of 3-hydroxybenzaldehyde (0.1 mmol, 12.2 mg) and 4-methylbenzohydrazide (0.1 mmol, 15.0 mg), a few drops of acetic acid were added. The mixture was refluxed for 1 h and then cooled to room temperature.
The white crystalline solid was collected by filtration, washed with cold methanol and dried in air. Single crystals, suitable for X-ray diffraction, were obtained by slow evaporation of a methanol solution of the product in air.
Refinement
The amino H-atom was located in a difference Fourier map and was refined with a distance restraint, N-H = 0.90 (1) Å.
Computing details
Data collection: SMART (Bruker, 2007); cell refinement: SAINT (Bruker, 2007); data reduction: SAINT (Bruker, 2007); program(s) used to solve structure: SHELXS97 (Sheldrick, 2008); program(s) used to refine structure: SHELXL97.
Figure caption: The molecular structure of (I), with the numbering scheme and displacement ellipsoids drawn at the 30% probability level.
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 637 | 2012-05-19T00:00:00.000 | [ "Chemistry" ] |
Mining the Information Content of Member Galaxies in Halo Mass Modeling
Motivated by previous findings that the magnitude gap between certain satellite galaxies and the central galaxy can be used to improve the estimation of halo mass, we carry out a systematic study of the information content of different member galaxies in the modeling of the host halo mass using a machine-learning approach. We employ data from the hydrodynamical simulation IllustrisTNG and train a random forest algorithm to predict a halo mass from the stellar masses of its member galaxies. Exhaustive feature selection is adopted to disentangle the importance of different galaxy members. We confirm that an additional satellite does improve the halo mass estimation compared to that estimated by the central alone. However, the magnitude of this improvement does not differ significantly using different satellite galaxies. When three galaxies are used in the halo mass prediction, the best combination is always that of the central galaxy with the most massive satellite and the smallest satellite. Furthermore, among the top seven galaxies, the combination of a central galaxy and two or three satellite galaxies gives a near-optimal estimation of halo mass, and further addition of galaxies does not raise the precision of the prediction. We demonstrate that these dependences can be understood from the shape variation of the conditional satellite distribution, with different member galaxies accounting for distinct halo-dependent features in different parts of the cumulative stellar mass function.
INTRODUCTION
According to the standard cosmological paradigm, galaxies are believed to form and evolve in dark matter halos (White & Rees 1978). Hence the properties of galaxies are tightly linked to the properties of their dark haloes, giving rise to the so-called galaxy-halo connection. Studying this connection is of great value in understanding the mechanisms of galaxy formation and evolution, while also providing a way to infer the properties of dark matter haloes through galaxy observations.
Some recent studies have shown that the magnitude or mass difference (a.k.a. gap) between the central galaxy and some satellite galaxies may contain information about the assembly history of their host halo (e.g. Harrison et al. 2012; Deason et al. 2013; Solanes et al. 2016; Kang et al. 2016) and thus could be used to tighten the SHMR. The magnitude gap between the brightest central galaxy (BCG) and the second brightest galaxy (M12) was originally studied as a diagnostic for selecting fossil groups (Ponman et al. 1994; Jones et al. 2003; Sales et al. 2007; von Benda-Beckmann et al. 2008). Later, Dariush et al. (2010) and Tavasoli et al. (2011) proposed that the magnitude gap between the BCG and the fourth brightest galaxy (M14) is a better indicator for selecting fossil groups. Subsequent studies have shown that using these gaps, in addition to the central luminosity, can indeed substantially reduce the scatter in the halo mass estimation (e.g. More 2012; Hearin et al. 2013; Shen et al. 2014; Lu et al. 2015; Golden-Marx & Miller 2018, 2019, 2021; Wang et al. 2021a). However, it is still not known whether the improvement in the halo mass estimation is equally effective for gaps between the BCG and differently ranked satellite galaxies, and which gap can optimally constrain the dark halo mass, or equivalently, which satellite galaxy provides the most information in constraining the halo mass.
A related question is how many satellites are needed to optimally constrain the halo mass. Taking information from all member galaxies would certainly improve the halo mass constraint. For example, as satellite galaxies are expected to trace the dark matter distribution in a halo (e.g. Han et al. 2016), the total stellar mass or total luminosity can be used as a good proxy for halo mass (e.g. Zaritsky et al. 1997; Prada et al. 2003; Yang et al. 2005; Conroy et al. 2007; Han et al. 2015; Wang et al. 2021b). However, the complete population of member galaxies is usually not available observationally. Bradshaw et al. (2020) demonstrated that employing the sum of the stellar mass of the central and the N most massive satellites (cen + N) as a new halo mass estimator can effectively reduce the scatter compared to using only the stellar mass of the central galaxy (M*,cen). They also showed that the scatter of this estimator already approaches that using the total stellar mass of all galaxy members (M*,tot). However, it should be noted that the total stellar mass alone could miss information from the relative mass distribution of different satellites, and thus may not itself be optimal in estimating the halo mass. Thus it remains to be seen which of the satellite galaxies play a major role and how to best combine the galaxy information to maximize the accuracy of the halo mass prediction when only a few satellite galaxies are available.
In this work, we seek to clarify the roles played by different satellites in the estimation of the halo mass. The data we employ are from the hydrodynamic simulation IllustrisTNG. We make use of a machine learning technique called Random Forest (RF) regression to model the nonlinear joint connections between halo mass and the first few satellites, while sorting out the relative importances of different satellite combinations using the exhaustive feature selection method. We find that there is indeed an optimal combination of satellites that can lead to a nearly saturated improvement in the halo mass constraint. We further examine the results in the context of the conditional satellite distribution.

Table 1. Parameters of TNG100-1 and TNG300-1. From left to right: side length of the simulation box, the number of dark matter particles, and the masses of dark matter and baryonic particles.
The paper is organized as follows: In section 2 we introduce the IllustrisTNG simulation on which our analysis is based, and the process of data processing and filtering. In section 3 we describe the machine learning method and the training process. The main results from the machine learning analysis are presented in section 4. In section 5 we examine the results in the context of the conditional satellite distribution. Summary and conclusions are presented in section 6.
IllustrisTNG
Our analysis is based on data from IllustrisTNG, a suite of state-of-the-art magnetohydrodynamical cosmological simulations (Naiman et al. 2018; Springel et al. 2018; Pillepich et al. 2018; Nelson et al. 2018; Marinacci et al. 2018) run with the moving-mesh code Arepo (Springel 2010). TNG is the successor of the original Illustris simulation, improving on many aspects of its galaxy formation recipes. TNG follows the Λ Cold Dark Matter cosmology adopting parameters from the Planck observations (Planck Collaboration et al. 2016), with Ω_Λ = 0.6911, Ω_m = 0.3089, Ω_b = 0.0486, σ_8 = 0.8159 and h = 0.6774. The full TNG suite consists of simulations run in three different box sizes of roughly 50, 100 and 300 Mpc, referred to as TNG50, TNG100 and TNG300 respectively, each of which is also run at three or four levels of resolution. In this work, we choose data from the highest resolution runs of TNG100 and TNG300 (named TNG100-1 and TNG300-1 in the data release), as TNG300 has the largest volume and therefore provides a large sample, while TNG100 has a higher resolution compared to TNG300. Given that the cosmologies are the same for both simulations, the data from the two are joined together in order to cover a larger halo mass range. More details on TNG100 and TNG300 are provided in Table 1.
The halo and galaxy sample
Our study focuses on the sample at redshift z = 0. The halo mass is defined as M_200mean, the total mass in a sphere around the halo centre with an enclosed density of 200 times the mean density of the universe. We define satellite galaxies as those located within the virial radius, R_200mean, of the host halo, excluding the central galaxy. As the stellar mass function of the simulations becomes incomplete at ∼10^6 M_⊙ (∼10^7 M_⊙) for TNG100 (TNG300), we only consider galaxies above these stellar mass limits respectively.
We further demand that each halo contains at least 7 member galaxies in the halo mass range studied. As lower mass halos are typically resolved with fewer satellites, this richness cut translates to a cut in the halo mass for our sample. In Fig. 1 we plot the stellar mass-halo mass relation for the top 7 most massive galaxies in each halo, as well as the fraction of halos with more than 7 members as a function of halo mass. As can be seen from Fig. 1, limiting the halo mass to M > 10^12.3 M_⊙ in TNG100 ensures that the top 7 galaxies are well resolved with stellar masses above 10^6 M_⊙. For the TNG300 sample, a corresponding halo mass limit of 10^12.8 M_⊙ can be found. Under these selection criteria, we are left with 1235 valid halos in TNG100 and 8413 halos in TNG300, spanning a combined mass range of 10^12.3 < M_halo/M_⊙ < 10^15.3. It is known that the galaxy properties do not fully converge between TNG100 and TNG300 due to the resolution dependence of the hydrodynamical solver used. To correct for this, we simply multiply the stellar masses in TNG300 by a constant factor of 1.4 before combining it with TNG100, following Pillepich et al. (2018).
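A minimal sketch of this sample selection, assuming the halo masses and per-halo member stellar masses have already been loaded from the group catalogues (the array names, function name, and catalogue interface are assumptions):

```python
import numpy as np

def select_sample(halo_mass, member_mstar, mass_limit, mstar_limit, rescale=1.0):
    """Return (halo mass, top-7 member stellar masses) for halos passing the cuts.

    halo_mass    : array of M_200mean values [Msun]
    member_mstar : list of arrays, stellar masses of member galaxies per halo [Msun]
    rescale      : resolution correction (1.4 for TNG300, per Pillepich et al. 2018)
    """
    rows = []
    for M, mstars in zip(halo_mass, member_mstar):
        mstars = np.sort(rescale * np.asarray(mstars))[::-1]  # descending mass order
        mstars = mstars[mstars > mstar_limit]                  # completeness cut
        if M > mass_limit and mstars.size >= 7:
            rows.append((M, mstars[:7]))
    return rows

# e.g. TNG100: select_sample(M200m, members, 10**12.3, 10**6)
#      TNG300: select_sample(M200m, members, 10**12.8, 10**7, rescale=1.4)
```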
METHOD
We employ the Random Forest (RF) algorithm (Breiman 2001) as implemented in scikit-learn (Pedregosa et al. 2011) to analyze the relation between halo mass and the masses of member galaxies. RF is a supervised machine learning algorithm that can be trained to map out the relation between the input and output data in a non-parametric way. It is an ensemble method that aggregates many base estimators called decision trees via the bagging approach. This enables the RF to overcome the common problem of overfitting faced by a single decision tree and improves the generalization ability of the model. Due to its simplicity and efficiency, RF has been widely used in many recent studies in astrophysics (e.g., Hoyle et al. 2015; Man et al. 2019; Petulante et al. 2021; Shi et al. 2021).

Figure 1. Stellar mass distributions of member galaxies in TNG100 (upper panel) and TNG300 (bottom panel). Each coloured solid curve shows the median stellar mass to halo mass relation for galaxies of a given rank as labelled, while the corresponding shaded region is bounded by the 16th and 84th percentiles in the stellar mass distribution. The light purple points in the background show the distribution of all the member galaxies. The black dashed line shows the fraction of halos with more than 7 members as a function of halo mass.
In the following we explain the algorithm in more detail.
Decision tree
As the basic unit of a RF, a decision tree is a tree-like decision model. For a given input parameter space, a decision tree aims at partitioning the parameter space into multiple nodes such that each node is mapped to a single prediction. The partitioning is done by splitting the parameter space along one dimension at a time according to a certain criterion, forming a tree-like structure after multiple operations. The complete input parameter space forms the root node of the tree, while nodes that no longer split are called leaf nodes. The predictions in the leaf nodes can be either discrete classes or continuous values, corresponding to a classification or a regression tree. For this analysis we use regression trees.
Consider a given data set consisting of n observations, D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}, where x_i is an m-dimensional vector of input features, y_i is the target feature we want to predict, and i = 1..n indexes the observations. Expressing the decision tree as a function f(x), the goal of the regression is to find an f(x) that minimizes the Mean Squared Error (MSE) of the data set (also referred to as impurity), MSE = (1/n) Σ_i (y_i − f(x_i))². This is achieved by choosing an appropriate division at each step to minimize the MSE in each node, with f(x) replaced by the mean value of y in the node. More specifically, starting from the root node, we recursively divide each node into two child nodes R_1(j, s) = {x | x^(j) ≤ s} and R_2(j, s) = {x | x^(j) > s} according to a feature j and a threshold s. To minimize the MSE of the final tree, we choose (j, s) to minimize the MSE of each division, min_{j,s} [Σ_{x_i ∈ R_1(j,s)} (y_i − c_1)² + Σ_{x_i ∈ R_2(j,s)} (y_i − c_2)²], where c_1 and c_2 correspond to the averages of the labels y_i in R_1 and R_2. The division continues until some stopping criteria regarding the depth of the tree or the size of the leaf node are satisfied, which we specify in section 3.4. Once a tree is constructed, it is straightforward to make predictions with it. A new input observation x_new can be inserted into a leaf node through a tree walk, and the corresponding prediction is found as the average y of the training data in the leaf.
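The node-splitting criterion described above can be sketched as an exhaustive search over features and thresholds. This is a textbook CART-style illustration, not the optimized implementation used inside scikit-learn:

```python
import numpy as np

def best_split(X, y):
    """Find the (feature j, threshold s) pair minimizing the summed squared
    error of the two child nodes, with each child predicted by its mean."""
    best = (None, None, np.inf)
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j])[:-1]:          # candidate thresholds
            left, right = y[X[:, j] <= s], y[X[:, j] > s]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[2]:
                best = (j, s, sse)
    return best  # (feature index, threshold, impurity of the split)
```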
Random forest
A decision tree can easily overfit the data. For example, when a leaf node contains only one observation, any noise in the observation will be inherited by the model prediction. To overcome this problem, a random forest works by combining the predictions of many trees, each constructed from a bootstrap realization of the original data.
For each tree in the forest, when selecting splitting features at a node, a further pooling step is added to restrict the selection to a random subset of the original feature set. This randomness further enhances the generalization ability of the model. The final prediction is obtained by combining (averaging, in the case of regression) the predictions of all trees.
Feature importance and Exhaustive Feature Selection
A random forest is not only a predictive model that fits the data. It can also output an importance score for each feature, quantifying its relative contribution to the prediction, which is of great significance in feature selection and helps to understand the underlying model construction process. In the scikit-learn (Pedregosa et al. 2011) package, RF feature importance ranking is based on the Mean Decrease Impurity (MDI), which quantifies the average reduction in MSE contributed by the tree divisions in each feature. We provide the detailed definition of the MDI importance in Appendix A.
Although the MDI-based feature importance in random forests is widely used for feature selection, it has been shown in the literature that such importances may produce misleading results (Strobl et al. 2007; Louppe 2014; Scornet 2020). For completely independent variables and in the absence of variable interactions, MDI provides a variance decomposition of the output. However, for partially redundant variables that carry similar information, which almost always occur in practice, the one with slightly more information may always stand out in the feature selection process of node splitting, leaving little MDI to the others. For this reason, we cannot completely rely on the importance ranking given by the random forest. Hence we take the strategy of still using the random forest as the regression model while combining it with an exhaustive method for feature selection.
Specifically, we try all possible feature combinations and train one model using each combination. The performances of the models are then compared to select the best combination of features for each number of features. The best feature combinations at each step are selected according to the R² score, which is used to evaluate the performance of regression models. The R² score is defined as R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)², where y_i is the true target variable with ȳ being its mean value in the sample, and ŷ_i is the predicted value for observation i. This approach enables us to identify the most important feature combinations without having to worry about feature correlations. The evolution of the performance with the addition of features can also be used to understand the unique contributions of features to the model improvement.
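A minimal sketch of this exhaustive selection, scoring each feature combination with 5-fold cross-validated R². The hyperparameter values shown are placeholders (the adopted values are listed in Table 2), and the use of the standard error of the fold scores is an assumption, since the exact error formula is not reproduced above:

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def exhaustive_selection(X, y, n_features):
    """Score every combination of n_features input columns with 5-fold
    cross-validated R^2 and return them sorted from best to worst.
    Columns of X are assumed to be the log stellar masses of the
    rank-1..7 member galaxies; y is the log halo mass."""
    results = []
    for combo in combinations(range(X.shape[1]), n_features):
        cols = list(combo)
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        scores = cross_val_score(rf, X[:, cols], y, cv=5, scoring="r2")
        err = scores.std(ddof=1) / np.sqrt(len(scores))   # assumed fold-score error
        results.append((combo, scores.mean(), err))
    return sorted(results, key=lambda r: r[1], reverse=True)
```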
Tuning the hyperparameters
To achieve the best performance of a machine learning model, it is crucial to tune its hyperparameters. For RF in scikit-learn, there are several major hyperparameters to be tuned: n_estimators, max_depth, min_samples_leaf and min_samples_split. We start the tuning process with n_estimators, and obtain the number of trees in a range that makes the model perform best. The remaining parameters are then tuned one by one, after fixing the previous parameters to their optimal values. The final set of adopted hyperparameters is presented in Table 2.
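A sketch of such a sequential, one-parameter-at-a-time tuning loop using scikit-learn's GridSearchCV; the candidate grids are illustrative assumptions, not the values finally adopted in Table 2:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

def tune_sequentially(X, y):
    """Tune one hyperparameter at a time, fixing each to its best value
    before moving to the next, as described above."""
    best = {}
    grids = [("n_estimators", [100, 200, 400, 800]),
             ("max_depth", [None, 10, 20, 40]),
             ("min_samples_leaf", [1, 2, 5, 10]),
             ("min_samples_split", [2, 5, 10])]
    for name, values in grids:
        search = GridSearchCV(RandomForestRegressor(random_state=0, **best),
                              {name: values}, cv=5, scoring="r2")
        search.fit(X, y)
        best[name] = search.best_params_[name]
    return best
```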
Model Training and Performance
Our fiducial model is the random forest model that adopts the hyperparameters given in Table 2, taking the logarithmic stellar masses of the top seven galaxies in the mass ranking as the input feature variables; the target variable to be predicted is the logarithmic dark matter halo mass. Cross-validation was employed to evaluate the model performance, dividing the dataset into a training set and a test set, used for training and testing respectively. Figure 2 shows the relation between the predicted and true halo masses in the test set of our model. Overall, the model can unbiasedly predict the true halo mass across the entire mass range, with a fairly small total MSE of 0.01 and an R² score of 0.946. The deviation of the few data points at the highest mass end is due to the limited number of haloes in this mass range that get allocated into a single leaf node.
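For completeness, a short sketch of how such a fiducial model can be trained and evaluated on a held-out test set; the 25% test fraction, the variable names, and the dictionary of tuned hyperparameters are assumptions:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

def train_fiducial(X, y, params, test_size=0.25, seed=0):
    """Train an RF on the top-7 log stellar masses (X) to predict log halo
    mass (y) and report test-set MSE and R^2; params holds tuned hyperparameters."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=seed)
    rf = RandomForestRegressor(random_state=seed, **params).fit(X_tr, y_tr)
    pred = rf.predict(X_te)
    return mean_squared_error(y_te, pred), r2_score(y_te, pred)
```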
Besides the fiducial model, we further train additional models using subsets of the available features as input.
In Fig. 3, we compare the residual distributions for models involving the central and another satellite in the halo mass prediction.As can be observed, the inclusion of a satellite galaxy leads to a more accurate prediction compared to that using only the central galaxy, indicating that satellite galaxies can indeed provide additional information for the halo mass estimation.However, the residual distributions involving different satellites are all very close to each other, suggesting that there is not an outstanding satellite that improves the prediction much more than the others.We will come back to this conclusion later.
In the following we explore the roles played by different galaxies in more detail using feature importance and exhaustive feature selection.
Importance Ranking
The MDI-based importance ranking given by the RF is presented in Figure 4, along with the deviations of the importances from individual trees in the forest. As expected, the most important feature is the stellar mass of the central galaxy. The second most important feature is the stellar mass of the 7th most massive galaxy, which is also the least massive galaxy in the data we used.
As mentioned above, Exhaustive Feature Selection (EFS) was introduced considering that the default feature importance ranking provided by the random forest might carry some bias. We train our model using all possible feature combinations, and list the top four best-scoring combinations for each number of features in Table 3. A 5-fold cross validation is adopted in this process. Specifically, we split the dataset into 5 equal subsets (a.k.a. folds), with each subset used once for validation while the 4 remaining folds form the training set. The error on the R² score is calculated from the scatter of the fold scores, where s_i is the R² score of fold i, s̄ is the mean score, and k is the number of folds.
In Fig. 5, we plot the performance of the models trained with different combinations versus the number of features. It is seen that the scores of models trained with two features improve over the model trained with the central galaxy alone in Table 3. This verifies once again that satellite galaxies make an extra contribution to the prediction of halo mass. However, for the case when only two features are available, the scores of the different combinations do not differ significantly and no combination is outstanding. That is, satellite galaxies of different ranks alone play a similar role as complements to the central galaxy in the prediction of halo mass. For the case when only three features are input, the [127] combination (stellar masses of the first, second and seventh galaxies) gives the highest model score, almost as high as the highest score attainable. This means that instead of the whole population, we can use the information from only the first, second and seventh (here the least massive) galaxies as a high-precision probe of halo mass. Moreover, once the number of input features reaches 4, the improvement of the model becomes less noticeable, and is almost absent with further increases in feature number. This result indicates that information from only a few satellite galaxy members is sufficient to make high-precision predictions of the halo mass, without requiring the complete galaxy population.
The roles of different galaxies
In order to disentangle in more detail the roles played by differently ranked galaxies in the prediction of the halo mass, we choose the first, second, fourth and seventh ranked galaxies for analysis.
We first consider the combinations of the central galaxy with one satellite galaxy, i.e., [12], [14] and [17], and train a model for each combination. We then plot the predicted as well as true values of these models as functions of two stellar masses at a time in Figure 6. Overall, the contours that represent the halo masses are roughly perpendicular to the axes corresponding to the stellar mass of the central galaxy, suggesting that the halo mass can be mostly determined by the stellar mass of the central galaxy. However, the slight inclination towards the x-axis indicates that it also depends on the satellite galaxies.
Taking the top row of the figure as an example, as the model is trained on the stellar masses of the central and the second galaxy, the predicted and true values match well in the sm1-sm2 plane (left panel), confirming that the model has successfully learned the mapping between halo mass and these two stellar masses. For the remaining panels on the right, an obvious misalignment exists between the true and predicted contours. This reflects the difference in the information provided by the different galaxies for predicting halo mass. It implies that although the inclusion of different satellite galaxies provides roughly equivalent improvement over the halo mass estimation with the central galaxy alone, the supplementary information they carry relative to the central galaxy is different. The deviation between the predicted and true masses is larger in the rightmost panel, indicating a larger information difference between (sm2, sm7) than between (sm2, sm4). This could also explain why the combination [127] is the best when we use only three features.
It is interesting to note that the galaxies [127] also appear in the top combinations involving larger numbers of features in Table 3. The substantial extra information carried by the 7th satellite relative to the 2nd may be because it is the least massive satellite in our halos. In other words, the largest differences exist between the satellite galaxies with the largest ranking separation. To further verify this interpretation, we perform the same analysis on complete samples containing 5 and 6 galaxy members respectively. The results are consistent: the best combination of three galaxies is always the first, second and smallest galaxies, as shown in Figure 7.
Mass range independence
To guarantee that our results do not depend on the mass range, we examine the residuals of different models at different central galaxy masses in Figure 8. The residuals are concentrated around 0 over the full mass range, indicating that the models are unbiased over the whole mass range. Note that the large deviation at the highest masses is due to the rarity of halos there. In addition, for the same model, the scatter is also similar over various masses. This suggests that our results have no dependence on mass and are valid throughout the mass range. Comparing the dispersions in the residuals, the previous conclusions can also be seen: the inclusion of satellite galaxies helps to improve the accuracy of the halo mass estimation, and the precision of the model constructed using the three best-combined features is already comparable to that using all the features.
DISCUSSION: UNDERSTANDING THE GAPS WITH THE CONDITIONAL GALAXY DISTRIBUTION
The magnitude or stellar mass gaps, and the galaxy combinations studied here, are all constructed based on the ranks of galaxies. Such ranks and their corresponding sizes naturally appear as function values and random variables in the cumulative mass or luminosity functions. Such a connection has been exploited before to derive the distribution of magnitude gaps as well as that of the BCGs by drawing from the global or conditional luminosity functions (More 2012; Paranjape & Sheth 2012; Hearin et al. 2013; Shen et al. 2014; Paul et al. 2017).
Figure 6. Relationship between the predicted halo masses from different models and the stellar masses of different galaxies. The axes are the logarithmic stellar masses of the central (sm1), second (sm2), fourth (sm4) and seventh (sm7) most massive galaxies. The grey filled contours show the signal-to-noise level of the data at each location, which reflects the number of halos within each bin. The thin coloured lines are the contours of the true halo masses, while the thick light lines are the contours of the predicted halo masses of the corresponding model labelled in each panel.
Unlike previous studies, our machine learning results allow us to explore this connection in a reverse manner, to directly identify where and how much information on halo mass is stored in the cumulative galaxy distribution. In this context, galaxies with different mass ranks control different segments of the cumulative stellar mass function (CSMF). Those with ranks 1 and 2 control the shape of the curve at the massive end, while those with rank 7 control the shape of the curve at the more distant end, i.e. the low mass end. The connection of the gap or rank statistics to halo mass can then be understood as the variation in the relevant segments of the conditional cumulative stellar mass function (CCSMF) with halo mass, φ(> M | M_h). The finite number of informative features then reflects the limited number of distinct mass-dependent features in the CCSMF, or the universality of the CCSMF subject to a few mass-dependent parameters.
To verify this conjecture, we plot the CCSMF for halos with the same predicted values but different true halo mass values in Figure 9. For a given model, fixing the prediction is equivalent to fixing the values of the input features and the corresponding segments in the CCSMF. The remaining differences in the CCSMF curves for different true halo masses then reflect the contributions of features outside the combination used. We test the CCSMF using the RF models constructed respectively with the combinations [1], [12], [17] and [127], and show the results for three representative mass ranges centered at log(M/M_⊙) = 12.5, 13.5 and 14 in predicted mass.
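A minimal sketch of how such fixed-prediction CCSMF curves can be constructed, assuming arrays of predicted and true log halo masses and a list of per-halo member stellar masses (all names are assumptions):

```python
import numpy as np

def ccsmf(mstar_members, grid):
    """Cumulative stellar mass function of one halo: number of members
    more massive than each grid value."""
    mstar_members = np.asarray(mstar_members)
    return np.array([(mstar_members > m).sum() for m in grid])

def ccsmf_at_fixed_prediction(members, m_pred, m_true, pred_center, true_bins, grid, width=0.5):
    """Average CCSMF of halos with predicted log mass near pred_center,
    split into bins of true log halo mass."""
    sel = np.abs(m_pred - pred_center) < width / 2
    curves = {}
    for lo, hi in true_bins:
        idx = np.where(sel & (m_true >= lo) & (m_true < hi))[0]
        if idx.size:
            curves[(lo, hi)] = np.mean([ccsmf(members[i], grid) for i in idx], axis=0)
    return curves
```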
It can be seen from this plot that for the model trained with only the central galaxies (first row), the CSMF curves for different true halo masses are noticeably different at fixed prediction, and only converge at the most massive end where the central mass is fixed. In the second row, adding the second-rank galaxies to the model, the curves show a further strong tendency to bunch up at the massive end, but still with a clean separation in the low mass region. Correspondingly, in the third row, adding the seventh-rank galaxies to the central galaxies, one of the focal points of the curves shifts to the relatively lower mass end at N = 7, and yet some discrepancies between the curves can be seen between and beyond the two focal points. Finally, adding both the second and seventh galaxies as supplements to the central galaxy (last row), the CSMF curves already approach complete overlap in the region of our concern (N ≤ 7), exhausting the distinct features in the top 7 galaxies. All these results are consistent with what we speculated earlier. When only the central galaxy is controlled, the apparent divergence between the curves implies that there is additional information beyond the central galaxy that we can utilise in the estimation of the halo mass. Further constraining both the central galaxy and the second or seventh galaxy, the previous divergence converges further at the corresponding massive or low mass end, and the halo mass is tightened further, suggesting that the inclusion of satellite galaxies improves the prediction and that the second and seventh galaxies contribute distinctively to their host halo. After restricting the central galaxy together with both the second and seventh galaxies, the separation between the curves in the area we considered almost disappears, indicating that the extraction of the information required for the prediction is almost maximised, consistent with the result in the previous sections that the precision of the estimations is nearly saturated after the inclusion of the three best combined features.

Figure 9. From top to bottom, models are trained with galaxies [1], [12], [17] and [127] as input features respectively. From left to right, the predicted value of halo mass is binned around 10^12.5 M_⊙, 10^13.5 M_⊙ and 10^14 M_⊙ respectively, with a bin width of 0.5 dex. Each coloured curve represents the CSMF for a given true halo mass as labelled in the colour bars.
It is interesting to notice that for the middle and right columns, where more satellites can be resolved in a halo, the CCSMF still diverges at the lowest mass end even in the [127] model. This means the low mass end distribution still carries extra information that can be used to further constrain the halo mass, in addition to that already explored in the top 7 members. It is also consistent with our previous finding that the least massive satellite in the sample can contribute significantly to the halo mass estimation, instead of galaxy 7 being special.
This can be equivalently understood as the least massive satellite controlling the overall amplitude of the faint-end mass function, or the richness of the halo, which is known to be tightly connected to halo mass.
CONCLUSIONS
In this work we have explored the connection of the galaxy population to the host halo mass, to clarify the roles played by different galaxies in the halo mass estimation and to understand the information content of the galaxy mass distribution on the halo mass. To this end we extract halos with at least 7 satellite galaxies from the IllustrisTNG simulation, and train a random forest algorithm to systematically assess the importances of different galaxy mass combinations in the prediction of halo mass. The results are further examined in the context of the conditional stellar mass function.
Our findings and conclusions are summarised as follows.
• When only one galaxy is used, we confirm that the central galaxy is the most informative single feature in estimating the halo mass.
• Compared with models that only use the central galaxy mass, the inclusion of satellite galaxy masses does improve the estimation of the halo mass, and the most informative binary features are always the central galaxy mass combined with another satellite mass.
• For the case of a combination of only two galaxies, the differences between the improvements obtained by adding any one of the satellite galaxies to the central galaxy are not significant. This means there is no outstanding satellite galaxy that contributes much more than the others to the halo mass estimation. In other words, we do not find an obviously "optimal" mass gap to be used in mass estimation.
• For combinations of three galaxies, the best combination is always that of the central galaxy with the second and the least massive galaxies. This conclusion holds when examining the top 7, 6 or 5 galaxies, and may generalise to a larger number of available galaxies. It suggests that the biggest and smallest satellite galaxies provide the greatest differential information.
• For the seven member galaxies studied, the combination of the central galaxy and 2 or 3 satellite galaxies gives near-optimal model performance, and the continued addition of feature variables barely improves the model further. In other words, only a few galaxies are required to build a model with accuracy comparable to that using the whole member galaxy population.
• The different roles played by differently ranked galaxies can be directly mapped to the variation of different segments of the CCSMF with the halo mass. While the central galaxy controls the starting point of the CCSMF, the second most massive galaxy controls the variation at the high-mass end, and the least massive galaxy controls the amplitude or shape at the low-mass end. Once these 3 galaxies are controlled, the CCSMF, that is, the full mass distribution of all the member galaxies, becomes largely determined in the studied mass range, with little extra variation that can inform about halo mass. However, we notice that the CSMF still contains extra variation at even lower masses beyond the 7th galaxy, which could be used to further constrain the halo mass.
The physical mechanism responsible for the information in the second and least massive galaxies might be that the former is related to recent as well as major merger events (Deason et al. 2013), while the latter characterises the total mass accretion of the halo. Recent and major mergers can significantly influence the mass distribution around the halo, causing it to deviate from common galaxy-halo connections. Moreover, the second most massive galaxy and the smallest satellite galaxy in the satellite population have the greatest difference in their time of entry into the host halo, and therefore the greatest gap in the information they can provide.
Our findings can provide insights into how to choose members to obtain the most information about the halo mass when the available galaxy population in the halo is limited. The direct visualisation of the CSMF dependence on halo mass also has implications for how to describe the CSMF so as to maximise the information it carries about the halo mass. It remains interesting to check whether current CCSMF or similar conditional luminosity function models (e.g., Yang et al. 2003; Guo et al. 2018) can fully capture this information. It is also straightforward to apply the analysis in this work to study the galaxy population-halo connection in other datasets, such as galaxy magnitude data and those from semi-analytical models, before applying the results to real observations. It is also worth extending these explorations to alternative halo mass definitions, given recent new understandings of the physical boundaries of halos, such as the splashback radius (Diemer & Kravtsov 2014; Adhikari et al. 2014; Shi 2016) and the depletion radius (Fong & Han 2021; Li & Han 2021).
ACKNOWLEDGMENTS
We acknowledge helpful discussions with Wenting Wang, Rui Shi and Qingyang Li. JH benefited from discussions with Houjun Mo and many others at the assembly bias workshop at SJTU in 2019, which motivated this study. This work is supported by the National Key Basic Research and Development Program of China (No. 2018YFA0404504), NSFC (11973032, 11890691, 11621303), the 111 project (No. B20019), and the science
Figure 2. Figure 3.
Figure 2. Relationship between the predicted and true halo mass values. The diagonal dashed line represents y = x, and the red line shows the median relation between the true and predicted values. The red shaded region represents the 1σ percentile range. The data points are concentrated on both sides of the line, indicating that the difference between the predicted and true values is quite small and that the random forest makes a good prediction of the halo mass. The fitted R² score is 0.946 and the Mean Square Error (MSE) value is 0.01.
Figure 4.
Figure 4. MDI-based feature importances given by the random forest. Histogram height represents the relative importance and the error bar is the standard deviation of the importance from individual trees in the forest.
Figure 5.
Figure 5. Performances of the top four scoring feature combinations for each number of features. The data points are the R² scores of the 5-fold cross-validation and the error bars are their standard deviations. Different colours represent different rankings, slightly offset horizontally for better visibility. The detailed combination names are labelled next to each point. The scores of solo features are not plotted to reduce the dynamical range of the figure.
Figure 9.
Figure 9. The cumulative stellar mass function (CSMF) for halos with the same predicted mass but different true halo mass. From top to bottom, models are trained with galaxies [1], [12], [17] and [127] as input features respectively. From left to right, the predicted halo mass is binned around 10^12.5 M_⊙, 10^13.5 M_⊙ and 10^14 M_⊙ respectively, with a bin width of 0.5 dex. Each coloured curve represents the CSMF for a given true halo mass as labelled in the colour bars.
Table 2.
The optimal RF hyperparameters in our model. From left to right: number of trees within the forest, maximum growth depth of the decision trees, minimum number of samples per leaf, and minimum number of samples required to split a branch node.
Table 3.
The top four scoring feature combinations from exhaustive feature selection. The numbers in the brackets specify the stellar mass ranks (1 for central and 2-7 for satellites) of the constituting galaxies. The scores are the R² scores of the corresponding model. | 8,563.8 | 2022-03-29T00:00:00.000 | [
"Physics"
] |
Residence Time vs. Adjustment Time of Carbon Dioxide in the Atmosphere
We study the concepts of residence time vs. adjustment time for carbon dioxide in the atmosphere. The system is analyzed with a two-box first-order model. Using this model, we reach three important conclusions: (1) The adjustment time is never larger than the residence time and can, thus, not be longer than about 5 years. (2) The idea of the atmosphere being stable at 280 ppm in pre-industrial times is untenable. (3) Nearly 90% of all anthropogenic carbon dioxide has already been removed from the atmosphere.
Introduction
One of the major points in the discussion of the anthropogenic global warming (AGW) scenario is the time the added carbon dioxide (CO2) stays in the atmosphere. In an extensive study, Solomon concluded that the residence time of carbon atoms in the atmosphere is of the order of 10 years [1], see Table 1. Such a short time would undermine the prime tenet of AGW, since a molecule of CO2 would not have time to contribute to any greenhouse effect before it disappears to sinks where it cannot do any thermal harm. Just as water, a molecule that has an orders-of-magnitude larger greenhouse potency, is irrelevant in the AGW discussion, because any water produced by (non-carbon-only) fossil fuels will rapidly equilibrate and the effect is zero. At best, it will raise ocean levels by some micrometers. As such, if the residence time is below 30 years (the climate window), injections of CO2 into the atmosphere would, just like water, not affect the climate. Or, as the IPCC writes in their upcoming report about another atmospheric constituent, "[Water], because of its residence time in the atmosphere averages just 8-10 day, its atmospheric concentration is largely governed by temperature", the value of 8-10 days coming from Ent [2].
However, some claim that the residence time (the amount of time a molecule on average spends in the atmosphere before it disappears from it) is not relevant for this discussion; what matters is the adjustment time (or relaxation time, or (re-)equilibration time), the time it takes for a new equilibrium to establish, the time constant seen in the observed transient, and, allegedly, these two are different. In a recent work, Cawley explains it as [3]: ". . . natural fluxes into and out of the atmosphere are closely balanced and, hence, comparatively small anthropogenic fluxes can have a substantial effect on atmospheric concentrations."
Before we continue and address these points, initially, we need to provide the definitions. According to the IPCC (p. 1457 of Ref. [4]): Turnover time . . . is the ratio of the mass of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal from the reservoir. Adjustment time or response time is the time scale characterising the decay of an instantaneous pulse input into the reservoir. The term adjustment time is also used to characterise the adjustment of the mass of a reservoir following a step change in the source strength. Half-life or decay constant is used to quantify a first-order exponential decay process.
In the current work, we use these exact two concepts, with the turnover time called residence time. We also focus on the first-order systems mentioned here by the IPCC. We discuss the difference between residence time on the one hand, and adjustment time on the other hand, and test the hypothesis that the adjustment time can be longer than the residence time by mathematical methods. After having addressed this core point, we perform a calculation based on the available data to see how they fit. In what follows, we will use a simple two-box first-order model, see Figure 1. The atmosphere has a mass of carbon dioxide equal to A. CO2 molecules can be captured into a sink and this occurs at a certain rate, a fraction of the molecules being trapped per time unit. Each individual molecule has a certain probability to be captured over time. In other words, a molecule has a residence time τ_a in the atmosphere (also sometimes called the 'turnover time'), which is the reciprocal of the rate, k_a. Likewise, in the sink, there is a carbon dioxide mass equal to S, where molecules have a residence time τ_s; an individual molecule has a certain probability over time to be released by the sink into the atmosphere, or a rate k_s. This then defines natural fluxes going out of the atmosphere into the sink and vice versa, in a first-order model given by, respectively, F_n− = k_a A = A/τ_a (1) and F_n+ = k_s S = S/τ_s (2), or, in chemistry notation, A ⇌ S with forward and backward rate constants k_a = 1/τ_a and k_s = 1/τ_s. Note that these two time constants are considered constant, independent of time and temperature. At equilibrium, the two fluxes are equal and this links the equilibrium masses to the residence times, A/τ_a = S/τ_s, i.e., S/A = τ_s/τ_a (3). Or, to put it in thermodynamic terms, at equilibrium the change in the Gibbs free energy G is zero [5], with T the temperature and R the gas constant. (Figure 1 shows the model schematically: humans add F_h Gt per year to the system; nature adds F_n+ and takes away F_n− to a sink represented by the bottom box, which has a total CO2 mass equal to S. The residence time in the atmosphere, τ_a, is well known and estimated to be 5 years; the residence time in the sink, τ_s, is not well known.)
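The bookkeeping of this two-box model can be written down in a few lines; the sketch below is only an illustration of the first-order rate equations and the equilibrium partitioning, with made-up residence times.

```python
# Minimal sketch of the two-box bookkeeping described above (first-order kinetics).
# The values of tau_a and tau_s are illustrative, not measurements.
def fluxes(A, S, tau_a, tau_s):
    """Return (atmosphere -> sink, sink -> atmosphere) fluxes, F = mass / residence time."""
    return A / tau_a, S / tau_s

def equilibrium_split(total, tau_a, tau_s):
    """At equilibrium A/tau_a = S/tau_s, so the total mass splits in proportion
    to the residence times."""
    A_eq = total * tau_a / (tau_a + tau_s)
    return A_eq, total - A_eq

A_eq, S_eq = equilibrium_split(total=200.0, tau_a=5.0, tau_s=50.0)
print(A_eq, S_eq)   # ~18.2 and ~181.8: the sink holds tau_s/tau_a = 10 times more
```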
Humans add an extra flux into the atmosphere, labeled F_h. On the basis of this, we can determine the adjustment time τ of the atmosphere in terms of the residence times. This requires solving a simple differential equation; at this moment we do not have to worry about the thermodynamics or about explaining why the reaction constants are what they are. The questions we ask are, if we add an amount of carbon dioxide ΔA to the atmosphere:
1. What are the new equilibrium values of A and S?
2. How long does it take to establish this new equilibrium?
The first question is readily answered. According to Equation (3), the new equilibrium mass in the atmosphere at t = ∞ is given by A_∞ = A_0 + ΔA τ_a/(τ_a + τ_s) (5), assuming the system was in equilibrium before the injection, where the mass before the injection is indicated by the subscript '0', e.g., A_0. A similar equation can be found for the new amount in the sink (swapping τ_a and τ_s in the expression). For the adjustment time, or relaxation time τ, we use the differential equation dA/dt = −A/τ_a + S/τ_s (6). At equilibrium, this derivative is zero and the masses obey the ratio found in Equation (3). If we use the fact that the sum of masses after injection is the sum of masses before injection plus ΔA, and that after the injection at t = 0 this total mass stays constant, then, for t > 0, S = A_0 + S_0 + ΔA − A (7). Substituting this in the equation before results in dA/dt = −A(1/τ_a + 1/τ_s) + (A_0 + S_0 + ΔA)/τ_s (8). The solution of this differential equation is an exponential decay with the new equilibrium value A_∞ given by Equation (5), and an adjustment time τ that is the parallel sum of residence times, rather than the residence time of only the atmosphere: 1/τ = 1/τ_a + 1/τ_s (9). As we know from the analogue of parallel electronic resistors, the dominant time constant in this case is the smallest one, and the resulting time constant is shorter than the shortest residence time. In other words, the adjustment time of the atmosphere is shorter than the residence time of carbon in the atmosphere. To give an example, if the residence time in the atmosphere is 10 years, and the residence time in the sink is 100 years, the adjustment time is 9.1 years. The statement is also true for non-first-order kinetics; no transient can be slowed down by adding a reflux back to the box under study, which would only change the equilibrium value while decreasing the time to reach it. Note: sometimes, the concept of 'half-life' is also used. It is clear that the time at which half of the perturbation has disappeared is given by t_1/2 = τ ln(2). The same reasoning used for τ obviously also applies to t_1/2, with the found time constants multiplied by about 0.69. Figure 2 shows a simulation of such a two-box system. For better visibility, it is a symmetric system with equal atmosphere and sink and equal residence times. Both atmosphere and sink initially have 100.0 units, and before the first iteration 100.0 units are added to the atmosphere. At each iteration, A/τ_a is moved from the atmosphere to the sink and S/τ_s is moved from the sink to the atmosphere. As can be seen, the observed adjustment time is half of the individual residence times, and follows Equation (9). The new equilibrium reached (150 in both atmosphere and sink) is governed by the total amount in the sink and atmosphere after injection (300) and the ratio of kinetic constants (or reciprocal residence times) k_a = 1/τ_a and k_s = 1/τ_s. Note that the old equilibrium, with equal amounts of 100 in each box, will never be reached, not even after an infinite amount of time. The characteristic time for the added amount to be 'processed' and for the new equilibrium to be reached is what is defined as the adjustment time: according also to the definition of the IPCC given at the beginning of the text, it is the time it takes for the surplus amount, the 'disturbance' relative to the new (not old) equilibrium, to decay to 1/e of its initial value. This adjustment time is not defined as the time it takes for all of the added amount to disappear from the atmosphere. Had we used this latter definition, the adjustment time would be infinite for any value of the residence time in the atmosphere and this definition would, thus, be rather meaningless.
Figure 2. (a) Before injection of 100 into the atmosphere, the atmosphere-sink system was in equilibrium at 100 each, with the residence times in both atmosphere and sink equal to 1000 iterations. At each iteration A/τ_a is moved from atmosphere to sink and S/τ_s is moved from sink to atmosphere. As can be seen, the observed adjustment time (relaxation time) of the system is 500 iterations, as predicted by Equation (9). After 500 iterations, the surplus quantity in the atmosphere relative to the new equilibrium has been reduced to 1/e, a level indicated by a horizontal dashed line. Further, a half-life can be defined, a time at which half of the transient amplitude has passed, t_1/2 = τ ln(2) = 347. This is indicated by a dotted line. (b) The adjustment time τ, as a function of the sink residence time τ_s, normalized by the atmospheric residence time τ_a. The dot indicates the value of the plot in (a), τ_s = τ_a, resulting in τ = τ_a/2. As can be seen, the adjustment time is shorter than the atmospheric residence time for all values of the sink residence time; for large τ_s, the adjustment time τ approaches the atmospheric residence time τ_a.
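A sketch of the iteration behind Figure 2 is given below (an illustration, not the original simulation code); it reproduces the new equilibrium of 150 units per box and a relaxation toward it with the parallel-sum time constant of 500 iterations.

```python
# Symmetric two-box system, tau_a = tau_s = 1000 iterations, 100 units in each box,
# plus a 100-unit pulse into the atmosphere before the first iteration.
import math

tau_a = tau_s = 1000.0
A, S = 100.0 + 100.0, 100.0          # pulse of 100 added to the atmosphere

for step in range(1, 2001):
    out_flux, in_flux = A / tau_a, S / tau_s
    A += in_flux - out_flux
    S += out_flux - in_flux
    if step in (500, 1000, 2000):
        print(step, round(A, 2))

# After 500 steps the surplus A - 150 should be close to 50/e, i.e. A ~ 168.4:
print("analytic A(500):", round(150 + 50 * math.exp(-1), 1))
```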
We, thus, refute the claim of the climate-skeptics-skeptics [6] that, although individual carbon dioxide molecules have a short lifetime of around 5 years in the atmosphere, when they leave the atmosphere they are simply swapping places with carbon dioxide in the ocean, so that the final amount of extra CO2 that remains in the atmosphere stays there on a time scale of centuries.
Their flawed reasoning is that the adjustment time (relaxation time) is the mass perturbation in the atmosphere divided by the flux balance, and, so goes the reasoning, while fluxes can be great (and the residence time short), the balance is close to zero and the relaxation time can then approach infinity. Anthropogenic carbon would, thus, be able to stay a long time in the atmosphere. The work of Cawley mentioned before reasons along similar lines, albeit in a more obfuscated way. Equation (5) of that work is similar to the above equation with F n+ actually constant. It assumes a non-linear (non-first-order) function of A (oddly called a "linear function" in the work; their Equation (3)) for the outflux F n− , a function that is not justified and, moreover, does not make sense; it would imply a non-zero outflux F n− of a system with zero mass A. Moreover, the same equation uses the outflux rate (their k e , reciprocal residence time) later as the reciprocal adjustment time. They mixed everything up. However, it leads to a relaxation time that can take on any value and could conveniently support a century-scale adjustment time in the presence of a sub-decade residence time, something that physically does not make sense.
In fact, as shown here, the reality is that, if molecules have a residence time in the atmosphere of 5 years, surplus CO 2 remains in the atmosphere less than 5 years, albeit not much less if the residence time in the sink is much longer. Since that seems to be the case, for all purposes, we can take the residence time as the adjustment time. In fact, we suspect the residence times of Table 1 are actually adjustment times τ, since these are the time constants easily found in a transient, and determining the residence times τ a from the transients requires more knowledge of the system.
Before we continue, we finish this section by mentioning that the mass ratio of the sink and atmosphere in equilibrium can be estimated from the transient if the injection value of ∆A is known as well as the end value A ∞ . From that, then the residence time in the sink τ s can also be established. Looking at Figure 2, or Equation (5), we see that, if A ∞ , A 0 , ∆A and τ a are known, τ s can be estimated, and then the equilibrium ratio is S/A.
Scenarios
We can now do a more detailed analysis based on the available data. (Note: for easy reading, the pre-industrial values are marked by an asterisk, as in F*_n+, etc.) We start off with some facts. The pressure at the bottom of the atmosphere is 1020 mbar or hPa (1.02 × 10^5 N/m² in S.I. units). This pressure, divided by the gravitational acceleration (9.81 m/s²), results in a mass density of 1.04 × 10^4 kg/m². The total surface area of the planet is 510,072,000 km²; this translates into a total mass of the atmosphere of 5.304 × 10^18 kg. Using a mixture of 20% oxygen (O2, 2 × 15.999 g/mol) and 80% nitrogen (N2, 2 × 14.007 g/mol), the average molar mass of air molecules is 28.81 g/mol. The atmosphere, thus, contains 1.8408 × 10^20 mol. At this moment, there is a concentration of about [CO2] = 420 ppm (parts-per-million mole fraction) of carbon dioxide in the atmosphere; that is, then, 7.73 × 10^16 mol of CO2. CO2 has a molecular mass of 44.0095 g/mol, so that is a total of A = 3.403 × 10^15 kg. In a similar way, we can say that 1 ppm equals 8.1 × 10^12 kg. A tonne (t) being a thousand kilos, that means 1 ppm is equivalent to 8.1 Gt and there is a total of 3403 Gt in the atmosphere (see Table 2 for factual data on atmospheric carbon dioxide). It also has to be noted that there is sometimes confusion caused by the difference between carbon and carbon dioxide when talking about tonnage. It is clear that a tonne of carbon dioxide contains only 273 kg (0.273 tC) of carbon atoms; the rest are oxygen atoms. Table 2. Carbon dioxide facts, with the natural outflux F_n− derived from the mass in the atmosphere and the residence time. Other important parameters, the influx F_n+, the sink mass S, and the sink residence time τ_s, are less well known and should be considered adjustable.
Quantity | Parameter | Value
Atmospheric CO2 concentration | [CO2] | 420 ppm
Atmospheric CO2 mass | A | 3403 Gt (1 ppm ≈ 8.1 Gt)
Atmospheric residence time | τ_a | 5 a
Natural outflux | F_n− = A/τ_a | 681 Gt/a
Anthropogenic flux | F_h | 38 Gt/a
Observed yearly increase | dA/dt | 20 Gt/a
The pre-industrial 'equilibrium' (axiomatically assuming it indeed was in equilibrium before we started our industry) was 280 ppm. At this moment, every year we inject F_h = 38 Gt/a into the atmosphere (see Figure 3a). However, the year-on-year increase in A is only about 20 Gt/a [7]. Apparently, about half (47%) immediately disappears, so that there is a net natural flux balance of −18 Gt/a. In our two-box model, this flux goes into the sink without considering the details. The residence time in the atmosphere can be estimated quite well from the above-ground atomic bomb tests [1], which makes us happy that these at least served the purpose of advancing atmospheric science, if nothing else. The best estimate is about τ_a = 5 years [9]. Other references mention different times, with the IPCC mentioning the shortest (4 years) in their 5th Assessment Report (p. 1457 of Ref. [4]), showing that this value is not settled yet; we will use 5 years in this work. The equilibrium amount of carbon dioxide in the atmosphere is open for debate, but, for this purpose, we might use the consensus value of 280 ppm (A* = 2250 Gt). Estimating the amount of CO2 in the sink is very difficult. However, there seems to be a general view that it is fifty times more than in the (pre-industrial) atmosphere, S ≈ 50A* = 113,400 Gt (relatively unchanged since pre-industrial times). Using the combination of these values does not allow for consistent bookkeeping, as the reader can easily verify. Something has to yield. In what follows, we will try out some scenarios based on specific assumptions.
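The unit conversions quoted above can be checked with a few lines of arithmetic; the constants below are the rounded values from the text.

```python
# Back-of-the-envelope reproduction of the atmospheric bookkeeping above.
P0      = 1.02e5        # surface pressure, N/m^2
g       = 9.81          # gravitational acceleration, m/s^2
area    = 510_072_000e6 # Earth's surface, m^2
M_air   = 28.81e-3      # mean molar mass of air, kg/mol
M_co2   = 44.0095e-3    # molar mass of CO2, kg/mol

air_mass   = P0 / g * area                      # ~5.3e18 kg
air_moles  = air_mass / M_air                   # ~1.84e20 mol
gt_per_ppm = air_moles * 1e-6 * M_co2 / 1e12    # kg -> Gt (1 Gt = 1e12 kg)
print(round(gt_per_ppm, 1), "Gt per ppm")        # ~8.1
print(round(420 * gt_per_ppm), "Gt at 420 ppm")  # ~3400
```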
Scenario: Pre-Industrial Atmosphere Was at Equilibrium
First we assume that the pre-industrial level of 280 ppm was indeed an equilibrium value, with influx equal to outflux in the absence of a human flux, as we are wont to believe, but that the mass in the sink S and the residence time τ_s in the sink are unknown. Atmospheric carbon dioxide has increased 50% since these pre-industrial times (from 280 to 420 ppm). Since we are dealing with first-order kinetics (Equation (1)), the natural outflux F_n− has, thus, also increased 50% from pre-industrial times. The current natural outflux is very well determined at F_n− = A/τ_a = 681 Gt/a (both parameters are well established); in pre-industrial times, it must, thus, have been 33% less, at F*_n− = 454 Gt/a. If we maintain the idea that in pre-industrial times the system was at equilibrium, then the natural influx F*_n+ must have been equal to this outflux F*_n− at 454 Gt/a in pre-industrial times, and is now found by the flux balance, F_n+ = 681 Gt/a − 18 Gt/a = 663 Gt/a (a 46% gain). The residence time of carbon in the sink cannot have changed, so the sink itself must have gained 46% in mass, a conclusion that is highly unlikely since it would imply a rather small carbon buffer in the sink if such tiny flux imbalances can disturb the buffer to such a large extent. In this scenario, the amount of carbon in the sink must be about equal to the amount in the atmosphere.
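For concreteness, the flux bookkeeping of this scenario can be reproduced as follows (all inputs are the rounded values quoted in the text; 280 ppm is taken as 2268 Gt here).

```python
# Scenario-1 arithmetic, spelled out.
tau_a = 5.0            # a
A_now = 3403.0         # Gt (420 ppm)
A_pre = 2268.0         # Gt (280 ppm)
dA_dt = 20.0           # Gt/a, observed year-on-year increase
F_h   = 38.0           # Gt/a, anthropogenic flux

F_out_now = A_now / tau_a                 # ~681 Gt/a, current natural outflux
F_out_pre = A_pre / tau_a                 # ~454 Gt/a, pre-industrial outflux
F_in_now  = F_out_now - (F_h - dA_dt)     # ~663 Gt/a, current natural influx
print(round(F_out_now), round(F_out_pre), round(F_in_now))
print("required gain of the sink influx:", round(F_in_now / F_out_pre - 1, 2))  # ~0.46
```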
Yet, as mentioned before, we can obtain a good estimate of the sink mass in equilibrium from the transients. An especially good tool is the ¹⁴C released in the atmosphere by atomic-bomb tests, since this isotope of carbon has a very low natural abundance, enabling an accurate estimation of ΔA. Such a partial analysis, with a subset of carbon atoms, is possible, as discussed later. We can actually take the fraction of ¹⁴C of all carbon as a measure of the total mass A of this subset. Moreover, note that the half-life of nuclear decay of ¹⁴C is 5730 years and, thus, no significant decay took place during the experiment; all carbon-14 disappeared from the atmosphere by transfer to the sink. Figure 4 shows an example of investigations carried out by Enting and Nydal [10] (data of Enting from a work by Perruchoud et al. [11]; extracted by WebPlotDigitizer [12]). Using A_0 equal to zero, from a fit (shown as a dashed line), we find an adjustment time of τ = 14.0 a, an amplitude of ΔA = 740, and A_∞ = 30. From this, we derive τ_s = 344 a, and τ_a = 14.6 a. We find an equilibrium sink-to-atmosphere mass ratio of 24. This analysis assumes a Dirac-delta insertion of ¹⁴C in 1965. From the figure we can see that the ¹⁴C injection already took place earlier and we, thus, underestimate ΔA, and, therefore, underestimate the S/A mass ratio. However, even this lower estimate can be used to debunk the idea that the sink buffer S is small, and, thus, debunk the idea that the atmosphere was stable in pre-industrial times (in this model). (Figure 4 caption, partly: ... This enables stating that the sink must be at least 24 times larger than the atmosphere. Data from Enting (blue) found in a work of Perruchoud [11] and Nydal et al. [10] (green), extracted with WebPlotDigitizer [12].)
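The inversion from the fitted transient to the sink parameters is a one-liner under the two-box model with A_0 = 0; the sketch below uses the fitted numbers quoted above.

```python
# How tau_s and the sink-to-atmosphere ratio follow from the bomb-test transient,
# using A_inf = dA * tau_a/(tau_a + tau_s) and 1/tau = 1/tau_a + 1/tau_s.
tau_fit = 14.0      # a, fitted adjustment time
dA      = 740.0     # fitted pulse amplitude (arbitrary 14C units)
A_inf   = 30.0      # fitted end value

f = A_inf / dA                 # = tau_a / (tau_a + tau_s)
tau_s = tau_fit / f            # since tau = f * tau_s
tau_a = tau_fit / (1.0 - f)
print(round(tau_s), round(tau_a, 1), round(tau_s / tau_a))   # ~345, ~14.6, ~24
```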
It seems that the idea of the pre-industrial level stable at 280 ppm (with F n+ = F n− at 280 ppm) is untenable. It seems very likely that the sink was already off-balance and emitting amounts of carbon dioxide at the beginning of the industrial era and the increase in the atmospheric CO 2 at any time in human history is not solely due to human activity. This would also explain the large pre-Mauna-Loa values found with chemical methods summarized by Beck [13] and Slocum [14]. For instance, values of 500 ppm have been observed around 1940. Ignoring these facts, on the other hand, would be equivalent to throwing entire generations of scientists under the bus.
Scenario: The Sink Is Fifty Times Larger Than the Atmosphere
Next, we adopt the assumption that the sink at this moment really has 50 times more carbon than the atmosphere, in other words, S = 50A = 170,000 Gt, and release the restriction that the atmosphere was stable at 280 ppm; in pre-industrial times there can have been a flux imbalance. We can first make an estimate of the residence time in the sink by noting that the natural outflux F_n− = A/τ_a = 681 Gt/a at this moment is not fully compensated by the influx from the sink. An imbalance of 18 Gt/a exists, so F_n+ = 663 Gt/a. Given the sink mass, this results in a residence time of τ_s = S/F_n+ = 256 a. Most of the 1696.5 Gt that we have produced from burning fossil fuels (Figure 3) must have disappeared into the sink. However, that did not make a big dent. In pre-industrial times, the sink mass must have been about 168,000 Gt. The emissions from that sink at that time must have been F*_n+ = S*/τ_s = 657.4 Gt/a. The outflux then, at 280 ppm (A* = 2250 Gt), was F*_n− = A*/τ_a = 450 Gt/a. We see indeed a tremendous outgassing from the sink in pre-industrial times. The system was far from equilibrium, with the imbalance being a net influx of F*_n+ − F*_n− = 207 Gt/a. Where, at the moment, there is a net natural flux of 18 Gt/a out of the atmosphere, in pre-industrial times, in this two-box first-order model with a sink 50 times larger than the atmosphere, there was a net natural influx of 207 Gt/a. Somewhere, we must have passed the equilibrium value and, considering the above numbers, this value must be rather close to today's concentration of 420 ppm.
Scenario: Residence Time in the Sink Is Much Larger Than in the Atmosphere
If we only assume that the residence time in the sink is much larger than in the atmosphere, τ_s ≫ τ_a, then we can get a good idea of what has happened to our anthropogenic contribution to the carbon in the atmosphere, F_h, based on the two-box model. Because it is first-order, with all fluxes depending linearly on masses, in our analysis the carbon dioxide can be decomposed into anthropogenic and natural parts, each treated separately. In a statistical physics/mathematics analogy: as if one part were yellow balls and the other red, and we are constantly randomly taking a fixed fraction (not a fixed number) of balls from one of the boxes and putting them in the other; the chances of picking a yellow or red ball are proportional to the number of balls of that color in the box. A very important observation: adding yellow balls to the system does not change anything about the dynamics of the red balls. Some may think that adding yellow balls to one box (the atmosphere) influences the amount of red balls in it, but that is not the case in first-order kinetics; the yellow and red ball subsystems are fully independent and can be analyzed separately, even if the observer is colorblind (such that carbon dioxide molecules are indistinguishable). For instance, if the red balls (natural CO2) were at equilibrium before the yellow balls were added, no net flow of red balls will take place after adding them. In other words, we can analyze the anthropogenic and natural CO2 entirely separately and, at the end, simply add them together. The amount of anthropogenic CO2 in the atmosphere does not influence the amount of natural CO2 in the atmosphere and vice versa. We can, thus, analyze how much of the anthropogenic CO2 still remains in the atmosphere by simply analyzing it with our two-box model. (In the case where the atoms can be distinguished, equivalent to being able to see the colors of the balls, we can determine the kinetics parameters by looking at only one type, only one color.) Figure 3 shows the yearly carbon dioxide emissions into the atmosphere (left panel; data source: Our World In Data [8]). The total amount emitted so far is 1696.5 Gt. The right panel shows the cumulative emissions, Σ_i F_h(i) summed over the years. If, year by year, we apply the fluxes according to Equation (1), then we can see at each year how much of the anthropogenic CO2 is still in the atmosphere. The right panel of Figure 3 shows this for τ_s = 50τ_a. We see that only 202.3 Gt of the total injected 1696.5 Gt is still in the atmosphere. In these years, the amount of CO2 in the atmosphere has risen from 280 ppm (2268 Gt) to 420 ppm (3403 Gt), an increment of 1135 Gt. Of these, 202.3 Gt (17.8%) would be attributable to humans and the rest, 932.7 Gt (82.2%), must be from natural sources. In view of this, curbing carbon emissions seems rather fruitless; even if we destroy the fossil-fuel-based economy (and human wealth with it), we would only delay the inevitable natural scenario by a couple of years.
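The year-by-year accounting described here can be sketched as follows; the emissions series is a flat placeholder (the text uses the actual historical record totalling 1696.5 Gt, which is not reproduced here), so the printed numbers are only indicative.

```python
# Track only the 'yellow balls': anthropogenic CO2 in atmosphere (A) and sink (S).
tau_a = 5.0
tau_s = 50.0 * tau_a

def remaining_anthropogenic(emissions, tau_a, tau_s):
    """Inject F_h(i) each year and exchange CO2 between the boxes at first-order rates."""
    A = S = 0.0
    for f_h in emissions:          # one entry per year, Gt/a
        A += f_h
        out_f, in_f = A / tau_a, S / tau_s
        A += in_f - out_f
        S += out_f - in_f
    return A, S

emissions = [10.0] * 170           # placeholder: ~1700 Gt spread evenly over 170 years
A_left, S_left = remaining_anthropogenic(emissions, tau_a, tau_s)
print(round(A_left, 1), round(S_left, 1))
```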
Scenario: Abandoning Constant Residence Times
We have seen here how the first-order-kinetics two-box model results in conclusions contrary to data. We could, of course, change our model. We could abandon the idea of first-order kinetics (where flux is proportional to mass), but that would be problematic to justify with physics. Yet, some authors do that and, in that case, one can add parameters to the system until it has the desired property of having a stable atmosphere at A * = 280 ppm. The chemical measurements described by scientists such as Beck and Slocum mentioned above still remain to be explained. How could we have had very large concentrations in recent history?
We could also add more boxes to the system, distinguishing the sinks, or differentiating between deep ocean and shallow ocean, dissolved carbon dioxide gas, CO 2 (aq), and dissolved organic carbon (sea-shells), or between CO 2 disappearing in the oceans and being sequestered in biological matter on land, etc. Then each box can have its own kinetics; as an example, plant growth is sublinear with CO 2 concentration. We leave it for further work to formally analyze the adjustment time in higher-order kinetics systems with any number of boxes.
However, we expect the most likely improvement to the model to come from abandoning the idea that the residence times τ_a and τ_s are constant. They are, in fact, very much dependent on temperature. As an example, if we assume the sink to be the oceans, the ratio between the two residence times, which sets the concentration (and, thus, mass) ratio of carbon dioxide between the atmosphere and the sink, is governed by Henry's law, and this concentration ratio is then dependent on temperature. When including such effects, we might even conclude that the entire concentration of carbon dioxide in the atmosphere is fully governed by such environmental parameters and fully independent of human injections into the system. A is simply a function of many parameters, including the temperature T, but not F_h. It is as if the relaxation time is extremely short and any disturbances introduced by humans, or by other means, rapidly disappear, rapidly reaching the equilibrium determined by nature.
This fits very nicely with the recent finding that the stalling of the economy and the accompanying severe reduction in carbon emissions during the Covid pandemic had no visible impact on the dynamics of the atmosphere whatsoever [15]. The result of that research, the hypothesis that the carbon dioxide increments in the atmosphere were fully due to natural causes and not humans, fits the experimental data very well, and the hypothesis that humans are fully responsible for the increments can equally be rejected scientifically. This then also agrees with the conclusions of Segalstad that "The rising atmospheric CO2 is the outcome of rising temperature rather than vice versa" [16]. The pre-industrial atmosphere might indeed have been in equilibrium, and we are currently also in, or close to, equilibrium. That seems to us to be the most likely scenario. Once we admit the possibility of non-anthropogenic sources of carbon dioxide, we can start finding out what they might be. Examples such as volcanic sources and planetary and solar cycles spring to mind. It might well be that the climate puzzle is solved in such areas as the link between solar activity, seismic activity and climate [17]. This is, however, not the focus of this work. We conclude here by summarizing the major findings of this analysis using a first-order-kinetics two-box model: (1) The adjustment time is never larger than the residence time and is less than 5 years. (2) The idea of the atmosphere being stable at 280 ppm in pre-industrial times is untenable. (3) Nearly 90% of all anthropogenic carbon dioxide has already been removed from the atmosphere.
Institutional Review Board Statement: Not applicable.
Conflicts of Interest:
The author declares no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
AGW: Anthropogenic Global Warming
IPCC: Intergovernmental Panel on Climate Change | 7,267.4 | 2023-02-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Blockchain technology: a novel approach to enhance ecology conservation and management
Blockchain technology has shown promise in various industries, but its application in ecology has been limited. This study explores the potential of blockchain in addressing ecological challenges, improving transparency, and promoting sustainable practices. A comprehensive literature review is conducted, followed by a qualitative analysis of relevant case studies. The results demonstrate the effectiveness of blockchain in enhancing data security, traceability, and stakeholder collaboration. The study concludes that blockchain technology has the potential to revolutionize ecological conservation and management, ultimately leading to more sustainable ecosystems.
Introduction
Ecology, the study of interactions between organisms and their environment, plays a critical role in understanding and addressing environmental issues, such as habitat loss, climate change, and pollution. Effective ecological management and conservation require accurate and up-to-date data, as well as efficient collaboration between stakeholders. However, traditional methods often suffer from a lack of transparency, data tampering, and delayed information sharing, leading to suboptimal decisions and resource allocation.
Blockchain technology, a decentralized digital ledger that records transactions in a secure and transparent manner, has the potential to address these challenges. By leveraging blockchain, stakeholders can enhance data security, traceability, and collaboration, ultimately leading to more sustainable ecosystems. This study aims to investigate the applicability and effectiveness of blockchain technology in ecology, filling a crucial gap in the current literature.
In this study, we consider methods and algorithms to improve or most effectively use neural networks when working with small flight devices for agricultural purposes in heating and electrical networks.
Bibliographic reviews
The application of blockchain technology in ecology is an emerging area of research, with limited but growing literature. Most existing studies focus on the theoretical benefits of blockchain, such as improved data security and transparency (Kumar et al., 2020). However, empirical evidence on the practical application and effectiveness of blockchain in ecology remains scarce.
Recent studies have explored the potential of blockchain in addressing specific ecological challenges, such as illegal wildlife trade (Alam et al., 2020) and deforestation (León et al., 2020). These studies highlight the potential of blockchain to enhance traceability and accountability in supply chains, thereby promoting sustainable practices.
In addition, a few studies have examined the potential of blockchain to facilitate data sharing and collaboration among stakeholders in ecological management (Li et al., 2019). By providing a secure and transparent platform for data exchange, blockchain can improve decision-making and resource allocation in ecological conservation.
Despite the growing interest in the potential of blockchain for ecology, there is a need for more comprehensive research that examines the effectiveness of blockchain in various ecological contexts, as well as the challenges and opportunities associated with its implementation.
Materials and methods
To investigate the potential of blockchain technology in ecology, a comprehensive literature review was conducted using relevant databases, including Web of Science, Scopus, and Google Scholar. The search terms included "blockchain," "ecology," "conservation," "sustainability," and related keywords. The selection criteria included peer-reviewed articles, book chapters, and conference papers published in English between 2015 and 2022. The review focused on studies that examined the practical application and effectiveness of blockchain in addressing ecological challenges. In addition to the literature review, a qualitative analysis of relevant case studies was conducted to assess the real-world implementation of blockchain in ecology. These case studies were selected based on their relevance to ecological management and conservation, as well as the availability of detailed information on their blockchain implementation. The analysis focused on the benefits, challenges, and lessons learned from these case studies, providing valuable insights into the applicability and effectiveness of blockchain in ecology.
Results
The literature review and case study analysis revealed several key benefits of blockchain technology in ecology:
a) Enhanced data security: Blockchain's cryptographic features ensure that ecological data remains secure and tamper-proof, reducing the risk of data manipulation or unauthorized access (Kumar et al., 2020). This increased data integrity is crucial for ecological management, where accurate and reliable data is essential for informed decision-making.
b) Improved traceability: Blockchain enables the tracking of ecological data and resources along supply chains, allowing stakeholders to monitor and verify the origin, movement, and impact of goods and resources on ecosystems (Alam et al., 2020; León et al., 2020). This enhanced traceability can help identify and combat illegal or unsustainable practices, ultimately promoting ecological conservation.
c) Facilitated collaboration: Blockchain provides a decentralized and transparent platform for data sharing, enabling stakeholders to efficiently collaborate and coordinate their efforts in ecological management (Li et al., 2019). This improved collaboration can lead to more effective decision-making and resource allocation, ultimately benefiting ecosystems.
However, the implementation of blockchain in ecology also faces several challenges:
a) Technological barriers: The adoption of blockchain technology requires significant technical expertise and infrastructure, which may be lacking in some ecological contexts, particularly in developing countries or remote areas.
b) Scalability and energy consumption: Blockchain networks, particularly those based on proof-of-work consensus mechanisms, can consume significant amounts of energy and may struggle to scale effectively, potentially limiting their applicability in large-scale ecological initiatives.
c) Privacy concerns: While blockchain can enhance transparency, it may also raise privacy concerns, as sensitive ecological data or stakeholder information could become publicly accessible.
Despite these challenges, the case studies analyzed in this study demonstrate the potential of blockchain technology to address various ecological challenges and promote sustainable practices. These cases highlight the importance of tailoring blockchain solutions to specific ecological contexts and addressing the technological, scalability, and privacy challenges associated with their implementation.
Conclusion
This study provides a comprehensive examination of the potential of blockchain technology in ecology, including its benefits, challenges, and practical applications. The results demonstrate that blockchain can significantly enhance data security, traceability, and stakeholder collaboration in ecological management and conservation. While challenges such as technological barriers, scalability, and privacy concerns need to be addressed, the analyzed case studies offer valuable insights into the successful implementation of blockchain in various ecological contexts. The study concludes that blockchain technology has the potential to revolutionize ecological conservation and management, ultimately leading to more sustainable ecosystems. Future research should continue to explore the practical application of blockchain in ecology, focusing on the development of tailored solutions that address the unique challenges and opportunities of specific ecological contexts. Additionally, interdisciplinary collaborations between ecologists, technologists, and policymakers are essential to ensure the successful integration of blockchain technology in ecological conservation and management efforts.
Fig 1.
Fig 1. Global offer chains. Ecosystem ecology.
"Environmental Science",
"Computer Science",
"Business"
] |
Use of principal states of polarization of a liquid crystal device to achieve a dynamical modulation of broadband beams
A spatially resolved polarization switcher operating over a bandwidth of 200 nm is demonstrated. The system is based on liquid crystal technology and no specific-purpose birefringent element is required. The procedure is founded on the polarization mode dispersion theory of optical fibers, which provides a convenient framework for the design of broadband polarization systems. Our device benefits from the high resolution of off-the-shelf twisted nematic liquid crystal displays and is well suited for spatial modulation of the intensity of broadband beams, such as those coming from few-cycle femtosecond lasers. © 2009 Optical Society of America OCIS codes: 230.3720, 260.5430, 230.6120.
Several proposals have been reported in recent years to use liquid crystal (LC) technology in the implementation of broadband polarization devices, such as achromatic phase retarders or polarization switchers [1][2][3][4][5]. These elements are potentially attractive in applications that require a dynamical modulation of beams coming from femtosecond lasers or incoherent sources [6,7]. Most of the reported broadband devices combine several LC cells of nematic type. The wavelength dependence of such systems is minimized by optimizing the design parameters of each LC component [3][4][5].
In this Letter, we present an unusual approach in the field of LC devices to modulate the state of polarization (SOP) of a broadband beam. Our procedure is borrowed from the well-established polarization mode dispersion (PMD) theory of optical fibers [8]. PMD is a form of modal dispersion that has its origin in optical birefringence. This phenomenon is modeled by means of the so-called principal states of polarization (PSPs). For any concatenation of birefringent elements, the PSP model points out the existence of two input SOPs for which the corresponding output SOPs are stationary to first order in frequency [9].
Here, the PSP model is exploited to flatten the spectral response of an LC device. As a proof of concept, we have considered a system composed of a twisted nematic liquid crystal display (TNLCD) sandwiched between two liquid crystal variable retarders (LCVRs). In contrast to previous works, we have employed commercially available devices, so control over the design parameters of the LC elements is not required. Just by adjusting the configuration of the LCVRs, we have developed a switchable linear polarization rotator with a bandwidth spreading over 200 nm. A binary intensity modulation is then attained by inserting the LC concatenation between a pair of crossed linear polarizers. A device composed of similar elements has been previously reported, but with the aim of rotating, not switching, the input polarization [2]. Furthermore, we have taken advantage of the pixelated structure of a TNLCD, which enables a spatial control of the input light intensity. In this way, the whole system behaves as a broadband binary intensity spatial light modulator (SLM).
Let us briefly review some results of the PMD theory useful for our purpose. We follow the notation and the theoretical discussion presented in [8]. In the absence of polarization-dependent loss, the change in polarization when light is transmitted through a birefringent medium can be described by a unitary Jones matrix U,
U = [ a  b ; −b*  a* ],   (1)
where a and b are complex quantities and the asterisk symbolizes the complex conjugate. The output PSPs of the medium are the eigenstates of the operator jU_ω U†, where j is the imaginary unit, the dagger stands for the Hermitian conjugate, and the subscript ω denotes differentiation with respect to the angular frequency at a certain value ω_0, (d/dω)_ω0. The Jones vectors corresponding to the PSPs, written in Dirac bracket notation, are determined by the eigenvalue problem jU_ω U† |p_±⟩ = ±(τ/2)|p_±⟩. (2) In Eq. (2), τ is the differential group delay (DGD) between the slow and fast PSPs, labeled by a positive and a negative sign, respectively. Both vectors are orthogonal; i.e., their inner product is zero, ⟨p_+|p_−⟩ = 0. Using the PSP model, PMD can be characterized in the three-dimensional Stokes space by means of the PMD vector τ⃗, defined as τ⃗ = τ p̂, (3) where p̂ is a 3×1 unit vector that points in the direction of the slow output PSP, whereas −p̂ represents the fast output PSP. Note that Eq. (3) is written in a right-circular Stokes space. The components τ_i of the PMD vector, as well as the DGD τ, are determined through the equations (4), which express them in terms of a, b, and their frequency derivatives; in particular, the DGD is τ = 2(|a_ω|² + |b_ω|²)^(1/2). In the above expressions Im( ) and Re( ) denote, respectively, the imaginary and real parts of complex quantities.
The connection between an input PSP ŝ and the corresponding output one p̂ is governed by p̂ = M_R ŝ, (5) where M_R is the 3×3 Mueller rotation matrix isomorphic to U. Equation (5) describes a rotation on the surface of the Poincaré sphere that transforms the input vector ŝ into an output vector p̂ that is independent of frequency to first order.
We have applied the above analysis to the LC arrangement delimited in Fig. 1 with a dotted rectangle. LCVRs are parallel-aligned LC devices without spatial structure, which provide wavelength-dependent retardances 2δ_1 and 2δ_2 and have their slow axes oriented at angles θ_1 and θ_2 from the x axis, respectively. The retardance 2δ_k is given by 2δ_k = (ω/c)ΔC_k, where c is the speed of light in vacuum and ΔC_k is the voltage-controlled optical path difference between the extraordinary and ordinary components of light. The polarization properties of the LCVRs are described by conventional waveplate Jones matrices J_k (k = 1, 2). In our system, the LCVRs are configured to provide the same retardance for each frequency (δ_1 = δ_2 = δ) and are oriented at orthogonal directions (θ_2 = θ_1 ± π/2). In this way, J_1 and J_2 are inverse matrices of each other. In turn, the TNLCD is a pixelated device constituted by cells with a molecular twist angle φ, an optical birefringence Δn, and a thickness d. In the off state, when no signal is addressed to the cells, their polarization properties are described by a Jones matrix J_TN(off) [10], where R is a 2×2 rotation matrix, Ψ is a global phase, X = cos γ, Y = (φ/γ) sin γ, and Z = (β/γ) sin γ, with γ = √(φ² + β²). In these expressions, β is the birefringence angle, defined as β = (ω/2c)Δn d. In the on state, when a maximum voltage is applied to the cells, they ideally behave as a transparent isotropic medium, i.e., J_TN(on) = I for all the spectral components.
The operation principle of the LC modulator is explained as follows. The Jones matrix J of the complete LC concatenation is given by the matrix product J = J_2 J_TN J_1. When a cell of the TNLCD is in the on state, the matrix J is simply the identity matrix, since the actions of the LCVRs on the light polarization mutually cancel. On the contrary, when no signal is sent to a cell of the TNLCD, J is a unitary matrix that can be written, aside from global phase factors, as the matrix U of Eq. (1). By means of Eqs. (3) and (4), we can determine the components p_i = p_i(ΔC, θ_1) (i = 1, 2, 3) of the slow output PSP for a given frequency ω_0. The corresponding components s_i(ΔC, θ_1) of the slow input PSP can be obtained through Eq. (5). If p̂ and ŝ are contained in the equatorial plane of the sphere, the system behaves as a switchable linear polarization rotator. Consequently, a search algorithm can be designed to find the optimal configuration of the LCVRs that simultaneously minimizes the absolute values of the parameters p_3(ΔC, θ_1) and s_3(ΔC, θ_1).
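A numerical sketch of such a search is given below. It is illustrative only: the retarder and twisted-nematic Jones matrices use standard textbook forms (global phases dropped) that may differ in detail from the conventions of this Letter, and the scan grid is arbitrary.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def waveplate(delta, theta):
    """Jones matrix of a linear retarder with retardance 2*delta, slow axis at theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    D = np.array([[np.exp(-1j * delta), 0], [0, np.exp(1j * delta)]])
    return R @ D @ R.T

def tn_off(omega, twist=-1.594, dn_d=0.47e-6):
    """Off-state Jones matrix of a twisted-nematic cell (textbook form, global phase dropped)."""
    beta = omega * dn_d / (2 * C)
    gamma = np.hypot(twist, beta)
    X, Y, Z = np.cos(gamma), (twist / gamma) * np.sin(gamma), (beta / gamma) * np.sin(gamma)
    R = np.array([[np.cos(twist), -np.sin(twist)], [np.sin(twist), np.cos(twist)]])
    return R @ np.array([[X - 1j * Z, Y], [-Y, X + 1j * Z]])

def stack(omega, dC, theta1):
    delta = omega * dC / (2 * C)              # each LCVR retardance is 2*delta = omega*dC/c
    J1 = waveplate(delta, theta1)
    J2 = waveplate(delta, theta1 + np.pi / 2)
    return J2 @ tn_off(omega) @ J1

def psp_s3(omega0, dC, theta1, dw=1e12):
    """Third Stokes components of the slow output and input PSPs (finite differences)."""
    U  = stack(omega0, dC, theta1)
    Uw = (stack(omega0 + dw, dC, theta1) - stack(omega0 - dw, dC, theta1)) / (2 * dw)
    evals, evecs = np.linalg.eigh(1j * Uw @ U.conj().T)
    p = evecs[:, np.argmax(evals)]            # slow output PSP (largest group delay)
    s = U.conj().T @ p                        # corresponding input PSP
    s3 = lambda v: 2 * np.imag(np.conj(v[0]) * v[1])
    return s3(p), s3(s)

# Merit function F = p3^2 + s3^2 evaluated at omega0 (550 nm), scanned over a coarse grid
omega0 = 2 * np.pi * C / 550e-9
best = min(((dC, th) for dC in np.linspace(0.1e-6, 1e-6, 19) for th in (0.0, np.pi / 2)),
           key=lambda cfg: sum(x ** 2 for x in psp_s3(omega0, *cfg)))
print("best (dC [um], theta1 [deg]):", best[0] * 1e6, np.degrees(best[1]))
```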
The LC concatenation used here includes a commercial TNLCD (Sony LCX016AL, with 832×624 pixels of 32 μm) and two LCVRs (from Meadowlark Optics) calibrated for the visible spectrum. The TNLCD is composed of cells with a twist angle φ = −1.594 rad and dΔn = 0.47 μm at 550 nm. The application of a voltage to the cells is performed by sending a gray-level image to the device. Concerning the LCVRs, their slow axes can only be oriented at two orthogonal directions with respect to the x axis (θ_1 = 0°, 90°). The configuration of the LCVRs that minimizes the merit function F(ΔC, θ_1) = (p_3)² + (s_3)² at ω_0 = 3.42 × 10^15 s⁻¹ (λ_0 = 550 nm) is (ΔC = 0.4 μm, θ_1 = 0°). The azimuths of the input and output PSPs are, respectively, −6° and −85.5°, and the ellipticity angle is less than 0.2° in both cases. The switchable SOP rotation leads to a binary intensity modulation when the LC concatenation is inserted between two orthogonal polarizers, with the first one oriented in the direction of either the slow or the fast input PSP. As is shown in Fig. 1, the second polarizer must be oriented 90° away from the first one to ensure a minimum transmission when the TNLCD is in the on state. In this way, the SLM presents two operation modes (dark and bright) depending on the signal addressed to the TNLCD. Figure 2 shows the transmittance T for the bright state versus the detuning parameter, defined as Δω/ω_0 with Δω = ω − ω_0. A mean transmission of 96.5% is achieved with a residual intensity variation ΔT lower than 2.2%. Such an intensity response is explained by the small wavelength dependence of the SOP impinging onto the analyzer, whose azimuth varies by 2° from 450 to 650 nm, with a mean ellipticity angle of 0.3°. These results clearly improve on those obtained from a single TNLCD, which produces an intensity variation of the order of 10% when it is inserted between crossed polarizers [2]. Furthermore, our system has the advantage of not requiring the fabrication of specific LC structures, although specially designed devices present a wider achromatic response [3][4][5].
The PMD vector is stationary only to first order, so a residual chromatism still remains. Second-order PMD is defined as the frequency derivative of τ⃗, τ⃗_ω = dτ⃗/dω. In accordance with Eq. (3), τ⃗_ω can be written as the sum of two perpendicular vectors, τ⃗_ω = τ_ω p̂ + τ p̂_ω. Here we are interested only in the angular rate of rotation of the vector τ⃗, |p̂_ω| = dΦ/dω, where Φ is the angle between τ⃗(ω_0 + Δω) and τ⃗(ω_0) [8]. In Fig. 2 the variation in Φ with the detuning parameter is depicted for our system. A comparison with T shows that for detunings between −0.03 and 0.14 (from 485 to 570 nm), Φ < 1° and T is practically constant. Out of this range, |p̂_ω| takes a non-negligible value, so Φ continually increases, leading to a more significant intensity variation.
The above results have been experimentally verified with the optical setup shown in Fig. 1. We used as a white light source (S) a xenon arc lamp followed by a spatial filter (SF) and a collimating lens (L_1). The LC concatenation was sandwiched between two linear polarizers (P). Spectral intensity was measured with the aid of a focusing lens (L_2) and a spectrophotometer (SP). Experimental data were normalized to unity by measuring the total light intensity impinging onto the detector. Figure 3(a) shows the SLM transmission T in the dark state (mean extinction ratio of 3 × 10⁻³) and in the bright state (mean transmission of about 96.6% with ΔT < 2.5%). To take into account the losses caused by internal reflections and by the TNLCD pixelated structure, we show in Fig. 3(b) the transmittance of the SLM in the bright state, defined (in dB) as 10 log(I_1/I_0), where I_0 and I_1 are, respectively, the light intensities after the first and second polarizers. The slope of this transmittance in the short-wavelength range is a consequence of light absorption by the TNLCD. The transmittance of the display (without polarizers) in the off state is also included in Fig. 3(b). The mean distance between both curves is 2 dB, with a standard deviation of 0.15 dB.
In summary, we have demonstrated that the PSP model, usually limited to characterizing polarization effects in optical fibers, constitutes an efficient tool to flatten the spectral response of an LC device. As a proof of concept, we have implemented a binary intensity SLM that includes only commercially available elements. Experimental results show a flat bright response over a spectral band of 200 nm in the visible region (root mean square error lower than 6 × 10⁻³) with an extinction ratio of about 1:1000 using high-quality polarizers.
Fig. 2.
Fig. 2. (Color online) Modeled transmission in the bright state and rate of rotation of the PMD vector as a function of the detuning parameter, which covers the spectral range from 450 to 650 nm.
Fig. 1.
Fig. 1. (Color online) Scheme of the optical setup. The x axis is parallel to the molecular director at the entrance side of the TNLCD.
"Engineering",
"Physics"
] |
Conformal Nets V: Dualizability
We prove that finite-index conformal nets are fully dualizable objects in the 3-category of conformal nets. Therefore, assuming the cobordism hypothesis applies, there exists a local framed topological field theory whose value on the point is any finite-index conformal net. Along the way, we prove a Peter–Weyl theorem for defects between conformal nets, namely that the annular sector of a finite defect is the sum of every sector tensored with its dual.
Introduction
A finite-dimensional Hilbert space H is dualizable in the sense that there is a Hilbert space H * together with evaluation and coevaluation morphisms ev : H ⊗ H * → C and coev : C → H * ⊗ H such that the identity id H can be recovered as the composite (coev ⊗ id H ) • (id H ⊗ ev), and the identity id H * can be recovered as a similar composite; indeed, every dualizable Hilbert space is finite-dimensional.
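For orientation, the evaluation and coevaluation can be written out in an orthonormal basis {e_i} of H, writing ē_i for e_i viewed in H* ≅ H̄; this display is our own illustration, not taken from the paper.

    \mathrm{ev}(h \otimes \bar e) = \langle e, h\rangle,
    \qquad
    \mathrm{coev}(1) = \sum_i \bar e_i \otimes e_i,
    \qquad
    h \;\longmapsto\; \sum_i \langle e_i, h\rangle\, e_i \;=\; h .

The coevaluation vector has squared norm equal to dim H, so it defines a bounded map only when H is finite-dimensional; this is the reason dualizable Hilbert spaces are exactly the finite-dimensional ones.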
The 2-category vN of von Neumann algebras deloops the category Hilb of Hilbert spaces in the sense that Hom_vN(1, 1) ≅ Hilb. If a von Neumann algebra A is a finite direct sum of type I factors, then it is fully dualizable in the sense that there is a von Neumann algebra A^op together with an evaluation bimodule A⊗A^op H C and a coevaluation bimodule C H A^op⊗A such that the identity bimodule A L²(A) A can be recovered as a composite of the evaluation and coevaluation (and the identity bimodule for A^op can be similarly recovered), and such that the evaluation and coevaluation bimodules themselves admit adjoints. A fully dualizable von Neumann algebra is in fact necessarily a finite direct sum of type I factors. More generally, full dualizability functions as a strong finiteness condition on the objects of a higher category.
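A minimal concrete instance, included here for orientation rather than taken from the paper: for the type I factor A = M_n(C), the dual object is A^op, and both the evaluation and the coevaluation bimodule can be taken to be the Hilbert–Schmidt space L²(A).

    L^2\big(M_n(\mathbb C)\big) \;\cong\; \mathbb C^{\,n} \otimes \overline{\mathbb C^{\,n}},
    \qquad
    L^2(A) \boxtimes_{A} L^2(A) \;\cong\; L^2(A),

where ⊠ denotes Connes fusion; the second isomorphism is the von Neumann algebraic analogue of the snake identity above. For algebras that are not finite direct sums of type I factors, the evaluation and coevaluation bimodules still exist, but the further adjunction data demanded by full dualizability fails, matching the characterization just quoted.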
The 3-category CN of conformal nets deloops the 2-category vN of von Neumann algebras, in the sense that Hom CN (1, 1) ∼ = vN [BDH19, Prop.1.22].In this paper, the fifth in a series [BDH15, BDH17, BDH19, BDH18] concerning the 3-category of conformal nets, we investigate the dualizability properties of conformal nets and their defects and sectors.Our main result is that a conformal net is fully dualizable if (Theorem B below) and only if (Theorem C below) it has finite index.
Dualizability.
Recall that two i-morphisms F : A → B and G : B → A in an n-category (i < n) are called adjoint (or dual), denoted F ⊣ G, if there exist (i+1)-morphisms, the unit s : id_B → G • F and the counit r : F • G → id_A, such that the composite (id_G • r) • (s • id_G) is equivalent to id_G and the composite (r • id_F) • (id_F • s) is equivalent to id_F; we say that F admits G as its right adjoint, or equivalently that G admits F as its left adjoint. Similarly, two objects f and g in a symmetric monoidal n-category are called dual if there exist 1-morphisms, the coevaluation s : 1 → g ⊗ f and the evaluation r : f ⊗ g → 1, such that the composite (id_g ⊗ r) • (s ⊗ id_g) is equivalent to id_g and the composite (r ⊗ id_f) • (id_f ⊗ s) is equivalent to id_f. An i-morphism F : A → B in an n-category (i < n) is called fully dualizable if there is an infinite chain of adjunctions such that every unit and counit morphism in each of the adjunctions in that chain itself admits a similar infinite chain of adjunctions, such that every unit and counit morphism in each of the adjunctions in all of those chains in turn admits an infinite chain of adjunctions, and so on until one reaches a chain of (n−1)-morphisms, at which point the conditions stop. (We refer to an (n−1)-morphism that has an infinite chain of left and right adjoints, and is therefore fully dualizable, simply as 'dualizable'.) Similarly, an object in a symmetric monoidal n-category is fully dualizable (also called 'n-dualizable') if it admits a dual and the coevaluation and evaluation morphisms are fully dualizable. An n-category is said to have all duals if every object is fully dualizable and every i-morphism (i < n) is fully dualizable. (Note that the notions of fully dualizable and of having all duals do not depend on the exact model one chooses for symmetric monoidal n-categories, because the dualizability conditions can be phrased entirely in terms of homotopy 2-categories canonically associated to the n-category. For a more detailed discussion of the notion of dualizability, see [DSPS17, Appendix A].) The cobordism hypothesis [BD95, Lur09, AF17] ensures that for any fully dualizable object c in a symmetric monoidal n-category C, there is a local framed topological field theory F_c : Bord_n^fr → C whose value on the positively framed point is c.¹ In particular, for any such object, there is an associated framed n-manifold invariant.
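Written as displays (our own typesetting of the conditions just listed), the adjunction F ⊣ G with unit s and counit r, and the duality between objects f and g with coevaluation s and evaluation r, amount to the zigzag identities:

    (\mathrm{id}_G \cdot r)\circ(s\cdot \mathrm{id}_G) \;\simeq\; \mathrm{id}_G,
    \qquad
    (r\cdot \mathrm{id}_F)\circ(\mathrm{id}_F\cdot s) \;\simeq\; \mathrm{id}_F;

    (\mathrm{id}_g \otimes r)\circ(s\otimes \mathrm{id}_g) \;\simeq\; \mathrm{id}_g,
    \qquad
    (r\otimes \mathrm{id}_f)\circ(\mathrm{id}_f\otimes s) \;\simeq\; \mathrm{id}_f.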
Finiteness.We will investigate the dualizability of objects and morphisms in the symmetric monoidal 3-category of conformal nets.To that end, we introduce notions of 'finiteness' for nets, defects, and sectors, arranged in such a way that finiteness ensures both the existence of a dual (or adjoint) and in turn the finiteness of the coevaluation and evaluation (or unit and counit) morphisms.We will therefore be able to successively establish that finiteness implies dualizability for sectors, defects, and conformal nets.
Consider the following subintervals of the standard circle: [diagram omitted]. (Footnote 1: See the section on 'Manifold invariants' below, and in particular Footnote 4, for a discussion of the applicability of the cobordism hypothesis to the symmetric monoidal 3-category of conformal nets.)
Moreover, let I_1, . . ., I_4 ⊂ S¹ be the subintervals indicated here: (0.1) [diagram omitted]. When appropriate, we equip the standard circle S¹ with its standard bicoloring, and give I_1, . . ., I_4 the induced bicoloring, so that I_1 and I_3 are genuinely bicolored, I_2 is white, and I_4 is black.
We henceforth assume that all conformal nets and defects are semisimple, that is, finite direct sums of irreducible ones (a conformal net or defect is irreducible if it does not admit a non-trivial direct sum decomposition).
Definition 0.2. A conformal net, respectively a defect, is called finite if the associated vacuum bimodule [...]^op is dualizable as a morphism in the 2-category of von Neumann algebras.
Note that, because there is a contravariant involution on the 2-morphisms of the 2-category of von Neumann algebras (namely the adjoint map of Hilbert spaces), a left adjoint bimodule is also a right adjoint bimodule and vice versa; thus for a bimodule to be dualizable it suffices that it admit a single adjoint.
Statement of results.
In order to construct adjunctions for defects, we will need to understand the Hilbert space assigned by a defect to a bicolored annulus. To that end, we prove the following Peter–Weyl annular decomposition theorem for defects, generalizing the Kawahigashi-Longo-Müger theorem for conformal nets [KLM01, Thm 9]. Given a bicolored annulus A = [picture omitted] and a defect D, let H_ann(D) denote the associated Hilbert space, considered as a '∂A-sector', that is, a representation of the collection of algebras {D(I)} for I a subinterval of the boundary ∂A, subject to the following isotony and locality axioms: [display omitted]. Let Δ_D be the set of isomorphism classes of irreducible D-D-sectors, with the sector associated to λ ∈ Δ_D denoted H_λ. Let λ̄ denote the dual isomorphism class, and let H_λ ⊗ H_λ̄ denote the ∂A-sector where one circle acts on H_λ and the other circle acts on H_λ̄.
Theorem A (Peter–Weyl for defects). For a finite irreducible defect D, the annular sector H_ann(D) is non-canonically isomorphic to the sum ⊕_{λ∈Δ_D} H_λ ⊗ H_λ̄ of every sector tensored with its dual.
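In display form, and by analogy with the classical Peter–Weyl decomposition for a compact group G (the comparison is ours, not the paper's), Theorem A reads:

    H_{\mathrm{ann}}(D) \;\cong\; \bigoplus_{\lambda\in\Delta_D} H_\lambda \otimes H_{\bar\lambda}
    \qquad\text{compare}\qquad
    L^2(G) \;\cong\; \bigoplus_{\pi\in\widehat G} V_\pi \otimes V_\pi^{\,*},

with the two boundary circles of the annulus playing the role that the left and right translation actions play for L²(G).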
(Footnote 2: If A is irreducible, then this condition is equivalent to the conformal net having finite index, as follows. Recall from [BDH15, Def 3.1] that the index of a conformal net A is defined as the minimal index of the inclusion A(I_1 ∪ I_3) ⊂ A(I_2 ∪ I_4)′. By [BDH14, Prop 7.5], if this minimal index is finite, then the bimodule A(I_1∪I_3) H_0(A) A(I_2∪I_4)^op is dualizable. Conversely, if that bimodule is dualizable, then, by [BDH14, Def 5.1], its statistical dimension is finite and thus, by [BDH14, Def 5.10 & Prop 7.3], the corresponding minimal index is finite.)
This is proven as Theorem 1.13 in the text. We may depict this result as follows: [picture omitted]. Equipped with this and other results about defect annular sectors, we proceed to our main topic of dualizability properties of conformal nets. We show that finite sectors are dualizable; that finite defects are dualizable with finite unit and counit sectors (and hence are fully dualizable); and that finite conformal nets are dualizable with finite evaluation and coevaluation defects (and hence are fully dualizable). Altogether this implies that the collection of finite conformal nets, finite defects, finite sectors, and intertwiners forms a sub-3-category of the 3-category of conformal nets, and establishes the following:
Theorem B (Dualizability of finite nets, defects, and sectors). The 3-category of finite semisimple conformal nets, finite semisimple defects, finite sectors, and intertwiners has all duals. This result is summarized as Theorem 2.20 in the text, collecting the results of Proposition 2.11, Corollary 2.12, Proposition 2.14, Corollary 2.16, Theorem 2.17, and Corollary 2.19.
Having established that finiteness implies full dualizability, we conversely establish that full dualizability ensures finiteness.³
Theorem C (Finiteness of dualizable nets, defects, and sectors). A fully dualizable conformal net, defect, or sector is necessarily finite.
See Corollary 2.12, Proposition 2.22, Theorem 2.25, and Scholium 2.31 in the text for the precise statements and proofs.
Manifold invariants. By Theorem B and under the (overwhelmingly plausible
but not yet proven) assumption that the cobordism hypothesis applies to the symmetric monoidal 3-category of conformal nets constructed in [BDH18] 4 , associated to any finite conformal net there is a 3-dimensional local framed topological field theory whose value on a point is the conformal net.Naturally, one wonders what manifold invariants are given by this topological field theory.
For 1-dimensional manifolds, the conformal net field theory invariants are given, projectively (that is, up to tensoring by an invertible von Neumann algebra), by the extension, constructed in [BDH17, Thm 1.3], of the conformal net to a functor from 1-manifolds to the category of von Neumann algebras. In particular, the invariant of a circle is the direct sum, over irreducible representations, of the algebra of bounded operators on the underlying representation space (see [BDH17, Thm 1.20]). One may also express the invariant of a circle as the colimit in the category of von Neumann algebras of the value of the conformal net on all the subintervals of the circle (see [BDH17, Prop 1.25]).
(Footnote 3: ... we do not know that the composition of two defects between non-finite nets is again a defect; however the notion of dualizability is still well defined for an arbitrary not-necessarily-finite net (namely as the condition that the canonical evaluation and coevaluation defects both have ambidextrous adjoints with dualizable unit and counit sectors), and therefore it makes sense to claim and prove, as we do, that a dualizable net is finite.)
(Footnote 4: As the cobordism hypothesis applies most immediately to symmetric monoidal n-categories modeled as Γ-objects in complete n-fold Segal spaces [Lur09, CS15], this assumption can be made precise in the form of the following conjecture: there exists a Γ-object in complete 3-fold Segal spaces CN′ together with an equivalence of tricategories E : [CN] → [CN′]; here CN denotes the symmetric monoidal 3-category of finite conformal nets constructed as an internal dicategory in symmetric monoidal categories [BDH18], and the brackets [−] denote the tricategory associated to either the internal dicategory in symmetric monoidal categories or the Γ-object in complete 3-fold Segal spaces.)
For 2-dimensional manifolds, the conformal net field theory invariants are given, projectively (that is, up to tensoring by an invertible Hilbert space), by the functor constructed in [BDH17, Thm 2.18], from 2-manifolds to Hilbert spaces.In particular, the invariant of a closed 2-manifold is given, projectively, by the space of conformal blocks associated to that surface.
For any finite-index conformal net A, under the aforementioned assumption that the cobordism hypothesis applies, our results provide a complex-valued invariant Z A (M ) of any closed framed 3-manifold M .When the conformal net is N G,k , the one associated to a central extension of the loop group LG (and assuming this net is indeed of finite-index), the category Rep(N G,k ) of representations of the net is thought to be isomorphic to the category Rep(LG, k) of representations of the loop group LG at level k; see [Hen17] for a discussion of this comparison problem and [Gui18, Sec 5.1] for progress towards a solution.Provided the representation categories of the conformal net and of the loop group are indeed isomorphic as modular tensor categories, then we expect the 3-manifold invariant Z N G,k (M ) determined by the conformal-net-valued local field theory is the Reshetikhin-Turaev invariant of M associated to the modular tensor category Rep(LG, k) of representations of the associated loop group.
Defect algebras acting on annuli and discs
We will, later in Section 2, interpret the fusion of a defect and its adjoint as associating an algebra to an interval with not just a single transition point from white to black, but instead two: one from white to black, and then one back to white.To construct the unit and counit of the adjunction, we will need an action of this larger algebra on the vacuum sector of the original defect.We will construct such an action by first constructing an action on a Hilbert space associated to an annulus and then "plugging the hole" of the annulus with a vacuum sector.
Working up to those constructions, in this section we study the Hilbert space associated to a bicolored annulus; we prove a Peter-Weyl theorem decomposing the defect annular Hilbert space as a sum of tensor products of sectors and their duals, and we define the algebras associated to arbitrary bicolored 1-manifolds.
1.a.The Hilbert space for a bicolored annulus.Given a finite defect D between finite conformal nets, the bimodule ) op is always dualizable (see [BDH19,Prop. 3.18] and Footnote 2).Here S is a bicolored circle decomposed into intervals I 1 , . . ., I 4 , as in (0.1), and the vacuum sector H 0 (S, D) is described in [BDH19, Notation 1.14].Let −S, −I 1 , . . ., −I 4 be the same manifolds with the reverse orientations.The following result explicitly identifies the dual, generalizing the corresponding result for conformal nets [BDH15, Lemma 3.4]: Lemma 1.2.Under the canonical identifications (D(−I 1 ) ⊗ D(−I 3 )) op ∼ = D(I 1 ) ⊗ D(I 3 ) and (A(−I 2 ) ⊗ B(−I 4 )) op ∼ = A(I 2 ) ⊗ B(I 4 ), the dual of the bimodule (1.1) is given by Proof.We assume without loss of generality that S is the standard bicolored circle.Let us write and let j be the reflection that exchanges S 1 ⊤ and S 1 ⊥ .For any interval I, we abbreviate D(j) : D(I) → D(j(I)) op by j * and let A := D(S 1 ⊤ ).By definition, H 0 (S, D) = L 2 (A) with actions Here a op ∈ A op is the element a ∈ A viewed as an element of A op .By [BDH14, Cor 6.12], the dual of On the other hand, H 0 (−S, D) Using the canonical identification η → η op between L 2 (A op ) and L 2 (A) that exchanges the left A op -module structure with the right A-module structure and the right A op -module structure with the left A-module structure, this becomes Finally, the isomorphism intertwining (1.4) and (1.5) is given by the modular conjugation J : L 2 (A) → L 2 (A).
We now investigate what happens when we glue two vacuum sectors along a pair of intervals.Instead of viewing the vacuum sector H 0 (S, D) as being associated to a bicolored circle S as in [BDH19, Notation 1.14], we shall think of it as being associated to a bicolored disk: via the diffeomorphism, we can then associate a Hilbert space to the annulus (1.6) as follows: Consider now the slightly different situation where I 2 , I 4 , I 6 , I 8 are genuinely bicolored, I 1 , I 5 are white, and I 3 , I 7 are black.Once again, we glue D l to D r along two diffeomorphisms I 1 ↔ I 5 and and we associate a Hilbert space to the annulus: Lemma 1.8.Let A D B be a finite irreducible defect, and let S l , S r , I 1 , . . ., I 8 be either as in (1.6) or as in (1.7).Let also S b := I 2 ∪ I 8 and S m := I 4 ∪ I 6 .Then H 0 (S m , D) ⊗ H 0 (S b , D) is a direct summand of Since I 2 , I 4 , I 6 , I 8 cover S m ∪ S b and D is (and if needed A and B are) irreducible, the Hilbert space H m ⊗ H b is an irreducible A-C-bimodule.We need to show that Since D is a finite defect, the bimodule A H lB is dualizable.By Lemma 1.2, its dual is then Ȟl := H 0 (−S l ).By the fundamental property of duals (Frobenius reciprocity), we can therefore rewrite (1.9) as The above expression therefore reduces to hom B,C (H r , H r ), which is one dimensional by the irreducibility of the defect D.
1.b.A Peter-Weyl theorem for defects.We now prove that there are finitely many isomorphism classes of irreducible D-D-sectors (also referred to simply as 'Dsectors') for a finite defect D, and that every such irreducible sector is finite.This is the analog for sectors between defects of the corresponding fact for representations of conformal nets, and the proof follows the structure of the proof for nets [BDH15, Thm 3.14].
Let S be a bicolored circle.Recall that an S-sector of D is a Hilbert space equipped with actions of the algebras D(I) for all bicolored subintervals I of S, subject to the conditions (0.3).As in [BDH15, §1.B], given a D-sector K (on the standard bicolored circle) and a bicolored circle S, we write K(S) for the S-sector ϕ * K, where ϕ : S → S 1 is any bicolored diffeomorphism from S to the standard circle.This sector is well defined up to non-canonical isomorphism, by the same argument as in the proof of [BDH15, Prop.1.14].
Theorem 1.10.Let A D B be a finite irreducible defect between finite conformal nets.Then all D-sectors are direct sums of irreducible ones, and all irreducible D-sectors are finite.Moreover, there are only finitely many isomorphism classes of irreducible D-sectors.
Proof.Let S l , S r , S b , S m and I 1 , I 2 , . . ., I 8 be as follows: (1.11) (1.12) Here K 1 , . . ., K n are D-sectors, which we transfer to S b by means of an arbitrary diffeomorphism S 1 ∼ = S b .(As in the situation without defects [BDH15, Sect 3.2], a sector is called factorial if its endomorphism algebra is a factor.)Given an arbitrary factorial sector K, we now show that there exists a K i in the above list to which K is stably isomorphic, i.e., such that K ⊗ ℓ 2 ∼ = K i ⊗ ℓ 2 .Let us introduce the bicolored circles S 2 := I 2 ∪ ∂I2 −I 2 and S 4 := I 4 ∪ ∂I4 −I 4 .We have isomorphisms K(S 2 ) ⊠ A l H l ∼ = K(S l ) ∼ = H l ⊠ Am K(S 4 ) of S l -sectors (constructed as in [BDH19, Lemma 1.15]).Fusing with H r over B, we get an isomorphism By Lemma 1.8, it follows that Since D is irreducible, A m is a factor, so K(S 4 ) and L 2 A m are stably isomorphic as A m -modules, and we get the following (non-canonical) inclusion of S b -sectors of where the first equality is induced by an arbitrary Hibert space isomorphism ℓ 2 ∼ = H m ⊗ℓ 2 .The sector K(S b ) is factorial.It therefore maps to a single summand K i ⊗ ℓ 2 of H ann ⊗ ℓ 2 .It follows that K and K i are stably isomorphic.In particular, this shows that there are at most finitely many stable isomorphism classes of factorial D-sectors on S b .By Lemma A.1, any D-sector can be disintegrated into irreducible ones.As a consequence, if there existed a factorial sector of type II or III, then (as in [KLM01, Cor.58]) there would be uncountably many non-isomorphic irreducible sectors.This is impossible, and so all factorial sectors must be of type I.
We now show all irreducible D-sectors are finite.Let us go back to H ann and analyze it as a {D(I)} I∈INT S b , and the multiplicity space M i carries a residual action of {D(I)} I∈INT Sm •• .The decomposition (1.12) then becomes Since H ann is a dualizable A-C-bimodule, the bimodules A l L iCr must also be dualizable.This finishes the argument, as any irreducible D-sector on S b is isomorphic to one of the L i .
Given a finite irreducible defect D, let Δ_D be the finite set of isomorphism classes of irreducible D-sectors. For every λ ∈ Δ_D, we pick a representative H_λ of the isomorphism class, which we draw as follows: [picture omitted]. The set Δ_D has an involution λ → λ̄ given by sending a Hilbert space H_λ to its complex conjugate H̄_λ ≅ H_λ̄, with actions of D(I) given by a ξ := A(j)(a*) ξ, where j : S¹ → S¹ is the reflection in the horizontal axis (which is color preserving). Note that the isomorphism H̄_λ ≅ H_λ̄ is by no means canonical; see the discussion in [BK01, Rem. 2.4.2].
The following Peter-Weyl theorem for defects is analogous to a corresponding annular-sector decomposition theorem for conformal nets by Kawahigashi-Longo-Müger [KLM01, Thm 9], cf also [BDH15, Thm 3.23]: Theorem 1.13.Let D be a finite irreducible defect, let S l , S r , S m , S b be as in (1.11), and let We then have a non-canonical isomorphism of (S m ⊔ S b )-sectors.We draw this isomorphism as be as in the proof of Theorem 1.10, and let Ȟl := H 0 (−S l ) be the dual bimodule to A H l B (see Lemma 1.2).
The Hilbert space H_ann = H_l ⊠_B H_r is a finite A-C-bimodule and therefore splits into finitely many irreducible summands. By the argument in the proof of Theorem 1.10, each irreducible summand is the tensor product of an irreducible D-sector on S_m and an irreducible D-sector on S_b. So we can write H_ann as a direct sum [display omitted]. Given λ, μ ∈ Δ_D, we now compute N_λμ. Let K be the vertical fusion of H_λ and H_μ. By slight abuse of notation, we abbreviate H_λ := H_λ(S_m), H_μ := H_μ(S_b), and K := K(S_r). We then have [display omitted]. It follows that N_λμ = δ_λ̄μ.
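The multiplicity computation just carried out can be summarized in a single display; this is our own summary of the argument, using Frobenius reciprocity for the dualizable bimodule H_l as above.

    H_{\mathrm{ann}} \;\cong\; \bigoplus_{\lambda,\mu\in\Delta_D} N_{\lambda\mu}\; H_\lambda(S_m)\otimes H_\mu(S_b),
    \qquad
    N_{\lambda\mu} \;=\; \dim \hom_{A,C}\!\big(H_\lambda(S_m)\otimes H_\mu(S_b),\; H_{\mathrm{ann}}\big) \;=\; \delta_{\bar\lambda\mu},

which is exactly the statement of Theorem 1.13.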
Remark 1.15.The isomorphism (1.14) is non-canonical.Actually, it doesn't even make sense to ask whether or not it is canonical since the right hand side of the equation is only well defined up to non-canonical isomorphism.
Corollary 1.16.Let S l , S r , S b , S m and I 1 , I 2 , . . ., I 8 be as in (1.11).Then the algebra generated by D(I 4 ) and D(I 6 ) on H ann is canonically isomorphic to λ∈∆D B(H λ (S m , D)).Moreover, there is a non-canonical isomorphism (1.17) which we represent as follows: Extending defects to bicolored 1-manifolds.In [BDH17, Thm 1.3], we extended the domain of definition of a conformal net from the category of intervals to the category of all compact 1-manifolds (where the morphisms are embeddings that are either orientation preserving or orientation reversing).In [BDH19, Eq 1.34], we extended a defect to take values on disjoint unions of intervals.We now further extend a defect to all compact bicolored 1-manifolds, with an arbitrary number of color-change points.This extension will be useful when we construct the unit and counit sectors for adjunctions of defects, because the composite of a defect and its adjoint can be naturally reexpressed as the value of the defect on an interval with two color-change points.
Definition 1.18.A bicolored 1-manifold is a compact 1-manifold M (always oriented), possibly with boundary, equipped with two compact submanifolds Given a bicolored 1-manifold M , we pick a decomposition M = M 0 ∪ M 1 such that P := M 0 ∩ M 1 has finitely many points, none of which is a color-change point.Every connected component of M 0 and M 1 should be an interval, and should contain at most one color-change point.Pick local coordinates around P , and define where Q := P × [0, 1] inherits its bicoloring from P .The manifolds N i and Q are oriented so as to make the inclusions M i → N i and Q → N 1 orientation preserving; the inclusion Q → N 0 is then orientation reversing.The local coordinates around P induce a smooth structure on N i .As in [BDH19, Eq 1.34], we define the defect on a disjoint union of bicolored intervals by D(I 1 ∪ . . .∪ I n ) := D(I 1 ) ⊗ . . .⊗D(I n ).We then define the defect on any bicolored 1-manifold as follows.In [BDH17, Cor.1.13], we showed that the value of a conformal net on a 1manifold was independent of the choice of decomposition used in the definition; the same argument generalizes to the situation here, showing that the algebra (1.20) is independent (up to canonical isomorphism) of the choice of decomposition Here is an example of the above definition: In Section 2.c, this extension of a defect to take values on all bicolored 1-manifolds will allow a computationally convenient expression for the composite of a defect and its dual.By (1.17), the latter is isomorphic to H 0 (D).When D is not irreducible, write it as a sum D 1 ⊕ . . .⊕ D n of irreducible defects.We then have H 0 (D) = i H 0 (D i ), and The subalgebra in which case the first part of the proof applies and it extends to a normal action of D i (I) on H 0 (D i ).Thus the action of ) extends to a normal action of D(I).
A characterization of dualizable conformal nets
2.a.Involutions on nets, defects, sectors, and intertwiners.The 3-category CN is equipped with four antilinear involutions * , ¯, † , op , where the ith involution is contravariant at the level of (4 − i)-morphisms, and covariant at all other levels.The second and third involutions will provide adjoints for finite sectors and defects respectively, and the fourth involution will provide the dual of a conformal net-that the involutions do indeed give adjoints, respectively duals, is proven in Section 2.c.
The first involution * acts trivially on the 0, 1, and 2-morphisms, and sends a 3-morphism f : H → K to its adjoint f * : K → H (in the sense of maps between Hilbert spaces).
The second one ¯acts trivially on 0 and on 1-morphisms.It sends a D-E-sector (H, {ρ I }), where the homomorphisms ρ I are given by where ρI (a) := ρ j(I) (j * (a * )), and j : z → z is the reflection in the horizontal axis.
Here, j * stands for either A(j), E(j), B(j), or D(j).The involution ¯sends a 3-morphism f : H → K to its complex conjugate f : H → K.The third involution † acts trivially on objects.Given a bicolored interval I, let I rev denote the same interval with reversed bicoloring, that is, (I rev ) • = I • and (I rev ) • = I • .The orientation of I rev is the same as that of I, but the local coordinate is negated.The reversed defect of A D B is the defect B D † A defined by D † (I) = D(I rev ).For a D-E-sector H, the corresponding D † -E † -sector H † is the complex conjugate of H, with structure maps ,⊥ given by ρ † I (a) = ρ r(I) (r * (a * )), where r : z → −z is now the vertical reflection.3-morphisms are sent to their complex conjugates.
The fourth involution op sends A ∈ CN to the a conformal net A op (I) := A(I) op .Similarly, it sends a morphism A D B to the A op -B op -defect D op (I) := D(I) op .A D-E-sector (H, {ρ I }) is sent to the complex conjugate Hilbert space, with actions ρ op I (a op ) := ρ I (a * ).Finally, 3-morphisms go to their complex conjugates.Remark 2.2.The existence of these four involutions ensures that any duality or adjunction in CN is automatically ambidextrous, that is, it is both a left and a right duality or adjunction.(When we say 'X has ambidextrous adjoint (or dual) Y ', we mean that Y admits both the structure of a left and the structure of a right adjoint (or dual) to X.) 2.b.The snake interchange isomorphism for defects.To establish, in the next section, that the reversed defect B D † A is an (ambidextrous) adjoint of the defect A D B , we will need the following variant of the sector interchange isomorphism [BDH19, Eq 6.25].
To simplify the maneuvers involved in this interchange isomorphism, here and for the remainder of the paper, we use a model for the vertical composition of sectors that fuses sectors along one-quarter of their boundary: Lemma 2.4.There is a non-canonical unitary isomorphism equivariant with respect to A(I 4 ), D(I 5 ), E(I 6 ), C(I 7 ), A(I 8 ), G(I 9 ), and C(I 10 ).
Proof.For fixed A, B, C, D, E, G, K, the desired isomorphism (2.5) can be thought of as a natural transformation between functors of the variable H.The fact that (2.5) commutes with the action of F (I 6 ∪ I 7 ) is then encoded in the naturality of (2.6).Since H 0 (E) is a faithful E(I 2 ∪ I 3 )-module, it is enough, by [BDH19, Lemma B.24], to construct the isomorphism (2.5) for H = H 0 (E) and check that it commutes with the action of and corresponding (non-canonical) unitaries u : H 0 (D) We may assume that ϕ, ψ, and χ are chosen so that α • β = γ.The isomorphism (2.5) for H = H 0 (E) can then be written explicitly: where Ω denotes the "1 ⊠ 1-isomorphism" constructed in [BDH19, Thm 6.2].Generalizing (2.3), we now consider this situation: ⇓K ⇓H which corresponds (using Appendix B) to the the following configuration of sectors: .
Proof.Fix A, B, C, D, D, E, F .We shall construct a natural transformation where The isomorphism (2.10) is the value of that natural transformation on the object (H, K).
By [BDH19, Lemma B.24], it is enough to construct the above natural transformation for the pair In that case, it is given by H 0 (F ) .
2.c.Finite nets are dualizable.We investigate the relationship of finiteness and dualizability for, in turn, sectors, defects, and nets.
Dualizability for sectors.Recall that all defects are assumed to be semisimple.
Proposition 2.11. A sector D H E has an adjoint (necessarily ambidextrous) if and only if it is finite. In this case, the adjoint is canonically isomorphic to E H̄ D.
Proof.If the sector D H E has an adjoint E K D , that adjoint sector provides the (ambidextrous Conversely, if H is a dualizable D(S 1 ⊤ )-E(S 1 ⊥ ) op -bimodule then, by [BDH14, Cor.6.12] and the fact that D and E are semisimple, its dual is canonically isomorphic to H, with the E(S 1 ⊥ ) op -D(S 1 ⊤ )-bimodule structure given by a ξb = b * ξa * .Identify the left action of E(S 1 ⊥ ) op with a left action of E(S 1 ⊤ ), and the right action of D(S 1 ⊤ ) with a left action of D(S 1 ⊥ ) via the isomorphisms j * : and j * : D(S 1 ⊤ ) op → D(S 1 ⊥ ); then extend these actions to the structure of an E-D-sector on H according to (2.1).The unit and counit bimodule intertwiners for the bimodule duality serve, in fact, as sector intertwiners, providing E HD with the structure of an adjoint sector to D H E .By Remark 2.2, we have the following: Corollary 2.12.A sector is dualizable if and only if it is finite.Dualizability for defects.Given a bicolored interval I, we define the following two bicolored manifolds I and I .The underlying manifold of I and of I are both given by I • ∪ [0, 1] ∪ I • , and their bicolorings are Here is an example illustrating the above concepts: Proof.By Remark 2.2, it suffices to consider just one of the two adjunctions.
Let S 1 be the bicolored manifold obtained by taking the standard circle S 1 , cutting it open at i ∈ S 1 , and then glueing in a copy of [0, 1].The black part of S 1 is the interval [0, 1] that is added on the top, and all the rest is white.Similarly, let S 1 be the bicolored manifold that is obtained by inserting a white interval at the location of −i ∈ S 1 , and coloring all the rest black.
By (2.13), a D⊛ B D † -1 A -sector is the same thing as a {D(I)} I∈INT S 1 -representation, where INT S 1 denotes the poset of subintervals I ⊂ S 1 , ∂I ∩ S 1 • = ∅, that are allowed to contain S 1 • in their interior, but that are not allowed to contain S 1 • .Pick a color preserving diffeomorphism ϕ from S 1 to the standard bicolored circle.By Proposition 1.21, we can use ϕ to induce the structure of a {D(I)} I∈INT S 1 -representation on H 0 (D).That is the counit sector r of our adjunction.Similarly, restricting H 0 (D) along a color preserving diffeomorphism from S 1 to the standard bicolored circle provides a 1 B -D † ⊛ A D -sector s, which is the unit of our adjunction.The sectors r and s are finite by the finiteness of any defect vacuum sector with respect to these boundary decompositions [BDH19, Lemma 3.17].5 We now have to show that r and s satisfy the duality equations We only check the first equation, the second one being completely analogous.Let I 1 , . . ., I 13 be as in (2.8).By Lemma 2.9 and Appendix B, the left hand side Because the fusion of two vacuum sectors for a defect is again a vacuum sector for that same defect [BDH19, Lemma 1.15], the middle term r ⊠ D † (I3) s is the vacuum sector of D associated to I 1 ∪I 10 ∪I 11 ∪ Ī4 ∪ Ī5 ∪I 9 ∪I 8 ∪I 2 .By two more applications of that same lemma, we identify (2.15) with the identity sector on D.
Recall that all conformal nets and defects are assumed to be semisimple.Combining the above proposition with Corollary 2.12, we have the following.
Dualizability for conformal nets.In [BDH18], we constructed a 3-category whose objects are finite conformal nets, whose morphisms are defects, whose 2-morphisms are sectors, and whose 3-morphisms are intertwiners. 6If A and B are conformal nets that are not-necessarily finite, then, even though we do not know that they live in a 3-category, we can still make sense of A and B being dual: specifically, B is the right dual of A if there exist unit and counit defects A⊗B r C and C s B⊗A such that (1 A ⊗ s) ⊛ A⊗B⊗A (r ⊗ 1 A ) and (s ⊗ 1 B ) ⊛ B⊗A⊗B (1 B ⊗ r) are defects, and are equivalent (in the 2-category of A-A-defects or B-B-defects) to the identity defects on A and B, respectively.
Theorem 2.17. An arbitrary conformal net A has ambidextrous dual A^op. If A is finite, then the unit and counit defects of both the left and right dualities are themselves finite.
Proof.By Remark 2.2, it is enough to discuss just one of the two dualities.We show that A ⊣ A op .Given a bicolored interval, let I •• stands for I • ∩ I • .It consists of one point if I is genuinely bicolored, and it is empty otherwise.The counit defect A⊗A op r C and the unit defect C s A op ⊗A are defined by (2.18) r : 6 Insisting that the conformal nets be finite allowed us to prove that the composition of two defects is again a defect; we do not know if the composition of defects between arbitrary conformal nets is a defect, in particular whether the composite satisfies the vacuum sector axiom [BDH19, Def.1.7, axiom (iv)].
where the bar stands for orientation reversal.In pictures, this is: We now verify the two duality equations for r and s.We need to show that the fusions (1 A ⊗ s) ⊛ A⊗A op ⊗A (r ⊗ 1 A ) and (s ⊗ 1 A op ) ⊛ A op ⊗A⊗A op (1 A op ⊗ r) are indeed defects, and are equivalent to identity defects on A and A op , respectively.Let I be genuinely bicolored interval.By [BDH17, Lem.1.12], the definition of the above fusions reduces to or perhaps more clearly → A and → A .
These are isomorphic to the weak units on A and A op , and therefore are defects; they are equivalent to identity defects by [BDH19, Remark 1.40 & Example 3.5].
Assuming A is finite, we now proceed to show that the unit and counit defects are finite.Let I 1 , . . ., I 4 be as in (0.1); the intervals I 1 and I 3 are genuinely bicolored, I 2 is white, and I 4 is black.The actions of r(I 1 ), r(I 2 ), r(I 3 ), r(I 4 ) on the vacuum sector H 0 (r) are conjugate to the actions of A(I 1 ), A(I 2 ) ⊗ A(I 4 ), A(I 3 ), and C on H 0 (A).The condition of Definition 0.2 then holds by the split property of A.
By the same argument, one also shows that s is a finite defect.
From this theorem and Corollary 2.16, we have the following: Corollary 2.19.A finite conformal net is fully dualizable.
In any n-category, a composition of fully dualizable 1-morphisms is again fully dualizable; similarly a composition (either vertical or horizontal) of fully dualizable 2-morphisms is again fully dualizable.Thus, by Corollary 2.12, the collection of finite sectors is closed under composition, and by Corollary 2.16 and Proposition 2.22 below, the collection of finite defects is closed under composition.By direct inspection, the collection of finite conformal nets is closed under tensor product.Altogether we see that the collection of finite conformal nets, finite defects, finite sectors, and intertwiners forms a sub-symmetric-monoidal-3-category of the symmetric-monoidal-3-category of conformal nets.
Applying the cobordism hypothesis (as before under the assumption that it applies to the symmetric monoidal 3-category of conformal nets-see Footnote 4), we obtain the corresponding topological field theories: Corollary 2.21.Associated to any finite conformal net A, there is a 3-dimensional local framed topological field theory with target the 3-category of conformal nets, whose value on the positively framed point is the conformal net A.
2.d. Dualizable nets are finite. In the preceding section we saw that the subcategory of finite conformal nets, finite defects, finite sectors, and intertwiners has all duals. In this section, we prove that this subcategory is in fact the maximal subcategory of the 3-category of conformal nets that has all duals. We already saw in Corollary 2.12 that a dualizable sector is necessarily finite. We now show that a fully dualizable defect must be finite: Proposition 2.22. Let A and B be finite conformal nets, and let A D B be a defect. If D has an adjoint, then D is finite.
Proof.Let D ∨ be the dual of D, and let r and s be the counit and unit sectors, so that In other words, with I 1 , . . ., I 13 arranged as before We check that D is finite by showing that the action on (2.23) of the algebra D(I 7 ) ⊗ alg D(I 12 ) extends to D(I 7 ) ⊗ D(I 12 ).The Hilbert space r is invertible as a D(I 1 ) ∨ D ∨ (I 3 ) op ∨ A(I 4 ) op -(D(I 1 ) ∨ D ∨ (I 3 ) op ∨A(I 4 ) op ) ′ -bimodule.Similarly, the Hilbert space s is an invertible B(I 2 )∨ D ∨ (I 3 ) ∨ D(I 5 ) op -(B(I 2 ) ∨ D ∨ (I 3 ) ∨ D(I 5 ) op ) ′ -bimodule.Fusing (2.23) with the inverse bimodules r and s, and using the (non-canonical) isomorphisms we get the Hilbert space The latter is isomorphic to by the interchange isomorphism [BDH19, Sec 6.D].To be precise, letting J 1 , J 2 . . ., J 10 be as in the following figure , the Hilbert space (2.24) is given by H 0 (D) ⊠ B(J1) H 0 (D ∨ ) ⊠ A(J2) H 0 (D).The intervals J 6 and J 10 correspond to I 7 and I 12 , respectively.Note that H 0 (D ∨ ) is split as a B(J 1 )-A(J 2 )-bimodule.Since the fusion of a split bimodule with any bimodule is always split, it follows that (2.24) is split as a D(J 6 )∨A(J 7 )∨D(J 8 )-(D(J 10 ) ∨ B(J 3 ) ∨ D(J 4 )) op -bimodule.In particular, it is split as a D(J 6 )-D(J 10 ) opbimodule.In other words, the completion of D(J 6 ) ⊗ alg D(J 10 ) is isomorphic to the spatial tensor product D(J 6 ) ⊗ D(J 10 ).
Finally, we show that fully dualizable conformal nets must be finite.Even though we do not have at hand a 3-category of all (not-necessarily-finite) conformal nets, we do have enough of the structure of that hypothetical 3-category to make sense of the notion of an arbitrary conformal net being fully dualizable, and therefore to make sense of the statement that a fully dualizable not-necessarily-finite conformal net must in fact be finite.
Recall from Theorem 2.17 that any (not-necessarily-finite) conformal net A has an ambidextrous dual A op with evaluation defect A⊗A op r C and coevaluation defect C s A op ⊗A .We call such a conformal net dualizable if these evaluation and coevaluation defects r and s both have ambidextrous adjoints with dualizable unit and counit sectors.This definition (specifically the notion of an adjunction for the evaluation and coevaluation defects) is well posed because, for any not-necessarily-finite conformal net B and any defects Theorem 2.25.Let A be a not-necessarily-finite conformal net, and let r and s be the evaluation and coevaluation defects of the duality of A and A op , given by r : If the defect r has an adjoint, and its counit sector r⊛ C r ∨ R 1 A⊗A op is dualizable, then the conformal net A is finite.
Note that the proof of this proposition requires particular care: A is not assumed to have finite index, and so most of our previous results cannot be used here.
Proof. Recall that
By assumption, r has an adjoint.Let r ∨ be its adjoint C -(A ⊗ A op )-defect.Let also r⊛ C r ∨ R 1 A⊗A op and 1 C S r ∨ ⊛ A⊗A op r be the corresponding counit and unit sectors.We now describe the algebras that act on the Hilbert spaces R and S. Take the "standard circle" ∂[0, 1] 2 and cut it open at the point ( 1 2 , 1).Call the two resulting boundary points p and q.The resulting manifold, call it M , looks roughly like this: p q .Now consider its doubling N := M ∪ {p,q} M : (2.26) , and let κ : N → N be the orientation reversing involution that exchanges M and M and fixes p and q.Given a κ-invariant neighborhood J of q, let J κ := [0, 1] ∪ J/κ be the bicolored interval with bicoloring given by (J κ ) By definition of (r ⊛ C r ∨ )-(1 A⊗A op )-sector, the Hilbert space R has actions of A(J) for every subinterval J ⊂ N that avoids q, and actions of r ∨ (J κ ) for every κinvariant interval J that contains q.The algebras acting on S are somewhat easier to describe.Consider the double D := [0, 1] ∪ {0,1} [0, 1] of the standard interval [0, 1], and let κ : D → D be the involution that exchanges the two copies of [0, 1].The Hilbert space S has an action of A(J) for every subinterval J ⊂ D that avoids the point 0, and an action of r ∨ (J κ ) for every κ-invariant interval that contains 0.
We find it convenient to think of R as being associated to a saddle, and of S as being associated to a cap: Let us name I 1 , . . ., I 6 the intervals that appear in (2.29) I5 .
Let κ be the reflection in the horizontal axis, and let K := (I 2 ) κ = [0, 1] ∪ I 2 /κ be as in (2.27), bicolored by K • = [0, 1] and K • = I 2 /κ.We also abbreviate H 0 (I 3 ∪ I 4 ∪ I 5 ∪ I 6 , A) by H 0 (A).The left hand side of (2.29) stands for the fusion of S with R ⊠ A(I6∪I4) H 0 (A) along the algebra where we identify (A ⊗ A op )(I 6 ) with A(I 6 ∪ I 4 ) using the reflection κ : Ī6 In other words, it is finite as an where we again draw our intervals as in (2.26).We know from our previous discussion that R is invertible as an r ∨ -A -bimodule.
Let Q be the inverse bimodule.Twisting it by a diffeomorphism ∼ = , we may treat Q as an By definition, it then satisfies We then also have (applying [BDH17, Lemma A.4]) (2.30) it follows from (2.30) and the finiteness of R that H 0 , A is finite as an The latter is the definition of what it means for A to be finite.
The above theorem implies that in a hypothetical 3-category of strongly additive not-necessarily-finite-index conformal nets, a fully dualizable conformal net is necessarily finite-index.We expect that even more is true, namely, that in a hypothetical 3-category of not-necessarily-finite-index and not-necessarily-strongly-additive conformal nets, a fully dualizable conformal net is finite-index (and hence strongly additive, by [LX04]).
Appendix A. Disintegrating sectors between finite defects
Sectors between conformal nets disintegrate into irreducibles [KLM01]; in this section we generalize that result to the case of sectors between defects, provided the defects are finite.Proof.Pick a countable collection 7 of pairs of bicolored subintervals {I − i ⊂ I + i } i∈I of the standard bicolored circle, with the closure of I − i contained in the interior of I + i , satisfying the following conditions: -I − i is genuinely bicolored if and only if I + i is genuinely bicolored; -for all p, q ∈ S 1 , either a. there exists an i ∈ I such that p, q ∈ I − i , or b. there exist i, j ∈ I such that p ∈ I − i , q ∈ I − j , and , or E(I ± i ) depending on whether I ± i is white, black, contains the top defect point, or contains the bottom defect point, respectively.Because D and E are finite, there exists, for each i ∈ I, a type I factor N i such that denote the ideal of compact operators in N i .For each i, j ∈ I such that I + i ∩I + j = ∅, let R ij ⊂ K i * K j be the kernel of the projection K i * K j → K i ⊗K j from the free product C * -algebra to the tensor product C * -algebra.For each i, j ∈ I such that I + i ⊂ I − j , let S ij ⊂ K i * K j be the kernel of the map K i * K j → K i ∨ Nj K j , where K i ∨ Nj K j is the subalgebra of N j generated by K i and K j .Now define A := ( * i K i )/I 7 In fact this collection can be chosen to be finite.
where I is the norm-closed ideal generated by R ij for i, j ∈ I such that I + i ∩I + j = ∅, and S ij for i, j ∈ I such that I + i ⊂ I − j .By Lemma A.2, the category of D-E-sectors is equivalent to the category of representations of A whose restriction to each K i is nondegenerate.
Because A is a separable C * -algebra, the category Rep(A) admits direct integral decompositions.We need to show that given a representation H of A whose restriction to each K i is nondegenerate, and a direct integral decomposition (H, ρ) ∼ = x∈X (H x , ρ x )dx, almost all of the integrands (H x , ρ x ) again have the property that their restriction to each K i is nondegenerate.Pick an increasing sequence of projections p i n ∈ K i , n ∈ N, that forms an approximate unit.By Lemma A.3, we have that 1 = sup ρ i (p i n ) = sup ⊕ ρ x i (p i n ) = ⊕ sup p x i (p i n ).This implies that for almost all x, we have sup ρ x i (p i n ) = 1.
Lemma A.2. The category of representations of A whose restriction to each K_i is nondegenerate is equivalent to the category of D-E-sectors.
Proof.By construction, every D-E-sector yields an appropriate representation of A. Now suppose that we have a representation of A on a Hilbert space H whose restriction to each K i is nondegenerate.By the classification of the representations of compact operators, the action of K i extends uniquely to a normal action ρ i : N i → B(H).For every i, j ∈ I such that I + i ∩ I + j = ∅, the action of K i * K j descends to an action of K i ⊗ K j ; by the ultraweak density of K i in N i , the actions of N i and N j commute.Now, for every i, j ∈ I such that I + i ⊂ I − j , the action of K i * K j descends to an action of K i ∨ Nj K j .By [KLM01, Cor 53], that action of K i ∨ Nj K j extends uniquely to a normal action ρj : N j → B(H), which agrees with ρ j by the ultraweak density of K j inside N j .We therefore have a diagram where all triangles are known to commute except possibly the triangle with edge N i → N j .The missing triangle commutes because K i is ultraweakly dense in N i .Therefore, by [BDH19, Lem 2.5], the actions ρ i | A − i assemble into a D-E-sector structure on H. Lemma A.3.Let H x be a measurable family of Hilbert spaces over a probability space X.For each n ∈ N, let p n,x ∈ B(H x ) be a measurable family of projections indexed by the points of X. Assume furthermore that for every x ∈ X, the sequence {p n,x } n∈N is increasing.Then Lemma B.3.Let D H E and E K F be sectors, and let ϕ be a diffeomorphism from the standard circle to the larger circle, as above.Then the vertical fusion H ⊠ E(S 1 ⊤ ) K from (B.1) is (non-canonically) isomorphic, as a D-F -sector, to the alternative fusion ϕ * (H ⊠ E(I) K) from (B.2).
Proof.Let ψ 1 : S 1 → S 1 be a diffeomorphism which maps the lower semi-circle S 1 ⊥ to the lower quarter-circle (drawn here as an edge of a square) and satisfies ϕ| S 1 ⊤ = ψ 1 | S 1 ⊤ , let ψ 2 : S 1 → S 1 be a diffeomorphism which maps the upper semi-circle S 1 ⊥ to the upper quarter-circle and satisfies ϕ| S 1 ⊥ = ψ 2 | S 1 ⊥ , and let u ψ1 and u ψ2 be unitaries implementing these diffeomorphisms (these exist by [BDH19, Prop.1.10]).We assume without loss of generality that ψ 2 = j • ψ 1 • j, where j is the reflection along the horizontal axis of symmetry.Then u ψ1 ⊠ u ψ2 maps H ⊠ E(S 1 ⊤ ) K to H ⊠ E(I) K, and is an isomorphism of D-F -sectors H ⊠ E(S 1 ⊤ ) K ∼ = ϕ * (H ⊠ E(I) K).
a change of notation, not of content.(Note that, as in[BDH19], the Hilbert space H 0 (S, D) is only well defined up to non-canonical isomorphism.)Giventwo genuinely bicolored disks D l , D r , we investigate two ways of gluing them together into annulus.Decompose each of their boundaries into four intervals S l := ∂D l = I 1 ∪ . . .∪ I 4 and S r := ∂D r = I 5 ∪ . . .∪ I 8 , where I 1 , I 3 , I 5 , I 7 are genuinely bicolored, I 4 , I 6 are white, and I 2 , I 8 are black.If we glue D l to D r along diffeomorphisms I 1 ↔ I 5 and I 3 ↔ I 7 , we get the following bicolored annulus: If D is a finite defect, then the action of D(I 1 )⊗ alg D(I 3 ) on H 0 (S l , D) extends to the spatial tensor product D(I 1 ) ⊗ D(I 3 ).Similarly, the action of D(I 5 ) ⊗ alg D(I 7 ) on H 0 (S r , D) extends to D(I 5 ) ⊗ D(I 7 ).Identifying D(I 5 ) ⊗D(I 7 ) with D(I 1 ) ⊗D(I 3 ) op in a way compatible with the actions of D(J) for all J ⊂ S b and J ⊂ S m .Moreover, H 0 (S m , D) ⊗ H 0 (S b , D) appears with multiplicity 1 inside H ann .(In the case of situation (1.6), by definition H 0 (S m , D) = H 0 (S m , A) and H 0 (S b , D) = H 0 (S b , B); in this case, we also require that A and B be irreducible.)Proof.Let A := D(I 2 ) ⊗ D(I 4 ), B := (D(I 1 ) ⊗ D(I 3 )) op ∼ = D(I 5 ) ⊗ D(I 7 ), and C := (D(I 6 ) ⊗ D(I 8 )) op , and let us abbreviate S band let H l = H 0 (S l , D), H r = H 0 (S r , D), H b = H 0 (S b , D), H m = H 0 (S m , D), andH ann := H l ⊠ A(I5) H r ⊠ B(I3) .Let also A := D(I 2 ∪ I 4 ), B := (A(I 1 ) ⊗ B(I 3 )) op ∼ = A(I 5 ) ⊗ B(I 7 ), C := D(I 6 ∪ I 8 ) op , A l := D(I 2 ), A m := D(I 4 ) op , C m := D(I 6 ) op , C r := D(I 8 ).Since A H lB and B H rC are dualizable bimodules, H ann = H l ⊠ B H r is dualizable as an A-C-bimodule.It therefore splits into finitely many irreducible summands [BDH14, Lemma 4.10].Let us now consider H ann with its actions of D(I) for I ∈ INT S b •• .The von Neumann algebra generated by those algebras on H ann has a finite dimensional center, since otherwise would contradict the fact that A H ann C splits into finitely many irreducible summands.We can thus write H ann as a direct sum of finitely many factorial S b -sectors of D:
-
{D(I)} I∈INT Sm •• -representation.Since each summand K i (S b ) in the decomposition (1.12) is a type I factorial D-sector, we can write it as L i ⊗ M i , where L i is an irreducible representation of {D(I)} I∈INT S b •• Definition 1.19.Given a defect D and a bicolored 1-manifold M , we define the value of D on M to be (1.20)D(M ) := D(N 0 ) ⊛ D(Q) D(N 1 ).(See [BDH19, Sec 1.E & App B.IV] for discussion and the definition of the relative fusion product ⊛ of von Neumann algebras.)
Proposition 1. 21 .
Let A D B be a finite defect.Let S 1 be the standard bicolored circle, let I ⊂ S 1 be the following bicolored manifoldI S 1and let D(I) be as in 1.19.Then the natural action of D(I ∩ S 1 ⊤ ) ⊗ alg D(I ∩ S 1 ⊥ ) on H 0 (D) extends to a normal (that is, ultraweakly continuous) action of D(I).Proof.We first address the case when D is irreducible.By definition, the algebra D(I) acts (normally) on Fusing in , we can use the fact that a vacuum sector of a conformal net fuses with a vacuum sector of a defect to a vacuum sector of the defect [BDH19, Lemma 1.15] and the fact that cyclic fusion is cyclically invariant [BDH17, App.A] to see that D(I) also acts on H ann := .By Corollary 1.16, the algebra generated by D and D in B(H ann ) admits a natural right action on .Since the action of D(I) on H ann commutes with that of D ∨ D , the algebra D(I) also acts on ⊠ D ∨D op
This is by contrast with the model we used previously, in[BDH19], which involved fusing along half of the boundary of each sector.The equivalence between these two fusions is discussed in Appendix B. Let A, B, C be conformal nets, let A D B , B E C , B F C , A G C be defects, let H be an F -E-sector, and let K be a D⊛ B E -G -sector.We are interested in two ways of evaluating Let us name and orient the relevant intervals I 1 , I 2 , . .., I 10 as indicated here: All of them are copies of the standard interval [0, 1].Let also S l := Ī1 ∪ Ī2 ∪ I 5 ∪ I 4 , S r := I 2 ∪ I 3 ∪ I 7 ∪ I 6 , S b := I 8 ∪ I 9 ∪ I 10 ∪ Ī3 ∪ I 1 , S lr := Ī1 ∪ I 3 ∪ I 7 ∪ I 6 ∪ I 5 ∪ I 4 , S lb := I 8 ∪ I 9 ∪ I 10 ∪ Ī3 ∪ Ī2 ∪ I 5 ∪ I 4 , and S lrb := I 8 ∪ I 9 ∪ I 10 ∪ I 7 ∪ I 6 ∪ I 5 ∪ I 4 , where we have used bars to indicate reverse orientation.
Let D be an A-B-defect.Definition 1.19 is made so as to provide an easy description of D ⊛ D † and D † ⊛ D. They are given by (2.13) D ⊛ B D † (I) = D(I ) and D † ⊛ A D (I) = D(I ), essentially by definition.Proposition 2.14.Let A and B be finite conformal nets.Every finite defect A D B has ambidextrous adjoint B D † A , and the unit and counit sectors of both the left and right adjunctions are finite.
B D C and C E B , the fusion products D ⊛ C E and E ⊛ B D are indeed defects (the first one by [BDH19, Thm.1.44]; the second one because a C-C-defect is just a von Neumann algebra [BDH19, Prop.1.22]).
hand side stands for the fusion of the Hilbert spaces S ⊛ (A⊗A op )( ) A associated to the manifold r ∨ , and stands for 1 r .The upper left in (2.28) does not change anything, and so it can be safely ignored [BDH17, Lemma A.4]. Equation (2.28) then becomes R
Lemma A. 1 .
Let A and B be conformal nets.Let A D B and A E B be irreducible finite defects.Then any D-E-sector disintegrates into a direct integral of irreducible D-E-sectors.
⊕.
sup p n,x = sup ⊕ p n,x .Proof.Let M ⊂ B(H) be the abelian von Neumann algebra onH := ⊕ H x generated by ⊕ f (x)p n,x for all f ∈ L ∞ (X) and n ∈ N. Note that M ∼ = L ∞ (Y )for some measure space Y .Since L ∞ (X) ⊂ M , we have a measurable map π : Y → X and we can writeM = ⊕ X M x , where M x = L ∞ (π −1 (x)).The projections p n,x ∈ M x correspond to measurable subsets Z n,x ∈ π −1 (x), and the equation ⊕ sup p n,x = sup ⊕ p n,x follows from the fact that x n Z n,x = n x Z n,x .Appendix B. A variant vertical compositionIn [BDH19, §2.C], we defined the vertical composition of two sectors D H E and E K F to be the fusion along half of each 'circle', H ⊠ E(S 1 ⊤ ) K, with the evident remaining actions of D and F :(B.1) D H ⊠ E K F = fusion v A BAn alternative definition would be to fuse along a 'quarter-circle':(B.2) H Kand to equip the resulting Hilbert space with the structure of a D-F -sector by means of a diffeomorphism ϕ : local coordinates around the color-change points.Specifically, the resulting sector is ϕ * (H ⊠ E(I) K), where I is the top quarter of the circle (associated to the sector K), or equivalently the bottom quarter of the circle (associated to the sector H). | 14,905.8 | 2019-05-09T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
The role of the students’ and teachers’ activities in the adoption and continued use of an e-learning platform
– This paper identifies and examines the impact of students' and teachers' activities on the adoption and continued use of an e-learning platform in postgraduate studies. The paper reports an experimental survey of students' satisfaction, their opinions on implementing e-learning at work, and the correlations between students' activity, teachers' activity, and e-learning results. Our hypotheses are tested with 160 students of postgraduate studies using an e-learning educational platform
Introduction
The development of new technologies is accompanied by their growing influence on all forms of human activity, including education. The rapid development of information and communication technologies drives the development of new educational techniques and methods, for example e-learning (on-line learning).
E-learning is an important issue from the viewpoint of several disciplines: economics, education, sociology, psychology, and computer science.
E-learning research covers many aspects, from students' motivation and psychological factors [1] to the functionality of a given information system and the opportunities for its further development.
From the information-systems point of view, e-learning relies on a CMS that provides the functionality, flexibility, and effectiveness needed to manage the educational process efficiently. Moodle [3] is an example of a free CMS platform.
Taking into account the possible development of educational platform functionality, to gain new capabilities or improve existing activities, it is necessary to examine the level of students' satisfaction as well as the role and impact of the different activities, the components of a given course, on the resulting level of knowledge. Paper [2] empirically examined the impact of interactive video on students' learning results and on their level of satisfaction with e-learning tools. Its conclusion suggests that educational platforms should support interactive video instruction, which is a very important element.
The authors of this paper experimentally examine the impact of e-learning on the satisfaction of adult students and on the prospects of implementing e-learning in their workplaces. Another issue surveyed by the authors is the correlation between students' and teachers' activities and the impact of those factors on knowledge acquisition.
The survey was conducted with 160 adult students of postgraduate studies at Maria Curie-Sklodowska University in Lublin, in a consortium with Warsaw University, within the programme "Preparation of teaching staff for e-learning continuous education". Section 2 describes the theoretical background, survey model, and hypotheses. Section 3 deals with the methodology, Section 4 contains the interpretation of results, and Section 5 the conclusions.
Survey methods and theoretical background
The survey was conducted on a sample of 160 adult students (aged 25-62) of the postgraduate studies. The student groups of the postgraduate studies were:
• Teachers of different specializations: from schools for adults, continuous education centers and practical education centers;
• Teachers-consultants from Centers of Teachers' Education;
• Methodical advisors;
• Pedagogical staff from schools for adults and continuous education centers;
• Teachers employed in public schools.
During the project the university provided students with access to:
• Two e-learning platforms. The first platform hosted 8 on-line courses and an internet communication forum (for teacher-student contact), as well as a communication forum accessible to all students and teachers. The other platform was for students and served as an area where students developed their own courses (they had the rights to create courses), for example during the subject "Information educational tools"; passing these courses was necessary to complete the study.
• Disk space to store materials created by groups or individuals (both during on-line classes and traditional ones).
Materials and teaching aids were collected in a knowledge base accessible via the above-mentioned educational platforms. All problems discussed during both stationary and on-line courses were extensively documented on the educational platform. The contents of the materials went beyond what was strictly required to pass a given subject. The questions asked on the internet forum and in chats suggest that a large number of students eagerly used the expanded knowledge contents. On-line courses lasted 185 hours, which makes up 66% of all courses, while the traditional method took 95 hours. In semester II, e-learning educational techniques were used for 80% of the classes.
Applied educational methods and activities
During the e-learning education the following methods were used:
• Lecture, aimed at acquainting students with basic information concerning a given problem. During the courses, 46 lectures were available to students, which accounted for 23% of all components.
• Discussion forum, available on the platform in every course. There were moderated forums, where moderators answered students' questions or informed about the organization of the subject, and educational forums, in which students were to express their opinions about the subject proposed by the mentor. The internet forums registered over 10910 questions and answers between students and mentors. The forum was a very eagerly used component and accounted for about 36% of all components.
• Quizzes (tests), used to examine knowledge. During the education process 37 tests were taken, which accounted for 19% of all components.
• Assignments, aimed at developing practical skills. Each task comprised three stages: preparation of the contents by the teacher and putting the task on the platform, off-line solving by students and uploading the file with the answer to the platform, and finally checking and marking the tasks by teachers. 31 tasks were completed, which accounted for 16% of all components.
• Other activities, such as chat, voting and a vocabulary of notions. Chats allowed students to communicate with mentors. These chats were very intensive, and students could obtain answers to their questions concerning the material, especially at the beginning of the study in "Introduction to e-learning education", where communication through chats was very intensive late in the evening. The number of all entries was 13860. Voting allowed teachers to fix chat hours. Notions were prepared by teachers to facilitate and accelerate students' learning. The components called "others" accounted for 6% of all.
• Consultations via e-mails on the educational platform: through this information channel students could ask questions and obtain answers. The number of e-mails on the educational platform was 6982, which means that every student wrote on average 43 e-mails. It is impossible to count the number of e-mails written by students to teachers using private e-mail boxes.
• During the study over 55349 entries were registered, which means logging in on average once a day.
Hypotheses
The hypotheses are formulated in the following way:
H1: The level of students' satisfaction shows proper use and adaptation of the e-learning educational platform.
H2: The postgraduate study participants (owing to the acquired knowledge and skills) notice the necessity and opportunity of implementing e-learning in their own work environment.
H3: Teachers' activity influences the education results.
H4: The higher the level of participants' activity on the e-learning platform, the better the education results.
Survey methodology
Questionnaires were used to examine the impact of the form of education on the students' satisfaction level. Questionnaires were completed by students at the end of semester III (the last one). Students answered questions such as: "To what extent did e-learning education satisfy you?" (respondents could choose one of the answers on the scale: "didn't satisfy", "little", "rather", "satisfied") and "Did participation in the postgraduate study and the accomplishment of its tasks influence your opinion towards e-learning education?" (respondents could choose one of the following answers: "my enthusiasm rose", "my enthusiasm dropped", "it didn't have any impact on my attitude").
The authors used the questionnaires to examine the opportunity for students to apply and adapt the e-learning platform in their work environment. The respondents had to answer the question "How do you estimate the chances of e-learning implementation in your work environment, at school or in a center?" (respondents could choose one of the answers: "very good", "good", "rather good", "small", "rather small", "very small", "difficult to say").
The activities of students and teachers were then read directly from the Moodle platform statistics. The first part of the data used in the survey includes both partial and final grades (the final grade resulting from the sum of partial grades) recorded on the platform. The full list of grades is accessible in every course through the "Grades" link and then clicking "Download in ODS format", which produces a spreadsheet in the OASIS Open Document Format for Office Applications (ODS). Other formats are also available: plain text or an MS Excel sheet.
The other data used in the survey are collected during the operation of the Moodle service in the form of logs stored in the database table mdl_log. Detailed information about each user activity is collected there and is accessible through direct access to the database with appropriately written SQL SELECT queries.
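As a minimal illustration of this kind of query (the column names of the legacy mdl_log table are assumed from typical Moodle installations and may differ between versions; the SQLite file is a hypothetical local export used only for the example):

```python
import sqlite3

# Hypothetical local export of the Moodle log table; the real mdl_log lives in the
# site's MySQL/PostgreSQL database and its exact columns vary between Moodle versions.
conn = sqlite3.connect("moodle_logs_export.db")

query = """
    SELECT userid, COUNT(*) AS clicks          -- one row per user: total logged actions
    FROM   mdl_log                             -- legacy Moodle log table mentioned in the text
    WHERE  course = ?                          -- restrict to a single course
    GROUP  BY userid
    ORDER  BY clicks DESC;
"""

course_id = 42  # illustrative course id
for userid, clicks in conn.execute(query, (course_id,)):
    print(userid, clicks)
conn.close()
```

Such per-user click counts are exactly the "activity" measure aggregated in the characteristics described below.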
The above-mentioned subjects were chosen for the survey because they were mostly taught by e-learning. For every course, the following characteristics of students' and teachers' activities (numbers of clicks during the subject) were calculated: mean values, standard deviations (SD), sum, minimum, maximum, quartile 1 (Q1), median and quartile 3 (Q3), in the Statistica 8.0 program, based upon the data exported to ODS files.
Moreover, the Statistica 8.0 program was used to calculate the Pearson correlation coefficients between the students' activity and the education results, and their significance was then tested with Student's t-test.
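A sketch of this correlation analysis, using Python's scipy in place of Statistica purely for illustration (the activity and grade arrays are placeholders, not the study's data), might look as follows; the p-value returned by pearsonr is based on the same t-statistic used in Student's t-test for a correlation coefficient:

```python
import numpy as np
from scipy import stats

# Placeholder data: per-student click counts and final points for one course.
clicks = np.array([1240, 980, 3100, 450, 2200, 1785, 640, 2950])
points = np.array([78, 71, 92, 55, 88, 84, 60, 90])

r, p_value = stats.pearsonr(clicks, points)   # Pearson correlation and its two-sided p-value
r_squared = r**2                              # coefficient of determination

print(f"Pearson r = {r:.2f}, p = {p_value:.4f}, R^2 = {r_squared:.2f}")

# Descriptive statistics analogous to those reported (mean, SD, quartiles):
print(np.mean(clicks), np.std(clicks, ddof=1), np.percentile(clicks, [25, 50, 75]))
```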
Results analysis
The truth of hypothesis H1 (the level of students' satisfaction shows the right use and adaptation of the e-learning platform) is confirmed by the students' answers to the question "To what extent did e-learning education satisfy you?". 80% of the respondents were fully satisfied with participating in the postgraduate study using the e-learning platform (60% chose the answer "satisfied" and 20% chose "very satisfied"); 19% chose "rather satisfied" and only 1% (one student) chose "little satisfied". This is further confirmed by the students' answers to the question "Did participation in the postgraduate study and the accomplishment of its tasks influence your opinion towards e-learning education?". For 78% of the respondents, participation in the e-learning education increased their enthusiasm towards this kind of education, for 15% it had no impact on their attitude, and only 7% reported a decrease in enthusiasm.
The truth of hypothesis H2 (postgraduate study participants, based on the acquired knowledge, notice the necessity and opportunity to implement e-learning in their work environments) is confirmed by the students' answers to the question "How do you estimate the chances of e-learning implementation in your work environment, at school or in a center?". 69% of the respondents assessed the chances of e-learning implementation in their environments as at least "rather good": 42% chose the answer "rather good", 17% "good" and 10% "very good"; 26% chose "small" or "rather small", and the remaining 5% chose "very small". The truth of hypothesis H3 (teachers' activity influences the education results) is supported by the answers to the open question "What motivated you to undertake e-learning study?". About 38% of the respondents (the largest group) mentioned the teachers' kindness and professionalism as factors enhancing their work and giving satisfaction. Table 1 presents the characteristics of teachers' activities compared with those of students' activities for some example subjects. We can notice great differences among students' activities (e.g. for WdKnO the smallest value was 98 and the largest 4325) and also among teachers' activities, which was influenced by the number of on-line laboratories conducted by the teachers; for example, one of the teachers simultaneously conducted an on-line lecture (putting the prepared materials and an elaborated task on the platform). For the WdKnO course, there were on average 5.69 student activities per teacher activity; for the OMM and OPKnO courses, 5.70 and 4.98 student activities respectively. It should be noted that teachers' activities influenced the students' activities, which in turn affected the education effects, as will be shown for hypothesis H4.
The truth of hypothesis H4 (the larger the students' activity on the educational platform, the better the education results) is confirmed by large (greater than 0.5) Pearson correlation coefficients that are significantly greater than zero. For the WdKnO course, the Pearson correlation coefficient between the students' activities (number of clicks) and the education effects (final points) was 0.52, and based on Student's t-test that coefficient turned out to be significantly greater than zero (reported significance level p < 0.0000). Similar values (coincidence or dependence?) were obtained for the Pearson correlation coefficients for the other subjects: OMM 0.52 and OPKnO 0.53, which also turned out to be statistically significant (in both cases p < 0.0000). Additionally, the determination coefficients (the square of the correlation coefficient) were calculated: for WdKnO 0.27 (meaning that 27% of the variation in education effects was explained by the students' activities), and for OMM and OPKnO 0.27 and 0.29 respectively. The survey results also show that, besides very active students who achieved very high grades, high education effects could be obtained at average activity, so the correlation between activity and education effects turned out to be only large (above 0.5), not very large (above 0.7).
Conclusions
The results of the survey show that the e-learning educational platform can be adapted to the continuous education of adults, which is confirmed by the students' satisfaction level as well as their intentions to adopt e-learning in their work environments. The e-learning platform positively influences education effects; in particular, teachers' kindness and professionalism enhance students' activity.
The Moodle educational platform would be worth extending with statistics concerning teachers' activities and other useful data, such as the number of logins, the number of sent e-mails, the number of materials, the number of answers on forums and the number of off-line tasks checked.
"Education",
"Computer Science"
] |
Operational significance of nonclassicality in nonequilibrium Gaussian quantum thermometry
We provide new operational significance of nonclassicality in nonequilibrium temperature estimation of bosonic baths with Gaussian probe states and Gaussian dynamics. We find a bound on the thermometry performance using classical probe states. Then we show that by using nonclassical probe states, single-mode and two-mode squeezed vacuum states, one can profoundly improve the classical limit. Interestingly, we observe that this improvement can also be achieved by using Gaussian measurements. Hence, we propose a fully Gaussian protocol for enhanced thermometry, which can simply be realized and used in quantum optics platforms.
I. INTRODUCTION
Temperature estimation is essential in characterizing and engineering quantum systems for any technological applications.Given finite experimental resources, quantum thermometry designs probes with maximum information gain [1,2].Along with the number of probes, time is an important resource.Without any time restriction, it is often best to let a temperature probe thermalise with the system before reading it out.In this procedure, known as equilibrium thermometry, the initial state of the probe does not play a role and therefore the potential of quantum resources in the state preparation is not used.Nonetheless, the impact of initial probe states in a nonequilibrium (dynamical) scenario can be important and has been the subject of previous studies [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18] Continuous-variable systems in Gaussian states are the relevant model in many physical platforms for thermometry such as quantum optics, quantum gases, Josephson junctions, and mechanical resonators [19].In this context, a framework for using Gaussian measurements was recently proposed with some preliminary results at thermal equilibrium [20].On the other hand, in a recent paper [18], the limits of nonequilibrium thermometry in the Markovian environments were established.The results of [18] are based on bounds in terms of operator norm that are only applicable to probe systems with a finite-dimensional Hilbert space.Therefore, of particular interest is to understand the limits and advantages of continuous-variable probe systems with infinite-dimensional Hilbert spaces-that are readily available in the laboratory in Gaussian states-in nonequilibrium thermometry.
In this paper, we propose a fully Gaussian setup with simple physical realizations that exploits the power of nonclassicality for nonequilibrium thermometry.Specifically, in our setup, the interaction between a continuousvariable probe system and a thermal bath is described by a class of Gaussian dynamics that can be described as a Brownian motion [5,15,21].We first formalize the limit of classical probes for thermometry and show that this limit can be profoundly improved with the use of nonclassical Gaussian states and measurements.Our formalism provides a new insight into the operational significance of nonclassical states in nonequilibrium thermometry, and can readily be employed e.g., in quantum optical platforms.
The structure of the paper is as follows.In Section II we formalise the problem, the setup, and the figure of merit.In Section III, we review the Gaussian formalism, including the Gaussian dynamics.We then proceed with our main results in Section IV.Finally, in Section V, we close the paper with some remarks.Derivations and details of some results, as well as more simulations, are presented in the appendices in order to keep the paper's main part coherent.
FIG. 1.Our setting for estimating the temperature of a system: the probe is prepared in a state, which may or may not be correlated to auxiliary units, and then interacts with the sample for some time t.A measurement on the probe (and possibly the auxiliary units) will reveal some information about the temperature.One then resets the probe's state and repeats this process M = τ /t times, aiming to maximize the acquired information.
II. THE SETUP
As shown in Fig. 1, the probe mode, which can be in an entangled state with an auxiliary mode, interacts with a bath at temperature T. After some interaction time t, the probe system is measured to infer the temperature of the bath. We assume that the total time of running the experiment is fixed to τ. During this time, one can reset and repeat the whole process M = τ/t times. In what follows we assume that the time required for the preparation and measurement protocols is negligible compared to the time required for the parametrization (the probe's dynamics). Given that the density matrix of the system right after parametrization reads E_t(ρ), one performs a POVM measurement Π with elements {Π_k}. After M rounds, we collect the outcomes into the dataset x = {x_1, ..., x_M}. An estimator function T̃(x) maps the dataset into an estimate of the true parameter. From the Cramér-Rao bound we know that the mean square error of any unbiased estimator is bounded from below [22,23], and the bound is saturable by choosing the maximum likelihood estimator. Here, T_0 is the mean value of the estimator over all measurement outcomes, say T_0 = ⟨T̃(x)⟩_x; F_C(ρ; Π; t) is the classical Fisher information (CFI) associated with the measurement {Π} and the initial state ρ. The CFI is upper bounded by the quantum Fisher information (QFI), defined as F_Q(ρ; t) := max_Π F_C(ρ; Π; t) [24-28]. Motivated by (1) and following [5,18,29,30], we set the rate of the CFI, F̃_C(ρ; Π; t) := F_C(ρ; Π; t)/t, and the rate of the QFI, F̃_Q(ρ; t) := F_Q(ρ; t)/t, as our figures of merit.
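For reference, a standard form of the Cramér-Rao bound for M = τ/t independent repetitions, written in the notation used here (a generic restatement rather than the paper's exact Eq. (1)), is

$$\mathrm{Var}\big[\tilde{T}(\boldsymbol{x})\big]\;\geq\;\frac{1}{M\,F_C(\rho;\Pi;t)}\;\geq\;\frac{1}{M\,F_Q(\rho;t)},\qquad M=\tau/t,$$

so that, at fixed total time τ, maximizing the rates F̃_C and F̃_Q minimizes the achievable mean square error.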
There are three main variables to optimize over, the initial state ρ, the interrogation time t, and the measurement Π.
Our main aims are twofold: (I) to understand the impact of Gaussian, nonclassical probe states, and (II) to study the performance of Gaussian measurements. These are motivated by experimental feasibility; in quantum optics, the preparation of Gaussian states is always possible via single-mode displacement and squeezing operations together with linear-optical networks, and Gaussian measurements are readily available via homodyne detection and a linear-optical network as well. We show that nonclassicality in Gaussian states, namely single-mode squeezing and two-mode squeezing, can significantly improve the classical bound on thermometry precision. Moreover, Gaussian measurements often perform as well as the optimal measurement, especially on short time scales.
III. GAUSSIAN FORMALISM
We assume that probes are initially prepared in Gaussian states and their marginal states remain Gaussian through the interaction. Let us denote the vector of canonical operators by R = (x_1, p_1, ..., x_m, p_m)^T, which also defines the matrix elements of the symplectic form via [R_j, R_k] =: iℏΩ_jk. A Gaussian state is fully described by the vector of mean values d := Tr[ρR] and the covariance matrix (ℏ = 1), where we use the notation R • R^T ≡ RR^T + (RR^T)^T, with RR^T being a matrix of operators that is not symmetric.
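A standard definition of these quantities, assuming the convention in which the vacuum covariance matrix equals I_2 (the convention used later for coherent states), is

$$d := \mathrm{Tr}[\rho R],\qquad \sigma := \mathrm{Tr}\!\big[\rho\,(R-d)\bullet(R-d)^{T}\big],$$

with the symmetrized product R • R^T defined as in the text.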
In the most general case, a Lindbladian Gaussian dynamics E_t evolves any probe state according to a pair of matrices (X_t, Y_t) [31-34], which thus fully characterise the dynamics. These matrices satisfy Y_t^T = Y_t and Y_t + iΩ − iX_t Ω X_t^T ⩾ 0 to guarantee complete positivity of the dynamics.
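A minimal sketch of the corresponding Gaussian-channel action, in the standard form used in the cited literature and consistent with the pair (X_t, Y_t) described below, is

$$d_t = X_t\, d,\qquad \sigma_t = X_t\, \sigma\, X_t^{T} + Y_t.$$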
For the standard Brownian motion, the temperature information enters the dynamics only through the noise matrix Y_t, while the drift matrix X_t is independent of temperature. This is the case in some situations, such as the damped harmonic oscillator or the loss of an optical cavity, and has been previously used to study various problems in quantum thermometry [5,15,21]. In Appendix A, we provide a microscopic derivation of this master equation, where we prove that the term X_t is indeed temperature independent. More specifically, such Gaussian dynamical evolution can be described by X_t = exp(−γt/2)O_t and Y_t = (1 − exp(−γt))σ_T, where O_t is an orthogonal matrix and σ_T = coth(ω/2T)I_2 is the covariance matrix of a bosonic mode with frequency ω at thermal equilibrium with temperature T. Here, I_2 is the 2 × 2 identity matrix. The parameter γ = J(ω) is related to the spectral density of the environment J(ω) and generally depends on the system's bare frequency ω; however, it is independent of the environment temperature. Also, in our model, the orthogonal transformation O_t corresponds to the coherent dynamics of the probe system, which is independent of temperature. Hence, we can always work in the interaction picture with O_t = I_2 and incorporate the phase rotation in time into the measurement. This also implies that, without loss of generality, we can assume that the initial covariance matrix is diagonal, as E_t(U(θ)ρU†(θ)) = U(θ)E_t(ρ)U†(θ) and the phase rotation U(θ) can be absorbed into the measurement. In this case, the covariance matrix σ_t remains diagonal and is determined by ν = coth(ω/2T).
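Assuming the channel action sketched above, with X_t = e^{-γt/2} I_2 in the interaction picture and Y_t = (1 − e^{-γt}) ν I_2, an initially diagonal covariance matrix σ_0 = diag(a, b) evolves as

$$\sigma_t=\mathrm{diag}\!\Big(e^{-\gamma t}a+(1-e^{-\gamma t})\,\nu,\;\; e^{-\gamma t}b+(1-e^{-\gamma t})\,\nu\Big),\qquad \nu=\coth(\omega/2T),$$

which remains diagonal at all times, as stated.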
Having the Gaussian state of the probe at the interrogation time t, one can calculate the QFI [35-38], where ∂_T denotes the partial derivative with respect to temperature and a vectorization notation is used. Furthermore, if we perform a Gaussian measurement described by the covariance matrix σ_M, the corresponding CFI follows [20,35,39]. However, as X_t is temperature independent, no information about the temperature is imprinted on d_t; see Fig. 2. Therefore, the first (mean-value) term in the expressions for the QFI and the CFI vanishes.
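One standard form of these two expressions for single-parameter Gaussian metrology, in the vacuum-covariance-equals-I_2 convention assumed here (the paper's own equation numbering and normalization may differ), is

$$F_Q(\rho;t)=2\,(\partial_T d_t)^{T}\sigma_t^{-1}(\partial_T d_t)+\tfrac12\,\mathrm{vec}[\partial_T\sigma_t]^{T}\big(\sigma_t\otimes\sigma_t-\Omega\otimes\Omega\big)^{-1}\mathrm{vec}[\partial_T\sigma_t],$$

$$F_C(\rho;\sigma_M;t)=2\,(\partial_T d_t)^{T}(\sigma_t+\sigma_M)^{-1}(\partial_T d_t)+\tfrac12\,\mathrm{vec}[\partial_T\sigma_t]^{T}\big((\sigma_t+\sigma_M)\otimes(\sigma_t+\sigma_M)\big)^{-1}\mathrm{vec}[\partial_T\sigma_t],$$

where the first (mean-value) terms are the ones that vanish in this model because ∂_T d_t = 0.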
IV. MAIN RESULTS
Our main results are as follows. (i) The precision of using classical probe states is upper bounded by that of using the vacuum probe state. (ii) This bound can be overcome by using nonclassical states, namely single-mode and two-mode squeezed states. In particular, we prove that single-mode squeezed states perform significantly better than the vacuum state, which was previously thought to be the best preparation [5]. Moreover, entanglement between the probe and an auxiliary mode (two-mode squeezed states) further improves the thermometry precision. (iii) We prove the (extent of) optimality of Gaussian measurements, specifically homodyne detection, for nonequilibrium thermometry.
A. Result (i): An upper bound for classical states
To begin with, we characterize the performance of classical states. A quantum state is classical if and only if its Glauber-Sudarshan P-function is a probability density distribution, so that the state can be written as a mixture of coherent states, ρ_cl := ∫ d²α P(α)|α⟩⟨α| [40,41]. As discussed, in our model no information about the temperature is imprinted on the first-order moments of the probe state. Therefore, the estimation precision of all coherent probe states (including the vacuum probe state), which have the same covariance matrix σ_1 = I_2, is the same. Also, as the Fisher information is convex (statistical mixing reduces the Fisher information), we conclude that the performance of all classical probe states is upper bounded by that of the vacuum probe state. Note that this result holds for non-Gaussian classical states as well.
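Using the convexity of the Fisher information together with the fact that all coherent states share the covariance matrix σ_1 = I_2, the stated classical bound can be summarized as

$$F_Q(\rho_{\mathrm{cl}};t)\;\leq\;\int d^{2}\alpha\,P(\alpha)\,F_Q\big(|\alpha\rangle\langle\alpha|;t\big)\;=\;F_Q\big(|0\rangle\langle 0|;t\big).$$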
B. Result (ii): Beating the classical bound with nonclassical states
As the first-order moments and a phase rotation do not matter, we consider the squeezed-vacuum state with the covariance matrix σ_r = diag(r, 1/r), where the squeezing parameter ranges from r = 0 for an infinitely x-quadrature-squeezed state to r = 1 for the vacuum state, as the potentially optimal pure single-mode Gaussian probe state. Note that p-quadrature-squeezed states can be obtained by a phase rotation from the x-quadrature-squeezed states, so without loss of generality we assume 0 ⩽ r ⩽ 1. By using Equations (4) and (5), we can find an analytical expression for the QFI rate.
In Fig. 3 we depict the (rate of the) QFI against time for the vacuum state and for a squeezed state. It appears that the QFI rate is optimal when we perform the measurement as quickly as possible. In the limit of short times and high squeezing (r ≪ 1), keeping the most relevant terms in t and r, we obtain the short-time expression (9); it is clear from this expression that the optimal measurement time approaches zero. Also, note that the rate can get arbitrarily large if we choose γt and r sufficiently small. This implies that nonclassicality in terms of squeezing can significantly enhance the precision. While the behavior in Fig. 3 depends on the temperature and the frequency (in particular, the ratio ω/T enters through the parameter ν = coth(ω/2T)), our numerics in Appendix B show that nonclassicality in the form of squeezing improves the QFI rate for other temperatures as well.
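As an illustration of how such curves can be generated, the sketch below numerically evaluates the QFI rate for vacuum and squeezed probes under the dynamics described above. It assumes the standard vectorized Gaussian QFI formula and the vacuum-covariance-equals-I_2 convention, so it is an independent numerical check rather than the authors' own code; the parameter values are illustrative.

```python
import numpy as np

OMEGA_SYMPL = np.array([[0.0, 1.0], [-1.0, 0.0]])  # single-mode symplectic form

def evolve_cov(sigma0, t, gamma, nu):
    """sigma_t = e^{-gamma t} sigma_0 + (1 - e^{-gamma t}) nu I_2 (interaction picture, O_t = I_2)."""
    decay = np.exp(-gamma * t)
    return decay * sigma0 + (1.0 - decay) * nu * np.eye(2)

def gaussian_qfi(sigma, dsigma):
    """Covariance part of the Gaussian QFI: 0.5 * vec(ds)^T (s x s - O x O)^{-1} vec(ds).
    Valid for full-rank (mixed) states; the mean-value term is zero in this model."""
    M = np.kron(sigma, sigma) - np.kron(OMEGA_SYMPL, OMEGA_SYMPL)
    v = dsigma.reshape(-1)
    return 0.5 * v @ np.linalg.solve(M, v)

# Illustrative parameters, chosen in the same spirit as the figures described in the text.
T, omega, gamma, r = 1.0, 1.0, 0.2, 1e-3
nu = 1.0 / np.tanh(omega / (2.0 * T))                            # coth(omega/2T)
dnu_dT = (omega / (2.0 * T**2)) / np.sinh(omega / (2.0 * T))**2  # d nu / dT

probes = {"vacuum": np.eye(2), "squeezed": np.diag([r, 1.0 / r])}
for name, sigma0 in probes.items():
    for t in (0.05, 0.5, 5.0):
        sig_t = evolve_cov(sigma0, t, gamma, nu)
        dsig_t = (1.0 - np.exp(-gamma * t)) * dnu_dT * np.eye(2)
        print(f"{name:9s} t={t:4.2f}  QFI rate = {gaussian_qfi(sig_t, dsig_t) / t:.4f}")
```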
Let us now explore the role of entanglement. To this aim, we assume possession of a secondary mode that does not undergo the dissipative dynamics but is initially entangled with the probe system; see Fig. 1. The dynamics can simply be extended to include the second mode by taking X_t → X_t ⊕ I_2 and Y_t → Y_t ⊕ 0_2, where 0_2 is the 2 × 2 null matrix. Equation (4) is generalized accordingly, with A, B and C the 2 × 2 blocks making up the initial covariance matrix at t = 0. As we can see, at long times the correlations vanish. However, one can harness the correlations for enhanced thermometry at short evolution times.
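In block form, and assuming the same channel action as sketched above with X_t → X_t ⊕ I_2 and Y_t → Y_t ⊕ 0_2, the two-mode covariance matrix evolves as

$$\begin{pmatrix} A & C\\ C^{T} & B \end{pmatrix}\;\longmapsto\;\begin{pmatrix} X_t A X_t^{T}+Y_t & X_t C\\ C^{T} X_t^{T} & B \end{pmatrix},$$

so the correlation block is damped by X_t = e^{-γt/2} I_2 and indeed vanishes at long times, while the auxiliary block B is untouched.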
As the initial input, we choose the two-mode squeezed vacuum state given by Eq. (11), with 0 < r ⩽ 1 the squeezing parameter and Z = diag(1, −1). This covariance matrix represents a pure entangled state iff r < 1. Notice that this scenario is at least as good as the one with a single-mode squeezed state σ_r with the same squeezing parameter. If one performs a local Gaussian measurement described by σ_M on the auxiliary mode of (11), the probe state collapses into Gaussian states with a conditional covariance matrix.

FIG. 3. (a) The quantum Fisher information and (b) its rate are depicted as solid lines. The probe is initially prepared in the vacuum or the squeezed state. The (rate of the) classical Fisher information of optimal Gaussian measurements is also plotted as dashed lines. All classical probe states have a precision (or rate) that lies within the shaded green area, whose border is determined by the vacuum state. One can see that squeezed states can overcome this classical bound at short times. Interestingly, this improvement can still be attained by using Gaussian measurements, which are readily available in the laboratory. The optimal measurement should be performed as quickly as possible. As for the optimal Gaussian measurement, however, one may wait until the rate reaches its maximum before measuring the system; this is better seen from Fig. 4, which depicts the short-time behavior.
Setting σ M = lim s→0 diag(s, 1/s) (homodyne), the covariance matrix of the probe states becomes σ PS = σ r , that is a single-mode squeezed state.However, by performing joint measurements more information may be extracted.By using Eq. ( 5), the QFI rate for the two-mode probe can be calculated and compared with the result of the single-mode probe.Our simulations depicted in Fig. 4 confirm that indeed at short times the QFI rate can gain up to a two-fold improvement over the single mode squeezed state.Note, however, that this improvement requires joint measurements, which may not be practical in general.In the following, we show that by restricting to Gaussian measurements, which are readily available, one can still exploit the nonclassicality in probe states for thermometry precision.
C. Result (iii): Optimality of Gaussian measurements
We now restrict to a fully Gaussian scenario where the measurements are Gaussian as well. In Appendix C, we analytically find the optimal Gaussian measurement for single-mode Gaussian probe states, extending some of the results of [20] to dynamical scenarios. We prove that the optimal Gaussian measurement has a diagonal covariance matrix σ_M^L = diag(L, 1/L), which becomes homodyne for short times. By using Eqs. (4) and (6) for the single-mode squeezed vacuum probe state σ_r and homodyne measurement on the x-quadrature (L = 0), the CFI rate can be computed. By comparing it with (9) for r → 0, one sees that F̃_Q(σ_{r≪1}; t ≪ 1) ≈ F̃_C(σ_{r≪1}; σ_M^0; t ≪ 1) ≈ (∂_T ν)²/(2tν²), which suggests that homodyne measurement is the optimal measurement in the regime of large squeezing and at short times. For a finite squeezing parameter, unlike for the QFI rate, the best interrogation time is not zero and roughly satisfies t* = r/(γν), which decreases with increasing squeezing and dissipation rate γ. We then have F̃_C(σ_r; σ_M^0; t*) ≈ γ(∂_T ν)²/(8rν). Let us now discuss the role of entanglement in two-mode squeezed vacuum states using joint Gaussian measurements. In general, joint Gaussian measurements can be modeled in terms of two local phase rotation operations, a beamsplitter and two single-mode Gaussian measurements [42]. However, instead of optimizing over all joint Gaussian measurements, we consider a measurement that boosts the performance with respect to the single-mode scenario.
FIG. 4. Same as Fig. 3, but comparing the single-mode squeezed states with the two-mode squeezed states for the probe.By using entanglement the QFI saturates to its steady state value faster-at γt ≫ 1 the solid red curve will catch up with the solid blue one-making them even better than single mode squeezed states.Here, we examine the performance of a specific joint Gaussian measurement, consisting of a 50 : 50 beamsplitter followed by two homodyne measurements.Evidently, even this simple Gaussian measurement can exploit the improvement offered by the entangled Gaussian state, making them comparable to the single mode squeezing interrogated with optimal nonGaussian measurement.The rest of the parameters are taken similarly to Fig. 3. T = 1, γ = 0.2, r = 10 −3 , ω = 1.
Specifically, we take σ M 2,0 . .= S BS σ xp S T BS where the probe mode after the interaction overlaps with the auxiliary mode on a balanced beamsplitter with the symplectic transformation S BS .Then at the output two homodyne measurements, one on x-quadrature and one on p-quadrature with the CM σ xp = lim L→0 diag(L, 1/L, 1/L, L), are performed.In this case, the CFI rate reads (see Appendix D for details) where As depicted in Fig. 4 this fully Gaussian setting can exploit entanglement to overperform the best (nonclassical) single mode setting, including nonGaussian measurements.At short times and for small values of r, we have which shows the optimal time is t * ≈ r/(γν).We then have FC (σ 2,r ; σ M 2,0 ; t * ) ≈ 16γν 3 (∂ T ν) 2 /(r(1 + 8ν 2 ) 2 ), which roughly shows two times improvement in the CFI rate compared to the single-mode squeezed probe state and homodyne measurement.
V. DISCUSSION
Our formalism for nonequilibrium thermometry of bosonic baths by means of continuous-variable systems can be used to describe many platforms such as impurities in cold gases, mechanical resonators, Josephson junctions, and quantum-optical systems.We set a bound on the thermometric performance of all classical states (i.e., those with nonnegative P -function), based on showing that all coherent states are equally useful.We then showed that nonclassical squeezed-vacuum states can beat this bound.An interesting lesson is that, increasing the energy of probes in terms of the displacement in phase space (or the amplitude of oscillation) is not beneficial for thermometry.But rather the energy should be put into increasing the degree of squeezing.We have shown, in particular, that fully Gaussian scenarios, which can simply be implemented in the laboratory in terms of single-mode and two-mode squeezed vacuum probe states and homodyne measurements, are extremely useful in boosting thermometry precision when time is a resource.
Our results also complement recent developments in nonequilibrium thermometry [18], which describes a fully non-restricted scenario.There are three major differences between the two scenarios.(1) In [18], the probe system has a finite dimensional-Hilbert space and the optimal preparation is always the eigenstate of the Hamiltonian with the maximum energy.Extending this to the infinite-dimensional Hilbert space with unbounded operators would be non-trivial.However, in our method, we consider the Gaussian states of a continuous-variable probe system with an infinite-dimensional Hilbert space.Gaussian states can be easily prepared using linear optics, and we find the optimal probe state within the Gaussian family.An interesting future direction is to generalize the upper bound of [18] to continuous variable probes by considering a constraint on the average energy of the probe states.(2) When there is no restriction on the measurement, it is best to perform the measurement as fast as possible.However, when restricting to the Gaussian measurements, there exists an optimal measurement time-which as we have shown scales with the initial squeezing in the state of the probe.This is because while the QFI for the optimal input state grows linearly with time, the CFI of Gaussian measurements grows quadratically with time (except in the unphysical case where the probe is initially infinitely squeezed and the CFI scales linearly with time).(iii) unlike the findings of [18], there is no Lamb-shift effect in our model since the Lamb-shift is temperature independent-See Appendix A. We expect that our formalism finds applications in estimating the temperature in quantum optical platforms or within cold Bosonic gases where state-of-the-art techniques allow for the implementation of Gaussian measurements such as homodyne/heterodyne measurements or time-of-flight measurements. where In the limit of continuous bath modes, we will use the following general description for the spectral density For now, we do not assume any specific shape for the spectral density and keep the analysis as general as possible.
As we prove below, the Gorini-Kossakowski-Sudarshan-Lindblad (GKLS) master equation for ρ p the state of probe mode reads with Γ out . .= J(ω 0 )(N (ω 0 , T )+1), Γ in . .= J(ω 0 )N (ω 0 , T ), and N (ω 0 , T ) = [exp(ω 0 /T )−1] −1 .The term ∆H P represents a [Lamb] shift, which as we show below is temperature independent and thus the coherent part has no impact on the thermometry performance.In the following subsection, we derive the master equation governing the damped harmonic oscillator dynamics in two ways, and particularly prove that the shift term is independent of temperature.
Derivation of the GKLS master equation (A7)
In the first approach, one can start from the von-Neumann equation [43,44].We start by writing the evolution of the total system in the interaction picture where we use boldface letters for the operators in the interaction picture with U p (t, 0) = e −itHp and U B (t, 0) = e −itH B , and ρ(t) is the joint state of the probe and bath systems.By integrating Eq. (A8) we obtain which by recursive replacement reads If we differentiate again, we have Now, by taking partial trace over the degrees of freedom of bath we have Considering that the initial state of the total system is a product state ρ(0) = ρ p (0) ⊗ ρ B , and that the bath is initially in a state which commutes with H B such as the thermal state, i.e. ρ B = e −H B /T /Tr B (e −H B /T ), the first term of Eq. (A14) vanishes [44].In this step, we employ the Born approximation, where the coupling between the probe and the bath is considered weak such that the effect of the system on the bath can be neglected; this implies that for the sufficiently large bath one can assume the bath state is time-independent and the probe-bath remain uncorrelated at all times, ρ(s) ≈ ρ p (s) ⊗ ρ B [44,45].Thus, Eq. (A14) becomes where we have applied s → t − s in the second line; in the third line we used the general form of the interaction Hamiltonian in the Schrödinger picture as A k and B k defined in Eq. (A5), and thus A k (t) and B k (t) are corresponding operators in the interaction picture finally, in the last line, we use the Markov approximation by replacing ρ p (t − s) → ρ p (t).
To proceed further, we assume that t ≫ τ B , where the bath correlation time scale τ B is defined as the timescale beyond which the bath two-time correlation functions Tr B [B k ′ (t − s)B k (t)ρ B ] decay rapidly [45,46].As a result, we can take the upper limit of the integral to infinity.We have By using the Bosonic relations where, N (ω k , T ) is the average number of the k-th mode, we put together Eqs.(A16) and (A17) and obtain Now by using the following formula where P denotes the Cauchy principal value, and also using the spectral density function (A6), equation (A21) It is clear that for the damped harmonic oscillator we have k, k ′ ∈ {1, 2} and ω ∈ {ω 0 , −ω 0 }, thus Eq. (A32) becomes By substituting Lindblad operators and considering Γ kk ′ (ω) = γ kk ′ (ω) + iS kk ′ (ω), Eq. (A44) becomes The above equation then simplifies to The coefficients in the above equation are easily calculated to be where we have used N (−ω 0 , T ) = −N (ω 0 , T ) − 1. Finally by using [a, a † ] = I, the quantum master equation for the where 0 < L ⩽ 1 and 0 ⩽ θ < π.Thus, two parameters θ and r characterize all rank-1 Gaussian measurements.In particular, homodyne measurement on x (p) quadrature is characterized by L → 0 and θ = 0 (L → 0 and θ = π/2).heterodyne, the measurement in coherent state basis, is identified with L = 1.
The classical Fisher information (CFI) for a single-mode Gaussian probe state and Gaussian measurement, by using Eqs.(C1) and (C2), is given by where we used ∂ T d t = 0.By diagonalizing the argument, and using the eigenvalues and Eq.(C2), we get the general form of the CFI as follows (C8) There are two cases.First, for r = 1, when the initial probe state is vacuum or coherent state, by using Eq.(C1) we have [σ t ] 22 = [σ t ] 11 .Hence, the CFI (C8) is independent of θ, and it can be set to zero to have a diagonal σ M for the Gaussian measurement; in this case, Eq. (C8) becomes Second, for 0 < r < (C10) Therefore, in both cases, if the covariance matrix of the initial state is diagonal, the optimal rank-1 Gaussian measurement has a diagonal covariance matrix as well.
2. The optimal Gaussian measurement at short times is homodyne detection We start by using the CFI-Eq.(C10)-for the state given by Eq. (C1), and a Gaussian measurement with diagonal covariance matrix σ M L = diag(L, 1/L).The CFI reads
(C11)
We can now expand both the numerator and the denominator for short times, and keep the leading order terms in γt.
Note that in doing so we are not allowed to ignore γt compared to L or r, since they can be equally small. However, we can ignore terms like rγt compared to r, or Lγt compared to L. The resulting expression clearly reaches its maximum if we choose L = 0. For this choice, the Fisher information is maximised at t* ∝ r/(γν). (ii) If O(L + r) > O(νγt), then we can ignore the νγt terms in the parenthesis. Taking the derivative of the resulting expression with respect to L, we find no solutions, i.e., the function is monotonic. A simple comparison then shows that F_C(σ_r; σ_M^0; t ≪ 1) > F_C(σ_r; σ_M^1; t ≪ 1), that is, homodyne detection is the best Gaussian measurement in this case too. For this measurement, the Fisher information is zero initially and grows quadratically with time, until O(r) ≲ O(νγt), when we have to use the expression (C13). Finally, note that when L = 0, one can use equation (C13), which covers both cases:

F_C^approx(σ_r; σ_M^0; t ≪ 1) := (γt ∂_T ν)² / (2(r + νγt)²).   (C15)

As depicted in Fig. 7, this approximation is in very good agreement with the exact value of the CFI given by Eq. (C11).
FIG. 2 .
FIG.2.(a) Schematic representation of the probe's evolution in phase space.In our model, the quadrature mean values tend to zero (dashed black arrow) in a temperature-independent manner.The temperature rather determines how much noise is being added to the state (blue arrows).(b) If the probe is in a highly squeezed-vacuum state, the squeezed quadrature (narrow dark blue region) will be highly sensitive to temperature, while the temperature information imprinted in the conjugate quadrature is almost negligible compared with the initial large uncertainty.Therefore, homodyne measurement on the squeezed quadraturewhich is a Gaussian measurement-is the optimal measurement.
FIG. 6 .
FIG.6.The QFI rate for various temperatures; comparison between two-mode and single-mode squeezed probe staets.(a) T = 0.2ω corresponding to the low temperature regieme, and (b) T = 5ω corresponding to the high temperature regieme.Similar to the main text, we observe an enhancement in using entangled two-mode squeezed vacuum states.For smaller values of T /ω, the enhancement can be even larger.Here, we set the rest of the parameters to ω = 1, γ = 0.1, and r = 10 −3 . | 6,951 | 2022-07-21T00:00:00.000 | [
"Physics"
] |
INFLUENCE OF INFORMATION QUALITY, SYSTEM QUALITY, SERVICE QUALITY AND SECURITY ON USER SATISFACTION IN USING E-MONEY BASED PAYTREN APPLICATIONS
Nowadays technology grows extremely fast and is helpful in communication and transactions. This affects human behavior: people use technology intensively in daily activities because of the facilities it offers, one of which is digital economic transactions, or e-money. Due to the growth of technology, some companies run their business through e-money based applications; one of these applications is PayTren. The purpose of this research is to examine the satisfaction of users of the PayTren application, namely the influence of information quality, system quality, service quality and security. The method of the research is based on the DeLone and McLean model, which is then modified. The research data are quantitative. The researchers collected 89 questionnaires from the population of PayTren application users in Batam using a purposive sampling technique. The conclusion of the research shows that there is an influence of information quality, service quality and security on the satisfaction of users of the PayTren based e-money application, while no influence of system quality on user satisfaction is found.
BACKGROUND
Internet-based technology has recently become one of the most popular business channels. Information technology has had a relevant effect on the service category in world trade. Information technology can help improve the efficiency and effectiveness of business processes and support managerial decisions that are made quickly and accurately, so that it can produce solutions and innovation and make a significant impact.
The number of e-money users is growing because commonly used cash has many weaknesses: buyers are expected to carry cash matching the price of the goods to be purchased, which is considered less practical, and sellers find it troublesome to give change, which ignores the right of the buyer to obtain an appropriate refund (Adityawarman, 2014). In addition, a lot of counterfeit money is in circulation, which can fool the public (Ramdani, Suryani, Gandana, Setyamarta, & Aulina, 2015). The habit of people wanting to do things easily is another factor driving the widespread use of e-money.
Electronic money is cash owned by someone, but its nominal value is converted into an electronic form. In December 2018, according to data from Bank Indonesia, the number of e-money transactions in Indonesia was 167,205,578 transactions (Bank Indonesia, 2018). Bank Indonesia has also licensed 36 companies to issue electronic money, one of which is PT Veritra Sentosa International with its product called PayTren. PayTren is a micropayment service that can be used for payments; it is an appropriate and useful alternative that makes it easier for users to transact and is able to replace payment counters in general. As reported by DailySocial in 2018, the growth of fintech start-ups reached 78% in the last two years, with most of them focusing on the payments sector. The report also showed that, among smartphone-based payment applications, PayTren entered the top 5 with 19.27% of the 825 respondents.
This study uses the DeLone & McLean Information System Success model, modified for this research. This is done to suit the needs and objectives of the research. Previous research using the DeLone and McLean model also made modifications to fit its research conditions and needs. The research by Tovar, Almazan, and Quintero (2017) explains that user satisfaction is influenced by system quality, information quality and service quality, supporting the sustainability of the organization. In contrast, the results of Stefanofic, Marjavonic, Delic, Culibrk, & Lalic (2016) indicate that user satisfaction is not affected by information quality and service quality, but is affected by system quality. These statements show that research uses the DeLone & McLean model as the basic theory and modifies it according to research needs. This research differs from previous research because the DeLone & McLean model is modified into four independent variables: information quality, system quality, service quality and security. The dependent variable is user satisfaction with the PayTren application in Batam. This research is expected to reveal the effect of the above factors so that it can be used for the development of information systems in business, especially e-money based applications. Based on the above background, the researchers are interested in conducting research with the title "The Effect of Information Quality, System Quality, Service Quality and Security on User Satisfaction in Using E-Money based PayTren Applications".
THEORETICAL FRAMEWORK
Theory of Information System Success DeLone & McLean
The results of this research were widely accepted because the model is considered valid and in accordance with practical needs. The first version of the DeLone & McLean Information System Success model (1992) connects several parameters for measuring the success of information systems: information quality, system quality, use, user satisfaction, individual impact and organizational impact.
The 1992 DeLone & McLean Information System Success model was then further developed by keeping the use variable but adding the intensity of use, and by adding the service quality variable, in consideration of the fact that measurements of system effectiveness usually focus only on the products rather than on the service function of the provider; the variables in the updated Information System Success model have thus been supported empirically.
Information Quality
Information quality measures the quality of the output of an information system, i.e. the quality produced by the information system, especially in the form of reports (DeLone & McLean, 2003). Information quality is a characteristic of the output presented by an information system, which includes management reports and web pages (Petter & McLean, 2009). The measurement of information quality is affected by completeness, formatting, relevance, accuracy and timeliness.
System Quality
System quality refers to the desired characteristics of the information system itself, while the desired quality of information refers to the characteristics of the information product (DeLone & McLean, 2003). System quality is the performance of the system itself; its measurement is affected by ease of use, reliability, flexibility and security.
Service Quality
The quality of service perceived by the user is measured by five indicators adapted from the field of marketing, including assurance and empathy (DeLone & McLean, 2003). Service quality is associated with how the needs and desires of the user are met, and with whether the delivery process matches user expectations.
Security
Security can be defined as the level of protection against criminal activity, harm, damage and loss (Rainer & Prince, 2011). Following this broad definition, information security covers all policies and processes designed to protect organizational information and information systems from unauthorized access, use, disclosure, disruption, modification or destruction. Security measurement is affected by the transmission mechanism, financial security and the security system (Jin & Park, 2006).
User Satisfaction
According to DeLone & McLean (2003), user satisfaction is often used as a surrogate measure of the effectiveness of information systems. Overall user satisfaction is affected by information quality, system quality and service quality, so the instrument used to measure the level of user satisfaction considers satisfaction with the reports or output produced and with the support services from the system provider. Satisfaction is the feeling of delight or disappointment that comes from a comparison between one's impression of the performance and outcome of a product and one's expectations (Kotler, 2005).
Development of Hypothesis
Influence of information quality to user satisfaction
Tovar, Almazan, and Quintero (2017) state that the information variable is the most important variable in determining user satisfaction, because users consider an information system capable of providing accurate and available information to be an element of system success. The better the quality of information, the higher the user satisfaction with the system. Information quality can demonstrate the extent to which information meets the requirements and expectations of all the users who need it, so that users will be satisfied with the given application system. On the basis of this description, the following hypothesis is formulated: H1: Information quality positively affects user satisfaction.
Influence of system quality to user satisfaction
The research of Tovar, Almazan, and Quintero (2017) on the effect of an information system on an organization showed that system quality positively affects user satisfaction; this comes from the users' perception that the system is easy to use, user-friendly, fast, and compatible with the other systems used in the institution. Users should be able to control the information system so that they can work effectively. The quality of the system can determine the user's response or feeling after using an application system. Therefore, when a system is easy to use, it will increase user satisfaction. On the basis of this description, the following hypothesis is formulated: H2: System quality positively affects user satisfaction.
Influence of service quality to user satisfaction
Salameh, Ahmad, Zulhumadi, and Abubakar (2017) in their research explain that the ease of use, interactivity and innovativeness of a website have a significant positive correlation with service quality. In turn, service quality significantly affects user satisfaction. The service quality of m-commerce can bridge the communication between users and businesses, so it can give users the satisfaction of accessing the system anytime and anywhere, making their interactions more effective; in other words, m-commerce can provide convenience in handling transactions. Good service quality will improve user satisfaction. Companies should pay attention to the quality of service provided to their customers, because service is an important factor in every business. A service that satisfies customers is a service that is perceived as appropriate and as exceeding user expectations. On the basis of this description, the following hypothesis is formulated: H3: Service quality positively affects user satisfaction.
Influence of security to user satisfaction
According Raharjo in research Utami & Kusumawati, (2017), information security is how we can prevent fraud, detect any fraud in an information-based system, where information itself has no physical meaning. Utami & Kusumawati, (2017), also explains in his research that the higher the security of e-money then student's interest in using e-money higher. Users will think of impact it will occur on the security of an e-money used. Users will be tend to use applications that are deemed to provide better data, security of personal data and external data, so user does not not have to worry about executing their transactions. On the basis of the description, then formulated the hypothesis: H4: Security positively affects user satisfaction Based on the description of the development of hypotheses described, then the variables used in this research are described in models below: Source: data processing by author (2019) 3. METHODOLOGY This research uses quantitative approach using primary data. This research uses questionnaire instruments. This method is used to explain and answer the formulation of the problem. This research will test the effect of independent variables (information quality, system quality, service quality and security) on the dependent variable (user satisfaction). Determination of the number of samples using Roscoe method in Sugiyono (2014) criteria for the study sample that will perform with univariate analysis, that is minimum of 10 times the number of variables studied. In this research, there are five variables, four independent variables and one dependent variable that minimum number of members of the sample is 50, so the researchers set number of samples more than 50 and obtained as many 89 samples of research with the category of users PayTren application in Batam city. The research instrument uses a questionnaire adapted from DeLone & Mclean (1992), DeLone & Mclean (2003) and Jin and Park (2006) questionnaires. Measurements in this study used the Likert Scale. Sample selection technique is purposive sampling, with user criteria of PayTren application in Batam city that transacted at least 1 time.
Data Processing and Analysis Techniques
The data were processed using the statistical tool SPSS 22 and analyzed using descriptive statistical analysis, validity and reliability tests, the classical assumption tests of normality, multicollinearity and heteroskedasticity, linear regression analysis and the statistical t-test. In the regression notation, Y = user satisfaction, X1 = information quality, X2 = system quality, X3 = service quality, X4 = security and E = error.
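As a minimal sketch of the simple linear regression step described above (using Python's scipy in place of SPSS; the arrays are illustrative placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

# Placeholder Likert-sum scores for one predictor (e.g., information quality, X1) and
# the dependent variable (user satisfaction, Y); the real study used n = 89 responses.
info_quality = np.array([15, 18, 12, 20, 16, 14, 19, 17, 13, 18])
satisfaction = np.array([14, 18, 11, 19, 15, 13, 18, 17, 12, 17])

# Simple linear regression plus the t-test on the slope (reported as the p-value).
res = stats.linregress(info_quality, satisfaction)
print(f"satisfaction = {res.intercept:.3f} + {res.slope:.3f} * info_quality, p = {res.pvalue:.4f}")
```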
RESULTS AND DISCUSSION
Respondents' characteristics
Questionnaires were distributed both online and directly. A total of 93 questionnaires were collected via Google Forms, of which 35 contained data that could not be processed; 31 questionnaires were distributed directly, all of which could be processed. Based on this, 89 samples met the criteria, as can be seen in Table 1 below. Respondents who filled out the questionnaires were then identified by gender, age, education, type of work and length of experience in using the e-money based PayTren application. This identification was performed to determine the general characteristics of the respondents.
Descriptive statistics
Table 3 below shows the descriptive statistics of each variable analyzed in this study. Based on the descriptive statistics test in Table 3, with N = 89 completed questionnaires, the information quality variable has a minimum value of 12 and a maximum of 20, the system quality variable a minimum of 9 and a maximum of 20, the service quality variable a minimum of 4 and a maximum of 8, the security variable a minimum of 7 and a maximum of 12, and the user satisfaction variable a minimum of 10 and a maximum of 20. Based on this descriptive test, the highest average value is 17.31 and the highest standard deviation is 2.530.
Validity and Reliability Test Results
The calculations show that all variables are valid and reliable.
Normality test
The normality test used in this study was the Kolmogorov-Smirnov test. In this test, the data are considered normally distributed when the significance value (sig.) is greater than 0.05.
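The authors ran this test in SPSS 22; as an equivalent open-source illustration, the following is a minimal sketch of a Kolmogorov-Smirnov normality check on regression residuals, assuming SciPy is available and using synthetic placeholder residuals rather than the survey data.

```python
# Minimal sketch of the normality check described above, assuming the regression
# residuals are available as a 1-D array (the values here are placeholders only).
import numpy as np
from scipy import stats

residuals = np.random.default_rng(0).normal(size=89)   # placeholder for the real residuals

# Standardize, then apply the one-sample Kolmogorov-Smirnov test against N(0, 1).
z = (residuals - residuals.mean()) / residuals.std(ddof=1)
statistic, p_value = stats.kstest(z, "norm")

# Decision rule used in the paper: sig. > 0.05 -> residuals treated as normal.
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
print("normally distributed" if p_value > 0.05 else "not normally distributed")
```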
Here are the test results in Table 4:
Multicollinearity test
A regression model is considered free of multicollinearity when the tolerance value is > 0.10 and the Variance Inflation Factor (VIF) is < 10. The test results are shown in Table 5. Based on these data, each variable meets the requirements of tolerance > 0.10 and VIF < 10, so it can be concluded that there is no multicollinearity between the variables in this regression model.
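The same tolerance/VIF screening can be reproduced outside SPSS; below is a minimal sketch assuming statsmodels is available, with an illustrative predictor matrix standing in for the questionnaire scores.

```python
# Minimal sketch of the tolerance/VIF screening described above; X is a placeholder
# predictor matrix, not the survey data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = rng.normal(size=(89, 4))                       # placeholder predictors
names = ["info_quality", "system_quality", "service_quality", "security"]

X_const = sm.add_constant(X)                       # VIF is computed with an intercept present
for i, name in enumerate(names, start=1):          # skip column 0 (the constant)
    vif = variance_inflation_factor(X_const, i)
    tolerance = 1.0 / vif
    flag = "ok" if (tolerance > 0.10 and vif < 10) else "possible multicollinearity"
    print(f"{name}: VIF = {vif:.2f}, tolerance = {tolerance:.2f} -> {flag}")
```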
Heteroscedasticity test
Heteroscedasticity was tested using the Glejser test. If the significance value is greater than 0.05, the model is free of heteroscedasticity. Based on the data, the significance values of the test results are greater than 0.05, which shows that the variables tested are free of heteroscedasticity.
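The Glejser procedure amounts to regressing the absolute residuals of the main model on the predictors and checking that no coefficient is significant. A minimal sketch with synthetic placeholder data is shown below, assuming statsmodels is available.

```python
# Minimal sketch of the Glejser heteroscedasticity check described above; X and y
# are illustrative placeholders, not the questionnaire data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(89, 4)))
y = X @ np.array([0.5, 0.3, 0.0, 0.4, 0.4]) + rng.normal(size=89)

residuals = sm.OLS(y, X).fit().resid
glejser = sm.OLS(np.abs(residuals), X).fit()

# Decision rule used in the paper: every predictor's p-value > 0.05 -> no heteroscedasticity.
print(glejser.pvalues[1:])   # p-values of the four predictors
```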
Hypothesis Testing Results
Linear regression analysis is used to determine the direction of the relationship between the dependent and independent variables. The regression calculation results can be seen in Table 7.
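As an illustration of how results like those in Table 7 can be obtained, the sketch below fits one regression with all four predictors (as the shared intercept in the equations that follow suggests) and reports the coefficients, t statistics, and p-values used for H1-H4. The column names and data are assumed placeholders, not the authors' data file.

```python
# Minimal sketch of the regression and partial t-tests summarized in Table 7,
# using synthetic questionnaire-style scores (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.integers(4, 21, size=(89, 5)),
                  columns=["info_quality", "system_quality",
                           "service_quality", "security", "satisfaction"])

X = sm.add_constant(df[["info_quality", "system_quality", "service_quality", "security"]])
model = sm.OLS(df["satisfaction"], X).fit()

# A positive coefficient with p < 0.05 means the corresponding hypothesis is supported.
print(model.summary())
```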
Information quality positively affects user satisfaction
User satisfaction = 0.484 + 0.146 + 0.294 × information quality. Based on this equation for the first hypothesis, the information quality coefficient of 0.294 is positive and has a significance value of 0.048 < 0.05. It can be concluded that the first hypothesis is accepted; that is, information quality has a positive and significant influence on user satisfaction with the PayTren-based e-money application. If the value of information quality (X1) rises by 1 unit, user satisfaction (Y) will increase by 0.294, assuming the other variables are constant.
The results of this hypothesis are in accordance with the DeLone & McLean (1992) theory of information system success, where the measurement of information quality is influenced by completeness, formatting, relevance, accuracy, and timeliness. This demonstrates that, to increase user satisfaction, the information provided by the system must be of high quality, presenting detailed information that covers everything the user needs. Information provided by the system will greatly satisfy users if it is easy to use, in line with the facts, suited to their needs, and able to support their decisions. When the information is of high quality, users feel confident to transact, which affects user satisfaction and increases the likelihood that consumers will transact again using the PayTren-based e-money application.
The results of this study are reinforced by Tovar, Almazan, and Quintero (2017), who show that information quality directly influences user satisfaction, and are supported by Gunawan (2018) and Rudini (2015), who show that information quality has a significant positive effect on user satisfaction. This study does not support the research by Stefanofic, Marjavonic, Delic, Culibrk, & Lalic (2016), who found that information quality does not affect user satisfaction.
System quality positively affects user satisfaction
User satisfaction = 0.484 − 0.017 × system quality + 0.165. Based on this equation for the second hypothesis, the system quality coefficient of −0.017 has a significance value of 0.920 > 0.05. It can be concluded that the second hypothesis is rejected, meaning that system quality has no effect on user satisfaction in using the PayTren-based e-money application. If the value of system quality (X2) rises by 1 unit, user satisfaction (Y) will decrease by 0.165, assuming the other variables are constant.
These results are not in accordance with the DeLone & McLean (1992) theory of information system success. System quality is the performance of the system itself, reflecting the characteristics of the quality of the product and the output of an information system.
The results show that the sampled respondents consider that system quality does not affect their satisfaction in using the PayTren-based e-money application. This may be because the number of samples, the object, and the sampling sites differ from previous studies. Users state that they use the application because of its benefits rather than its system, so the application's system does not affect their satisfaction in using e-money. This is partly due to frequent disruptions during transactions: when the system is under repair or being upgraded centrally, users experience constraints caused by the application updates. The application system also does not affect user satisfaction because some users still do not thoroughly understand how to use the application, for example selecting an item in a transaction that does not correspond to the remaining e-money balance in the application. These can be among the factors that cause system quality not to affect user satisfaction in using PayTren.
These results support Rudini (2015), who found that system quality has a negative and insignificant effect on user satisfaction. They do not support Tovar, Almazan, and Quintero (2017), who found that system quality directly influences user satisfaction, nor the finding of Gunawan (2018) that system quality has a significant positive effect on user satisfaction.
Service quality positively affects user satisfaction
User satisfaction = 0.484 + 0.479 × service quality + 0.192. Based on this equation for the third hypothesis, the service quality coefficient of 0.479 is positive and has a significance value of 0.014 < 0.05. It can be concluded that the third hypothesis is accepted, meaning that service quality has a positive and significant influence on user satisfaction in using the PayTren-based e-money application. If the value of service quality (X3) rises by 1 unit, user satisfaction (Y) will increase by 0.192, assuming the other variables are constant.
The results of this hypothesis are in accordance with the DeLone & McLean (2003) theory of information system success, where the measurement of service quality is influenced by the assurance and empathy indicators. When the service provided is able to give assurance against risk and doubt, and the e-money product is able to understand the purposes of its users, user satisfaction with the e-money product improves. Service quality is associated with fulfilling needs and desires, as well as the accuracy of delivery in balancing user expectations. Service quality makes users of the PayTren application feel satisfied to use it as a tool to transact electronically.
The results of this study are reinforced by Salameh Ahmad, Zulhumadi, and Abubakar (2017), who showed that service quality directly affects user satisfaction, and are supported by Tovar, Almazan, and Quintero (2017) and Rudini (2015), who found a positive effect of service quality on user satisfaction. They do not support the research by Stefanofic, Marjavonic, Delic, Culibrk, & Lalic (2016), who found that service quality does not affect user satisfaction.
Security positively affects user satisfaction
User satisfaction = 0.484 + 0.409 × security + 0.163. Based on this equation for the fourth hypothesis, the security coefficient of 0.409 is positive and has a significance value of 0.014 < 0.05. It can be concluded that the fourth hypothesis is accepted, meaning that security has a positive and significant influence on user satisfaction in using the PayTren-based e-money application. If the value of security (X4) rises by 1 unit, user satisfaction (Y) will increase by 0.163, assuming the other variables are constant.
The results of this hypothesis are in accordance with Rainer & Prince (2011), who define security as the level of protection against criminal activity, danger, destruction, and loss. When the system can provide security and protect against identity fraud, user satisfaction increases. For e-money, security means that users feel protected from faults in the transmission mechanism that would make the e-money unusable, protected from damage and theft, and assured of the security of the system. This security makes users feel confident and further adds to their satisfaction in using e-money.
The results of this research are reinforced by Gunawan (2018), who found that security has a positive effect on user satisfaction. They are also supported by Utami & Kusumawati (2017), who indicate that security factors affect students' satisfaction with, and interest in, using e-money.
Table 8 below summarizes the results of hypothesis testing:
Coefficient of Determination
The coefficient of determination used is the adjusted R-squared. The test results are shown in Table 9. The adjusted R-squared value is 0.526, which shows that about 52% of the variation in user satisfaction can be explained by the information quality, system quality, service quality, and security variables. The remaining 48% is explained by other variables outside this research.
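For reference, the adjusted R-squared follows from the ordinary R-squared, the sample size, and the number of predictors. The sketch below uses the study's n and k but an assumed plain R-squared purely for illustration.

```python
# Minimal sketch of the adjusted R-squared relation; the r_squared value is an
# assumed illustration, not a figure reported in the paper.
n, k = 89, 4          # respondents and independent variables
r_squared = 0.55      # assumed ordinary R-squared

adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)
print(f"adjusted R^2 = {adj_r_squared:.3f}")   # fraction of variance explained by the model
```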
CONCLUSION
Through the data processing and the series of tests performed in this research, the following conclusions can be drawn. (1) The partial t-test indicates that information quality has a statistically positive and significant influence on user satisfaction, so the first hypothesis is supported. This means that when the information of an e-money product is of high quality, user satisfaction with the e-money increases.
(2) The partial t-test indicates that system quality has a statistically negative direction towards user satisfaction, and the significance value indicates that the effect is not significant, so the second hypothesis is not supported. This means that system quality does not have significant implications for the satisfaction of users of the PayTren application.
(3) The partial t-test indicates that service quality has a statistically positive and significant influence on user satisfaction, so the third hypothesis is supported. This means that when the service of an e-money product is of high quality, user satisfaction with the e-money increases. (4) The partial t-test indicates that security has a statistically positive and significant influence on user satisfaction, so the fourth hypothesis is supported. This means that when an e-money product is secure, user satisfaction with the e-money increases. | 5,825.4 | 2019-01-01T00:00:00.000 | [
"Computer Science",
"Business"
] |
Data on physical and electrical properties of (ZrO2)1-x(Sc2O3)x(CeO2)y and (ZrO2)1-x-y-z(Sc2O3)x(CeO2)y(Y2O3)z solid solution crystals
The data presented in this article are related to the research article entitled "Phase stability and transport characteristics of (ZrO2)1-x(Sc2O3)x(CeO2)y and (ZrO2)1-x-y-z(Sc2O3)x(CeO2)y(Y2O3)z solid solution crystals" https://www.sciencedirect.com/science/article/pii/S2352340917302329 [1]. It contains data on the densities and microhardness of the as-grown crystals. Data on the specific conductivity of the as-grown ScCeSZ and ScCeYSZ crystals and of the crystals annealed at 1000 °C for 400 h, in the temperature range 623–1173 K, are also included in this article. The article also describes the growth of the (ZrO2)1-x(Sc2O3)x(CeO2)y and (ZrO2)1-x-y-z(Sc2O3)x(CeO2)y(Y2O3)z solid solution crystals by directional melt crystallization in a cold crucible.
Data
This dataset contains information about the density, microhardness, and specific conductivity of scandia-, ceria-, and yttria-stabilized zirconia. Table 1 shows the chemical composition, brief notations, densities, and microhardness of the as-grown crystals used in the further analysis. Table 2 shows the specific conductivity of the as-grown ScCeSZ and ScCeYSZ crystals and of the crystals annealed at 1000 °C for 400 h, in the temperature range 973–1173 K. The Arrhenius plot of the specific bulk conductivity of the as-grown and as-annealed ScCeSZ crystals is shown in Fig. 1; the same plot for the as-grown and as-annealed ScCeYSZ crystals is shown in Fig. 2.
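As an illustration of how specific-conductivity values such as those in Table 2 are cast into the Arrhenius representation of Figs. 1 and 2, the sketch below fits ln(σT) against 1/T and extracts an activation energy. The conductivity values used here are assumed placeholders, not the measured data.

```python
# Minimal sketch of an Arrhenius fit, sigma*T = A*exp(-Ea/(k_B*T));
# the sigma values are illustrative, not the data of Table 2.
import numpy as np

k_B = 8.617e-5                               # Boltzmann constant, eV/K
T = np.array([973.0, 1073.0, 1173.0])        # temperature, K
sigma = np.array([1.0e-2, 4.0e-2, 1.2e-1])   # assumed specific conductivity, S/cm

x = 1.0 / T
y = np.log(sigma * T)
slope, intercept = np.polyfit(x, y, 1)       # linear fit of ln(sigma*T) vs 1/T
E_a = -slope * k_B                           # activation energy, eV

print(f"Ea = {E_a:.2f} eV, ln(A) = {intercept:.2f}")
```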
Experimental design, materials, and methods
All of the samples, having the nominal compositions (ZrO2)1-x(Sc2O3)x(CeO2)y (x = 0.085–0.10; y = 0.005–0.015) and (ZrO2)1-x-y-z(Sc2O3)x(CeO2)y(Y2O3)z (x = 0.07–0.10; y = 0.005–0.010; z = 0.005–0.020), were prepared by directional melt crystallization in a cold crucible. ZrO2, Sc2O3, CeO2, and Y2O3 powders of not less than 99.99 % purity grade were the initial materials. The crystallization of the melt was carried out in a water-cooled crucible 130 mm in diameter. An RF generator (frequency 5.28 MHz, maximum output power 60 kW) was used as the power source. The charge weight was 5 kg. The directional crystallization of the melt was achieved by moving the crucible with the melt downward relative to the induction coil at a rate of 10 mm/h. The weight of the ingots was 3.5–4.0 kg. After the installation was shut down, the ingot cooled down spontaneously. The cooling of the ingots was monitored by measuring the temperature on the surface of the upper heat screen with a Gulton 900–1999 radiation pyrometer (above 1000 °C) and a Pt/Pt-Rh thermocouple (from 1000 °C down to 500 °C). The average ingot cooling rate was 150–200 K/min from the melt temperature to 1000 °C and then 30 K/min down to 500 °C. The process yielded ingots consisting of columnar crystals that could be mechanically separated into individual crystals. Typical dimensions of the crystals were 8–15 mm in cross-section and 30–40 mm in length.
Specifications table
Subject area: Materials Science
More specific subject area: Solid state electrolyte
Type of data: Table, graph
How data was acquired: Hydrostatic weighing - Sartorius hydrostatic balance (Switzerland); microhardness - DM 8 B AUTO microhardness tester (Affri, Italy) with a 50 g load; impedance spectroscopy - Solartron SI 1260 frequency analyzer (Solartron Analytical, United Kingdom)
Data format: Raw, filtered, and analyzed
Experimental factors: The crystals were annealed in a Supertherm HT04/16 high-temperature resistance furnace in air at 1000 °C for 400 h
Experimental features: All crystals were grown by directional melt crystallization in a cold crucible [2]
Data source location: Moscow, Russia
Data accessibility: Data are available with this paper
Value of the data: The data on the oxygen/ionic conductivity of the (ZrO2)1-x(Sc2O3)x(CeO2)y and (ZrO2)1-x-y-z(Sc2O3)x(CeO2)y(Y2O3)z solid solution crystals are very useful for the development of solid-state electrolytes for SOFCs. The data on the high-temperature degradation of the conductivity help to obtain more in-depth information about the ionic conduction mechanism in solid-state electrolytes. The present data could be helpful for researchers involved in the crystal growth of high-temperature materials.
The as-grown crystals were then annealed in a Supertherm HT04/16 high-temperature resistance furnace in air at 1000 °C for 400 h.
The conductivity of the zirconia-based crystals was measured in the 400–900 °C range using a Solartron SI 1260 frequency analyzer in the 1 Hz–5 MHz range. The resistivity was measured in a measurement cell using the four-probe method in a Nabertherm high-temperature furnace (Nabertherm GmbH, Germany). The measurements were carried out on crystal plates 7 × 7 mm² in size and 0.5 mm thick with symmetrically connected Pt electrodes. The platinum electrodes were annealed in air at 950 °C for 1 h. The ac amplitude applied to the sample was 24 mV. The impedance frequency spectra were analyzed in detail using the ZView (ver. 2.8) software (Scribner Associates Inc., USA). The resistivity of the crystals was calculated from the resulting impedance spectra, and the specific conductivities were then calculated taking into account the specimen dimensions. Table 1: Chemical composition, brief notations, densities and microhardness of the as-grown crystals; part of the data is already published in Ref. [1]. Table 2: The specific conductivity of the as-grown and annealed ScCeSZ and ScCeYSZ crystals in the temperature range 973–1173 K. | 1,247.4 | 2019-05-25T00:00:00.000 | [
"Materials Science"
] |
Plant Based Watering System Internet of Things Arduino and Monitoring with Telegram
Watering is essential if plants are to grow healthy and fertile, yet many plant owners fail to water their plants because they are busy at work and with activities outside the home. An automatic watering system is an integrated design that can help with this task. The aim of this research is to apply the Internet of Things and Telegram to plant watering, and to create an application for monitoring plant growth and care using the Telegram application. The method used in this research is the Internet of Things, a concept in which certain objects can transfer data over a WiFi network, so the process does not require human-to-human or human-to-computer interaction; everything is run automatically by the program. The Internet of Things is usually abbreviated IoT, and the technology has developed rapidly, starting from wireless technology, micro-electromechanical systems (MEMS), and the internet. The results of this research show that, by using an automatic plant watering system based on the Internet of Things, plants can remain well maintained and their development can be monitored via Telegram.
I. INTRODUCTION
The technological era is developing rapidly along with the problems that arise with it. These problems can be addressed with technology in education, agriculture, medicine, and other fields, and the technology most needed today is the internet. Indonesia is a tropical country with abundant natural wealth, including fertile land, which is why many types of plants grow very well there. Plants have enormous benefits for life, including aesthetic functions, serving as food sources, and being used for medicine. Ornamental plants are one type of plant that grows abundantly in Indonesia. The increasing interest in ornamental plants creates a problem: enthusiasts who are still beginners or new to caring for ornamental plants do not really understand their care, including the watering process. In practice, many people water irregularly because of busyness, ignorance, or other reasons, so the ornamental plants they like do not get enough water and wilt easily. On the other hand, systems with carefully planned, integrated controls are increasingly needed to make human activities easier, including in plantations and the watering process. Watering is essential if plants are to grow healthy and fertile, yet many plant owners do not water their plants because they are busy at work and with activities outside the home. An automatic watering system is an integrated design that can help with this work. One option is to use the Internet of Things, a concept in which certain objects can transfer data over a WiFi network, so the process does not require human-to-human or human-to-computer interaction; everything is run automatically by the program. The Internet of Things is commonly abbreviated IoT, and the technology has developed rapidly, starting from wireless technology, micro-electromechanical systems (MEMS), and the internet. Based on the problems above, a system is needed to water plants automatically using the Internet of Things concept, with monitoring through the Telegram application. Telegram, often called TG, is a cloud-based multi-platform instant messaging service. Through Telegram, users can send messages, photos, videos, audio, and other file types that are encrypted end to end, so messages are sent completely safely from third parties, even from Telegram itself. Telegram makes it easy for users to access one account from different devices simultaneously and to share an unlimited number of files of up to 15 GB each. In this research, data were collected through document studies and field research. Based on the background described, this paper proposes an "Automatic Plant Watering System Based on the Internet of Things and Arduino with Monitoring via Telegram".
II. RELATED WORKS/LITERATURE REVIEW
Journal "Design of an Automatic Plant Waterer Based on the Internet of Things Using NodeMCU and Telegram".The aim of this research is to create an automatic plant watering tool based on the Internet of Things using NodeMCU and Telegram.an Internet of Things-based automatic plant watering system using NodeMCU as a link to the Telegram application.Then a test was carried out on the Aglaonema sp plant, the Soil moisture sensor ran well then sent a command to the NodeMCU to send a command to the Mosfet then run the water pump.Soil moisture testing is completed by inserting the sensor that has been associated and modified in the NodeMCU in the pot.When the soil humidity is below 55%, the NodeMCU will act to provide a request to the Mosfet to turn on the pump.By utilizing NodeMCU you can communicate data related to watering time.Then, at that time, in order for notifications to be sent on the Telegram application, the equipment, especially the NodeMCU must be connected to the internet.
Journal "Plant Watering Techniques Using Microcontrollers Based on the Internet of Things".The aim of this design is to hope that this tool can make it easier to care for plants in areas where sensors that detect soil moisture levels have been planted.The results obtained from this research are in the form of a tool that has been designed from several required components and the results of tests that have been carried out as described in this discussion.When the water level increases and the sensor detects water, the connected Telegram application can detect the moisture level of the soil area that has been tested.The pump will water if the soil moisture is < 65% or below and the pump will turn off if the soil humidity is > 66% and above.This automatic mode depends on the condition of the soil, if the soil is dry then the pump will water by itself without having to be controlled like manual mode, conversely if the soil is damp or wet then the pump will not water.This plant watering tool using a micro controller based on the internet of things was created to make human work easier in watering plants using a soil moisture sensor which is then processed by nodeMCU and instructed by telegram to display soil moisture values according to dry, damp or wet soil conditions.according to the readings from the soil moisture sensor in the form of values on the telegram.Based on the results of testing 100 times from manual experiments and instructions to Telegram, the test success rate was 100%.Meanwhile, the experimental results of automatic watering of plants obtained a success rate of up to 100% and were appropriate.This tool can be further developed to become a smart garden where it can control temperature, humidity, pn (acidity) and weather, so that plant growth is stable with satisfactory results.This internet of things-based plant watering tool can also be used on a large scale.
A. Internet of Things
One frontier of technological advancement in the current era, and in the period to come, is mastery of this field. The Internet of Things is a concept in which specific objects can transfer data over a WiFi network, so the process does not require human-to-human or human-to-computer interaction; everything is run automatically by the program. The Internet of Things is commonly abbreviated IoT, and the technology has developed rapidly, starting from wireless technology, micro-electromechanical systems (MEMS), and the internet.
B. Monitoring
Monitoring is an activity to ensure the achievement of all organizational and management objectives. Monitoring is also defined as a step to assess whether an activity is being carried out as planned, to identify problems that arise so that they can be addressed immediately, to evaluate whether the work pattern and the management used are appropriate to achieve the targets, and to understand the relationship between activities and goals so that progress can be measured. In other words, monitoring is an internal process of organizational activities that can determine whether an organization's objectives are being met. The purpose of monitoring is to ensure that the main tasks of the organization run properly according to a predetermined plan.
C. Arduino Software
Arduino is an open-source electronics platform that emphasizes ease of use in both hardware and software. In other words, Arduino is a basic framework consisting of hardware and software that focuses on usability.
D. Telegram
Telegram, often called TG, is a cloud-based multi-platform instant messaging service. Through Telegram, users can send messages, photos, videos, audio, and other file types that are encrypted end to end, so messages are sent completely safely from third parties, even from Telegram itself. Telegram makes it easy for users to access one account from several devices simultaneously and to share an unlimited number of files of up to 15 GB each.
E. Black Box Testing
Black box testing is a type of testing that treats the software as a unit whose internal implementation is unknown, so the testers view the software as a "black box". It is not necessary to inspect the internals; only the externally visible behavior is tested. This type of testing looks at the software only from the specification and requirements defined at the start of the design. As an example, consider a piece of software that is an inventory information system in a company. In white box testing, the program listing would be examined and then tested using the techniques described previously, whereas in black box testing the software is executed and then checked to see whether it meets the user requirements defined at the start, without examining the program listing.
IV. RESULTS
The following is the pseudocode of Program_automatasi_watering_plants: {maintaining the soil media in prime condition, namely by maintaining soil moisture through automatic watering}. Figure 2 shows the interface of the tools used in the design of the automatic plant watering system based on the Internet of Things, including the arrangement of the components. After the respondents answered all of the questionnaire questions, the results were obtained in the form of answers from all respondents. A graph of the questionnaire results of all respondents shows that this application is well received: the average answer was "Strongly Agree", with a percentage of 54%.
V. DISCUSSION
By using an automatic plant watering system based on the Internet of Things, plants can be well maintained and plant development can also be monitored via Telegram. The results of this research answer the problems of ornamental plant enthusiasts who are still beginners and do not really understand plant care, including the watering process; many of them still water irregularly because of busyness, ignorance, or other reasons, which causes the ornamental plants they love to not get enough water and wilt easily. Compared with research carried out by others, this research uses Telegram-based monitoring, so users can control the tool anywhere and at any time. Suggestions for better application development are to create a system that can be accessed on the web, mobile iOS, and desktop PC, and to create a system that can be downloaded and installed via the App Store and Play Store.
VI. CONCLUSIONS
Based on the results obtained in this work, the following conclusions are drawn: 1. The Plant Watering System Based on the Internet of Things with Arduino and Monitoring via Telegram works without any disturbing problems or bugs. 2. The questionnaire respondents' answers gave a percentage of 54% for the answer "Strongly Agree". These results show that this application is easy to use and meets user needs.
Pseudocode of the automatic watering program:
Declaration:
  PumpRelay, LightRelay, SoilSensor, WaterSensor, LDR = integer {pin numbers used on the ESP32}
  SSid, Pass, Token = string {information to connect the WiFi connection and the Telegram bot}
  Humidity, Water, Dry, Wet = integer {target points of the wettability percentage for each function}
  ScheduleOnHours, MorningHours, AfternoonHours = integer {hour settings for the automation schedule}
Description:
  1. Read and update the time information from the NTP server.
  2. Read the analog data from the soil sensor and the water-tank sensor, and map each sensor value to 0-100%.
  3. If the current hour equals the scheduled on-hour and the soil moisture is still below the wet point, then: (a) turn on the pump when there is enough water; (b) turn it off when the water runs out; (c) or turn it off once the humidity has passed the wet point.
  4. If the current hour is between the morning and afternoon hours, then: (a) if the LDR reads dark, turn on the LED; (b) if the LDR reads bright, turn off the LED.
  5. Read messages from the Telegram bot: (a) "PUMP ON" turns on the water pump; (b) "PUMP OFF" turns off the water pump; (c) "LIGHTS ON" turns on the LED; (d) "LIGHTS OFF" turns off the LED; (e) "SENSOR INFO" sends the sensor data; (f) any other message sends the program list info.
  6. When the pump is turned on but the water tank runs out, send the message "WATER IS OUT, REFILL IT IMMEDIATELY!" to the saved User ID via the Telegram bot.
  7. End.
This display and menu can be created via the Arduino IDE application. Figure 1 shows what the program looks like.
Fig. 1 Program appearance when starting on Telegram (image from personal documentation)
Fig. 2 Program appearance when executing commands in Telegram (image from personal documentation)
Table 1 Black box testing of the automatic plant watering system based on the Internet of Things
Fig. 4 Graph of questionnaire answer results for all respondents (image from personal documentation) | 3,393.6 | 2024-04-30T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Planar Microstrip Patch Antenna Arrays with Semi-elliptical Slotted Patch and ground Structure for 5G Broadband Communication Systems
Abstract: In fifth-generation (5G) communication systems, the mm-wave (millimeter-wave) frequency band is a key aspect in overcoming the exponential increase in data traffic of the existing cellular networks, and it requires compatible, low-profile antenna arrays. In this case, a microstrip patch antenna (MPA) is a viable option. However, the MPA has certain limitations, such as the deterioration of the antenna's bandwidth and radiation efficiency with its substrate thickness; in addition, its gain and directivity are too low to meet the requirements of the beamforming techniques of massive MIMO 5G systems. Thus, to ameliorate these limitations, this paper proposes microstrip patch antenna arrays (2x2, 4x4, and 8x8) with a semi-elliptical slotted patch and an etched ground structure for 5G broadband applications. The radiator element of the antenna is designed on a Rogers 5880 substrate with a dielectric constant of 2.2 and a thickness of 0.3449 mm, and it is designed to operate at 28 GHz in the LMDS (local multipoint distribution service) band. The performance of the proposed structure was analyzed using the CST-MW simulator. The simulation results reveal that the gain, bandwidth, and total efficiency of the studied 2x2, 4x4, and 8x8 MPA arrays are 13.24 dBi, 16.54 dBi, 21.45 dBi; 1.33 GHz, 1.461 GHz, 1.561 GHz; and 97.03 %, 81.72 %, 69 %, respectively. Likewise, the return losses of these structures are −43.321 dB, −40.665 dB, and −22.678 dB, respectively. In general, all of the proposed planar semi-elliptical slotted rectangular MPA arrays offer improved performance compared to existing works and meet the requirements of 5G system antennas.
Introduction
In order to meet the ever-increasing demand of high-speed connections, in addition to the existing mobile networks, the 5G system is expected to include three usage scenarios. These are enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low latency communications (URLLC). The eMBB usage scenario enables very high-speed internet access (up to 1 Gbps). This is expected to increase the quality and efficiency of communication in the community. It will be crucial to enable services based on high-resolution multimedia, augmented reality, virtual reality, and smart city services (O'Connell et al., 2020).
The current standardization of 5G networks plans operation in three frequency bands, i.e., low, medium, and high frequency bands, depending on the characteristics of the bands. In the initial phase of 5G deployment, the following three frequency bands are assumed to be used: the 700 MHz band (694-790 MHz), the 3.6 GHz band (3.4 to 3.8 GHz), and the 28 GHz band (27.5 to 28.35 GHz). The 28 GHz band is limited in its use in terms of the capacity of spectrum resources and radio signal propagation: it has a wide bandwidth but short-range radio signal coverage because of the high attenuation of radio signal propagation in this band. Hence, it can be used for broadband access points and pico-cells (cMTC/URLLC), and to provide internet access via a fixed wireless access service.
To provide high-speed internet connections to users (up to 1 Gbps), 5G systems use two technological solutions: the use of high frequencies and beamforming. The Federal Communications Commission (FCC) has made the 28 GHz band available by reallocating the LMDS band, which lies between 27.5 and 28.35 GHz and is about 850 MHz wide (Przesmycki et al., 2021). This is part of the new Upper Microwave Flexible Use License (UMFUS) to support high-speed (>1 Gbps) and low-latency connections. The LMDS band is further divided into two blocks: block L1 (27.5-27.925 GHz) and block L2 (27.925-28.35 GHz), each of which has a 425 MHz bandwidth.
However, the MPA has several limitations. For example, reducing the thickness of the dielectric substrate to decrease its size and weight lowers the antenna bandwidth and radiation efficiency because of increased surface waves, spurious radiation, and feed-line radiation; this leads to undesired cross-polarized radiation due to feed radiation effects (Johari et al., 2018), (Balanis, 2005). In addition, the MPA suffers from dielectric, conductor, and radiation losses, resulting in low bandwidth and gain. These limitations are challenges for MPA designers in meeting the broadband and high-gain requirements of 5G mm-wave communication systems (Vamsi et al., 2018), (Kavitha et al., 2020).
Extensive performance analyses of single-element MPAs have been demonstrated using a U-slotted patch (Hussain et al., 2020), modified feeding techniques (Jeyakumar et al., 2018), (Chaitanya et al., 2019), a defected ground structure and Y-shaped patch (Awan et al., 2019), an X-shaped slotted patch (Gupta et al., 2020), the introduction of multiple slots (Ghazaoui et al., 2020), (Karthikeyan et al., 2019), an etched patch (Kaeib et al., 2019), various substrate material types (Subramaniam et al., 2020), a substrate-integrated waveguide patch, multi-patch designs, and multilayer patches employing diverse impedance matching techniques (El_Mashade & Hegazy, 2018). However, the reported simulation results reveal that the bandwidth is still narrow, the radiation pattern is relatively wide, and the gain and directivity are too low to be utilized in 5G communication systems.
To improve the performance of a single MPA, various linear MPA array designs have been described in (Kavitha et al., 2020), (Jeyakumar et al., 2018), (Alwareth et al., 2020; Maharjan & Choi, 2020; Mungur & Duraikannan, 2018; Shah & Singh, 2020; Gemeda & Fante, 2020). The reported simulation results show that the suggested linear MPA arrays improve the performance of a single-element MPA. Despite this, linear antenna arrays can scan only a one-dimensional plane, i.e., either the elevation plane or the azimuth plane. Thus, the main limitations of the linear MPA array are its inability to scan the beam in more than one direction and the high magnitude of the sidelobe level of the radiation pattern. Subsequently, to mitigate the performance limitations of the linear array and, in general, to boost patch antenna performance, a few studies on planar MPA array designs (Keum & Choi, 2018; Mohamed et al., 2020; E Sandi et al., 2020; Gemeda & Fante, 2020) have been carried out.
The existing planar arrays have alleviated the scanning limitations of linear MPA arrays. However, the attained bandwidth, gain, and directivity of the MPA are still too low to meet the requirements of 5G wireless communication systems, and there is a continued effort by the research community to improve the performance of these antennas in all aspects. To the best of our knowledge, existing studies have not extensively explored the performance improvement of planar MPA arrays with large numbers of radiator elements by introducing a semi-elliptical slot on both the radiating element and the ground plane simultaneously. In this paper, the performance of planar 2x2, 4x4, and 8x8 rectangular MPA arrays for 5G broadband access points is analyzed by introducing a semi-elliptical slot on the radiator patch and the ground plane of the antenna.
The remaining sections of this paper are organized as follows. Section 2 presents design specifications and proposed 2x2, 4x4, and 8x8 semi-elliptical slotted MPA array. The simulation results and discussions are provided in Section 3. Section 4 describes the performance comparisons between previously reported and achieved results of this paper. Finally, the conclusions are discussed in Section 5.
Design specifications of the proposed semi-elliptical slotted MPA arrays
In this section, detailed specifications of the planar 2x2, 4x4, and 8x8 semi-elliptical slotted rectangular MPA arrays with two-dimensional beam-scanning capability are presented. The design of the proposed antenna arrays starts with the selection of the Rogers RT5880 substrate material with a dielectric constant of 2.2, a thickness of 0.3449 mm, and a resonant frequency of 28 GHz. Using these initial design parameters, the remaining physical dimensions of the semi-elliptical slotted 2x2, 4x4, and 8x8 MPA arrays were calculated using the mathematical equations given in (Balanis, 2005).
First, a single-element MPA was designed and optimized. The theoretically calculated physical dimensions of the basic single-element MPA, which is used as the building block of all the planar rectangular MPA arrays, are a substrate thickness (ST) of 0.3449 mm, patch width (PW) of 4.23519 mm, patch length (PL) of 3.40451 mm, inset length (IL) of 0.89268 mm, and inset width (IW) of 0.02475 mm. The detailed design analysis of the single-element MPA is given in our earlier work. Using a microstrip patch antenna with the above dimensions, four semi-elliptical slotted rectangular MPA elements were used as building blocks to design a planar 2x2 rectangular MPA array. These four semi-elliptically slotted rectangular MPAs are grouped into a pair of 2x1 linear configurations, which are interconnected using a 100 Ω microstrip feeder line as shown in Figure 1.
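For reference, the standard transmission-line-model equations from Balanis (2005) that the paper cites can be evaluated directly; with c = 3e8 m/s they reproduce the quoted patch width and length to within rounding. The sketch below is only an illustration of that calculation, not the authors' design script.

```python
# Minimal sketch of the Balanis transmission-line-model patch equations for the
# stated substrate; it yields W close to 4.23519 mm and L close to 3.40451 mm.
import math

c = 3.0e8            # speed of light, m/s
f = 28.0e9           # resonant frequency, Hz
eps_r = 2.2          # relative permittivity of Rogers RT5880
h = 0.3449e-3        # substrate thickness, m

W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))                                    # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)         # effective permittivity
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL                                   # patch length

print(f"W = {W*1e3:.5f} mm, L = {L*1e3:.5f} mm, eps_eff = {eps_eff:.4f}")
```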
To improve the matching quality between the radiator and the feeder line, a quarter-wave impedance transformer (QWIT) is used, as shown in Figure 1. The length of the last microstrip transmission line (LLMTL)2x2 is 0.41069 mm, and the width of the last microstrip transmission line (WLMTL)2x2 is 7.24683 mm. The width and length of the quarter-wave impedance transformer are 0.90295 mm and 0.82137 mm, respectively. The remaining theoretically calculated and tuned design parameters of the 2x2 semi-elliptical slotted MPA array shown in Figure 1 are tabulated in Table 1.
The planar 2x2 rectangular MPA would improve the antenna's performance in terms of the gain and beam-scanning ability in two directions. To further improve the performance of the antenna, a planar 4x4 rectangular MPA array was designed using a sub-array of four 2x2 MPA arrays. These elements are grouped into two pairs of 2x2 planar configurations and placed over each quadrant of the X-Y coordinate axis. Therefore, the design parameters of a single and planar 2x2 rectangular MPA have been used directly for a 4x4 MSPA array design. The calculated and optimized physical dimensions of these arrays are tabulated in Table 2 and the physical structure is shown in Figure 2.
Finally, by increasing the number of radiators in the array, the third planar semi-elliptical slotted MPA array designed in this study was an 8x8 rectangular MPA array. The proposed 8 × 8 MPA array is desired to improve the performance of both the 2x2 and 4x4 MPA arrays. To design an 8x8 MPA array, 64 antenna elements are required. Therefore, the design parameters of the 2x2 and 4x4 MPA arrays above were used as it is, and the remaining parameters are shown in Figure 3 and the respective values are given in Table 1. This shows how the increment in the antenna array size affects the performance of the antenna arrays. The comprehensive analysis of MPA with 4, 16, and 64 array elements (planar) is given in the following sections.
Here, LMTL and WMTL are the length and width of the microstrip transmission line; LPD and WPD are the length and width of the 1:2 power divider; LPDA and WPDA are the length and width of the power divider arm; GPW2x2 and GPL2x2 are the ground plane width and length of the 2x2 array; LFP2x2 and WFP2x2 are the length and width of the 2x2 feed point; IS is the inter-element spacing; GPW4x4 and GPL4x4 are the ground plane width and length of the 4x4 array; LLMTL4x4 and WLMTL4x4 are the length and width of the last microstrip transmission line; and LFP4x4 and WFP4x4 are the length and width of the 4x4 feed point.
Starting with the calculated dimensions, the parameters of these antenna structures were optimized. The optimization procedure in this work is based on manual tuning of the antenna parameters in both directions around the calculated values (lower and higher), and the parameter sets giving the best performance were chosen. Parameter optimization can be done either with metaheuristic optimization algorithms, such as genetic algorithms (GA) and particle swarm optimization (PSO) (Rini et al., 2011), or by manual tuning. The manual antenna parameter tuning process is slower than metaheuristic-based optimization algorithms.
Simulation results and discussions
In this section, simulation-based performance analyses of 2x2, 4x4, and 8x8 planar semi-elliptical slotted MPA arrays are presented. The frequency-domain analysis of CST-MW Studio was utilized to analyze the performance of three antenna arrays in terms of VSWR, return loss, bandwidth, radiation efficiency, gain, and directivity of the radiation pattern. It is worth noting that all discussions in the following sections use the optimized dimensions of the physical structures listed in Table 1.
The input matching network, VSWR, and bandwidth
The scattering parameters (S-parameters) of a two-port network model help to estimate the matching quality and bandwidth of the antenna. From the S-parameter analysis, at the center frequency of 28 GHz, the return loss (S11) of the proposed 2x2 semi-elliptical slotted MPA array is about −43.321 dB, as shown in Figure 4, and the magnitude of its VSWR is 1.014 (see Figure 5). In addition, the −10 dB bandwidth of this antenna is 1.33 GHz. Similarly, the return loss of the studied 4x4 semi-elliptical slotted MPA array at 28 GHz is −40.665 dB and its VSWR is 1.019 (see Figure 5), which is very close to the theoretical value. The −10 dB bandwidth of this antenna array is about 1.461 GHz, which is 131 MHz higher than the bandwidth of the 2x2 MPA array.
The designed rectangular 8x8 MPA array has a return loss of −22.678 dB and a VSWR of 1.159, as shown in Figure 4 and Figure 5, respectively. The −10 dB working bandwidth of this array antenna is 1.561 GHz, which is higher than the bandwidths of the semi-elliptical slotted 2x2 and 4x4 MPA arrays by 231 MHz and 100 MHz, respectively. From the simulation results, as expected, increasing the size of the antenna array increases the bandwidth, the return loss, and the VSWR of the antenna array. The bandwidth improvement achieved with the 8x8 antenna array outweighs the performance deterioration in return loss and VSWR, because these quantities remain within acceptable limits for many applications; that is, the return loss is less than −10 dB and the VSWR is less than 2 within the bandwidth of all the antenna arrays.
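The reported VSWR values follow directly from the simulated return losses through the standard reflection-coefficient relations; the short sketch below applies them to the quoted S11 figures and recovers the stated VSWR values to within rounding (illustrative check only).

```python
# Minimal sketch of the standard S11 -> |Gamma| -> VSWR conversion for the quoted
# return-loss values; it reproduces the reported VSWR of 1.014, 1.019, and 1.159.
s11_db = {"2x2": -43.321, "4x4": -40.665, "8x8": -22.678}

for array, rl in s11_db.items():
    gamma = 10 ** (rl / 20)              # reflection-coefficient magnitude
    vswr = (1 + gamma) / (1 - gamma)     # voltage standing-wave ratio
    print(f"{array}: |Gamma| = {gamma:.4f}, VSWR = {vswr:.3f}")
```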
The directivity, gain, and radiation efficiency
From the radiation pattern plots of the proposed antennas, the directivity, gain, and efficiency of the three antenna arrays were determined. The directivity, gain, and total radiation efficiency of the 2x2 rectangular MPA array, as depicted in Figure 6, are 12.81 dBi, 13.24 dBi, and −0.1310 dB (97.03 %), respectively. From Figure 7, the gain, directivity, and overall efficiency of the proposed 4x4 MPA array are 16.54 dBi, 16.38 dBi, and −0.877 dB (81.72 %), respectively. This implies that increasing the array size from 2x2 to 4x4 improves the directivity and gain of the MPA array by 3.57 dBi and 3.3 dBi, respectively. The directivity and gain of the proposed antenna increased as the number of array elements was increased to 16 and the ground plane dimensions were properly tuned.
The gain, directivity, and total efficiency of the proposed 8x8 semi-elliptical slotted MPA array are 21.45 dBi, 19.60 dBi, and −1.616 dB (69 %), respectively, as shown in Figure 8. This antenna array improves the directivity and gain of the 2x2 MPA array by 6.79 dBi and 8.21 dBi, respectively. In the same way, the performance improvement observed with the proposed 8x8 MPA array in terms of directivity and gain is 3.22 dBi and 4.91 dBi, respectively, in comparison with the 4x4 MPA array. As the proposed MPA array size increases, the gain and directivity increase, while the radiation efficiency decreases because of the increased mutual coupling between the array elements; mutual coupling profoundly affects the antenna's input impedance, reflection coefficients, and gain. In general, the proposed antenna arrays meet the requirements of 5G systems operating in the LMDS band. All three antenna arrays have more than 850 MHz bandwidth, and even under a stringent matching condition (VSWR < 1.25) the proposed antenna arrays achieve wide bandwidth.
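The percentage efficiencies quoted in parentheses are simply the dB values converted as power ratios; the tiny sketch below shows this conversion for the three arrays (an illustrative check, not a simulation result).

```python
# Minimal sketch of the dB -> percent conversion for the quoted total efficiencies;
# it reproduces roughly 97 %, 82 %, and 69 %.
for array, eff_db in {"2x2": -0.1310, "4x4": -0.877, "8x8": -1.616}.items():
    eff_percent = 100 * 10 ** (eff_db / 10)
    print(f"{array}: {eff_db} dB -> {eff_percent:.1f} %")
```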
The side lobe level and cross-polarization of the 3D radiation pattern
In transmitting antennas, excessive side lobe radiation wastes energy and reduces information confidentiality. In receiving antennas, the side lobes may pick up interfering signals, which increases the noise level in the receiver. The magnitudes of the side lobe level (SLL) of the planar 2x2, 4x4, and 8x8 semi-elliptical slotted MPA arrays are indicated in Figure 9. From the figure, it can be observed that the SLL of the 2x2 MPA array is −10.6 dB and its 3 dB beamwidth spans 37.3 degrees, whereas the SLL of the 4x4 rectangular MPA array is −9.1 dB with a 3 dB beamwidth of 25.2 degrees. The SLL of the proposed 8x8 planar rectangular MPA is −7.9 dB, and its 3 dB beamwidth is 12.3 degrees.
Even though accurate design parameters were used during the design process, the effect of mutual coupling increases with the number of array elements, because the small mutual-coupling contributions from each of the quadrant planes add together and raise the overall sidelobe level of the array. Thus, as the antenna size increases, side lobes move from the evanescent space into the visible space. In addition, the inter-element spacing between array elements has a profound effect on the magnitude of the SLL.
The cross-polarization far-field component, which is orthogonal to the co-polarized component and the main lobe direction, of the proposed 2x2, 4x4, and 8x8 semi-elliptical slotted MPA arrays is shown in Figures 10a, b, and c, respectively. As can be seen from the graph, the cross-polarization gain of the 2x2 MPA array is −0.7271 dBi (see Figure 10a). Likewise, the cross-polarization magnitudes of the 4x4 and 8x8 MPA arrays are 6.634 dBi and 4.969 dBi, respectively (see Figures 10b and c). As the far-field plot shows, the cross-polarized component is small compared with the desired polarization.
Lastly, the final simulation results for the designed planar MPA arrays, i.e., the magnitude of the return loss, bandwidth, gain, directivity, VSWR, and total radiation efficiency of all the designed structures, are tabulated in Table 2.
As can be seen from the table, because of the defect structure introduced on both the radiating elements and the ground plane, the attained performance of the designed antenna arrays is efficiently improved for dedicated 5G wireless communication systems. The 8x8 planar MPA array outperforms both the 2x2 and 4x4 arrays in terms of bandwidth, gain, and directivity. Even though measurable performance improvement techniques were used, there is degradation in return loss, radiation efficiency, and sidelobe level compared with the 2x2 and 4x4 arrays. This is because increasing the number of antenna elements, together with the inter-element spacing, adds more mutual coupling from each element, which raises the input impedance of the patch; as a result, the return loss and side lobe level increase and the overall efficiency of the antenna drops compared with an array containing fewer elements. Nevertheless, compared with existing related works and the requirements of 5G, the results achieved in this work are prominent and suitable for the era of 5G communication systems.
Performance comparison
At the resonance frequency (28 GHz), the performance comparison between the proposed 2x2, 4x4, and 8x8 semi-elliptical slotted rectangular MPA arrays and existing designs in the scientific literature is shown in Table 3. The bandwidth of the proposed planar 2x2 MPA array outperforms the designs reported in (Johari et al., 2018), (Kavitha et al., 2020), and (Tegegn & Anlay, 2020), exceeding them by 380 MHz, 930 MHz, and 1.04 GHz, respectively. Similarly, in terms of radiation efficiency, the proposed design surpasses the structures proposed in (Johari et al., 2018) and (Mohamed et al., 2020) by 14.18 % and 7.24 %, respectively. In terms of gain, the proposed design outperforms array designs of similar size reported in (Johari et al., 2018) and (Tegegn & Anlay, 2020) by 4.847 dBi and 2.53 dBi, respectively. Furthermore, the proposed 4x4 rectangular MPA array achieves a bandwidth wider than the design reported in (Tegegn & Anlay, 2020) by 1.129 GHz, and its gain exceeds that design by 1.37 dBi; it also has greater gain than the MPA arrays reported in (Tegegn & Anlay, 2020) and (Keum & Choi, 2018). Moreover, the proposed 8x8 MPA array improves the bandwidth of the design demonstrated in (Tegegn & Anlay, 2020) by 1.193 GHz and the gain of its radiation pattern by 3.12 dBi; however, its total efficiency is lower than that design. Finally, the 8x8 MPA array design proposed in this work achieves higher return loss, bandwidth, gain, and VSWR performance than the simulation results reported in (Tegegn & Anlay, 2020).
In general, the designs proposed in this study offer very competitive and improved performance compared to existing designs. These improvements were achieved because of the defect structure introduced on each of the radiating elements as well as on the ground plane, which strongly influences the current distributions of the structure and increases the radiation efficiency of the proposed designs. In addition, an appropriate impedance matching technique is employed in the feed network between the microstrip transmission line and the feed point. The arm of the 1:2 power splitter and the edge of the patch have also been carefully matched using the inset feed, the quarter-wave impedance transformer, and tuning of the physical dimensions of the structures.
This minimizes power losses due to the impedance mismatch, which in turn increases the radiation efficiency of the antenna and thus its bandwidth. The proposed antenna arrays are designed to operate from the frequency of 27.5 GHz to 28.35 GHz of local multipoint distribution service band, which has an 850 MHz bandwidth requirement (Przesmycki et al., 2021).
Conclusion
In this work, the performances of planar 2x2, 4x4, and 8x8 MPA arrays with a semi-elliptical slotted rectangular patch and ground plane have been analyzed. The performances of the antennas were evaluated using bandwidth, gain, radiation efficiency, return loss, and VSWR. The bandwidth, return loss, gain, and total efficiency of the single-element semi-elliptical slotted MPA presented in our earlier work were 1.132 GHz (4.043 %), −37.784 dB, 7.128 dBi, and −0.05513 dB (98.74 %), respectively. As a continuation of that work, in this paper, the bandwidth, gain, and total efficiency of the examined planar 2x2 and 4x4 rectangular MPA arrays are 1.33 GHz (4.75 %), 13.24 dBi, and −0.1310 dB (97.03 %), and 1.461 GHz (5.218 %), 16.54 dBi, and −0.877 dB (81.72 %), respectively. Finally, the planar 8x8 MPA array attained a gain of 21.45 dBi, a bandwidth of 1.561 GHz (5.75 %), and a total efficiency of −1.616 dB (69 %).
Compared with existing related works, the obtained simulation results reveal that the proposed rectangular MPA arrays have superior performance (see Table 3). This is because the etched semi-elliptical structure introduced on both the radiating element and the ground plane strongly influences the current distribution of both structures. In addition, a good impedance matching network has been designed for the feed network structures, so the power losses that occur due to impedance mismatch are dramatically minimized. The key physical dimensions of the antenna have been tuned to the values giving the best performance. Consequently, the performance of the antenna is profoundly improved in terms of bandwidth, gain, and radiation efficiency. All of the designed MPA arrays provide high performance with a very compact size, which makes them suitable for 5G communication systems operating in the LMDS band. | 5,581.8 | 2022-05-10T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
THEORETICAL DESCRIPTION OF EVEN-EVEN PLATINUM Pt-186 NUCLEUS USING IBM AND (VMI) MODELS
The aim of this study is to investigate, in a phenomenological way, the backbending effect in the platinum Pt-186 nucleus, in order to obtain a good description of the bends by using new parameters. The VMI model and the interacting boson model IBM-1 have been used to perform this research for a heavy-mass nucleus (Z = 78). Energy ratios and the arrangement of the bands show that platinum Pt-186 has O(6)-SU(3) dynamical symmetry. Our calculations give results that are reasonably consistent with the most recent experimental data, especially the results calculated with the VMI model. The variable moment of inertia model has been applied to successfully describe the backbending effect in the deformed even-even Pt-186 nucleus. Backbending was observed in the ground and β bands, due to the change of the moment of inertia, but not in the (γ1, γ2) bands, because their moment of inertia does not change.
INTRODUCTION
Nuclei consist of two kinds of particles, protons and neutrons, collectively called nucleons, which are distributed over discrete energy levels subject to the restrictions of the Pauli exclusion principle. All nuclei have ground and excited states, and nucleons in excited states can be removed from, or added to, nuclei. Knowledge of nuclear structure is gained by studying these phenomena [1]. The IBM-1, used to describe nuclear collective motion, was first suggested by Iachello and Arima in order to study the collective states in even-even positive-parity nuclei; this model does not distinguish between neutron bosons and proton bosons [2,3]. The present research aims to calculate energy levels and gamma transitions and to study the backbending phenomenon using the IBM-1 and VMI models.
Backbending has been observed experimentally in the ground-state band [4,5] or in the rotational bands of some deformed nuclei. The effect occurs because the moment of inertia (ℐ) rapidly increases with the rotational frequency (ω) toward the rigid-body value [6]. When the rotational energy ħω is greater than the energy needed to separate a pair of protons or neutrons, the separated proton or neutron moves to another orbit, which results in a change of the moment of inertia [7]. An explanation of this effect is attributed to a disappearance of the pairing through the crossing of two rotational bands and the Coriolis force effect [8,9]. The Coriolis effect increases with rotational frequency at high angular momentum for some bands and leads to the depairing of nucleon pairs; the first depaired pair is called a 'two-quasiparticle' state. The depaired quasiparticles may couple with the collective rotation to produce a new band, and this effect leads to the backbending phenomenon [10]. Many researchers have studied the backbending phenomenon using different methods, including Regan (2003) [10], who used the E-GOS method of plotting the transition energy Eγ over spin for two successive levels against the spin (J). Some theoretical researchers have recently focused on the nuclear properties of platinum isotopes. N. Ashok and A. Joseph (2019) [11] studied the ground-state properties of Pt isotopes with Skyrme-Hartree-Fock-Bogoliubov (HFB) theory, using harmonic-oscillator (H.O.) and transformed harmonic-oscillator (T.H.O.) bases to calculate the two-neutron separation energies and the r.m.s. radii of protons and neutrons; the results obtained are in good agreement with the experimental data.
M. Khalil et al. (2019) [12] studied the properties of platinum isotopes using the particle-rotor, VMI and IBM models to calculate single-particle energy spectra and investigated the backbending phenomenon. S.H. Al-Fahdawi and A.K. Aobaid (2021) [13] used the first version of the interacting boson model and the generalized moment of inertia model to study some nuclear properties of deformed heavy nuclei, obtained acceptable results compared with the experimental values, and concluded that the two models are successful for the study of heavy nuclei. E.A. Al-Kubaisi and A.K. Aobaid (2021) [14] also used the first version of the interacting boson model and the vibrator moment of inertia (VAVM) model to calculate the energy levels and the quadrupole moment of an even-even A = 162 nucleus, and showed that the results of the (VAVM) model are better than those calculated by (IBM-1).
THEORETICAL ASPECT 2.1. IBM-1 Basis
The interacting boson model-1 is an important model used to study the structure of low-lying collective states in deformed even-even nuclei. The nucleus is treated as a system of interacting (s-d) bosons, described in terms of a monopole boson s with ℓ = 0 and a quadrupole boson d with ℓ = 2 [15]. The general Hamiltonian operator can be written as a sum of single-boson energies and boson-boson interactions (Eq. 1) [16], where ε is the boson energy and V_ij is the potential energy between bosons i and j. The explicit form of the Hamiltonian in Eq. 1, as assumed by Iachello and Arima, is written in terms of creation and annihilation operators (s†, d†) and (s, d), respectively, with coefficients describing the interactions of the bosons with each other (Eq. 2) [16,17]; ε = ε_d − ε_s represents the boson energy, and since the energy of the s boson (ε_s) is taken to be zero, ε = ε_d. The Hamiltonian of Eq. (2) can also be written as a multipole expansion in terms of the various boson-boson interactions (Eq. 3) [18], where the parameters (a0, a1, a2, a3, a4) represent the strengths of the pairing, angular momentum, quadrupole, octupole and hexadecapole interactions between bosons, respectively.
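For reference, a standard form of this multipole expansion, as it commonly appears in the IBM-1 literature (the notation here is the conventional one and may differ cosmetically from the paper's own Eq. 3), is

$$
\hat{H} = \varepsilon\,\hat{n}_d
+ a_0\,(\hat{P}^{\dagger}\!\cdot\hat{P})
+ a_1\,(\hat{L}\cdot\hat{L})
+ a_2\,(\hat{Q}\cdot\hat{Q})
+ a_3\,(\hat{T}_3\cdot\hat{T}_3)
+ a_4\,(\hat{T}_4\cdot\hat{T}_4),
$$

where $\hat{n}_d$ is the d-boson number operator and $\hat{P}$, $\hat{L}$, $\hat{Q}$, $\hat{T}_3$, $\hat{T}_4$ are the pairing, angular momentum, quadrupole, octupole and hexadecapole operators, respectively.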
VMI Model Basis
The VMI model was first proposed by M. Mariscotti et al. in 1969 [19] to calculate the energies of the states of any band (Eq. 4). The moment of inertia ℐ_J can be determined from the equilibrium condition (Eq. 5) [19,20], which determines ℐ_J (in units of ħ) as a function of J. The parameter C is the stiffness (hardness) coefficient and ℐ_0 is the moment of inertia of the ground state (with ℐ_0 > 0).
Combining equations (4) and (5) yields a cubic equation for ℐ_J (Eq. 6), which contains exactly one real root for any value of J when ℐ_0 and C are finite and positive.
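For concreteness, the standard VMI relations corresponding to Eqs. (4)-(6), written in the notation usually used for this model (reproduced here in the conventional form of Mariscotti et al., so the symbols may differ slightly from the paper's own equations), are

$$
E(J) = \frac{\hbar^{2}J(J+1)}{2\,\mathcal{I}_J} + \frac{1}{2}\,C\,(\mathcal{I}_J - \mathcal{I}_0)^{2},
\qquad
\frac{\partial E(J)}{\partial \mathcal{I}_J} = 0
\;\Rightarrow\;
\mathcal{I}_J^{3} - \mathcal{I}_0\,\mathcal{I}_J^{2} - \frac{\hbar^{2}J(J+1)}{2C} = 0 .
$$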
The least-squares fitting (l.s.f.) procedure has been applied to all measured values for each state.
The energy of the J-th level according to the rotational model is given by Eq. 7 [21], and the transition energy between the levels J → J−2 is given by Eq. 8 [22,23]. In order to study the backbending phenomenon, the moment of inertia (2ℐ/ħ²) must be calculated from Eq. 8, together with the square of the rotational energy (ħω)²; Eq. 8 can also be written in a harmonic-oscillator form, and the rotational energy squared (ħω)² is obtained as in [22,23]. The nuclear softness parameter, which measures the initial variation of the moment of inertia with respect to angular momentum, can be calculated from equation (6) [19,24].
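The rotational-model relations referred to above are, in their usual textbook form (standard expressions underlying the backbending plots; the softness formula is the one given by Mariscotti et al., and ħω ≈ Eγ/2 is only the common high-spin approximation),

$$
E(J)=\frac{\hbar^{2}}{2\mathcal{I}}\,J(J+1),\qquad
E_{\gamma}(J\!\to\!J\!-\!2)=\frac{\hbar^{2}}{2\mathcal{I}}\,(4J-2),\qquad
\frac{2\mathcal{I}}{\hbar^{2}}=\frac{4J-2}{E_{\gamma}(J\!\to\!J\!-\!2)},
$$
$$
(\hbar\omega)^{2}\approx\left[\frac{E_{\gamma}(J\!\to\!J\!-\!2)}{2}\right]^{2},
\qquad
\sigma=\frac{1}{\mathcal{I}_0}\left.\frac{d\mathcal{I}_J}{dJ}\right|_{J=0}=\frac{\hbar^{2}}{2\,C\,\mathcal{I}_0^{3}} .
$$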
HAMILTONIAN INTERACTION PARAMETERS
The IBM computer program "PHINT" [25] was used to diagonalize the Hamiltonian. The equivalent program for the PHINT code is (IBM1.For), and its input file is called "Bos.inp". All parameters can be changed independently to fit the experimental energy spectrum of the nucleus, and from these calculations we determine the nuclear structure of the Pt-186 spectrum through the values of the Hamiltonian interaction parameters. The coefficients that give reasonable agreement with the experimental data are shown in Table 1. The chosen parameters depend on the numbers of proton and neutron bosons, which are counted from the nearest closed shells, and the total boson number is N = N_π + N_ν. The even-even Pt-186 nucleus has atomic number Z = 78, so there are 4 proton holes (2 proton bosons) relative to the Z = 82 shell closure, and neutron number 108, so there are 18 neutron holes relative to the N = 126 shell closure, i.e. 9 neutron bosons. The total number of bosons is N = 11.
The results of the VMI model were calculated using the VMI.For program with the input file "Par.input", which depends on the parameters (ℐ_0/ħ², C, E_k), where ℐ_0/ħ² is the moment of inertia of the ground state, C is a constant parameter fitted to the experimental data, and E_k is the band-head energy. The output files, "Enr.out" and "Enr1.out", contain the following quantities:
1 - the theoretical energies E_cal;
2 - the rotational energy squared (ħω)² and the moment of inertia (2ℐ/ħ²);
3 - the nuclear softness σ from equation (12);
4 - the deviation (∆) [26,27], which measures the difference between the calculated energy states E_cal and the experimental values E_exp, where k is the number of levels;
5 - the chi-squared (χ²) from the corresponding equation [19].
All VMI calculations were chosen to give the smallest (χ²), as in Table 1. The ratios of the excitation energies of the 4+, 6+ and 8+ levels to the energy of the first excited 2+ level of the Pt-186 nucleus were calculated using IBM-1 and VMI and compared with the corresponding values for the three limits SU(5), SU(3) and O(6), as in Table 2. These energy ratios show that platinum-186 has gamma-unstable O(6) dynamical symmetry, whereas the arrangement of the bands according to their order of appearance (g, β, γ1, γ2) shows that the nucleus under study belongs to the rotational SU(3) limit. Thus the ratios in Table 2 place the platinum nucleus in the gamma-unstable O(6) limit, while the ordering of the (g, β, γ) bands indicates the rotational SU(3) limit, because the (0+) level appears before the (2+) level, which means that a beta band has appeared; therefore the nucleus under study has O(6)-SU(3) dynamical symmetry.
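For orientation, the textbook benchmark ratios for the three dynamical-symmetry limits against which the Pt-186 values in Table 2 are compared are the standard ones (quoted from the general IBM literature, not from the paper's Table 2 itself):

$$
\frac{E(4^{+}_{1})}{E(2^{+}_{1})},\;
\frac{E(6^{+}_{1})}{E(2^{+}_{1})},\;
\frac{E(8^{+}_{1})}{E(2^{+}_{1})}
=
\begin{cases}
2.00,\;3.00,\;4.00 & \text{SU(5) (vibrational)}\\[2pt]
2.50,\;4.50,\;7.00 & \text{O(6) (}\gamma\text{-unstable)}\\[2pt]
3.33,\;7.00,\;12.00 & \text{SU(3) (rotational)}
\end{cases}
$$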
The energy spectra of platinum Pt-186 for the (g, β, γ1, γ2) bands, comparing the IBM-1 and VMI calculations with the experimental data, are plotted in Figure 1.
The experimental and calculated energies of the ground and β bands are plotted in Figure 2. The comparison of the IBM-1 and VMI model calculations (energies, spins and parities) with the experimental data shows good agreement. In the γ1 band, the agreement is acceptable for the low-lying states but deviates from the experimental energies at high spin, because the IBM-1 calculations were performed with no distinction made between neutron and proton bosons.
In the γ2 band, the VMI calculations agree with the experimental data, while the IBM-1 calculations agree less well, because the interacting boson model does not distinguish between neutron and proton bosons; experimental values are not available for some energy states of this band. The moment of inertia (2ℐ/ħ²) and the rotational energy squared (ħω)² were calculated using equations (10) and (12), respectively, and (2ℐ/ħ²) was plotted against (ħω)² for the ground and beta bands, in which backbending appears, as shown in Figures 3 and 4. The backbending of these bands occurs due to the change in the moment of inertia, and the β band lies in the SU(3) limit; no backbending was observed in the (γ1, γ2) bands, because their moment of inertia does not change and these bands belong to the γ-unstable limit.
The ground-state band plotted in Figure 3 shows a backbending between the 12+ and 18+ levels, due to the deformation of these levels. The backbending occurs because the moment of inertia increases rapidly at relatively high spin beyond the value expected from the rotational model of some nuclei, which lowers the expected energy of these states; this produces a backbend in the moment-of-inertia curve as a result of the decoupling of one or two pairs of nucleons and their realignment, which reduces the expected energy value and causes the backbending.
CONCLUSIONS
In the present work, the IBM-1 and VMI models have been applied successfully to the description of the deformed even-even Pt-186 nucleus, and the following conclusions were obtained:
1. The results for the state bands show reasonable agreement with experiment, with small differences for the high-spin states, because the interacting boson model does not distinguish between proton and neutron bosons.
2. The IBM-1 calculations of the energy states are in good agreement with the experimental values for the g band, in reasonable agreement for the beta band, and deviate at high spin in the (γ1, γ2) bands; some of the energy states calculated in the present work have not been measured experimentally, especially in the γ2 band.
3. The VMI model successfully reproduces the energy bands at both low and high spin, and its predictions give a good description of the occurrence of backbending in the ground and beta bands at small rotational frequency (ω) of the nucleons; thus the behavior of nucleon pairs at high angular momentum appears to be crucial for this effect, while the lack of backbending in the gamma bands may be attributed to the presence of octupole or hexadecapole deformation in these bands.
Figure 1. The energy spectra of the Pt-186 nucleus: comparison of the IBM-1 and VMI calculations with the available experimental data [28,29].
Figure 4. Moment of inertia (2ℐ/ħ²) as a function of the rotational energy squared (ħω)² for the β band (experimental).
Table 1. Best fitted interaction parameters for the energies of the IBM-1 and VMI models.
Table 3. Comparison of experimental and calculated results for the IBM-1 and VMI models | 3,250.8 | 2023-06-02T00:00:00.000 | [
"Physics"
] |
Quest markup for developing FAIR questionnaire modules for epidemiologic studies
Background Online questionnaires are commonly used to collect information from participants in epidemiological studies. This requires building questionnaires using machine-readable formats that can be delivered to study participants using web-based technologies such as progressive web applications. However, the paucity of open-source markup standards with support for complex logic makes collaborative development of web-based questionnaire modules difficult. This often prevents interoperability and reusability of questionnaire modules across epidemiological studies. Results We developed an open-source markup language for presentation of questionnaire content and logic, Quest, within a real-time renderer that enables the user to test logic (e.g., skip patterns) and view the structure of data collection. We provide the Quest markup language, an in-browser markup rendering tool, a questionnaire development tool and an example web application that embeds the renderer, developed for The Connect for Cancer Prevention Study. Conclusion A markup language can specify both the content and logic of a questionnaire as plain text. Questionnaire markup, such as Quest, can become a standard format for storing questionnaires or sharing questionnaires across the web. Quest is a step towards generation of FAIR data in epidemiological studies by facilitating reusability of questionnaires and data interoperability using open-source tools. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-023-02338-6.
Background
Questionnaires are key instruments for collecting information from participants in epidemiological studies. Moving from paper to web-based questionnaires can improve data quality and decrease the time and cost of questionnaire delivery and completion. [1] The development of web-based questionnaires can be facilitated by web applications that support complex logic and user-interface formatting, ideally following FAIR (Findable, Accessible, Interoperable, and Reusable) principles [2] so that the questionnaires are reusable and the data interoperable. Applications such as Survey Monkey or Google Forms allow researchers to develop questionnaires through proprietary user interfaces. However, these tools do not support complex logic. To support web-based administration, epidemiologists often use word processing software to develop annotated questionnaire versions that contain the static question text, dynamic text piped from responses, and logic that directs participants along a personalized path through the module. In addition, annotated documents may contain user-interface elements (e.g., introductory text, pop-up definitions, formatting) desired in the final product presented to the study participant. These documents are then used by programmers to develop web-based questionnaires, often using proprietary software that can handle the complex logic. Proprietary questionnaire platforms, such as Qualtrics, can handle complex questionnaires, but the entire software ecosystem is managed internally, which represents an impediment to the availability of questionnaire responses as data commons. [3] Clients program questions through a graphical interface, occasionally adding custom code. While such platforms may share a question library with other paying users, they lack open-source representations that allow widespread reusability of questionnaires.
Ideally, the annotated questionnaire document would be in a human-readable, platform-independent, machine-readable, plain-text format with questions and logic that an application can render for the participant. Markup languages meet these requirements and can resemble the annotated documents produced by epidemiologists. Originally used to simplify development of HTML, markup has been used in many different contexts including writing books, software documentation, and within messaging applications. [4,5] For questionnaires, the complex logic between questions and the interplay between questions and responses are easier to define using a markup language than using annotated word processing or spreadsheet documents. In addition, markup languages promote equitable research by providing free, open-source tools that enable reuse of questionnaires by scientists who may not be able to afford commercial tools, particularly those in low- and middle-income countries.
We developed the Quest open-source questionnaire markup and supporting applications in the public domain with the aim of removing barriers that prevent adoption of FAIR principles in epidemiology. [6] The major advancements that Quest brings are: a markup format that allows reuse of questionnaire modules across studies; a default markup renderer that integrates into a study's web application, allowing studies to choose a backend system instead of being forced into vendor-specific backends; and a standard markup usable by commercial and open systems, facilitating interoperability.
Basic quest markup
In this section, we describe the basics of the markup; more complex markup is available, and a description can be found online in the Quest wiki. [7] Throughout this section, italic text will be used to distinguish markup elements and their orchestration from the text describing them.
The basic markup structure for a questionnaire module is a series of questions.Questions are defined in the markup with a question id surrounded by square brackets followed by the question text and a set of responses.The first letter of a question id must be a capital letter (A-Z); the rest of the id can be capital letters, numbers, underscores, or octothorps (hash tags).
Question syntax
As is the case for markup languages in general, the questions themselves are composed of plain text. The elements of each question are represented with simplified syntactic patterns mapped to another markup language, HTML. In essence, this follows the same rationale associated with other markup languages [4,5]. Correspondingly, each question block consists of the question text and responses that can use a range of HTML form elements to handle different response types. The most common cases are selecting one of the responses or selecting multiple responses, which map onto an HTML input of type = radio or type = checkbox, respectively. The markup uses parentheses surrounding a value to represent a radio button and square brackets surrounding the return value for checkboxes. For the markup to remain consistent, return values cannot be valid question ids. A wide range of text and numeric input formats are supported, specified using a vertical bar followed by two underscores and another vertical bar, |__|, for basic text values, while numeric input is specified by |__|__|. Other HTML element types can be specified using |date|, |tel|, and |SSN|. Quest's GitHub wiki contains detailed information on additional response types, and a summary of current Quest markup is provided [see Supplementary Table 1, Additional file 1]. Advanced developers can also use responsive grids that display multiple questions with the same responses. An in-browser application is provided at https://episphere.github.io/quest where the questionnaire markup can be tested interactively during development.
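To make the response-type syntax concrete, the fragment below is a small illustrative example assembled only from the rules described in this section (question ids in square brackets, radio values in parentheses, checkbox values in square brackets, and |__| / |__|__| inputs). It is an approximation for orientation, not an official sample; the exact layout conventions should be checked against the Quest wiki and the example module linked later in this section.

```
[SMOKE1] Have you ever smoked cigarettes?
(1) Yes
(0) No

[SYMPTOMS] Which of the following have you had in the past week? (Select all that apply.)
[13] runny nose
[15] fever

[AGE_YEARS] How old are you, in years?
|__|__|
```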
Questionnaire logic
The markup logic includes simple and more complex syntax to allow for skip logic. The simple logic is the arrow markup: response -> question id. The arrow indicates that if this response is selected, go to the question with the given id. This simple logic allows the developer to skip parts of the module that are not applicable to the participant. The no-response markup, < #NR -> question id>, can be used for cases when the user does not select one of the responses.
The arrow markup adds a question to a stack (a last in, first out list) of questions that is assembled for the participant. When the stack is empty, the next question is assumed to immediately follow the current question. However, questionnaire modules may include follow-up questions for situations when the participant can choose multiple responses. In this situation, each response may contain arrows pointing to the follow-up questions, and a default next question, in which < -> question id > points to the next question after all the follow-up questions are answered by the participant. All selected responses with arrows are added to the question list along with the default next question. The default next question is always added to the stack. Care should be taken with follow-up questions: all additional follow-up questions must specifically be added to the stack with an arrow or the default arrow, otherwise they will be ignored, as the next question will come from the stack regardless of where it appears in the markup.
Finally, for questions that cannot be skipped by any other means, a displayif mechanism can skip a question based on previous results. This basic syntax covers the most straightforward situations, but there are situations where complex logic requires a more functional representation than conditional event algebra. That expert-level logic grammar allows, for example, the definition of loops within the module. This is explained in detail in Quest's wiki [7]. Figure 1 is a simple example of a questionnaire module using Quest markup. This example illustrates both the markup and its rendering by the reference application; the markup is available online as a text file at https://danielruss.github.io/questionnaire/paper_example1.txt (see also the illustrative skip-logic fragment below).
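As with the earlier fragment, the following sketch only illustrates the arrow, default-arrow and no-response markup as described above; the identifiers and wording are invented for this example, and the authoritative syntax is given by the Quest wiki and the linked paper_example1.txt module.

```
[FEVER1] Have you had a fever in the past week?
(1) Yes -> TEMP1
(0) No -> WRAPUP
< #NR -> WRAPUP>

[TEMP1] What was your highest temperature, in degrees Fahrenheit?
|__|__|
< -> WRAPUP >

[WRAPUP] Thank you. Please select Submit to finish this module.
```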
Supporting software
We developed a JavaScript library as an open-source reference implementation that renders the Quest markup into HTML for display in a browser. Working inside the browser provides access to all major computer operating systems, tablets, and smart phones. The library follows the module logic found in the markup, displaying the appropriate question based on current and past responses. The Quest library can be inserted into a standard HTML page or progressive web application using a content delivery network that caches code on GitHub (e.g. https://cdn.jsdelivr.net/gh/episphere/quest/replace2.js). Our implementation caches the entire module in the browser DOM, and participant responses are saved in the browser's indexedDB, an asynchronous NoSQL persistent storage native to the modern browser [8]. Participants with spotty or intermittent internet connectivity can therefore complete modules even if they lose the internet connection. Finally, results are transmitted back to the study via a callback function executed upon completion of the module. To support different studies with different preferences for data backend, Quest itself does not specify how or where studies store results. That is instead defined by the JavaScript callback function, which receives as its input argument a JSON object populated by the responses to the questionnaire.
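Because the backend is left to the study, how the response JSON is stored or reshaped is application-specific. As a purely illustrative sketch (not part of Quest and not the Connect study's actual pipeline), the Python snippet below flattens a response object shaped like the one in the Fig. 1 legend into long-format rows ready for tabular storage or analysis; the participant and module identifiers are hypothetical.

```python
import json

# Response object shaped like the example in the Fig. 1 legend.
responses = json.loads('{"QUESTION_1": 2, "QUESTION_2": [13, 15], "DECON1": 0, "TEMP": 98}')

def flatten(participant_id, module_id, data):
    """Convert one participant's module responses into long-format rows."""
    rows = []
    for question_id, value in data.items():
        # Checkbox questions return a list of selected values; normalize to a list.
        values = value if isinstance(value, list) else [value]
        for v in values:
            rows.append({"participant": participant_id, "module": module_id,
                         "question": question_id, "response": v})
    return rows

for row in flatten("P0001", "example_module", responses):
    print(row)
```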
We also provide an application for developing and presenting the Quest markup, which provides the developer the same view of the questions as the participant.
As mentioned in the Methods section, the markup development tool is available at https://episphere.github.io/quest.
Styling the appearance and user interaction is a major component of the Quest renderer, which we have approached by independently parameterizing a Cascading Style Sheet (CSS) document. Naturally, if no style is defined, for example when rendering the questionnaire inside a web application, the styling will be that of the application itself. This mimetic design implies that a cohort study using Quest will render questionnaires with the appearance of being native to the overall presentation configured for the web application.
Results
Our JavaScript reference implementation markup development tool provides live editing and rendering of the Quest markup. Arguments are passed via the URL search parameters. For example, the markup is passed using the url parameter, making the complete URL for the markup from Fig. 1 https://episphere.github.io/quest?url=https://danielruss.github.io/questionnaire/paper_example1.txt; a screenshot is shown in Fig. 2. Logic and styling are activated by appending the URL of the style sheet to the style search parameter. A full-screen participant view, useful for module and styling development, can also be triggered by adding the search parameter run (e.g., https://episphere.github.io/quest/?style=Style1.css&run&url=https://danielruss.github.io/questionnaire/paper_example1.txt). An example of embedding the JavaScript markup renderer into a web application can be found at https://github.com/danielruss/AppUsingQuest.
The first production use of the Quest markup and application was the Connect for Cancer Prevention study. [9] By calling the Quest JavaScript library, the study delivers multiple questionnaire modules into its participant progressive web application. (Fig. 2: A screenshot of the live real-time composer/renderer showing a developer view of the module markup in Fig. 1 along with the unstyled, rendered HTML.) Figure 3 is a screenshot of the Connect participant application displaying a question from a Connect questionnaire. Developers learned the Quest markup quickly by coding straightforward questions first, and then gradually learning more complex logic components such as those involving displayif, looping and grid logic.
Discussion
We have developed a modular questionnaire markup language that defines a declarative formalism for specifying both the question content and the module logic. Quest markup is rendered in real time into HTML. Epidemiologic studies can develop simple web-based applications, such as a PWA, that engage study participants and directly collect responses as a zero-footprint solution: no additional software needs to be installed to render a questionnaire formulated with the Quest markup language.
Currently, no open standard exists for the interchange of questionnaires. In addition to markup, many other commonly used formats could have been chosen. We limited our discussion to human-readable formats because we believe that the benefits of easily understanding the format outweigh the benefits of binary formats. With other formats, such as JSON or XML, the questionnaire programmer must carefully follow the recursive data format: a JSON brace in the wrong place or a misplaced XML end tag leads to bugs, and the questionnaire programmer would need to convert every question into the format. In contrast, Quest markup mimics the appearance of the annotated documents provided by questionnaire developers. Furthermore, Quest markup is also a good interoperable standard because the human-readable UTF-8 markup text lends itself to ready serialization into machine-friendly variable structures, such as JSON.
An additional major benefit of the combination of human-readable markup, privacy-preserving computation and browser-based development is our ability to address FAIR principles. Specifically, Quest was designed as a testing ground for questionnaire commons addressing all the FAIR principles. The markup design exercise was put to the real-world test of making it work for the NCI/DCEG Connect for Cancer Prevention Study, which uses GitHub Pages to disseminate questionnaires with versioning on the Web.
The guiding principles of FAIR are laid out in Box 2 of [2]. For findability, Quest accesses questionnaire modules via persistent URLs acting as the "globally unique and persistent identifier" [2] for the data. Metadata and indexing requirements for the modules can then be associated with these identifiers as linked data. Accessibility requires that the identifier be retrievable by "standard communication" using an "open, free, and universally implemented" protocol. [2] Quest then uses JavaScript to fetch the module needed to generate the corresponding HTML questionnaire rendering. (Fig. 3: A screenshot of the Connect for Cancer Prevention progressive web application providing a questionnaire module. Notice that previous information, in this case age, can be provided to the module and displayed within a question.) If needed, applications using Quest can require authentication and authorization, for example through external OAuth2 services, along the interoperability standards gaining adoption for HL7-FHIR patient-centric designs. [10] Interoperability is provided by the standard Quest markup itself, as the knowledge representation of questionnaires. Finally, reusability addresses attributes, licensing, provenance, and standards. Since the Quest renderer uses URLs as unique identifiers, updated versions of questionnaires receive new URLs documenting the historical lineage of the marked-up document.
In addition to the zero-footprint nature of web applications and the inherent privacy protection of operating client side, the reliance on web technologies to assemble in-browser applications brings with it an open-ended engineering platform. Specifically, additional client-side libraries can be integrated, as illustrated by the Connect for Cancer Prevention PWA calling the occupation coding service provided by SOCcer [11,12]. The same extensibility is at hand for styling the questionnaire, by calling customized Cascading Style Sheets (CSS files). Notably, wearable and IoT devices are entering the communication ecosystem that surrounds study participants. Accordingly, online questionnaires follow advances in web technologies, widening the range of participation models (e.g., voice, location, sensors, etc.).
Finally, Quest markup lays the foundation for other questionnaire rendering software, allowing other teams to create more innovative, performant, and feature-rich online questionnaire software with a minimum shared set of expected features for epidemiological studies.
Conclusion
Funding organizations, such as the NIH, increasingly expect grantees to make their data and software compliant with FAIR principles. [13] Accordingly, the Quest markup language was developed to facilitate the collaborative development and maximize the reusability of questionnaire modules across multiple studies. As illustrated by the reference in-browser markup renderer, no specialized questionnaire servers are required, either to describe the questionnaire elements or to drive the logic underlying their presentation to cohort study participants. In a nutshell, Quest aims to enable the emergence of Questionnaire Commons that make the most of the nimble, extensive, and transparent nature of Web computing. To that end, Quest is provided as open source and in the public domain, with no restrictions on use or modification.
Fig. 1. An example module of seven questions. The user is initially shown question QUESTION_1. If they select (1), they are immediately taken to question END; otherwise they are taken to QUESTION_2. In QUESTION_2, the user can select multiple answers. Assuming they select 13:runny nose and 15:fever, questions DECON1, TEMP, and END are added to a stack of questions to ask. In question DECON1, if the user selects (1) Yes then DECON2 is added immediately to the question stack; otherwise no questions are added, and the question stack is popped to select the question TEMP and then END. Notice that in the TEMP question a minimum of 90 and a maximum of 120 degrees are enforced. The JSON returned would look like: {QUESTION_1:2, QUESTION_2: [13,15], DECON1:0, TEMP:98} | 3,773 | 2023-10-25T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Mitochondrial DNA disease and developmental implications for reproductive strategies
Mitochondrial diseases are potentially severe, incurable diseases resulting from dysfunctional mitochondria. Several important mitochondrial diseases are caused by mutations in mitochondrial DNA (mtDNA), the genetic material contained within mitochondria, which is maternally inherited. Classical and modern therapeutic approaches exist to address the inheritance of mtDNA disease, but are potentially complicated by the fact that cellular mtDNA populations evolve according to poorly-understood dynamics during development and organismal lifetimes. We review these therapeutic approaches and models of mtDNA dynamics during development, and discuss the implications of recent results from these models for modern mtDNA therapies. We particularly highlight mtDNA segregation—differences in proliferative rates between different mtDNA haplotypes—as a potential and underexplored issue in such therapies. However, straightforward strategies exist to combat this and other potential therapeutic problems. In particular, we describe haplotype matching as an approach with the power to potentially ameliorate any expected issues from mtDNA incompatibility.
essential respiratory chain proteins are encoded by the nucleus, as well as many proteins required for mtDNA maintenance and replication. Therefore, mutations in either the mitochondrial or nuclear DNA of the cell may cause pathological loss of function in mitochondria and lead to mitochondrial disease (Taylor and Turnbull, 2005; Greaves et al., 2012). In this review, we will focus on diseases resulting from mtDNA mutation. MtDNA is maternally inherited, apparently because sperm contribute almost no cytoplasm to the zygote, and paternal mitochondria are ubiquitinated (Sutovsky et al., 1999, 2000) and targeted for destruction (Cummins et al., 1998; Shitara et al., 2000; Al Rawi et al., 2011; Sato and Sato, 2011; DeLuca and O'Farrell, 2012) as soon as fertilization has taken place, persisting only in abnormal embryos or interspecies crosses (Gyllensten et al., 1991; St John et al., 2000). MtDNA is possibly even eliminated prior to fertilization (Luo et al., 2013).
Diseases resulting from mutations in mtDNA have unique characteristics of onset, severity and inheritance, largely due to the fact that there are thousands of copies of mtDNA in a typical nucleated cell (Wallace, 1999). In most normal individuals these are effectively genetically identical (a situation termed 'homoplasmy'). In mtDNA disease there may be a number of different, mutated mtDNA molecules, giving rise to 'heteroplasmy' (more than one mtDNA type coexisting in the same cell).
MtDNA is maternally inherited, producing a characteristic disease distribution down the maternal line. MtDNA haplotypes can modulate the pathological effects of mutated nuclear encoded genes (Strauss et al., 2013), and the same mtDNA variation can be deleterious or beneficial depending on its mtDNA background and environment (Ji et al., 2012). Many mtDNA diseases are heteroplasmic, that is, both mutated and wild-type mtDNA co-exist in affected individuals. In most of these cases a dosage effect is observed (Jeppesen et al., 2006), with the proportion, copy number and distribution of mtDNA mutants influencing tissue function (Petruzzella et al., 1994). The 'threshold' above which mtDNA disease shows clinical symptoms is around 70% mutated mtDNA in the commonest disorder (Jeppesen et al., 2006). This dependence on mutant load is important because intracellular populations of mtDNA, and thus the proportional presence of mutant mtDNA, can change during development, according to dynamics which are currently poorly characterized.
The developmental modulation of mtDNA populations means that patients with mtDNA disease frequently exhibit progressive symptoms, as mutant mtDNA accumulates in affected tissues (Poulton et al., 1995;Weber et al., 1997). For instance, children with Pearson's syndrome may present with severe anemia and lactic acidosis in early infancy. The characteristic mutation is a single mtDNA deletion of about 5 kilobases, encompassing both protein and tRNA coding regions. Affected children initially have high levels of mutant mtDNA in all tissues. As the disease progresses, the level of mutant in blood drops and their anemia resolves. However, if they survive to adolescence they may develop a myopathy as the proportion of mutant mtDNA in muscle increases (McShane et al., 1991). While the shifting of mtDNA populations is less extreme in most maternally inherited heteroplasmic mtDNA disease, in almost all cases the level of mutant mtDNA is lower in blood than in post-mitotic tissues such as muscle and brain (Rahman et al., 2001). This example illustrates a potential diagnostic problem: as mutant load changes with time, blood levels of mutant mtDNA cannot easily be used to advise patients on their prognosis or transmission risks.
Mitochondrial diseases are often clinically heterogeneous. While many patients do not fit into specific clinical syndromes, well known examples of mtDNA diseases include MIDD (mitochondrially inherited diabetes and deafness) (van den Ouweland et al., 1992), MELAS (mitochondrial myopathy, encephalomyopathy, lactic acidosis, stroke-like symptoms) (Goto et al., 1990), MILS (maternally inherited Leigh's syndrome) (Holt et al., 1990), MERRF (myoclonic epilepsy with ragged red fibers) (Wallace et al., 1988b) and LHON (Leber's hereditary optic neuropathy) (Wallace et al., 1988a; Howell and McCullough, 1990; Johns et al., 1992). Muscle dysfunction is an important feature of MELAS (Ciafaloni et al., 1992) and MERRF, both of which can cause cognitive decline, ataxia, epilepsy, cardiomyopathy and deafness. Diabetes is a common feature of MELAS (van den Ouweland et al., 1992). MILS mainly involves the central nervous system with psychomotor delay, visual and hearing impairment (Degoul et al., 1995). LHON is usually a non-syndromic optic neuropathy and most patients are homoplasmic for mutant mtDNA (Howell and McCullough, 1990; Johns et al., 1992). Specific mutations in mtDNA are known to give rise to these diseases: for example, the m.3243A>G mutation most often causes MIDD, but in more severe cases MELAS (Goto et al., 1990), the m.8344A>G mutation can cause MERRF (Wallace et al., 1988b), and the m.11778G>A (Wallace et al., 1988a), m.3460G>A (Howell and McCullough, 1990) and m.14484T>C (Johns et al., 1992) mutations can cause LHON. However, other mtDNA mutations also give rise to these diseases. A review of clinical features and a morbidity map of mtDNA mutations can be found in Chinnery and Hudson (2013) and DiMauro et al. (2013).
Another striking feature of mtDNA disease inheritance involves the observed large shifts of heteroplasmy between mother and offspring. For example, it is possible for a phenotypically healthy mother, harboring 50% mutated mtDNA, to produce both healthy and severely affected children (Larsson et al., 1992). The reason for this shift between generations is the so-called 'bottleneck' effect, whereby heteroplasmy levels in offspring are remarkably variable with respect to the maternal heteroplasmy, while the average heteroplasmy across many offspring is often comparable to that of the mother (Jenuth et al., 1996). The mechanism underlying this effect is hotly debated (Carling et al., 2011), with some suggesting that responsibility lies with a pronounced decrease in mtDNA copy number in the germline (Cree et al., 2008), others proposing random partitioning of clusters of mtDNA at cell divisions (Cao et al., 2007, 2009), and others proposing replication of a subset of mtDNAs during development (Wai et al., 2008) (Fig. 1).
In the most extreme examples of the bottleneck, there may be rapid switching from near homoplasmy in one mtDNA type to near homoplasmy for another between mother and child. Using a heteroplasmic length variant in noncoding mtDNA (Marchington et al., 1997) one of the current authors (J.P.) found evidence for this switching in oocytes from women and from mice. This switching was also apparent in oocytes from women who were heteroplasmic for pathogenic mtDNA mutants (Blok et al., 1997;Marchington et al., 1998;Brown et al., 2001). While further work is needed to elucidate this mechanism (Carling et al., 2011), the bottleneck is a clear example of how developmental phenomena mold the statistics of intracellular mtDNA populations.
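As a purely illustrative aid (a toy calculation, not one of the published models cited above), the sketch below mimics the germline copy-number-reduction mechanism (cf. Fig. 1B): each offspring's heteroplasmy is obtained by binomially sampling a reduced number of segregating mtDNA units and then reamplifying, so the mean heteroplasmy stays near the maternal value while the offspring-to-offspring variance grows as the bottleneck tightens. The bottleneck sizes and heteroplasmy values are arbitrary.

```python
import random

def offspring_heteroplasmy(maternal_h: float, bottleneck_n: int) -> float:
    """Heteroplasmy of one offspring after a single binomial bottleneck of
    bottleneck_n segregating mtDNA units (toy model, arbitrary parameters)."""
    mutant = sum(1 for _ in range(bottleneck_n) if random.random() < maternal_h)
    return mutant / bottleneck_n  # reamplification preserves the sampled proportion

random.seed(1)
maternal_h = 0.5
for n in (1000, 200, 20):  # tighter bottlenecks give a larger spread among offspring
    sample = [offspring_heteroplasmy(maternal_h, n) for _ in range(5000)]
    mean = sum(sample) / len(sample)
    var = sum((h - mean) ** 2 for h in sample) / len(sample)
    print(f"bottleneck size {n:4d}: mean ~ {mean:.3f}, variance ~ {var:.4f}")
    # expected variance is roughly h*(1-h)/n: 0.00025, 0.00125, 0.0125
```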
MtDNA diseases are currently not directly curable, despite several promising approaches to shift the amount of mutated mtDNA in patients affected by heteroplasmic diseases to lower, less pathogenic levels. For example, specifically designed nucleases (including so-called mitoTALENs (Bacman et al., 2013) and zinc-finger nucleases (Gammage et al., 2014)) can cut mutated mtDNA at the site of the mutation. In the absence of clinically available cures of mtDNAs disease, strategies to prevent their transmission to the next generation to allow (subclinically) affected women to have healthy children (or at least highly increase the chances thereof) are extremely important (Fig. 2). We designate those therapeutic strategies that are in current clinical practice as 'Classical' and those that have not yet been approved for use in humans as 'Modern'. Classical strategies aim to either replace the affected oocyte from the patient altogether, or monitor embryo/fetal heteroplasmy. Modern strategies aim at replacing the affected mtDNA. These new strategies have become prominent in the scientific literature and media alike. The appealing catchphrases 'like changing a laptop battery' (referring to the replacement of dysfunctional mtDNA) and associated 'three-parent babies' (referring to the presence of a third-party's mtDNA in an embryo; see below) have captured the imagination of many involved in communicating science to the general public, and several of these recently proposed therapies for mtDNA disease inheritance are currently on the verge of clinical application.
Classical options in reproductive management of mtDNA disease
Three notable strategies have existed, at least in concept (Sauer and Kavic, 2006), since before the first maternally inherited mtDNA disease was described 25 years ago (Wallace et al., 1988b), to address the issue of potential inheritance of mutant mtDNA in families at risk of transmitting diseases (Fig. 2A).
Oocyte donation (Fig. 2B) is a simple way to completely eliminate the risk of transmitting heteroplasmic mutant mtDNA from mother to child. This approach involves using the oocytes from a third-party donor rather than the mother, thus losing any genetic inheritance from the mother, but representing the only strategy guaranteeing to intercept the transmission of the disease.
Alternative strategies exploit the difference in mutant load between a potential mother's oocytes, arising largely from the aforementioned mtDNA bottleneck, and selected concepti with a low mutant load. This strategy can only be applied in heteroplasmic disease. Selection of low risk concepti was initially carried out during established pregnancy (chorionic villus sampling) (Harding et al., 1992). Here, heteroplasmy in chorionic villi is analyzed at the end of the first trimester, with a view to terminating fetuses displaying high heteroplasmies and thus at high risk of inheriting mtDNA disease. This approach is useful to address mtDNA disease inheritance in disorders where there is a good relationship between phenotype and mutant load (White et al., 1999). It is not suitable for homoplasmic diseases, nor for those in which the mtDNA mutant load is a poor predictor of phenotype, like LHON (Black et al., 1996). Furthermore, if the load of mutant mtDNA in trophoblast is a poor representation of that in the rest of the conceptus, the efficacy of this approach is decreased. This poor representation can be the case when segregation starts early, as described in the next paragraph.
A third therapeutic strategy involves selecting low risk early ('cleavage') embryos, through the use of preimplantation genetic diagnosis (PGD) (Fig. 2C). After fertilization of an oocyte and limited subsequent development, a small fraction (1-2 cells) of the cleavage embryo is sampled to determine the mutant load. At this stage, the variation in mutant load between individual blastomeres is small. The mutant load in an embryo can then be used to estimate the risk of the individual developing symptoms of a mtDNA disorder post-natally (Poulton et al., 2010). This strategy is currently the mainstay of modern clinical practice, being used successfully to help families with mtDNA disease (Steffann et al., 2006; Monnot et al., 2011). However, when PGD is carried out on blastocysts, a later embryonic stage, samples may not adequately reflect post-natal heteroplasmy. This problem arose when a variant of the technique, blastocyst biopsy, was used for prenatal screening of an embryo carrying the m.3243A>G mutation (Treff et al., 2012). In this instance, the mutant load in trophoblast cells (12%; Treff et al. (2012)) was substantially lower than in some samples of the child (47% blood, 52% urine; Wallace and Chalkia (2013) and Mitalipov et al. (2014)). It is currently unknown whether the difference in heteroplasmy occurred between trophoblast and inner cell mass, or whether the heteroplasmy levels changed during gestation. However, this case shows the considerable residual risk of this method. Generally, cell-to-cell heteroplasmy and copy number variation are likely to develop as cells develop down the specific functional lineages found in the blastocyst. Such variation could be further exacerbated by a proposed rapid mtDNA segregation in preimplantation embryos (Lee et al., 2012). However, it needs to be clarified whether this is a general phenomenon, or a result of the merging of two distinct cytoplasts that segregate independently in 'artificially generated embryos'. If the latter is true, the effect will be of importance in all techniques that include some degree of cytoplasmic transfer, including karyoplast transfer (Steffann et al., 2014). In conclusion, PGD on cleavage stage embryos seems robust, but on blastocysts may be unreliable.
Figure 1. The mitochondrial DNA bottleneck during development. (A) A fertilized oocyte has a given heteroplasmy (mutant load) value. During gestation, the female embryo/fetus develops primordial germ cells that develop into oocytes. The heteroplasmy in these oocytes shows high variance due to the bottleneck effect, whose proposed mechanisms are shown in (B)-(D). (B) A reduction of mitochondrial DNA (mtDNA) copy number in the primordial germ cells and consecutive reamplification during oocyte development accelerates random drift and increases variance. (C) Random partitioning of clusters of mtDNAs at each cell division during primordial germ cell development could powerfully increase heteroplasmy variance. (D) Allowing only a small random subset of mtDNAs to replicate (here two instances are depicted with circles and squares), either a specifically selected set or through restricted random turnover, can increase heteroplasmy variance by imposing a lower effective population size.
New developments: modern treatments for mtDNA disease
The above approaches to address the inheritance of mitochondrial disease have several shortcomings. Oocyte donation has the disadvantage that none of the mother's nuclear DNA content is retained in the offspring. PGD of blastocysts (but not of cleavage stage embryos) and chorionic villus sampling run the risk that differences between tissues and individual cells may lead to an inaccurate inference of heteroplasmy levels, and thus erroneous risk estimation. PGD of cleavage stage embryos however appears to be robust and accurate (Monnot et al., 2011).
Two recently proposed therapies, pronuclear transfer and chromosomal spindle transfer, are designed to circumvent these problems. Both approaches aim to transfer the nuclear genome of a parent 'donor' oocyte to a healthy 'recipient' oocyte with no nucleus and healthy mtDNA. Specifically, pronuclear transfer (Fig. 2D) involves transferring the two pronuclei (from mother and father) of a fertilized donor oocyte to an enucleated recipient oocyte. Chromosomal spindle transfer (Fig. 2E) involves transferring the chromosomal spindle from a donor oocyte into an enucleated recipient oocyte prior to fertilization. Thus, the nuclear genome is transferred to an environment with functional mitochondria, i.e. the defective 'batteries' of the cells are replaced with working ones, resulting in a healthy embryo and definitively interrupting inheritance of the pathological mtDNA (Poulton and Oakeshott, 2012). These therapies aim to achieve zero mutant load, consistent across all cells, and allow the mother and father both to contribute nuclear DNA, thus addressing the shortcomings of traditional therapies.
Figure 2. MtDNA disease inheritance and therapeutic approaches. (A) A mother with mtDNA harboring a pathological mutation is at risk of transmitting the associated disease to her offspring. (B) Oocyte donation uses an oocyte from a third-party donor who does not carry the mtDNA mutation. (C) Preimplantation diagnosis involves sampling mutant load in cells after conception. As the mother's oocytes may exhibit a wide range of heteroplasmy levels, some concepti may inherit acceptably low mutant loads: these are retained. (D) Pronuclear transfer involves the removal of the nucleus from a third-party oocyte with unaffected mtDNA, then the transfer of two pronuclei (from the mother's egg and father's sperm) onto this healthy background. (E) Spindle transfer involves the replacement of a third-party oocyte nucleus with the chromosomal complex from the mother, prior to fertilization with the father's sperm.
Pilot studies of these techniques have proved their feasibility, but also highlighted the potentially important and currently unavoidable phenomenon of mtDNA carryover. Ideally, pronuclear and spindle transfer should both lead to a complete lack of donor mtDNA in the recipient, but technical limitations currently make this unfeasible (St John and Campbell, 2010). Currently, no method has been shown to reproducibly eliminate 100% of the unwanted donor mtDNA carryover. While all techniques tend to reduce carryover to below 1%, comparison between the studies is hampered by varying detection limits of donor mtDNA, between 0.01 and 2%. In five out of nine human embryos created by pronuclear transfer (Craven et al., 2010) the average carryover of donor mtDNA was 1.68%. Observed carryovers in other studies include 0.5-0.6% from human spindle transfer (Tachibana et al., 2013); 0.31% in human nuclear transfer (Paull et al., 2013); and 1% in rhesus monkeys with spindle transfer (Tachibana et al., 2009; Lee et al., 2012). Therefore, a low-level heteroplasmy in the resulting embryo cannot be excluded.
Such low amounts are generally insufficient to cause disease (Craven et al., 2011). The mtDNA bottleneck may conceivably lead to amplification of this amount in the next generation in female embryos; however, it was recently shown that for carryover <5% this is not a concern for the following generations either, and both methods should therefore be safe in this respect (Samuels et al., 2013).
Potential issues with modern treatments
While these modern treatments are largely deemed sufficiently safe for use in the clinic, several uncertainties exist regarding the behavior of mtDNA populations after treatment. Until these uncertainties are addressed, clinical applications of modern treatments should arguably be limited to cases where no good alternatives exist. Families with severe phenotypes and a homoplasmic mtDNA mutant of proven pathogenicity are the best candidates, because the only classical option from which they benefit is oocyte donation. However, such families are relatively rare, largely because it is essential but difficult to prove that a homoplasmic mutation is causative. Correct selection of the first families for treatment will therefore be imperative. At the current time most possible and ethically justifiable pilot tests have been performed both in animal models and (abnormal) human embryos, with positive results. However, some questions regarding the subsequent behavior of mtDNA populations in offspring produced using these treatments remain, which have been flagged by researchers and noted in the literature. These issues are not fatal flaws in the concept of mtDNA therapies, but do represent areas of uncertainty associated with these therapies. All of them are connected with the mtDNA haplotype of the third-party 'recipient' providing an oocyte with healthy mtDNA. In a random pairing, it is likely that the donor and recipient haplotypes vary considerably: pairwise comparisons of human mtDNAs show up to 130 single-nucleotide polymorphism (SNP) differences (Blanco et al., 2011), with those located in the protein-coding regions of the mtDNA leading to up to 20 amino acid changes (Craven et al., 2011). On average two Europeans or two Africans will differ at 29.3 and 78.3 sites, respectively (Lippold et al., 2014). Consequently, a rather complex mixture of nuclear DNA and different mtDNAs can arise: nuclear DNA from the patient ('donor') and the father, the majority of mtDNA from the enucleated recipient oocyte with presumed wild-type mtDNA (haplotype B), and a small amount of the carryover patient mtDNA (haplotype A, which may be either mutant or wild type if the donor is heteroplasmic). So a maximum of three different mtDNAs can be present in the embryo, with the healthy haplotype B constituting the vast majority, around 99%. This new haplotype is alien to both the patient (maternal) and paternal nuclear DNA, and the implications of this combination are currently largely unexplored.
The first potential issue concerns nuclear-mitochondrial interaction (Fig. 3A). Energy production is dependent on extensive cross-talk between genes from the nucleus and mtDNA (Johnson et al., 2001; Reinhardt et al., 2013). Usually the offspring's mtDNA is inherited together with a haploid maternal genome, and this co-inheritance is thought to facilitate nuclear-mitochondrial interaction. However, during karyoplast transfer this co-transmission is interrupted, and the mtDNA is confronted with a completely 'unknown' nuclear DNA. This situation may well lead to complications, as several physiological parameters such as respiration and performance (Nagao et al., 1998; inter-species and/or inter-subspecies heteroplasmy) and learning (Roubertoux et al., 2003; intra-subspecies heteroplasmy) were reduced in mtDNA-nuclear mismatches in male mice. Males are particularly at risk, as maternal inheritance of mtDNA implies that the relevant aspects of natural selection act directly only on females, i.e. the accumulation of mtDNA mutations that are harmful to males is facilitated, as discussed for LHON. However, arguments have recently been brought forward that the male excess in LHON might have other causes (e.g. lower estrogen levels, as estrogen seems to ameliorate mitochondrial dysfunction in LHON (Giordano et al., 2011)), and studies in macaques and mouse models support the view that nuclear-mitochondrial interaction will have limited effect, if any, on modern treatments (Chinnery et al., 2014). It was recently found that mtDNA haplotypes define gene expression patterns in mouse embryonic stem cells (Kelly et al., 2013), so clearly nuclear DNA (nDNA)-mtDNA interaction does depend on the mtDNA haplotype. However, in vivo experiments with xenomitochondrial mice show that the nuclear-mitochondrial system seems to be able to compensate for a high level of diversity. In these xenomitochondrial mice, harboring Mus terricolor mtDNA on a Mus musculus background, virtually no negative in vivo effects were found (Cannon et al., 2011). In contrast, in conplastic strains that harbored a range of different mtDNAs (M. m. domesticus, but also other subspecies of M. musculus) and the nucleus of M. m. domesticus, behavioral differences and varying susceptibility to experimental autoimmune encephalomyelitis were found (Roubertoux et al., 2003; Yu et al., 2009). Some of these effects might, however, be caused by a single mutation present in a specific mtDNA haplotype, New Zealand Black (NZB), used in that study (and several others) (Moreno-Loshuertos et al., 2006). In somatic cell nuclear cloning it was found that a certain mtDNA genetic difference between the donor cell and its recipient oocyte can even be beneficial for development. A difference in the ratio of mtDNA copy number to mitochondrial mRNA between the respective haplotypes might argue for a necessity for homoplasmy at the mRNA level (Bowles et al., 2008). Thus, while slightly deleterious consequences of nDNA-mtDNA mismatch have been observed in several studies, it seems likely that cells are flexible enough to deal with mismatch situations of limited heteroplasmy and genetic difference.
The second potential issue concerns mtDNA-mtDNA interaction (Fig. 3B). As described above, up to three different mtDNAs can be present in the embryo (excluding low-level microvariation (He et al., 2010; Ye et al., 2014)). Different human mtDNA types show differences in oxidative phosphorylation (OXPHOS), potentially triggered by adaptation to various climates or energy demands during evolution, a controversial topic that is still discussed (reviewed in Wallace and Chalkia, 2013). What happens if such divergent haplotypes are mixed, even at low levels, as after karyoplast transfer? In mice, the mixture of two mtDNA haplotypes of the same subspecies led to physiological changes (e.g. hypertension, changed body mass, blood parameters (Acton et al., 2007)) and altered behavior, while mice carrying 100% of either haplotype stayed healthy. It is likely that low heteroplasmies (<5%) are insufficient to produce strong manifestations of these effects, but further studies are needed to define the exact heteroplasmy thresholds of importance for mismatching.
The third potential issue concerns mtDNA segregation, that is, the process by which one mtDNA type comes to dominate over another within a cell. The simplest example of this is if mutated mtDNA experiences a proliferative advantage over non-mutated mtDNA, and hence even a small initial amount of mutant mtDNA could eventually come to dominate the cell. As we review below, the evidence to support this positive segregation of mutant mtDNA is scant, even in diseases where it is known to occur. However, the segregation of non-pathological mtDNAs may be a more pertinent issue in mtDNA therapies. If the mtDNA haplotype of the affected woman ('donor') experiences a proliferative advantage over that from the 'recipient' healthy oocyte (irrespective of the presence of pathological mutations), an arbitrarily small amount of carryover donor mtDNA could subsequently come to dominate the cellular population (Fig. 3C).
In this scenario, the amplification of donor mtDNA is due to haplotypic differences alone, without any segregation specifically arising from pathological mutations. This mechanism can lead to the amplification of a pathological mutation even if that mutation does not affect segregation. Specifically, if the proliferating donor mtDNA haplotype is associated with a pathological mutation, the amplification resulting from haplotype segregation will lead to a concomitant amplification of the mutation, which 'hitchhikes' upon the proliferating haplotype as illustrated in Fig. 3C, potentially reaching pathological levels both in the offspring and subsequent generations.
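The quantitative consequence of such haplotype-driven amplification can be illustrated with a minimal, purely hypothetical model; the carryover fraction and per-day advantage in the sketch below are invented for illustration and are not taken from the studies cited here. If the carried-over donor haplotype replicates even slightly faster than the recipient haplotype, a roughly 1% carryover can approach fixation within the lifetime of the offspring, and any mutation riding on that haplotype rises with it.

```python
# Deterministic sketch of haplotype "hitchhiking": a small donor carryover with a
# modest per-day replication advantage can come to dominate the population, and
# any mutation on that haplotype rises with it. All numbers are hypothetical.
import math

def donor_fraction(f0: float, advantage: float, t_days: float) -> float:
    """Relative exponential growth of two haplotypes; the recipient's common
    growth factor cancels in the ratio, leaving only the donor's extra advantage."""
    donor = f0 * math.exp(advantage * t_days)
    recipient = 1.0 - f0
    return donor / (donor + recipient)

f0 = 0.01          # 1% carryover after karyoplast transfer (hypothetical)
advantage = 0.02   # 2% per-day proliferative advantage (hypothetical)
for t in (0, 50, 100, 200, 365):
    print(f"day {t:3d}: donor haplotype fraction = {donor_fraction(f0, advantage, t):.3f}")
```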
We focus on segregation effects in particular, for two reasons. First, the aforementioned phenomenon of mtDNA carryover potentially creates a situation in which segregation has to be taken into account: that is, where several different mtDNAs are present within a cell, a situation that has so far rarely been described (St John and Schatten, 2004). Second, as we will subsequently describe, experimental evidence exists to suggest that segregation between different mtDNA haplotypes, though rarely commented upon, may be a significant effect, while evidence regarding the other two issues is more sparse.
Segregation of pathological mutations
Due to the potentially dramatic physiological implications of mtDNA mutations and progressive segregation documented in tissue culture (Hayashi et al., 1991;Dunbar et al., 1995;Emmerson et al., 2001), one may expect that pathological mutations would lead to extreme segregation effects. While the topic is controversial (Craven et al., 2010), there are good examples of segregation of disease-related mutations in humans (Larsson et al., 1990;Poulton et al., 1995;Weber et al., 1997). To our knowledge, the level of mutant mtDNA is always lower in blood than in post-mitotic tissues such as muscle and brain (Ciafaloni et al., 1992;Larsson et al., 1992;Rahman et al., 2001). A possible cause is the replacement of defective cells (i.e. cells with high levels of mutant mtDNA) with healthy cells in tissues with rapid turnover (Rahman et al., 2001).
In mice containing a mixture of wild-type mtDNA and mtDNA with a 4696-bp deletion (denoted ΔmtDNA) that leads to lethal renal failure, the ΔmtDNA was observed to preferentially accumulate in several tissues over time (e.g. heart, skeletal muscle, kidney, liver, testis and ovary) (Sato et al., 2007). This model, however, is complex and does not clearly recapitulate human disease for three reasons. Firstly, renal failure is uncommon in human mtDNA disease and has rarely, if ever, been reported in mtDNA deletions. Secondly, this rearrangement was maternally inherited, and included mtDNA duplications as well as deletions, both of which are uncommon in human mtDNA disease. Thirdly, the level of mutant mtDNA in the female germline declined with age, a trend that does not reflect the findings in humans (Chinnery et al., 2004). Nevertheless, the model does recapitulate the accumulation of mutant mtDNA in post-mitotic tissues that appears to be the rule in human mtDNA disease. In a mouse model harboring a slightly deleterious tRNA mutation (m.3875delC), the mutational load was reduced through a mechanism acting at the cellular or organelle level in the developing embryo (Freyer et al., 2012), consistent with observations in inter-subspecies cattle ooplasm transfer described in more detail below (Ferreira et al., 2010). In the mouse model this effect is dependent on the initial heteroplasmy of the mother: offspring from mothers with higher heteroplasmy had lower average heteroplasmy than their mothers. The authors argue that this shift has to take place during gestation, at the cellular or organelle level. However, as no oocytes with >80% of mutated mtDNA could be found, it is possible that this effect (additionally) operates during oocyte development.

Figure 3. Potential issues associated with mixed mtDNA populations resulting from modern therapies. (A) Incompatibilities may exist between the nuclear DNA (from mother and father) and mtDNA (from a third party), as these genomes have not necessarily co-evolved. Such incompatibilities may conceivably manifest as, for example, dysfunctional protein products or signaling pathways. (B) The mixture of two mtDNA types within a cell has been found to cause detrimental physiological effects, for unknown reasons. (C) Segregation is the proliferation of one mtDNA haplotype over another in a cellular mixture, potentially causing changes in the population fraction of one mtDNA haplotype. If one mtDNA haplotype experiences a proliferative advantage over another, it may come to dominate the cellular population over time. If some mtDNAs of this haplotype harbor a pathological mutation, this mutation may thus become amplified even if the pathological mutation itself does not affect segregation.
These findings are consistent with the presence of purifying selection in the germ line, a mechanism acting to eliminate highly deleterious mutations, particularly those located in protein-coding regions. Purifying selection has been directly observed in heteroplasmic mice harboring mtDNA with a severe ND6 mutation along with the wild-type mtDNA. In these mice, the mutation was selectively eliminated during oogenesis within four generations, while a milder cytochrome oxidase 1 (COI) mutation was retained (Fan et al., 2008).
Very recently, in heteroplasmic Drosophila melanogaster harboring a COI mutation that results in temperature-sensitive mitochondrial malfunction, it was shown that one possible mechanism of purifying selection is the selective propagation of fit mitochondria at the organelle level (Hill et al., 2014).
There is thus some evidence for segregation of pathological mtDNA in animal models, particularly involving selection against mtDNA with demonstrable deleterious effects.
Segregation of genetically different mtDNA haplotypes
In order to elucidate the mechanisms that govern segregation between non-pathological mtDNA haplotypes, ooplasm transfer and blastomere/cytoplast fusion have been used to create various heteroplasmic animal models using naturally occurring haplotypes that do not specifically harbor a pathological mutation. The best-known example is the heteroplasmic mouse line containing a mixture of NZB mtDNA and a common laboratory mouse strain (CIS) mtDNA (Table I). Laboratory mouse strains show very little variation in mtDNA (Goios et al., 2007), with the NZB strain being one of the very few that show considerable genetic difference from the common CIS mtDNA.
In the NZB/CIS model, the mixture of two naturally occurring but genetically different haplotypes (belonging to the same subspecies) leads to tissue-specific segregation effects: the proportion of NZB mtDNA increases with time in liver and kidney and decreases in blood and spleen. This mixture of mtDNAs leads to detrimental physiological (Acton et al., 2007) and behavioral consequences as described above, although both mtDNAs are regarded as free of pathological mutations. Interestingly, the offspring showed a considerable reduction of NZB mtDNA compared with their mothers. The difference was already visible in the oocytes of the mother, but it is likely that the drift also occurs during gestation. This argues for a directed segregation effect operating in the germ line in addition to the aforementioned bottleneck effect. The basic mechanisms of these segregation effects are largely unknown. One nuclear gene has been found to influence segregation in blood (Gimap3) (Jokinen et al., 2010), and one of the 91 SNPs between the two mtDNA haplotypes was proposed, and hotly discussed, as being responsible via its influence on reactive oxygen species (ROS) production (an 'A' track polymorphism in the DHU loop of the tRNA Arg (Moreno-Loshuertos et al., 2006)). Perhaps the most convincing explanation for why heteroplasmy is detrimental is an evolutionary one. Minor differences in the protein reading frames of a given haplotype co-evolve, acting in trans, so that multimeric enzyme complexes maintain high efficiency. However, these minor changes will impair the efficiency of the complexes when heteroplasmy for divergent haplotypes is present. Of note, the NZB mouse strain generates more ROS than other haplotypes. Even if this ROS production itself is not physiologically deleterious, a deleterious underlying mechanism may be responsible for this difference, driving the tendency of offspring to reduce NZB mtDNA levels, probably at the oocyte and (partly) cellular level (Wallace and Chalkia, 2013).
Ooplasm transfer studies of segregation in other model organisms are limited. In cattle, inter-subspecies ooplasm transfer (Bos primigenius taurus/B. p. indicus) has revealed segregation effects during blastocyst development and during gestation, with the B. p. indicus mtDNA being removed over time (Ferreira et al., 2010). Segregation effects were also observed in two inter-subspecific mouse models (M. m. musculus/M. m. domesticus; Table I).
However, the NZB model has remained the dominant heteroplasmic model utilizing naturally occurring mtDNA for almost 20 years. A large proportion of our knowledge about mtDNA segregation is based on this most prominent and best-studied heteroplasmic model, yet it is unknown whether its segregational effects represent an exception or a rule, and whether other combinations of mtDNA haplotypes may present different results.
Very recently, to address this question, we produced four mouse models by ooplasm transfer, placing various naturally occurring mtDNA haplotypes from mice captured from the wild in Europe onto a common laboratory mouse mtDNA and nuclear background (C57BL/6N). The wild-derived haplotypes we used display a spectrum of genetic differences with C57BL/6N, thus enabling us to control genetic distance (from very similar haplotypes, to haplotypes that differ in a comparable number of sites to two randomly chosen human mtDNAs, as described above). We also developed a mathematical framework to facilitate the direct comparison of many of these mice. We found that tissue-specific segregation was very common (including within post-mitotic tissue types), with the magnitude of segregation increasing with the genetic distance between the mtDNA haplotypes, and identified several contrasting mechanisms related to mtDNA turnover and organismal age by which this segregation occurred (Burgstaller et al., 2014). This study suggests that segregation between naturally occurring haplotypes may be the rule rather than exception, particularly with genetically diverse mtDNA pairings.
Heteroplasmy also exists, and has been studied, after nuclear transfer; but most studies have investigated the effects of the transfer process itself rather than focusing on mtDNA heteroplasmy. Nevertheless, these studies cover several species (cattle, sheep, pig, mouse), cover inter- and intra-species heteroplasmy (reviewed in detail in St John et al. (2010)), and provide a body of data demonstrating co-existence of two mtDNA haplotypes in several species in vivo. Several studies report deviations from expected donor mtDNA amounts that could be caused by segregation bias (Hiendleder et al., 1999; Takeda et al., 2003; Burgstaller et al., 2007). In particular, one study systematically analyzing cloned pigs and their offspring demonstrated powerful segregation effects between mtDNA from the genetically distant Meishan and Landrace breeds, which represent two subspecies of Sus scrofa. In these animals, the Meishan mtDNA significantly increased in liver, relative to spleen, ear and blood (Takeda et al., 2006; Table I). Another inter-subspecies study in mouse (M. m. molossinus/M. m. domesticus) also found segregation in liver, with the M. m. molossinus mtDNA increasing (measured relative to brain) (Inoue et al., 2004).
It is notable that in studies observing many animals over a substantial amount of time, or over several generations, segregation between different mtDNA types is often observed (Table I). Interestingly, in all studies of post-natal animals, liver is the tissue with the highest segregation effect. We can only speculate why this might be, but note that liver tissue has a high energy demand combined with high mtDNA turnover. Liver mtDNA half-lives are estimated at between 2 days (Miwa et al., 2008, 2010) and 9 days (Gross et al., 1969; Menzies and Gold, 1971; Korr et al., 1998), compared with, for example, skeletal muscle (reports ranging from 18 days (Korr et al., 1998) to 700 days (Collins et al., 2003)). The fast turnover time of mtDNA in liver and potentially strong selective pressure for energy production may underlie the rapid segregation observed in this tissue.
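For intuition, first-order turnover implies that the fraction of original mtDNA molecules remaining after time t is (1/2)^(t/t½). The short sketch below simply evaluates this expression for the half-life estimates quoted above; it is an arithmetic illustration only, not a model of segregation itself.

```python
# Fraction of the original mtDNA molecules remaining after 30 days for the
# half-life estimates quoted above; simple first-order turnover,
# N(t)/N0 = 0.5 ** (t / t_half). Purely an arithmetic illustration.
for t_half in (2, 9, 18, 700):
    remaining = 0.5 ** (30 / t_half)
    print(f"half-life {t_half:3d} d: {remaining:.1%} of original molecules remain after 30 days")
```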
Implications
We have reviewed classical and modern approaches to address the inheritance of mtDNA disease. Modern approaches (pronuclear transfer and spindle transfer) have the potential to ameliorate mtDNA disease without the unsatisfactory genetic features of classical approaches. However, we have noted that several uncertainties are currently associated with the post-therapy behavior of embryos created using these techniques. These issues include mtDNA-mtDNA and mtDNA-nDNA mismatches, which could be important at high heteroplasmies, but are likely dampened by the ability of modern therapies to guarantee <1-2% donor carryover. Segregation of pathological mtDNA is potentially damaging. A key argument for nuclear transfer is therefore that current evidence suggests such segregation is either of very low magnitude or acts in such a way as to remove pathological mutations (or both), and hence is likely not a key issue in mtDNA therapies (Craven et al., 2010). While further work is required to satisfactorily characterize these phenomena, the evidence suggests that they might not pose immediate issues in the application of genetic therapies.
The remaining phenomenon, segregation of non-pathological mtDNA haplotypes, is possibly the most important unaddressed question associated with modern mtDNA therapy, due to the potential consequent amplification of pathological mutations associated with one haplotype in the offspring resulting from genetic therapy, and in subsequent generations.
A key clinical consideration is whether segregation, and subsequent potential amplification of pathological mutations, could occur in the post-mitotic tissues in which mtDNA diseases are most often manifest (Reeve et al., 2008). In organs where cells are constantly renewed (for example, skin and intestine), cells with damaged OXPHOS systems are probably replaced by functioning ones (Rahman et al., 2001). Organs particularly at risk in mtDNA disease are the post-mitotic tissues of heart, skeletal muscle and brain, as well as liver and kidney, which show high (liver) (Michalopoulos and DeFrances, 1997) and rather limited (kidney) (Little, 2006) regenerative potential. Our recent study using wild-derived mouse mtDNA has demonstrated haplotypic segregation in heart and skeletal muscle (Burgstaller et al., 2014). Liver and kidney are both tissues that show mtDNA segregation bias in the well-known heteroplasmic mouse model from the Shoubridge group described above (Jenuth et al., 1997).
An important question when considering the implications of animal models is whether phenomena observed in animals also occur in humans. Investigation of mtDNA dynamics during human development is practically limited for clear reasons, and the mouse NZB model remains by far the best-studied model of mtDNA haplotype segregation. However, reports of coexistence between two mtDNA haplotypes, and reports of mtDNA segregation effects, are present across several species, suggesting that these effects may be shared by all mammals. Further studies are however needed to confirm this assumption, and, importantly, elucidate the mechanisms on which these effects are based.
Based on the current available evidence, we believe that segregation between different naturally occurring mtDNA haplotypes may potentially influence the post-therapy behavior of intracellular mtDNA populations in offspring produced through modern gene therapies. To recap, these therapies involve recruiting a 'recipient' oocyte to serve as a healthy mitochondrial background for nuclear DNA resulting from fertilization. However, experimental limitations mean that some of the original mother's 'donor' mtDNA will inevitably be present in embryos produced in this way. If donor mtDNA proliferates over recipient mtDNA, the donor mtDNA will become amplified during development and during the lifetime of the offspring. If the donor mtDNA is associated with a pathological mutation, even if this mutation does not affect mtDNA proliferation, its 'hitchhiking' on the proliferating mtDNA may cause its amplification to potentially pathological levels in the offspring. We note that this worst-case haplotype segregation will not, in itself, cause additional harm to offspring beyond that expected from mtDNA disease inheritance; rather, it has the potential to nullify the beneficial effects of genetic therapy by re-establishing the original mtDNA mixture that was present in the donor oocyte, possibly in a tissue-specific way. The possibility also exists that the amplification of one mtDNA type through segregation may affect the behavior resulting from the aforementioned mtDNA-mtDNA mismatch, which is very likely suppressed at low heteroplasmies.
It is notable that all potential mtDNA segregation issues (indeed, all three of the potential issues we note) associated with modern mtDNA therapies can be ameliorated by employing a simple 'haplotype matching' protocol: that is, ensuring that the donor and recipient mtDNA haplotypes are as similar as possible. This approach will minimize nDNA-mtDNA mismatching (as the donor nucleus will have co-evolved with donor mtDNA, very similar to recipient mtDNA); mtDNA-mtDNA mismatch (due to the genetic similarity); and mtDNA segregation (as two very similar haplotypes are expected to show very little segregation). The ideal recipient would be of the same haplotype as the donor (minus the pathological mutation), for example, from a healthy maternal relative. Alternatively (or in addition), further research on the segregation of different pairs of mtDNA haplotypes could be used to choose suitable recipients for a given donor, in order to minimize segregation effects.
Experts in the field of karyoplast transfer have noted that 'it is possible to match mitochondrial haplotype between the mother and the mitochondrial donor to avoid any concern, even though the evidence says it should not be needed' (Chinnery et al., 2014). We think that the somewhat overlooked issue of mtDNA segregation currently constitutes a reason that merits this safety precaution, which would solve all potential concerns reviewed here. Additionally, in the case of exact haplotype matching, offspring mtDNA would have a complete genetic identity with the mother's mtDNA, possibly going some way towards alleviating the ethical issues associated with 'three-parent babies'; that is, offspring with genetic material from mother, father and a third party.
While uncertainties exist regarding the behavior of mixed intracellular mtDNA populations, and animal models of mtDNA mixtures during development suggest that segregation potentially requires further study and consideration in therapeutic contexts, it should firmly be noted that these recent mtDNA-replacement strategies hold the promise to eliminate transmission of mtDNA diseases for good, and in so doing dramatically improve the lives of families carrying mtDNA disease. The potential advantages of these therapies seem to, in general, substantially outweigh their known risks. The unknown risks must thus be balanced against the certainties of classical genetic management. Hence potential patients for the first treatment trials will be from the rare homoplasmic families at proven high recurrence risk of severe phenotypes, for whom classical genetic management has least to offer. Initiating clinical trials is the only way to evaluate the presently unknown risks and future hopes for families brought by modern mtDNA therapies.
"Biology",
"Medicine"
] |
Auranofin Resistance in Toxoplasma gondii Decreases the Accumulation of Reactive Oxygen Species but Does Not Target Parasite Thioredoxin Reductase
Auranofin, a reprofiled FDA-approved drug originally designed to treat rheumatoid arthritis, has emerged as a promising anti-parasitic drug. It induces the accumulation of reactive oxygen species (ROS) in parasites, including Toxoplasma gondii. We generated auranofin-resistant T. gondii lines through chemical mutagenesis to identify the molecular target of this drug. Resistant clones were confirmed with a competition assay using wild-type T. gondii expressing yellow fluorescent protein (YFP) as a reference strain. The predicted auranofin target, thioredoxin reductase, was not mutated in any of our resistant lines. Subsequent whole genome sequencing (WGS) analysis did not reveal a consensus resistance locus, although many lines harbor point mutations in genes encoding redox-relevant proteins such as superoxide dismutase (TgSOD2) and ribonucleotide reductase. We investigated the SOD2 L201P mutation and found that it was not sufficient to confer resistance when introduced into wild-type parasites. Resistant clones accumulated less ROS than their wild-type counterparts. Our results demonstrate that resistance to auranofin in T. gondii enhances the parasite's ability to abate oxidative stress through diverse mechanisms. This evidence supports the hypothesis that the anti-parasitic activity of auranofin works through disruption of redox homeostasis.
INTRODUCTION
For more than 50 years, the mainstay of treatment for acute toxoplasmosis has been a combination of a dihydrofolate reductase inhibitor and a sulfa antimicrobial. While this remains a critical therapeutic strategy, currently approved drugs cannot eliminate latent parasites in cysts that are found in chronically infected individuals. Moreover, these drugs have significant bone marrow toxicity, are suspected teratogens, and the emergence of resistance remains a potential threat to treatment. Repurposing FDA-approved drugs will accelerate drug discovery for neglected parasitic diseases. Most recently, auranofin, a reprofiled drug that is FDA-approved for treatment of rheumatoid arthritis, has emerged as a promising anti-parasitic agent. It has antiproliferative activity against Plasmodium falciparum (Sannella et al., 2008), Schistosoma mansoni (Angelucci et al., 2009), Leishmania infantum (Ilari et al., 2012), and Entamoeba histolytica (Debnath et al., 2012), as well as other parasites with public health significance (Martinez-Gonzalez et al., 2010; Debnath et al., 2013; Tejman-Yarden et al., 2013; Bulman et al., 2015; da Silva et al., 2016; Hopper et al., 2016; Peroutka-Bigus and Bellaire, 2019). Despite this promising broad spectrum of anti-parasitic activity, the molecular target of auranofin has only been indirectly implicated as thioredoxin reductase.
Thioredoxin reductase enzymes are found in archaea, bacteria and diverse eukaryotes and play a key role in reduction of disulfide bonds that is essential for cell replication and survival. In humans, inhibition of thioredoxin reductase 1 induces expression of heme oxygenase-1 to repress inflammation (Kobayashi et al., 2006).
We previously demonstrated that auranofin reduces T. gondii infection of host cells in vitro and a single dose of auranofin allows survival of chicken embryos infected with this parasite (Andrade et al., 2014). It is hypothesized that the anti-parasitic activity of auranofin comes from inhibition of the thioredoxin reductase enzyme of these parasites, akin to its action on human cells (Kuntz et al., 2007;Debnath et al., 2012;Tejman-Yarden et al., 2013). This postulated mechanism of action explains why parasites treated with auranofin accumulate ROS (Debnath et al., 2012). Intriguingly, auranofin may improve infection outcome by decreasing pathogenic host inflammatory damage in addition to its direct inhibition of parasite replication.
In this paper we describe our work to investigate resistance to auranofin in T. gondii parasites. This approach illuminates mechanisms of resistance that might arise during its use as an anti-parasitic. In some circumstances, identification of resistance mechanisms validates proposed drug targets [i.e., (McFadden et al., 2000)]. In this study, we found that resistance was not associated with changes to the Toxoplasma thioredoxin reductase gene and no other single locus emerged as a consistent site underlying resistance. However, we observed that resistant clones accumulated less ROS than their wild type counterparts, demonstrating that auranofin resistance enhances oxidative stress responses.
Host Cell and Parasite Cultures
Parasite clones and human foreskin fibroblasts (HFF) were grown and maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (Hyclone), penicillin and streptomycin (50 µg/ml each) and 200 mM L-glutamine. We will refer to this complete medium as D10. T. gondii were maintained by serial passage in HFF monolayers at 37°C in a humid 5% CO2 atmosphere, including RH parasites (National Institutes of Health AIDS Reference and Reagent Repository, Bethesda, MD), RH tachyzoites expressing cytoplasmic yellow fluorescent protein (YFP) (kindly provided by M.J. Gubbels, Boston College, Boston, Massachusetts) (Gubbels et al., 2003) and Ku80 knock-out parasites (ΔKu80, kindly provided by V. Carruthers, University of Michigan Medical School, Dexter, Michigan) (Huynh and Carruthers, 2009; Fox et al., 2009).
Generation of T. gondii Clones Resistant to Auranofin
We mutagenized T. gondii with ENU as previously described (Pfefferkorn and Pfefferkorn, 1979; Nagamune et al., 2007). Briefly, approximately 1.5 × 10⁷ intracellular tachyzoites of wild-type RH strain T. gondii were mutagenized with ENU (N-ethyl-N-nitrosourea, N3385-1G, Sigma Aldrich; 100-200 µg/ml) in DMEM without FBS for 2 h at 37°C. These parasites were washed and transferred to a new confluent HFF monolayer for selection with auranofin (2.5-3 µM) until a normal pace of replication was observed (~2 weeks). Auranofin-resistant clones were single-cell cloned by limiting dilution in 2 µM auranofin. Isolated clones were propagated in D10 medium with 1 µM auranofin to maintain selection pressure without cytotoxic effects on host cells (data not shown).
We selected ten representative T. gondii clones from independent ENU-generated pools to investigate resistance. To assess auranofin resistance, we modified a tachyzoite growth competition assay (Ma et al., 2008). Using wild-type YFP-expressing T. gondii parasites as a control, confluent monolayers of HFF cells were inoculated with approximately equal numbers (2 × 10⁶; 1:1 ratio) of ENU-mutagenized parasites and YFP-expressing wild-type RH parasites. Upon lysis, a new T25 flask with confluent HFF was inoculated with 0.25 ml of lysed parasites. Another 0.25 ml of lysed parasites was used for flow cytometry. The YFP fluorescence allowed us to differentiate wild-type and Aur R parasites in the mixed populations to track their growth over serial passages. Host cell debris was removed by filtering parasites through a 3 µm filter membrane. After parasites were collected by centrifugation (400 × g for 10 min), the pellet was resuspended in 1 ml PBS. The total number of parasites and the number of parasites expressing YFP were quantified by flow cytometry using a FACSCalibur flow cytometer (Becton Dickinson) and analyzed with FlowJo software (Tree Star). The results of three independent experiments analyzing the replication fitness advantage between each auranofin resistant clone and the wild-type parasites were analyzed with parametric paired t-tests. A two-tailed p-value <0.05 was considered statistically significant.
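As a rough illustration of how the competition readout is derived from the cytometry data, the sketch below computes the fraction of YFP-negative (mutagenized, putatively resistant) parasites at each passage; all event counts are invented, and a fraction above 0.5 is read as a growth advantage, as described above.

```python
# Minimal sketch of the competition readout: the fraction of YFP-negative
# (mutagenized, putatively resistant) parasites is followed over serial passages.
# All event counts below are invented for illustration.
passages = {
    "passage 1": {"total": 10_000, "yfp_positive": 5_050},
    "passage 2": {"total": 10_000, "yfp_positive": 3_600},
    "passage 3": {"total": 10_000, "yfp_positive": 1_900},
}
for name, counts in passages.items():
    frac_resistant = 1 - counts["yfp_positive"] / counts["total"]
    flag = "growth advantage" if frac_resistant > 0.5 else "no advantage"
    print(f"{name}: resistant fraction = {frac_resistant:.2f} ({flag})")
```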
To determine the presence of other SNVs in the coding regions of our T. gondii auranofin resistant clones, gDNA from 10 resistant clones and wild-type RH parasites was submitted for library construction and whole genome sequencing at the Genomic High Throughput Facility at the University of California, Irvine. Briefly, genomic DNA (gDNA) was initially quantified with a Qubit DNA High Sensitivity kit. One hundred nanograms of gDNA was sheared to 300-500 bp in size using a Covaris S220 ultrasonicator (shearing conditions: duty cycle 10%, cycles/burst 200, treatment time 55 s) (Covaris, MA). The size and the quality of the resulting DNA fragments were analyzed with an Agilent DNA High Sensitivity chip. Ten nanograms of each sample's fragmented DNA was end-repaired, polyadenylated and ligated to 6-bp DNA barcodes (Bioo Scientific NextFlex DNA barcodes) using a PrepX Complete ILMN DNA Library kit (Takara Bio, CA). The genomic library was built using the Apollo 324 system (Takara Bio, CA) to generate 520-bp inserts. To enrich the DNA libraries, 8 cycles of PCR amplification with a Kapa library amplification kit were used (cycling conditions: 98°C for 45 s; 8 cycles of 98°C for 15 s, 60°C for 30 s and 72°C for 30 s; then 72°C for 1 min) in an Apollo 324 system. The final DNA concentration was determined using a Qubit 2.0 dsDNA HS Assay kit and analyzed with an Agilent DNA High Sensitivity chip. Before sequencing, we quantified our DNA libraries using a KAPA Library Quantification kit for Illumina platforms (Roche). Each library was normalized to 2 nM and pooled in equimolar amounts into the final multiplex library that was sequenced on the Illumina HiSeq 4000.
The ten auranofin resistant T. gondii clones and wild-type RH T. gondii were sequenced with HiSeq 4000 paired-end 100-bp reads, quality checked, and adapter- and quality-trimmed before alignment with the annotated reference genome of the GT1 strain from ToxoDB.org (published 7/19/2013) (Gajria et al., 2008). After initial quality control with FASTQC, reads were trimmed using Illumina adapter sequences with an error probability of 0.05, and reads shorter than 20 bases were discarded. Single Nucleotide Variant (SNV) calling for the haploid genome had a coverage threshold of >5-fold. Each of the 10 mutant clones was compared to the wild-type RH variant track and shared variants were filtered out. Only SNVs in the coding regions of the 10 mutant clones are reported. Only genes containing SNVs with 100% frequency and ≥50% of read count and read coverage were considered in our final analysis. Genomic DNA from each mutant clone was used to amplify the targeted SNV region of interest. The resulting amplicons ranged from 1.7 to 4.5 kbp in size. PCR amplicons were purified prior to sequencing via a QiaQuick PCR Purification Kit (Qiagen 28106). Amplicons were sequenced by Sanger sequencing (Genewiz), using sequencing primers targeted to the SNV region (Table S1). Genomic sequences for all 10 mutant lines can be accessed at NCBI under BioProject PRJNA679812.
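The variant-filtering logic described above (coding-region SNVs at 100% frequency, a minimum coverage, and absence from the wild-type RH parent) can be sketched as follows; the field names, chromosome label and coordinates are placeholders rather than the actual pipeline's output format.

```python
# Sketch of the post-alignment SNV filtering described above; the record layout,
# chromosome label and coordinates are illustrative placeholders, not the actual
# pipeline's output format.
def passes_filters(variant, wildtype_variants, min_coverage=5):
    """Keep coding-region SNVs at 100% frequency with adequate coverage that are
    absent from the wild-type RH parent."""
    key = (variant["chrom"], variant["pos"], variant["alt"])
    return (
        variant["region"] == "coding"
        and variant["frequency"] == 1.0
        and variant["coverage"] >= min_coverage
        and key not in wildtype_variants
    )

wildtype_variants = {("TGGT1_chrVIII", 6_410_221, "T")}   # hypothetical parental calls
candidate = {"chrom": "TGGT1_chrVIII", "pos": 6_410_233, "alt": "C",
             "region": "coding", "frequency": 1.0, "coverage": 74}
print(passes_filters(candidate, wildtype_variants))       # True
```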
Generation of Modified SOD2 Lines
We generated a T. gondii clone with a 3' knock-in YFP tag to TgSOD2 using a ligation-independent cloning approach as previously described (Huynh and Carruthers, 2009). The mitochondrial distribution of our clone was confirmed by fluorescence microscopy using a Zeiss Axioskop with Axiovision camera and software. Co-localization of TgSOD2-YFP with mitochondria was done by staining parasite mitochondria with an anti-F1β ATPase antibody (gift from Peter Bradley at UCLA). We subsequently introduced the L201P mutation into the TgSOD2-YFP line using a modified CRISPR plasmid generated using plasmid pSAG1::CAS9-U6::sgUPRT as the scaffold (Shen et al., 2017; Addgene #54467). Our plasmid had an additional DHFR-TM marker for pyrimethamine selection derived from pLoxP-DHFR-mCherry [(Long et al., 2016); Addgene #70147], and the two gRNA sequences were replaced with CGGTATACCGGACAGATCCG and TAGTTTCGCCCAGTTCAAGG, selected using Benchling software (Benchling.com).
The DNA sequence used for homology-directed repair (HDR) was amplified with the primers GATAGTGTGTGAAGAGCAGC and CTGCCGTTACCAACATGG from the LIC-YFP-SOD2 plasmid generated above. The PCR amplicon contained the L201P mutation (CTA->CCA), generated with a Q5 Site-Directed Mutagenesis kit (New England Biolabs), with the following modifications. To screen for positive CRISPR-Cas9 clones containing L201P, two silent mutations were created to generate new and unique restriction endonuclease sites close to Leucine 201: AflI on Leucine 180 (CTC->CTT, 62 bp upstream of L201P) and XcmI on Serine 208 (TCA->TCT, 22 bp downstream of L201P). The PCR amplicon contained two additional deletions corresponding to the gRNA sites on the endogenous SOD2 gene, in order to prevent the SpCas9 protein from cleaving the HDR amplicon. The sequence AGAGACCCGGCGG (including the PAM sequence cgg) was deleted from the HDR sequence at the site corresponding to gRNA #1, and the sequence CACTTAACAGAGGGTC (including the PAM sequence agg) was deleted at the site corresponding to gRNA #2.
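A simple way to check the second design constraint (that Cas9 cannot re-cut the repair template) is to confirm that neither protospacer is followed by an NGG PAM in the HDR sequence. The sketch below does this on one strand only, using the gRNA sequences given above and a placeholder HDR string; a real check would also scan the reverse complement.

```python
# Sketch of the re-cutting check: the HDR template must not retain either
# protospacer followed by an NGG PAM, or SpCas9 could cleave the repair DNA.
# Protospacers are from the text above; the HDR string is a placeholder.
protospacers = ["CGGTATACCGGACAGATCCG", "TAGTTTCGCCCAGTTCAAGG"]

def cas9_can_cut(template: str, protospacer: str) -> bool:
    """Very simplified: look for the protospacer immediately followed by an NGG
    PAM on this strand only (a real check would also scan the reverse complement)."""
    t = template.upper()
    start = t.find(protospacer)
    if start == -1:
        return False
    pam = t[start + len(protospacer): start + len(protospacer) + 3]
    return len(pam) == 3 and pam.endswith("GG")

hdr_template = "placeholder HDR sequence with the protospacer and PAM sites deleted"
print([cas9_can_cut(hdr_template, p) for p in protospacers])   # [False, False]
```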
The L201P point mutation was introduced into the TgSOD2-YFP line through CRISPR-Cas9 (Figure S1). Transfection of the CRISPR-Cas9 plasmid and the HDR DNA was carried out as published previously, with minor modifications (Shen et al., 2017). Briefly, 10⁷ freshly lysed SOD2-YFP parasites were filtered and spun down at 400 × g and 18°C for 10 min. The pellet was resuspended in Cytomix buffer (10 mM KPO4, 120 mM KCl, 5 mM MgCl2, 25 mM HEPES, 2 mM EDTA, 2 µM ATP and 5 µM glutathione) to a final volume of 800 µl, including 7.5 µg of CRISPR plasmid and 1.5 µg of HDR DNA. The mixture was transfected using the ECM 630 Electro Cell Manipulator (BTX). Parasites were inoculated into T25 flasks after electroporation and switched to 2 µM pyrimethamine selection 24 h later. Transfected pools remained in pyrimethamine selection for 3 days, and were then switched to D10 medium without selection for one more day before single-cell cloning. The gDNA of isolated clones was collected with the DNeasy Blood and Tissue kit (Qiagen) and used to PCR amplify SOD2 with either primers SOD2 F/AmpR (for SOD2-YFP) or primers SOD E2/SOD R (for SOD2 only, Table S1). PCR amplicons were digested with XcmI for screening and positive clones were sequence-verified.
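The diagnostic digest can likewise be mimicked in silico by asking whether an amplicon contains the newly introduced XcmI recognition site (canonically CCA followed by nine bases and TGG). The 23-bp strings in the sketch below are invented placeholders, not the real SOD2 locus.

```python
# Sketch of the diagnostic digest in silico: an amplicon is scored positive if it
# contains the newly created XcmI recognition site (CCA, nine bases, TGG). The
# 23-bp strings below are invented placeholders, not the real SOD2 locus.
import re

XCMI_SITE = re.compile(r"CCA[ACGT]{9}TGG")

def has_diagnostic_site(amplicon: str) -> bool:
    return XCMI_SITE.search(amplicon.upper()) is not None

edited   = "GATCCAGTTCAAGGATGGAATCC"   # contains CCA + 9 nt + TGG
unedited = "GATCCAGTTCAAGCATGCAATCC"   # same region without the site
print(has_diagnostic_site(edited), has_diagnostic_site(unedited))   # True False
```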
Measurement of ROS Accumulation in T. gondii
To expose our parasites to oxidative stress, we harvested freshly lysed parasites and treated 5 × 10⁴ parasites with 500 µM hydrogen peroxide (H2O2) for 30 min at 37°C and 5% CO2, with or without 1 µM auranofin. After incubation, ROS was detected using H2DCDFC (Invitrogen). Each sample was incubated with 1 µM H2DCDFC and examined in a SpectraMax i5 multimode microplate reader (Molecular Devices, California, US; SoftMax Pro 7.0.2 software) (excitation 494 nm; emission 522 nm) at 37°C for 15 min. For excitation, a single flash from a UV xenon lamp was used for each well, and emission signals were recorded with a sensitivity setting of 100. The values are presented as relative fluorescence units. We compared ROS accumulation of each representative auranofin resistant T. gondii clone to that of wild-type parasites by an unpaired t-test in three independent experiments. A two-tailed p-value <0.05 was considered statistically significant.
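The statistical comparison described above reduces to an unpaired two-tailed t-test on relative fluorescence units; a minimal sketch with invented RFU values is shown below.

```python
# Sketch of the ROS comparison: relative fluorescence units (RFU) for wild-type
# parasites versus one resistant clone across three independent experiments
# (all values invented), compared with an unpaired two-tailed t-test.
from scipy import stats

rfu_wild_type = [5200, 4800, 5500]
rfu_resistant = [3100, 2900, 3400]

result = stats.ttest_ind(rfu_wild_type, rfu_resistant)
print(f"t = {result.statistic:.2f}, two-tailed p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("resistant clone accumulates significantly less ROS than wild type")
```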
Protein Alignment and Structural Models
The predicted amino acid sequences for TgSOD2 (TGGT1_316330) and TgRNR (TGGT1_294640) were submitted to the SwissModel site to identify the optimal available protein structures for modeling (Guex et al., 2009;Benkert et al., 2011;Bertoni et al., 2017;Bienert et al., 2017;Waterhouse et al., 2018). The Toxoplasma sequences were threaded onto the structure of Plasmodium knowlesi SOD (PDB 2AWP) (Vedadi et al., 2007) and human RNR (PDB 3HND) (Fairman et al., 2011). Clustal Omega was used to align the amino acid sequences and EMBOSS was used to calculate percent identity and similarity. Information on MS peptide coverage was obtained from the ToxoDB database (Gajria et al., 2008).
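For reference, percent identity over an existing pairwise alignment can be computed as in the simplified sketch below; EMBOSS additionally reports similarity using a substitution matrix, which is not reproduced here, and the aligned strings are arbitrary examples.

```python
# Simplified percent-identity over an already-aligned sequence pair (gaps as '-').
# EMBOSS additionally reports similarity using a substitution matrix, which is
# not reproduced here; the aligned strings are arbitrary examples.
def percent_identity(aln_a: str, aln_b: str) -> float:
    assert len(aln_a) == len(aln_b), "aligned sequences must have equal length"
    identical = sum(a == b and a != "-" for a, b in zip(aln_a, aln_b))
    return 100.0 * identical / len(aln_a)

print(percent_identity("MKT-LLVAGA", "MKSALLVA-A"))   # 70.0
```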
Expression Level of the Thioredoxin Reductase Gene in Mutants
We used qPCR to quantify the expression level of thioredoxin reductase gene in both wild type and mutant parasites. RNA was extracted from parasite lines using the RNeasy Mini Kit (Qiagen), and then used for cDNA synthesis via iScript cDNA Synthesis Kit (Bio-Rad). The resulting cDNA samples were then subjected to qPCR with iTaq Universal SYBR Green Supermix (Bio-Rad) with the following gene targets: actin (TGGT1_209030, ToxoDB.org), GAPDH1 (TGGT1_289690, ToxoDB.org), thioredoxin reductase (TGGT1_309730, ToxoDB.org), and superoxide dismutase 2 (TGGT1_316330, ToxoDB.org).
The actin gene was amplified from cDNA samples as a single ~60 bp PCR fragment with primers CACGAGAGAGGATACGGCTTC and CGATGTCGCTAGAGTCCTCAG. The GAPDH1 gene was amplified as a single ~120 bp fragment with primers CACTTTGCCAAGATGGTGTG and GACTGTCTTCAACGAGAAGGAG. The thioredoxin reductase gene was amplified as a ~70 bp fragment using the primers CACAAACACGAACAACGGCG and GTGGACGGCTGAACAAAGTCG.
For the ~50 bp fragment in the superoxide dismutase 2 gene, primers GGCACACCGTTCGCTGATAAG and CGTGGTTCCATGCTTGTGCTG were used.
In order to quantify the amount of target transcript present, target CT values were normalized against actin CT values. The resulting ΔCT values were then compared through two-way ANOVA between the different parasite strains surveyed.
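A minimal sketch of this normalization is shown below: ΔCT is computed per gene against actin, and relative abundance can optionally be expressed as 2^(−ΔCT). The CT values are invented, and the two-way ANOVA step is not reproduced.

```python
# Sketch of the qPCR normalization: dCT = CT(target) - CT(actin); relative
# abundance can optionally be expressed as 2**(-dCT). All CT values are invented,
# and the two-way ANOVA comparison is not reproduced here.
ct = {
    "wild_type": {"actin": 18.2, "TrxR": 24.1, "SOD2": 22.7},
    "line_8":    {"actin": 18.5, "TrxR": 24.6, "SOD2": 23.0},
}
for strain, values in ct.items():
    for gene in ("TrxR", "SOD2"):
        delta_ct = values[gene] - values["actin"]
        rel = 2 ** (-delta_ct)
        print(f"{strain} {gene}: dCT = {delta_ct:.2f}, 2^-dCT = {rel:.4f}")
```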
Aur R T. gondii Lines Have a Growth Advantage Over Wild-Type Parasites
To verify auranofin resistance of the selected clones, we adapted a previously described competition assay (Ma et al., 2008). This was used in place of a normal dose-response assay because at auranofin concentrations higher than 3 µM the fibroblast monolayer begins to disrupt (data not shown), making plaque assays infeasible. To confirm resistance to auranofin in the Aur R T. gondii lines, we assessed their growth in competition with YFP-expressing RH strain parasites in control media and in media with 2.5-3.0 µM auranofin. In competition assays, the inoculating parasite population is composed of 50% YFP-expressing wild-type (sensitive) parasites and 50% of an Aur R line, verified by flow cytometry. The relative contribution of the individual populations was measured after each cycle of lytic growth in serial culture. If a parasite line increases to >50% of the population, it exhibits a growth advantage over the competing line. In the absence of auranofin, Aur R lines grew comparably to, or less well than, the parental RH line that does not express YFP (Figure 1A). However, in the presence of 2.5 µM auranofin, all Aur R lines displayed a significant growth advantage over YFP-expressing wild-type RH parasites (Figure 1B). These observations indicated that day 6 is the optimal day to capture the growth advantage of resistant lines under auranofin selection, because by this time YFP-expressing wild-type parasites were eliminated from the culture. Given these results, we measured the ability of a larger set of Aur R lines to compete with wild-type YFP-expressing RH strain parasites in the absence or presence of auranofin by quantifying their contribution to the culture at 6 days (Figure 2). These results show that 9 of the 10 Aur R lines outcompete the sensitive YFP-expressing RH line in the presence of 2.5 µM auranofin. Line 6 has a distinct profile: it grows slowly in both control and auranofin media, suggesting that it may evade auranofin-mediated killing by protracted replication.
Aur R Lines Do Not Harbor Mutations in the Thioredoxin Reductase Gene
Considering that thioredoxin reductase (Sannella et al., 2008;Angelucci et al., 2009;Debnath et al., 2012;Ilari et al., 2012) is the proposed target of auranofin, we hypothesized that resistance in T. gondii would develop through mutations to the parasite thioredoxin reductase gene (TgTrxR, TGGT1_309730). Therefore, we amplified and sequenced the TgTrxR gene in 53 lines derived from parallel (independent) selections (not shown). We did not detect mutations (SNVs) in the thioredoxin reductase gene of any of the lines, indicating that auranofin resistance does not arise through modifications to TgTrxR that directly influence auranofin binding or TgTrxR enzyme activity.
Aur R Lines Accumulate Less ROS
The anti-parasitic activity of auranofin relies on its ability to induce accumulation of ROS in parasites (Debnath et al., 2012). We exposed wild-type RH strain parasites and four representative Aur R lines to hydrogen peroxide (H2O2) for 30 min before measuring ROS accumulation with dichloro-fluorescein (H2DCDFC). All Aur R lines displayed decreased accumulation of ROS relative to the parental RH line in both the absence and presence of auranofin (Figure 3). As a control, we also measured ROS accumulation in the absence of H2O2 and observed no fluorescence signal (data not shown). This negative control is consistent with a prior study measuring ROS accumulation using the same dichloro-fluorescein dye without H2O2 (Lee et al., 2006).
Aur R Lines Harbor Diverse Mutations That Introduce Amino Acid Substitutions
Given that we did not observe mutations in T. gondii thioredoxin reductase, we performed whole genome sequencing (WGS) analysis of 10 independent T. gondii Aur R lines. Our goal was to determine whether these T. gondii parasites harbor a common mutation(s) that underlies resistance to auranofin. Analysis of coding sequences with 100% SNV frequency and at least 50-fold coverage identified approximately five non-synonymous SNVs in the exons of each line. Selected SNVs observed in the WGS were validated by PCR amplification and Sanger sequencing (primers in Table S1). We chose to investigate Aur R line 8 specifically, to consider which SNV was most likely to underlie auranofin resistance. Line 8 has 12 SNVs that introduce amino acid substitutions to diverse protein-coding genes (Table 1). We used the community-annotated ToxoDB database and BLAST analysis to investigate these loci. In particular, the availability of RNA-seq datasets permitted us to evaluate whether loci were expressed in the tachyzoite stage. In order to assess the significance of expressed loci, we used BLAST analysis to look for protein homologs and significant motifs and collected RNA-seq values and CRISPR screen scores for available loci. This information led us to focus on 2 of the loci (TGGT1_316330 and TGGT1_294640) as the most likely to confer resistance. These loci encode a superoxide dismutase (SOD2) and the large subunit (R1) of ribonucleotide reductase (RNR). Both enzymes have strong fitness-conferring effects in the deposited CRISPR screen data and well-conserved metabolic roles. RNR removes the 2'-hydroxyl group of the ribose ring of nucleoside diphosphates to form the deoxyribonucleotides used in DNA synthesis, while SOD inactivates damaging superoxide (O2−) radicals by converting them into molecular oxygen and hydrogen peroxide. Since both TgRNR and TgSOD2 are conserved proteins, we threaded their amino acid sequences onto homologous crystal structures to locate the positions of the amino acid substitutions. TgSOD2 is an Fe-type SOD and was previously demonstrated to localize to the tachyzoite mitochondrion (Brydges and Carruthers, 2003; Pino et al., 2007). SODs form dimers with shared coordination of two iron (Fe III) cofactors. The L201P mutation lies near the key iron-coordinating residues H111, E249 and H160 (Figure 4A) and is located in an α-helix. Substitution of a proline at this site likely causes the helix to bend, perhaps altering SOD enzymatic properties. TgRNR aligns well with human ribonucleotide reductase 1 (61.7% identity and 78.2% similarity). However, there is a 59 amino acid N-terminal extension to the Toxoplasma protein, with the conserved sequence beginning with M60 (Figure 4B). This leader does not represent mis-annotation of the N-terminus of the protein: there are MS peptides mapping to this sequence, especially in samples from monomethylarginine- and phosphopeptide-enriched datasets in ToxoDB. This is particularly interesting, as arginine methylation is a modification associated with proteins that have roles in transcriptional regulation, RNA metabolism and DNA repair. RNRs use free-radical chemistry to convert ribonucleotides into deoxyribonucleotides. Significantly, ribose reduction requires generation of a free radical, and RNR requires electrons donated by thioredoxin.
Moreover, superoxide has been shown to inactivate RNR, and yeast lacking either cytoplasmic or mitochondrial SODs show increased susceptibility to oxidative stress and RNR inactivation. The L166Q mutation is in a solvent-exposed loop (Figure 4C) that is poorly conserved and not known to contribute to enzyme interactions.
Resistance to Auranofin Is Not Conferred by an Isolated Mutation in TgSOD2
The anti-parasitic effect of auranofin appears to stem from its molecule of gold, which is known to induce the accumulation of ROS in parasites (Debnath et al., 2012; Adeyemi et al., 2017). Since we hypothesized that a single mutation (SNV) might suffice to generate resistance to auranofin, we analyzed the effect of the TgSOD2 L201P mutation on auranofin sensitivity. SOD2 is a bona fide enzymatic ROS scavenger and the sole mitochondrial superoxide dismutase in tachyzoites. To analyze the role of mutant SOD2 in auranofin resistance, we generated a T. gondii clone with a knock-in YFP tag for SOD2 that localizes to the tachyzoite mitochondrion, as confirmed by co-localization with the mitochondrial marker F1β ATPase (Figure 5A). Subsequently, we generated two T. gondii SOD2.L201P lines: one lacking the YFP tag and one tagged with a C-terminal YFP fusion (Figure S1). Both lines were corroborated by Sanger sequencing. In the absence of auranofin, both L201P lines grew less well than the TgSOD2-YFP wild-type line but were more fit than Aur R line 8 (Figure 5B). Previous attempts to create SOD1 and SOD2 gene deletions failed (Odberg-Ferragut et al., 2000) and SOD2 is a fitness-conferring locus in a genome-wide CRISPR/Cas screen (Sidik et al., 2016). It is noteworthy that the L201P mutation is associated with reduced fitness in the absence of superoxide stress (control conditions). In the presence of auranofin, only line 8 grows well, indicating that the single L201P mutation to TgSOD2 does not confer resistance. This may indicate that this substitution does not influence superoxide metabolism at all, or that auranofin resistance also requires the mutant TgRNR enzyme. It is unlikely that these enzymes directly interact, as RNR enzymes are cytosolic while TgSOD2 is in the mitochondrial compartment. Consistent with its auranofin-sensitive growth, the SOD2.L201P T. gondii line accumulated ROS similarly to wild-type parasites (Figure 5C).
DISCUSSION
Auranofin is an FDA-approved drug for the treatment of rheumatoid arthritis. More recently it has been proposed to be a promising anti-parasitic agent with activity against diverse parasites with public health significance. Despite this promising broad spectrum of anti-parasitic activity, the molecular target of auranofin has only been indirectly implicated as thioredoxin reductase. We previously demonstrated that auranofin decreases the parasite burden during a systemic infection with T. gondii using a chicken embryo model of acute toxoplasmosis (Andrade et al., 2014). Our preliminary observations suggest that auranofin exerts a dual effect: it modulates both parasite proliferation and host immunopathology. Herein, we report isolation and characterization of auranofin-resistant T. gondii lines. None of our resistant lines have mutations in the thioredoxin reductase gene. This may indicate that this enzyme is not the molecular target of auranofin in T. gondii; alternately, it may indicate that thioredoxin reductase cannot tolerate resistance-conferring mutations or is not the sole significant target in this organism. During these investigations, we monitored the activity of auranofin on wild-type parasites as a control condition relative to the resistant lines. Our observations suggest that auranofin enhances cumulative free radical damage, ultimately causing parasite death. Each of our auranofin-resistant T. gondii lines carries several mutations (SNVs), suggesting a high threshold to develop auranofin resistance. This is a desirable characteristic for drug development. Importantly, resistant lines accumulate less reactive oxygen species (ROS) than wild-type parasites.
Chemical mutagenesis is a well-established and robust approach [e.g., McFadden et al., 2000; Nagamune et al., 2007] to identify the molecular targets of candidate anti-T. gondii drugs. This approach is also relevant to anticipating mechanisms of drug resistance. We isolated auranofin-resistant T. gondii lines derived from the RH strain. Resistant lines have increased replication relative to wild-type RH parasites in competition assays carried out in the presence of auranofin. Lines with ≤5 accumulated SNVs (lines 1-4) appear to have a growth advantage over wild-type parasites in the absence of auranofin. This probably reflects a fitness defect induced by expression of YFP in wild-type parasites (Ma et al., 2008); these lines likely grow comparably to non-YFP-expressing wild-type parasites without auranofin present. In contrast, resistant lines that have >5 SNVs each compete poorly with wild-type parasites in the absence of selection, suggesting that accumulation of SNVs comes at a cost to parasite fitness. An alternative explanation for the isolation of multiple SNVs in each resistant clone is that auranofin interferes with many processes and development of resistance requires alterations to more than one target. Another possibility to consider is that the expression level of thioredoxin reductase could change with auranofin treatment. We measured the transcript levels of two redox genes, TrxR and SOD2, in both wild-type parasites and mutant line 8 (Figure S2). Auranofin did not induce significant changes in the expression level of either TrxR or SOD2. While changes in TrxR transcription were not detected in the mutant line, it is possible that overexpression could lead to development of resistance. Future studies could investigate whether overexpression of thioredoxin reductase alone is sufficient to confer resistance to auranofin.

FIGURE 4 | Location of substitutions in TgSOD2 and TgRNR. (A) SOD forms dimers that coordinately bind to iron cofactors at their interface. The Toxoplasma amino acid sequence was threaded onto a Plasmodium knowlesi protein structure (PDB 2AWP) to create a model for TgSOD2. The mutation at position 201 (red) is near the end of an α-helix; substitution of a proline for leucine at this location likely causes the helix to bend. This mutation is also proximal to key iron-coordinating residues H111, E249 and H160. (B) Alignment of the Toxoplasma and human RNR sequences indicates that TgRNR has a novel 59 amino acid N-terminal extension. MS peptides (yellow highlight) in ToxoDB map to this sequence with S and T phosphorylated, and R monomethylated (underlined). (C) The Toxoplasma amino acid sequence for RNR was threaded onto the human RNR structure (PDB 3HND) to create a model for TgRNR. The L166Q mutation is in a solvent-exposed loop that is poorly conserved and not known to contribute to enzyme interactions.
Auranofin contains a molecule of gold in a 3,4,5-triacetyloxy-6-sulfanyl-oxan-2-yl methyl ethanoate scaffold and has been proposed to be a pro-drug (Chircorian and Barrios, 2004) that delivers gold to dithiol groups, like those found in proteins that bear thioredoxin-containing domains (Saccoccia et al., 2012). Although auranofin inhibits mammalian thioredoxin reductase (Gromer et al., 1998), the source of its anti-inflammatory activity remains ill-defined and may reflect inhibition of multiple targets. For example, macrophages decrease production of nitric oxide and pro-inflammatory cytokines in response to auranofin, which also inhibits cyclooxygenase-2-dependent prostaglandin E2 production (Yamashita et al., 2003). Previous studies using the parasite enzymes trypanothione reductase and glutathione-thioredoxin reductase incubated with auranofin demonstrated that co-crystals harbor gold bound to reactive cysteines in these proteins (Angelucci et al., 2009; Ilari et al., 2012; Parsonage et al., 2016). In addition, S. mansoni worms with decreased expression of thioredoxin glutathione reductase (SmTGR) exhibit diminished parasite viability upon exposure to auranofin. E. histolytica treated with auranofin upregulates transcripts of a gene encoding a protein resembling an arsenite-inducible RNA-associated protein (AIRAP) (Debnath et al., 2012). Interestingly, both arsenite and auranofin are inhibitors of thioredoxin reductase, implying that EhTrxR is a target of auranofin. Studies in both S. mansoni and E. histolytica corroborated the interaction of thioredoxin reductase with auranofin by mobility shift assays or other in vitro measures (Kuntz et al., 2007; Debnath et al., 2012). In these cases, the approach involved target validation guided by the chemical properties of gold rather than target identification through genetic resistance. It was surprising not to identify changes to the thioredoxin reductase gene, given previous studies that implicate antioxidant enzymes containing thioredoxin or thioredoxin-like domains as a target for auranofin in parasites. However, although the S. mansoni and E. histolytica thioredoxin reductase antioxidant systems are essential for survival, a previous study indicates that T. gondii thioredoxin reductase, while necessary for normal ROS metabolism, is not essential for survival (Xue et al., 2017). It should be noted that infection with 10⁶ TgTrxR knockout parasites increases survival in mice by approximately 2 weeks relative to the parental RH line (Xue et al., 2017). In vitro, the null line exhibits reduced antioxidant capacity, invasion efficiency, and proliferation (Xue et al., 2017). These observations are in line with the CRISPR fitness score for thioredoxin reductase (-1.98), which places it as fitness-conferring but not essential.
Among the significant SNVs found in our auranofin-resistant T. gondii clones, we directed our attention to a point mutation in the superoxide dismutase 2 (SOD2) gene. SOD2 is a bona fide enzymatic antioxidant system that catalyzes the dismutation of superoxide anions into hydrogen peroxide and oxygen. The Toxoplasma genome contains three SOD genes. TgSOD1 is expressed in the cytoplasm of tachyzoites and TgSOD2 is located in the mitochondrion; TgSOD3 is specifically expressed in oocysts. TgSOD2 is a critical enzyme for tachyzoite survival: previous efforts to generate a TgSOD2 knockout line were not successful (Odberg-Ferragut et al., 2000), and its CRISPR fitness score (-4.09) indicates that it is strongly fitness-conferring and likely essential. We hypothesized that the L201P mutation in TgSOD2 might increase its capacity to convert superoxide into hydrogen peroxide and oxygen to confer auranofin resistance. To test this hypothesis, we generated several lines harboring a knock-in tag on wild-type SOD2 and engineered both YFP-tagged and untagged lines that bear the L201P mutation. Consistent with previous characterization of TgSOD2 (Pino et al., 2007), both wild-type and L201P TgSOD2-YFP lines exhibit a mitochondrial distribution. However, neither the tagged nor the untagged L201P TgSOD2 lines exhibit increased growth in 2.5 µM auranofin, indicating that the single TgSOD2 mutation cannot confer resistance to this level of auranofin or reduce accumulation of ROS in our assay.
In addition to the L201P TgSOD2 mutation, Aur R line 8 harbors a mutation in the large subunit of TgRNR, an essential enzyme that generates the deoxyribonucleotides required for genome duplication and parasite replication. TgRNR has a 59 amino acid N-terminal extension relative to other RNRs. Peptides from this extension are represented in phosphorylated and monomethylated MS datasets, suggesting that this domain may regulate RNR activity. RNRs perform free-radical chemistry, require electrons donated from thioredoxin, and are susceptible to superoxide inactivation. Although the L166Q mutation lies in a region that is not known to contribute to enzyme interactions, it may play a role in diminishing the susceptibility of TgRNR to superoxide inactivation. While this mutation may therefore indirectly contribute to resistance to superoxide, RNR enzymes do not directly participate in redox homeostasis and are unlikely to account for the reduced accumulation of ROS observed in the Aur R lines.
Our previous and current findings suggest that auranofin decreases T. gondii viability through oxidative stress (Andrade et al., 2014). Aur R lines exhibit an increased ability to scavenge ROS. We carried out preliminary studies to evaluate whether the auranofin-resistant clones were resilient to hydrogen peroxide damage. Since our observations were highly variable, we are currently generating parasites expressing redox sensors. The genetic studies presented here did not identify a consistent mechanism for acquisition of resistance through point mutations in a specific protein target. Our future studies will examine whether there are transcriptional, translational or post-translational changes that contribute to auranofin resistance in these lines. For example, difference-gel electrophoresis and mass spectrometry have been used to identify differentially expressed proteins in sulfadiazine-resistant T. gondii isolates. While it was disappointing not to identify a single molecular target for auranofin in T. gondii in our studies, this result likely indicates that an enhanced ability to scavenge ROS in the presence of auranofin requires multiple changes to the parasite genome. Therefore, if auranofin is repurposed as an anti-parasitic treatment, it would likely present a high barrier to resistance in clinical contexts.
DATA AVAILABILITY STATEMENT
The sequencing data are publicly available at NCBI's BioProject, with the accession # PRJNA679812.
AUTHOR CONTRIBUTIONS
CM and JT collected, contributed, and performed the analysis of the data and drafted the paper. SJ and SS collected and contributed the data. SR, YD, and JZ collected the data. NM conceived the analysis and edited the paper. RA conceived and designed the analysis, collected the data, performed the analysis and wrote the paper, and obtained funding. All authors contributed to the article and approved the submitted version.
FUNDING
RA was supported by NIH K08 5K08AI102989-04; AMFDP RWJF 70642 (the views expressed here do not necessarily reflect the views of the Foundation) and UCI DOM Chair Research Award 60244. This work was made possible, in part, through access to the Genomics High Throughput Facility Shared Resource of the Cancer Center Support Grant (P30CA-062203) at the University of California, Irvine and NIH shared instrumentation grants 1S10RR025496-01, 1S10OD010794-01, and 1S10OD021718-01.
ACKNOWLEDGMENTS
Special gratitude to Dr. Melissa Lodoen (UCI) for her advice and critical review of this article and to Dr. Edward Brignole (MIT) for his thoughts on the TgRNR mutation. We also thank Dr. Peter Bradley (UCLA) for sharing the F1B ATPase antibody for mitochondria microscopy.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcimb.2021.618994/full#supplementary-material

Supplementary Figure 1 | CRISPR-Cas9 strategy for generating the T. gondii SOD2.L201P line. TgSOD2-YFP parasites incorporating the LIC plasmid (blue line) undergo CRISPR-Cas9 mediated homology-directed repair (HDR). The DNA amplicon used for HDR contains the L201P mutation (gold vertical line in exon 3) and two silent and unique restriction sites, AflI and XcmI (black vertical lines in exon 3), which allow for screening of positive clones. Sequences corresponding to the two gRNAs were deleted from the DNA used for HDR (Δg1 and Δg2, golden triangles). Each gRNA target was present twice in TgSOD2-YFP parasites (g1a/b, g2a/b; red font), resulting in two T. gondii SOD2.L201P lines. For HDR after g1a/g2a cuts, a T. gondii SOD2.YFP.L201P line is generated (lower left). For HDR after g1a/g2b cuts, a T. gondii SOD2.L201P line without YFP or the LIC plasmid (blue) is generated (lower right). Positive clones were screened by XcmI digest before sequence verification.
Supplementary Figure 2 | Auranofin treatment does not change the expression of thioredoxin reductase or superoxide dismutase 2 transcripts. RNA was harvested from freshly lysed parasites incubated in either control or 1 µM auranofin-containing media. Relative expression of the SOD2 and TrxR genes was normalized to actin. Parasites with or without exposure to auranofin expressed similar transcript levels of TrxR and SOD2, with no statistically significant differences (p > 0.05, Mann-Whitney U-test). This manuscript has been released as a pre-print at https://www.biorxiv.org/as (Ma C et al., 2020).
"Biology"
] |
Towards the Green Synthesis of Furfuryl Alcohol in a One-Pot System from Xylose: A Review
In the pursuit of establishing a sustainable biobased economy, lignocellulosic biomass is gaining value as a feedstock for valorization. Nevertheless, to achieve the integrated biorefinery paradigm, the selective fractionation of its complex matrix into its single constituents must be accomplished. This review presents and examines novel catalytic pathways to form furfuryl alcohol (FuOH) from xylose in a one-pot system. This production concept relies on chemical, thermochemical and biochemical transformations, or a combination of them. Still, the bulk of the research is targeted at developing heterogeneous catalytic systems to synthesize FuOH from furfural and xylose. The present review also includes an overview of the economic aspects of producing this platform chemical at industrial scale. In the last section of this review, an outlook and a summary of catalytic processes to produce FuOH are highlighted.
Introduction
Owing to the continuous global demand for chemicals, fuels and materials produced by the oil industry, the concerns associated with the fossil resources (i.e., coal, natural gas and gasoline) that currently supply most of these substances, and the dependency of the global economy on them, alternative renewable resources have gained momentum in industry and academia. In this sense, lignocellulosic biomass is becoming an attractive alternative for substituting fossil derivatives in the production of fuels and chemicals in liquid, solid and gas form [1]. Furthermore, this type of biomass is the most abundant (after atmospheric CO2), non-contaminating and inexpensive renewable carbon source. Valorization of by-products from the pulp and paper industry emerges as a notably promising feedstock, considering that it does not compete with food consumption. Moreover, pulp and paper mills in the Nordics are struggling to remain profitable as a consequence of the digitalization of literature, the climate crisis and, especially, the growing competitive market in equatorial and sub-equatorial regions with faster tree-growing rates and low-cost labor [2]. This trend can open new markets for forest firms and allow them to expand their product portfolios with biobased chemicals and biofuels, as an extension of their cellulose-based products, such as paper and packaging materials. This situation forces a shift from bulk production of paper-grade pulp toward other products with lower production volumes but higher profit, such as bio-oil from lignin [3], and value-added chemicals like xylitol [4], furfural (FUR), 5-hydroxymethylfurfural (HMF) and acetic acid from the hydrolysate liquor of dissolving pulp production [5,6].
The particular composition of biomass (highly oxygenated compounds) makes its conversion into chemicals and fuels energy-intensive and requires profound chemical transformations [7]. One option for handling the complex matrix of biomass feedstock is its conversion into simpler fractions, which can be further transformed downstream. Promising biomass-derived molecules
Scope of the Review
Catalytic conversion of lignocellulosic biomass represents a potential tool to selectively produce value-added materials, chemicals and fuels in solid, liquid and gas form. Among the appealing biomass-based chemicals, furfuryl alcohol is a highly attractive compound due to its wide range of applications and its sustainable production possibilities [54].
A recent review has reported advances in the catalytic production of attractive molecules from lignocellulosic biomass, including promising furanic compounds [5]. Moreover, readers may encounter other interesting reviews that cover the field of FUR upgrading from a broader perspective [55,56]. Furthermore, other important reviews have collected recent developments in the formation of FUR [57] from hemicelluloses [58] using solid acid catalysts [59] and biphasic systems [60]. A review written by Nakagawa et al. [40] gives a good perspective on the reductive conversion of FUR and HMF with respect to catalyst development. Additionally, this review draws on helpful earlier research and reviews on furfuryl alcohol manufacture and hydrogenation, as well as on catalyst characterization techniques and literature published in the field [10,54,61,62]. The reaction mechanisms to form FuOH are presented and discussed. As the following sections demonstrate, recent breakthroughs in the catalytic conversion of xylose and biomass-based feedstocks to FuOH are reviewed in the present paper. As reported previously [63], xylose is the most abundant pentose in birch hydrolysate liquor, which is typically found in the process streams of pulp and paper companies in the Nordics. Therefore, xylose is the feedstock of focus in this review for the formation of FuOH. Biochemical routes are only briefly reviewed here, since a recently published research paper covers this topic [64]. Further applications of FuOH are also mentioned, and the economic aspects of its production and its market are reviewed. Nonetheless, since the present review concentrates on the formation of FuOH, the further step of synthesizing its derivatives is only briefly discussed. The last sections provide an outlook for the field and conclusions that could form the basis for future studies.
Reaction Mechanisms
From the thermodynamic and kinetic perspective, a C=C bond is more easily hydrogenated than a C=O bond. Consequently, the selective hydrogenation of unsaturated aldehydes, including α,β-unsaturated aldehydes, to unsaturated alcohols remains a challenge. Thus, significant studies aimed at understanding the reaction mechanisms of FUR hydrogenation have been included in the present review.
Typically, when employing Cu-based catalysts, it is possible to achieve selective hydrogenation of the C=O bond while keeping the C=C bond in the furan ring intact. Aldehydes adsorb onto metal surfaces via two types of bonding configurations: η1-(O) and η2-(C,O) [65] (Scheme 1). In a previous study, the hydrogenation of FUR over Cu-based catalysts took place through a η1-(O)-aldehyde (perpendicular) binding mode [33]. In this case, the aldehyde group is bonded to the surface of the active site through the carbonyl O atom, with the C=C bond remaining mostly unaltered and away from the metallic surface [33]. On a Pd (1 1 1) surface, the η2-(C,O) configuration is preferred [65]. A strong metal-support interaction increases the selectivity of FUR hydrogenation over Pt/TiO2 [66]. When this catalyst was used, the synergy between the metal and support enhanced hydrogen spillover. This, in turn, led to the hydrogenation of a furfuryl-oxy intermediate that was attached to the support at oxygen vacancies. Nevertheless, the configuration on the Ni surface has been less investigated. It has been observed that both states of acetaldehyde occur on a Ni (1 1 1) crystal at 105 K [67]. Nakagawa et al. [68] introduced a reaction mechanism for FUR hydrogenation to FuOH on the surface of a Ni/SiO2 catalyst (Figure 2), in which a strongly adsorbed FUR molecule with a η2-(C,O)-type configuration on the Ni metal surface is attacked by two adsorbed hydrogen atoms. The adsorbed hydrogen atoms are situated in the threefold hollow site, which is a stable adsorption site on the Ni (1 1 1) surface [69]. To direct the hydrogenation towards the C=O bond, promoters such as Sn and Ti have been added to the catalyst, exempting C=C bonds from hydrogenation [70]. Gallezot and Richard [71] proposed that the catalytic hydrogenation of α,β-unsaturated aldehydes may be improved over a Pt-based catalyst if a second metal is included as a promoter. The high conversion and selectivity associated with the inclusion of promoters have two main origins: (1) the catalytic active sites interacting with the promoter [72,73]; (2) the electronic contributions of the promoters, such as increasing the basicity of the zeolite, hindering the dispersion of Pt, increasing the stability of the catalyst and promoting the formation of electron-rich platinum species (Pt δ-) [74].
Another option when adopting transition metals is the reduction of FUR via the Meerwein-Ponndorf-Verley (MPV) mechanism [75,76]. This technique requires an alcohol, mainly a secondary alcohol that donates hydrogen species (a sacrificing alcohol), to reduce the carbonyl group of an aldehyde or ketone to the analogous alcohol through an intermolecular hydride transfer catalyzed by a Lewis acid, via a six-membered intermediate [77,78]. At the beginning, homogeneous Lewis acid catalysts were used to carry out the MPV reaction [79][80][81]. However, these catalysts are associated with high costs and limited catalytic activity, due to their sensitivity towards moisture. Solid catalysts, by contrast, offer advantages such as easy recovery and regeneration. A reaction mechanism for the MPV reduction step was proposed by Paulino et al. [82]. In the tandem reaction, xylose is dehydrated to FUR on the zeolite Brønsted acid sites. The aldehyde is later coordinated to the Lewis acid sites through its C=O bond, establishing a cyclic six-membered transition state formed by adsorption of an alcohol molecule, from which a hydride is shifted to the carbonyl group. By now, it is known that both Lewis and Brønsted acid sites on catalysts are active and play a significant part in the conversion of C5 sugars to FuOH. In this regard, Perez et al. [83] proposed a tandem transformation in which xylose/xylulose dehydration to FUR rests principally on Brønsted acid sites [84][85][86], whereas the following stage (FUR transfer hydrogenation) takes place on the Lewis acid sites [87][88][89]. The surface sites engaged in each reaction step are highlighted in Scheme 3.
Biochemical Conversion of FUR to FuOH
An alternative to conventional routes for producing chemicals from biomass is the use of biocatalysis. Numerous advantages stand out from this technological strategy, such as mild reaction conditions, high yields and environmental friendliness [90].
The reduction of FUR has been biochemically demonstrated for microorganisms such as Methanococcus deltae [91], Saccharomyces cerevisiae [92], Pichia stipitis [93], Escherichia coli [94,95] and Bacillus coagulans [90,96] (Scheme 4). The methanogen Methanococcus deltae was capable of transforming FUR to FuOH in an almost stoichiometric amount [91]. He et al. [94] biosynthesized FuOH from FUR employing a recombinant E. coli CCZU-A13 harboring a NADH-dependent reductase (SsCR) (Table 1). The maximum activity of the enzyme was found at 30 °C and the optimum pH was 6.5. They also investigated the effect of FUR concentrations ranging from 20 to 300 mM. FuOH yields of 100% were obtained when the FUR concentration was below 200 mM; at 200 and 300 mM FUR, the FuOH yields decreased to 94% and 74%, respectively. Studying these effects supported their next research stage, employing a combination of a solid acid catalyst and the cells to produce FuOH directly from biomass-derived xylose. A one-pot chemo-enzymatic process to dehydrate corncob-derived xylose was developed using SO4 2−/SnO2-kaoline and a bioreduction of FUR to FuOH with E. coli CCZU-T15 [97]. This chemo-enzymatic catalysis yielded 74% FUR from corncob-derived xylose and a subsequent 100% FuOH yield. Moreover, Bacillus coagulans NL01 has also been employed to produce FuOH from FUR, producing about 98 mM FuOH within 24 h with a conversion of 92% and a selectivity of 96% [90].
Quaker Oats Co. began patenting its knowledge on FuOH with a continuous process to produce FuOH from FUR in the vapor phase employing CuO and Na2O-SiO2, yielding around 99% FuOH [106,107]. Furthermore, a patent submitted by Lillwitz in 1978 [108] describes the production of FuOH using HMF as the feedstock in the liquid phase with Pd and Rh at ≥135 °C; FuOH is continuously extracted and the pH value is kept in the range of 6.5 to 9.0. The highest yield obtained was 79% with an HMF conversion of 87% in a continuous flow without the presence of solvents at 200 °C and 0.02 MPa. A method for the catalytic conversion of FUR to FuOH involving a Ru-supported, N-doped graphene material has been recently patented [109]. Furthermore, a patent reports the formation of FuOH from FUR in the liquid phase employing a copper-aluminium alloy and 5.5% Ni-Fe at 130-140 °C under a H2 pressure of 3 MPa, resulting in a FUR conversion of 99.5% and a selectivity to FuOH of 97.6% [110]. In another case, 2-zirconium hydroxyphosphinyl acetate has been employed to convert FUR (98.1% conversion) to FuOH (96.5% yield) at 150 °C in 1.5 h [111]. In this application, the inventors used isopropanol as a hydrogen source and as a solvent. Furthermore, the catalyst was reused three times, leading to a reduction in the catalytic activity to a 92.5% FuOH yield. In a similar invention, ZrO2@SBA-15 was used as a catalyst to form FuOH from FUR by transfer hydrogenation in a reaction temperature range of 130-160 °C and a hydrogenation reaction time of 1-4 h [112].
Additionally, a process of special interest for obtaining FuOH from carbohydrates (xylose) derived from lignocellulosic material over multifunctional catalysts was developed by Fraga and Perez in 2013 [113]. Even though various catalysts were reported, such as Pt/SiO2-SO3H, Pt/ZrO2-SO4 2− and Pt/ZrO2, the patent claims that the highest selectivity to FuOH, 93% at a conversion of 19%, is reached using a Pt/SBA-15-SO3H catalyst at 130 °C, 3 MPa and a reaction time of 90 min.
Formation of Furfuryl Alcohol from Xylose in One-Pot Reactions
Recent advances in the one-pot cascade conversion of xylose to FuOH over solid acid catalysts have attracted much attention from industry and academia. The synthesis of FuOH from xylose employing bifunctional catalysts that incorporate acid and metal sites in one reactor brings challenges in avoiding side reactions in order to optimize the yield of FuOH. Furthermore, most studies of the one-pot conversion of xylose to FuOH over bifunctional catalysts involve precious metals like Pt and Pd, and metal oxides and mesoporous silicas with acid sites, such as sulphate or sulfonic groups. Nevertheless, the adequate conversion of xylose to FuOH in a one-step process is very attractive, as it is more cost-effective and the energy-intensive separation of FUR might be avoided.
In one of the pioneering works on the one-step production of FuOH from xylose, Perez and Fraga [114] investigated a dual catalyst system consisting of Pt/SiO2 and sulfated ZrO2 as the metal and acid catalysts, respectively. The highest selectivity to FuOH (51%) was achieved at 130 °C in 6 h employing a 1:3 aqueous to 2-propanol phase ratio at a xylose conversion of 65% (Table 2). Under these experimental conditions, reusability tests were performed. However, after the first reusability cycle, the selectivity to FuOH declined progressively with each run, reaching 29% in the third cycle. The authors suggested that the solid acid catalyst underwent deactivation, since the formation of other products, which depend on the metal sites (either SiO2 or Pt), was unaffected. Additionally, a multifunctional catalyst based on sulphated zirconia was investigated in the one-pot formation of FuOH from xylose [115]. The highest selectivity to FuOH (27%) was obtained at a xylose conversion of 32% with an acid/metal ratio of 142 at 130 °C and 3 MPa. An interesting effect studied in the article is the role of isolated metal centers, which afford the production of xylitol, whereas the presence of sole acid sites leads to the formation of FUR. A subsequent paper from the same research group reported a high selectivity to FuOH (75%) over a metal-free catalyst (zeolite beta) via MPV [82]. This high selectivity to FuOH was linked to the configuration of the tetrahedral-framework Al centers, tailored by the Al-O-Si bond distance, and the characteristic topology of the catalyst. In another contribution from Fraga's group, multiwalled carbon nanotube-supported noble metal catalysts (Ru, Pt, Au, Pd and Rh) were assessed for the one-pot conversion of xylose in the aqueous phase [116]. Under the experimental conditions (6 h at 130 °C, 3 MPa H2, using water/2-propanol (1:1)), Ru displayed the highest catalytic activity for hydrogenating xylose to xylitol and FuOH, providing 84% and 9% yield, respectively, at 100% xylose conversion. The highest FuOH yield was obtained with the Pd-functionalized catalyst (12%) at a 66% xylose conversion. The authors also studied SBA-15 catalysts incorporating Al as a heteroatom at different Si/Al ratios ([Al]-SBA-15) in the formation of FuOH from xylose [83]. The alterations to the surface of [Al]-SBA-15 guided the product distribution, FuOH being the main product with only a minor quantity of FUR regardless of the Si/Al ratio. All the modified mesoporous catalysts reached selectivities to FuOH of around 90%. Reusability tests were completed at 130 °C in 4 h using a water/2-propanol medium (1:1); after three reusability cycles, the loss of catalytic activity is insignificant (pentose conversion remains around 15%). However, the selectivity to FuOH decreased from 90% to 80%, and the selectivity to FUR increased from 10% to 20%. This effect might be a result of the Lewis acid sites losing activity after each run, which lowers the conversion of the adsorbed FUR intermediate and favors desorption of the aldehyde. This system was designed because it requires neither molecular hydrogen nor noble metal sites for xylose conversion to FuOH, both of which raise costs. Brønsted acid sites appear to be active for the pentose dehydration reaction, whereas Lewis acid sites promote the transfer hydrogenation of the adsorbed FUR intermediate to FuOH [83].
Deng et al. [51] synthesized and employed a bifunctional Cu/SBA-15-SO3H catalyst to form FuOH from xylose in a one-pot catalytic system. The highest FuOH yield (63%) was obtained at 4 MPa, 140 °C and 6 h in a biphasic water/n-butanol solvent mixture at full xylose conversion. Under these experimental conditions, the authors also identified three main side-products: xylitol, FUR and xylulose. They observed that the relatively high hydrogen pressure led to the side hydrogenation of xylose to xylitol, and the relatively high reaction temperature led to further hydrogenation to MF. Nevertheless, a study on the hydrothermal stability of the catalyst is missing, and it would be of great interest to observe the catalytic activity of the functionalized Cu/SBA-15 over several reusability cycles under the same experimental conditions.
Canhaci et al. [117] converted xylose to FuOH over a single organic-inorganic hybrid mesoporous silica-supported catalyst. They employed Pt/SBA-15-SO3H bearing different acid/metal site ratios and found negligible accumulation of FUR from sugar dehydration alone and a striking production of FuOH. When they tested the catalyst reusability, the catalytic activity decreased and the product distribution changed after each reaction cycle. Nevertheless, their work demonstrated that sulfonated ordered mesoporous silica-supported catalysts deliver active and highly selective systems for the generation of FuOH from xylose. High selectivities (83-87%) were accomplished in this system.
Cui et al. [118] converted xylose to FuOH and MF. Firstly, they dehydrated xylose to FUR using a Hβ zeolite catalyst in a fixed-bed reactor with a high xylose conversion (>99%) and FUR yield (87.6%) when using γ-butyrolactone (GBL) and water. Secondly, they added the ternary Cu/ZnO/Al2O3 catalyst, which they had reported in previous work to form MF from FUR [119,120], to hydrogenate FUR. High yields of FuOH (87.2%) and MF were obtained at 150 °C and 190 °C, respectively. After a time-on-stream of 162 h in the reactor, a decline in the yield of FuOH was observed due to the deactivation of the Hβ zeolite catalyst, but the catalytic activity could be recovered after reactivation. (Table footnotes: N/R, not reported; a, selectivity to FuOH; b, total Si/Al ratio; c, as observed from a published figure based on data using a continuous fixed-bed reactor; d, surface area of the support; e, pressurized with N2; f, observed from a figure in the referred article.)
Furthermore, Xu et al. [121] used formic acid both as an acid catalyst and as a hydrogen donor together with a mesoporous N-doped carbon-confined Co catalyst (Co-N-C) to convert xylose to FuOH. They reported a 69.5% FuOH yield at 160 °C in 3 h from xylose, and after five reusability cycles, it was observed that the Co-N-C catalyst possesses high stability for the xylose conversion. Moreover, Ordomsky et al. [122] developed a biphasic system to dehydrate xylose and hydrogenate FUR employing Amberlyst-15 and a hydrophobic Ru/C catalyst, which is located in the organic phase. However, due to the experimental conditions, the main products of xylose dehydration with hydrogenation of FUR are THFA, γ-valerolactone, levulinic acid and pentanediols. The low amount of FuOH formed under these conditions could be the result of the high pressure (4 MPa) and high temperature (165 °C), which could have hydrogenated FuOH further to THFA.
Formation of Furfuryl Alcohol from Biomass-Derived Xylose in One-Pot Reactions
The use of biomass-derived xylose in the formation of FuOH has been limited. Nevertheless, combinations of thermochemical and biochemical steps have been used in two-stage processes.
He et al. used SO4 2−/SnO2 with different strains of E. coli, such as CCZU-A13 [94], CCZU-T15 [97] and CCZU-K14 [98], together with a xylose-rich hydrolysate liquor from corncob to produce FuOH. They employed a biocompatible solid acid catalyst (SO4 2−/SnO2-APG) and E. coli CCZU-A13 to dehydrate the sugar-rich hydrolysate and bioreduce the intermediate FUR, respectively. At 170 °C in 20 min, 91% of the xylose was converted, yielding 52 mM FUR. Then, after a pH adjustment, E. coli CCZU-A13 cells incorporating the reductase SsCR were added. After another 3 h of reaction, FUR was converted to FuOH, reaching a yield of 44% determined from the initial xylose. The recycling experiments performed showed a 7% loss of catalytic activity (from 100% to 93%) after five cycles, which demonstrates the relative stability of both the solid acid catalyst and the immobilized cells (Table 3). In a similar manner, corncob-derived xylose was converted in a one-pot tandem reaction to FuOH using SO4 2−/SnO2-kaoline and recombinant E. coli CCZU-T15 in a toluene-water medium [97]. In this case, a FUR yield of 74.3% was achieved in a toluene/water medium (1:2, v/v) containing 10 mM OP-10 at 170 °C in 30 min. Afterwards, FUR was converted to FuOH with E. coli CCZU-T15, yielding 13% based on the starting corncob material. However, it was considered that a 100% FuOH yield was obtained from the bioreduction step of FUR. Moreover, they used SO4 2−/SnO2-Montmorillonite as a catalyst and E. coli CCZU-K14 to obtain FuOH in a tandem reaction [98]. The highest FUR yield (41.9%) was obtained from xylose at 170 °C in 20 min. A 100% yield of FuOH was then obtained by E. coli CCZU-K14 whole cells from 200 mM FUR at 30 °C in 24 h, at a pH of 6.5 and 1.5 mol glucose/mol FUR.
Effect of Solvents in the Formation of Furfuryl Alcohol
The solvent effects in heterogeneous catalysis have been rationalized by correlating reaction rates and product distributions with two main properties: the solvent polarity and/or the dielectric constant. The results reported by various authors [117,123,124] on the formation of FUR from xylose over zeolites indicate that the presence of water has a negative effect on the dehydration of xylose. Therefore, organic solvents are added to boost the catalytic activity of xylose dehydration and the selectivity toward the target compound. Biphasic systems comprising an aqueous and an organic phase with a high partition coefficient for FUR have been widely studied in the formation of FUR [14,63,125,126]. The organic solvent protects the FUR formed from side reactions and potentially allows for easier separation of the products. Solvent effects can influence the kinetics of both polar and non-polar substrates, as Singh et al. [127] discussed in their article. It is known that a polar solvent enhances the adsorption of a non-polar reactant, while a non-polar solvent enhances the adsorption of a polar reactant. However, the specificity of metals with regard to solvent effects is not yet known.
Typically, the hydrogenation of aldehydes with an alcohol as the solvent leads to the generation of secondary products such as acetals, caused by the reaction between the substrate and the solvent. When Merlo et al. [35] used ethanol as the solvent in the conversion of FUR to FuOH, the ether 2-isopropoxymethylfuran was found as a consequence of the reaction between FuOH and ethanol. However, FuOH and the solvent did not seem to form by-products when n-heptane and toluene were employed.
López-Asensio et al. [75] evaluated three sacrificing alcohols, i-propanol, 2-butanol and cyclohexanol, in the MPV reaction in the presence of two catalysts. The results show that when 2-butanol is used, a slight increase in FUR conversion occurred for both Zr-doped mesoporous SBA-15 catalysts, in comparison to i-propanol and cyclohexanol. This effect is due to the fact that the longer aliphatic chain reduces the polarity of the secondary alcohol and promotes the generation of a more stable six-membered intermediate with the Lewis acid sites, which facilitates the relocation of the hydride species. On the other hand, when employing cyclohexanol, the poorest catalytic results are observed under the experimental conditions, probably attributable to steric hindrance in the development of the six-membered intermediates.
Related research was carried out by Gong et al. [31], comparing 2-propanol, ethanol, water and toluene. The protic solvents (2-propanol, ethanol and water) exhibited higher catalytic activity than the nonpolar solvent (toluene). This could be due to hydrogen bonding between the carbonyl oxygen of FUR and the hydroxyl group of the protic solvents. Surprisingly, Tamura et al. [26] were able to form FuOH from FUR in water under mild reaction conditions, namely low H2 pressure (0.8 MPa) and low temperature (130 °C), employing Ir-ReOx/SiO2 as a highly active and selective solid catalyst. The authors claimed that not only can FUR be hydrogenated to FuOH in this system, but various unsaturated aldehydes can be transformed to the corresponding unsaturated alcohols, due to the synergy created between the Ir metal and ReOx, which increases the selectivity and activity without compromising the activity of the noble metal. Bonita et al. [128] compared the conversion of FUR and the selectivity to FuOH in isopropanol, toluene and hexanes. It was noted that the FUR conversion and the selectivity to FuOH declined significantly in toluene and hexanes, compared to the system involving isopropanol.
A study comparing various water/isopropanol mixture compositions uncovered the significant roles played by the solvent polarity and the amount of the hydrogen donor in the formation of FuOH [82]. The study demonstrated that the highest FuOH yield (~80%) was obtained when a zeolite beta was used with the lowest water/isopropanol ratio (0.0026:1). Large amounts of water were thus shown to reduce the catalytic activity, as previously evidenced [129]. This effect has been correlated with the interaction between the surface acid sites and water molecules from the reaction medium and/or formed upon reaction [130][131][132]. Subsequently, the water molecules may remain bound to the acid centers, reducing their inherent activity [83].
Moreover, water, 1-butanol, MTHF and cyclohexane have been studied for FUR hydrogenation [122] over a Ru/C catalyst. Hydrogenation of FUR in MTHF and 1-butanol affords different compounds compared with the reaction in water. When employing MTHF as the solvent, at a 91% conversion of FUR, the main compounds formed are FuOH (42%), THFA (11%) and MF (19%). Therefore, solvent selection plays a major role in the selectivity to FuOH and the FUR conversion. Moreover, a recent article reviewed different heterogeneous catalysts that were employed with diverse organic solvents to boost furan yields from sugar dehydration reactions [60]. Nevertheless, further research has to be completed in an effort to comprehend the influence that aqueous systems have on heterogeneous catalysts and how solvent properties alter the reaction.
Economic Aspects
As discussed in the previous section, FuOH accounts for more than 60% of the applications of FUR, and some market studies even report that more than 80% of all FUR produced is converted to FuOH [133,134]. The global FuOH market is mainly driven by consistent growth in the foundry industry, in which FuOH competes with phenol, which is used to produce phenolic resins. The FuOH market was estimated at EUR 493 million in 2019, is projected to reach EUR 630 million by 2024 and is expected to reach EUR 1350 million by 2028 [134]. It is estimated that the market size of FuOH will expand, especially in the polymer, solvent and adhesives industries and in its application as a wetting agent [135]. The largest FUR-consuming region in the world is Asia-Pacific (led by China), with an estimated share of 61% in 2017 and 77% in 2018 [133,134]. China is the leading producer of FUR, followed by South Africa and the Dominican Republic [136]. Moreover, the production capacity is continuously expanding, with more players entering the global FuOH market. This market growth is attributed to the increasing demand for FuOH and other FUR-based derivatives [137]. A process to produce FuOH from FUR developed by Tseng et al. [138] was reported to have an annual cost of EUR 744,000/year [139] to produce 50 kmol/h.
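For orientation, the growth implied by the cited market estimates can be checked with a short calculation. The compound annual growth rates below are our own arithmetic based on the figures above, not values taken from the market reports:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end market size."""
    return (end / start) ** (1 / years) - 1

# Market estimates cited above (EUR millions)
print(f"2019-2024: {cagr(493, 630, 5):.1%} per year")    # ~5.0%
print(f"2024-2028: {cagr(630, 1350, 4):.1%} per year")   # ~21.0%
```

The implied acceleration after 2024 suggests that the later projection rests on much more optimistic assumptions than the near-term one.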
Summary and Outlook
The valorization of non-edible biomass and the active diversification of pulp and paper companies are thriving topics in the pursuit of the biorefinery paradigm. In the present review, several successful cases have been discussed, highlighting the diverse options to obtain furfuryl alcohol in a chemoselective way from xylose and biomass-derived xylose.
The current commercial production of FuOH from FUR is performed over a Cu-Cr catalyst, which is associated with toxic effluents from chromium compounds, rapid deactivation and harsh process conditions. In addition, the preceding dehydration step of xylose to FUR needs to be carried out in a separate reactor. Nevertheless, the process yields high amounts of FuOH (>90%).
This review highlights new promising catalytic pathways for the one-pot formation of FuOH from biomass. It can be observed that bifunctional catalytic systems containing precious metals like Pt and Pd are employed with sulfonated or sulfated catalysts to form FuOH from xylose and biomass-derived feedstocks. In addition, protic solvents such as water and isopropanol are frequently used, due to their advantages in the formation of FuOH.
Many factors could affect the activity of the catalysts, such as the acidity of the support, the interaction between the metal and the support, the surface area and the metal content. Therefore, it is clear that the acidic properties of the solid acid catalyst and the solvent employed in the system contribute significantly to the reaction pathway. With regard to the transition metal catalyst, the well-known toxicity of chromium poses a likely risk both to health and to the environment. The employment of solid catalysts offers the particular advantage of allowing the catalyst to be designed for the system and the reaction. Nevertheless, hydrothermal stability issues under these conditions can lead to leaching, which turns the system into a homogeneous reaction. Thus, innovative technological systems are urgently needed, whether they proceed over numerous sequential stages in heterogeneous catalytic systems or through a single-step reaction directly from xylose over multifunctional catalysts.
The proof-of-concept of the biochemical catalytic conversion of FUR to FuOH is promising, and further technical development is needed. The advantages of this kind of system are the low-temperature conditions, atmospheric pressure and low hydrogen demand. A critical technological feature is the ability of enzymes to catalyze reactions with furans both in the presence and in the absence of water. Nevertheless, the biggest challenges in the biocatalytic conversion of FUR remain in the optimization and techno-economic feasibility needed to achieve efficient biorefinery concepts.
"Environmental Science",
"Chemistry"
] |
Machine learning topological defects in confluent tissues
Active nematics is an emerging paradigm for characterizing biological systems. One aspect of particularly intense focus is the role active nematic defects play in these systems, as they have been found to mediate a growing number of biological processes. Accurately detecting and classifying these defects in biological systems is, therefore, of vital importance to improving our understanding of such processes. While robust methods for defect detection exist for systems of elongated constituents, other systems, such as epithelial layers, are not well suited to such methods. Here, we address this problem by developing a convolutional neural network to detect and classify nematic defects in confluent cell layers. Crucially, our method is readily implementable on experimental images of cell layers and is specifically designed to be suitable for cells that are not rod shaped, which we demonstrate by detecting defects on experimental data using the trained model. We show that our machine learning model outperforms current defect detection techniques, which manifests as our method requiring less data to accurately capture defect properties. This could drastically improve the accuracy of experimental data interpretation while also reducing costs, advancing the study of nematic defects in biological systems.
Introduction
Tissue dynamics underpins a wide variety of biological processes such as wound healing [1], cancer metastasis [2] and morphogenesis [3]. Many of these processes concern confluent tissues, such as epithelial and endothelial cell layers, making suitable descriptions of the dynamics of these systems a prerequisite for our understanding of these processes. Unlike constituents in a passive material, cells within a confluent tissue can generate forces and exert stresses on their neighbours and underlying substrate. As such, active matter physics provides a natural framework for describing confluent tissues and has provided numerous insights into these systems [4]. Active matter is an emergent field of physics concerned with describing many-body systems far from equilibrium, where the system is driven from equilibrium by energy expended by individual constituents [5].
A fruitful connection between active matter and biology is the widely accepted use of active nematic theory to model the interplay between cell shapes and tissue dynamics [6,7]. Nematic systems consist of elongated constituents that exhibit orientational order with no preferred direction within this orientation, i.e. they are head-tail symmetric. As such, the orientation of a cell's long axis is a nematic object, and the average local direction of cell elongation can be thought of as a nematic field. The nematic field is coupled to the velocity field, with the energy expenditure of individual cells driving a rich variety of out-of-equilibrium behavior [8,9]. A particularly fertile line of study within confluent tissues is the formation, dynamics and properties of topological defects in the nematic field [10,11], as they have been found to mediate important homeostatic and morphogenetic processes [7].
Topological defects are singularities in the nematic field, points where its orientation does not vary smoothly but is discontinuous. In active nematic systems, two types of defects are typically found: comet-shaped singularities, known as +1/2 defects (Fig. 1a), and trefoil-shaped singularities, known as −1/2 defects (Fig. 1b). It is these nematic defects that are being highlighted as having a functional role in an increasing number of biological processes. Comet-shaped +1/2 defects have been found to trigger cell extrusion in epithelial layers [12], control the collective dynamics of confluent layers of neural progenitor cells [13], and have been highlighted as organisation centres during Hydra morphogenesis [14]. These +1/2 defects also mediate processes in densely packed bacterial systems, triggering the formation of fruiting bodies in Myxococcus xanthus colonies [15], as well as facilitating collective motion in Pseudomonas aeruginosa [16] and E. coli colonies [17]. On the other hand, −1/2 defects have been associated with controlling areas of cell depletion in bacterial colonies [15]. We have also recently shown that active nematic defects can arise in confluent cell layers with no inherently nematic active forces [18]. Due to the prevalence and functional role of ±1/2 defects in many systems, the efficient detection and characterisation of topological defects in confluent tissues, and cellular systems generally, is of fundamental interest to both biology and physics.
While defect detection algorithms exist, their application to imaging data often requires a sophisticated understanding of the underlying physics. Current algorithms entail locating degenerate points in the nematic field and then inspecting how the orientation of the nematic field changes around each point [19], usually by calculating a quantity known as the winding number at that point. This can be effective in systems where the nematic field is well defined across the domain [20,21], including tissues where cells are elongated or rod-like, such as spindle-shaped fibroblasts [22]. However, this method is not suited to systems where the nematic field is not well-defined everywhere, as degenerate points in the field could simply be areas of low order and do not necessarily indicate the existence of a defect. This is often the case in confluent tissues such as epithelial layers, where the cells are not rod-like and can be nearly isotropic in shape at times. Previous work studying defects in these systems searched for defects by calculating the winding number on a pre-defined grid of points in the nematic field [12,18,23,24]. This method required thousands of defects to be detected to adequately discern the average properties of defects, such as tissue stress and velocity fields, which are often the target of experimental studies. The necessity of such large amounts of data suggests the method of defect detection is inefficient and imprecise, which raises the question of whether better methods of detection are possible for these systems.
One possibility is to utilize machine learning to improve defect detection. Machine learning methods are being exploited in an increasing variety of applications in active nematic systems [25][26][27]. They have also been used to detect topological defects in various systems [28][29][30], including cellular systems [31]. This previous study identified degenerate points in the nematic field of a cellular layer and then used a feed-forward, fully-connected neural network to perform a binary classification by labelling the points as either +1/2 or −1/2 defects. However, as previously discussed, this method is less applicable to experimental cellular systems where the nematic field is not well-defined everywhere, and points with low nematic order do not necessarily indicate the existence of a defect. Additionally, this study did not demonstrate that its machine learning method outperformed existing techniques for detection. As such, there is still a need to develop a machine learning method that can outperform current techniques and be readily utilized in an experimental setting.
Here, we address this problem by developing a methodology to detect nematic defects in confluent tissues using a convolutional neural network (CNN). We design the method such that it is well suited for use in systems that currently lack effective detection techniques, is user-friendly and is readily implementable on experimental data. In contrast to previous work, we show that it outperforms current detection techniques and further demonstrate its efficacy by finding the mean velocity field around +1/2 defects and comparing this to defects detected using the winding number method, highlighting the improvement in capturing properties of topological defects with limited data.
Acquisition of test and training data
For our method to be useful, it needs to be suitable for use on experimental data. For this reason, it takes as its input the x and y coordinates of each cell's centre of mass and the orientation of the long axis of each cell, both of which can be readily acquired using standard segmentation software [32]. However, a large amount of data is required to adequately train and test the model. Here, to obtain sufficient amounts of data, we train and test our model using data from a numerical model of a confluent cell layer: the active vertex model (AVM) [33].
The AVM has been used extensively to study epithelial tissue dynamics [34] and has been found to accurately replicate phenomena observed experimentally [35,36]. Moreover, data from an AVM are appropriate for developing our method, as the model represents the cell layer in a manner very similar to how cells are represented once experimental images have been segmented (Fig. 2a). We implement the AVM in the same manner as our previous work [18]. Briefly, we represent the tissue as a confluent tiling of polygons, the degrees of freedom being the cell vertices. In the overdamped limit, these vertices move according to two types of forces: passive mechanical interactions between cells, which arise due to gradients in an effective tissue energy function, and polar self-propulsive forces that model the motility of each cell. The effective tissue energy for a tissue containing N cells is E = Σ_{a=1}^{N} [K_A (A_a − A_0)² + K_P (P_a − P_0)²], where A_a and P_a are the area and perimeter of cell a respectively, A_0 and P_0 are the target area and perimeter for each cell, and K_A and K_P are the area and perimeter elastic moduli. The first term encodes the incompressibility of each cell and the cell layer's resistance to height fluctuations. The second term in the energy function encodes the competition between cortical tension and cell-cell adhesion. The force on vertex i due to mechanical interactions is then F_i^int = −∂E/∂r_i, where r_i is the position of vertex i. Self-propulsion is modeled by each cell generating a polar force of magnitude f_0 that acts along its polarity vector n̂_a = (cos θ_a, sin θ_a). The self-propulsion force on each vertex is the average self-propulsion of the three cells that neighbour vertex i, F_i^act = (f_0/3) Σ_{a∈N(i)} n̂_a, so that the equation of motion reads ζ dr_i/dt = F_i^int + F_i^act, where ζ is the damping coefficient. The polarity vector of each cell undergoes rotational diffusion according to dθ_a/dt = √(2D_r) ξ_a(t), where ξ_a(t) is a white noise process with zero mean and unit variance and D_r is the rotational diffusion coefficient. For a complete description of the AVM implementation please see [18]. Using the positions of the cell vertices, we find the centre of mass and long-axis orientation of each cell, which we input into our model. Details of parameter values used can be found in S1 Appendix.

As defects by definition occur in regions with low nematic order, we identify these areas as regions of interest (ROIs). To do this, we interpolate our cell orientation data to a fine grid and average each point over a sliding window to smooth out the data and create a nematic field (Fig. 2b). At each grid point we then calculate the scalar nematic order parameter S, defined as the largest eigenvalue of the nematic tensor Q_mn = ⟨2û_m û_n − δ_mn⟩, where û = (cos θ, sin θ) with θ being the orientation of the field, δ_mn is the Kronecker delta and ⟨·⟩ represents a spatial average over a sliding window. S takes a value of 1 if the local nematic field is perfectly aligned and 0 if the local field is isotropic. As we seek areas of low order, we identify contiguous areas in our domain where S is below a threshold value S_th = 0.15. We then take the centres of mass of these areas to be the centres of our ROIs, cropping the field in a square around these points (Fig. 2b).
We choose the value of S_th such that it is high enough to capture all disordered regions that may contain defects, but low enough that these regions are distinct and do not coalesce. We size our ROIs such that they contain 5-7 cells, large enough to capture the core of the defect but small enough to isolate the defects and avoid capturing multiple defects in a single ROI. We then use our ROIs as inputs into a machine learning model that classifies them as containing a +1/2 defect, a −1/2 defect or neither (Fig. 2c). Details of parameter values used for pre-processing the data can be found in S1 Appendix. As potential defect locations (the centres of our ROIs) can lie anywhere in the system domain, our method is effectively off-lattice in its detection, although candidate locations still lie on a fine grid. This contrasts with previous work detecting defects in epithelial cell layers [12,23], which can only detect defects at predefined locations on a coarse-grained lattice.
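As a concrete illustration of this pre-processing step, the sketch below computes the coarse-grained order parameter field and extracts candidate ROI centres. It is a minimal sketch, not the authors' code: the nearest-neighbour interpolation, the uniform sliding-window filter and the function names are our own assumptions, since the text specifies only that the orientations are interpolated to a fine grid, averaged over a sliding window, and thresholded at S_th = 0.15.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import uniform_filter, label, center_of_mass

def order_parameter_field(x, y, theta, grid_pts, window=5):
    """Interpolate cell orientations to a grid and return the scalar
    nematic order parameter S at every grid point."""
    gx, gy = np.meshgrid(grid_pts, grid_pts)
    # Interpolate the nematic tensor components rather than the raw angles,
    # so that the head-tail symmetry of the orientations is respected.
    qxx = griddata((x, y), np.cos(2 * theta), (gx, gy), method="nearest")
    qxy = griddata((x, y), np.sin(2 * theta), (gx, gy), method="nearest")
    # Spatial average over a sliding window (coarse graining).
    qxx = uniform_filter(qxx, size=window)
    qxy = uniform_filter(qxy, size=window)
    # Largest eigenvalue of Q_mn = <2 u_m u_n - delta_mn> in two dimensions.
    return np.sqrt(qxx**2 + qxy**2)

def roi_centres(S, S_th=0.15):
    """Centres of mass of contiguous low-order regions (S < S_th),
    used as the centres of candidate defect ROIs."""
    low_order = S < S_th
    regions, n = label(low_order)
    return center_of_mass(low_order, regions, range(1, n + 1))
```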
Model architecture and training
We use a CNN to classify our ROIs. A schematic of the architecture can be seen in Fig. 3c. We use two convolutional layers, each detecting 32 features. Due to the size of our ROIs, we do not use any max pooling layers after these convolutional layers. We then follow these convolutional layers with an additional fully-connected layer of 100 artificial neurons before our output layer of three neurons, representing our three possible outputs, or classes, of a +1/2 defect, a −1/2 defect or no defect. Having a third output of no defect is key here and is what makes our method particularly well suited to epithelial tissues. As the cells in our tissue do not have a well-defined long axis, neighbouring cells are not always nematically aligned and there can be regions with low nematic order that do not necessarily contain nematic defects. Including an option for our CNN to classify an ROI as having no defect accounts for this possibility. The activations of our convolutional and fully-connected layers are rectified linear units, while the output layer uses a softmax activation [37].
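The sketch below shows one way to realise this architecture in PyTorch. The number of feature maps (32), the hidden layer of 100 neurons, the absence of pooling and the three-class output follow the description above; the number of input channels and the 3×3 kernels are illustrative assumptions, as those details are not specified here. The softmax is applied through the loss function during training (see the training sketch further below).

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    """Two convolutional layers (32 features each, no pooling), a
    fully-connected layer of 100 neurons and a 3-class output:
    +1/2 defect, -1/2 defect or no defect."""

    def __init__(self, in_channels=2, grid=9):
        super().__init__()
        # in_channels and kernel_size=3 are assumptions made for illustration.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * grid * grid, 100), nn.ReLU(),
            nn.Linear(100, 3),  # logits; softmax is applied by the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```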
To generate training and testing data we manually classify 5000 ROIs, using 4500 to train our model and saving 500 for testing. To enlarge our training data, we generate three new copies of each training ROI by rotating each one by angles −π/2, π and π/2. Also, as the type of defect is invariant under reflections, we double this enlarged training data by reflecting each ROI about its centreline, leading to 36000 training inputs. We do not enlarge our testing data set.
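A minimal augmentation sketch implementing these symmetry operations is given below; it assumes each ROI is stored as an array of shape (channels, 9, 9). Note that if the channels encode director components (e.g. cos 2θ and sin 2θ), they must also be transformed under rotation, which is omitted here for brevity.

```python
import numpy as np

def augment(roi):
    """Return the eight symmetry-related copies of an ROI: the original,
    its rotations by -pi/2, pi and pi/2, and a reflection of each."""
    copies = []
    for k in range(4):                              # 0, 90, 180, 270 degrees
        rotated = np.rot90(roi, k, axes=(-2, -1))   # rotate the grid in-plane
        copies.append(rotated)
        copies.append(np.flip(rotated, axis=-1))    # reflect about centreline
    return copies
```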
We train our model by minimising the cross-entropy cost function, defined as C = −(1/N) Σ_{i=1}^{N} Σ_c y_{i,c} log(p_{i,c}), where N is the number of items in each batch of training data, y_{i,c} is the correct label (0 or 1) for class c of the i-th ROI and p_{i,c} is the probability calculated by the model that the i-th ROI belongs to class c. We minimise C using a stochastic gradient descent algorithm with a batch size of N = 64 ROIs. We train over 20 epochs with a learning rate of 0.05 for the first ten epochs and 0.005 for the following ten, initializing our weights using a Glorot normal distribution [38]. For each epoch, a random 10% of our training images are held back for validation and used at the end of the epoch to assess the accuracy of our model. Training for more than 20 epochs did not lead to any appreciable improvements in validation accuracy. A complete list of parameter values used can be found in S1 Appendix.
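The training procedure described above can be sketched as follows; the validation split and the Glorot initialization are omitted for brevity, and the data-loading details are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, rois, labels, epochs=20, batch_size=64):
    """Minimise the cross-entropy cost C with stochastic gradient descent,
    dropping the learning rate from 0.05 to 0.005 after ten epochs."""
    loader = DataLoader(TensorDataset(rois, labels),
                        batch_size=batch_size, shuffle=True)
    loss_fn = torch.nn.CrossEntropyLoss()  # softmax + cross-entropy on logits
    for epoch in range(epochs):
        lr = 0.05 if epoch < 10 else 0.005
        optimiser = torch.optim.SGD(model.parameters(), lr=lr)
        for x, y in loader:
            optimiser.zero_grad()
            loss_fn(model(x), y).backward()
            optimiser.step()
```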
Winding number calculation and comparison
Manually labelling the ROIs allows a direct comparison to be drawn between our method and the current standard technique used in defect detection, calculation of the winding number, as we can also classify each ROI by calculating its winding number. We can then find the accuracy of both methods when compared against our manually labelled ROIs, our 'ground truth'. Previous work on applying machine learning to detect nematic defects in tissues has used the winding number as the ground truth [31], thereby making it impossible to determine whether the machine learning method is superior to current techniques. The winding number is the amount the nematic field rotates as a closed loop is traversed around the centre of the defect [39]. The ±1/2 defects found in nematic systems are so called because the nematic field rotates by half a full rotation, or π radians, around the loop (S1 Fig). The sign of the defect depends on whether the rotation of the nematic field is in the same direction as the direction in which the loop is being traversed. If the nematic field rotates clockwise as the loop is traversed in a clockwise direction, the defect is positive; if it rotates anti-clockwise, it is negative. We classify each ROI by finding the winding number on the fine grid around the edge of the ROI.
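For concreteness, a minimal winding-number routine over the outer edge of an ROI is sketched below. It is an illustrative implementation under our own naming, not the authors' code; the sign convention depends on the direction in which the boundary is traversed, and successive differences are wrapped because the director is only defined modulo π.

import numpy as np

def winding_number(roi):
    """Hypothetical sketch: sum the wrapped orientation differences along the
    closed loop formed by the outer edge of the ROI grid and divide by 2*pi.
    The result is approximately +0.5, -0.5 or 0 (no defect)."""
    top = roi[0, :-1]
    right = roi[:-1, -1]
    bottom = roi[-1, :0:-1]
    left = roi[::-1, 0][:-1]
    loop = np.concatenate([top, right, bottom, left])
    loop = np.append(loop, loop[0])                    # close the loop

    diffs = np.diff(loop)
    diffs = (diffs + np.pi / 2) % np.pi - np.pi / 2    # wrap to (-pi/2, pi/2]
    return np.sum(diffs) / (2 * np.pi)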
Machine learning model outperforms winding number classification
The mean performance of our CNN model with each training epoch can be seen in Fig. 3a. After training, our model clearly outperforms the winding number for overall classification accuracy on the training data set, defined as the percentage of correct predictions. However, these are ROIs that our model is being trained on, meaning it has 'seen' them before in previous training epochs. The real utility of our method depends on its ability to classify ROIs it has not seen before, which we test using the 500 ROIs in our test data set. Here our model is again more accurate than the winding number, achieving an accuracy of 84.0% compared to the winding number's 76.6%, demonstrating that our method outperforms the current most widely used technique.
Defect detection techniques can often be sensitive to the window size used to detect them. If our trained model is to be readily usable on experimental data, it should achieve accurate results over a range of window sizes. To investigate this, we assess the accuracy of our trained model and the winding number method in classifying the test data at different grid sizes (see S1 Appendix). As our model takes as input a 9 × 9 grid of points, changing the grid size is akin to changing the ROI size. We find that, over a range of grid sizes, our method outperforms the winding number method, demonstrating its robustness in classifying defects even when the ROI size is not well tuned to the size of defects in the system (see Fig. 1 in S1 Appendix).

As an example of the defects detected using each method, we look at an example domain from our AVM containing ROIs from our test data set (Fig. 3b). In line with Fig. 3a, both techniques show good agreement with manually labelled defects, although the winding number appears to detect more false positives than the neural network. To assess this further and properly delineate the efficacy of both methods, we break down their performance for each class in Table 1. We calculate the precision, sensitivity and F1 score of each method. The precision is the fraction of detections of a given class that are correct, TP/(TP + FP); the sensitivity is the fraction of true members of a class that are detected, TP/(TP + FN); and the F1 score is the harmonic mean of the two. Both methods display a similar pattern of having a higher sensitivity than precision for both defect categories, but a higher precision than sensitivity when no defect is present. Additionally, both methods exhibit a lower F1 score when no defect is present, a reflection of the larger differential between precision and sensitivity scores. Errors in both methods, therefore, primarily come from falsely detecting defects, as opposed to missing defects that should be detected. This information is lost when looking just at the weighted average values across all classes, which give more comparable precision and sensitivity scores.
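These per-class scores can be computed directly from the predicted and true ROI labels. The short sketch below uses the standard definitions just given; the integer class codes (+1, -1, 0 standing for +1/2, −1/2 and no defect) are hypothetical.

import numpy as np

def per_class_scores(true, pred, classes=(+1, -1, 0)):
    """Hypothetical sketch of the per-class scores in Table 1: precision,
    sensitivity and their harmonic mean (F1) for each class label."""
    scores = {}
    true, pred = np.asarray(true), np.asarray(pred)
    for c in classes:
        tp = np.sum((pred == c) & (true == c))
        fp = np.sum((pred == c) & (true != c))
        fn = np.sum((pred != c) & (true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * sensitivity / (precision + sensitivity)
              if precision + sensitivity else 0.0)
        scores[c] = (precision, sensitivity, f1)
    return scores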
Where the two methods differ is in our CNN model having a consistently higher F1 score for each category. This is driven by its higher precision in each defect class and its higher sensitivity when no defect is present. The winding number, however, is slightly more sensitive to detecting defects when they are present. Taken together, these results show that the improved performance of our neural network compared to the winding number primarily stems from it detecting fewer false positive defects. We point out here that it could be argued that precision and sensitivity should not be weighted equally, as they are in the F1 score, and that there may be scenarios where detecting as many defects as possible is more important than minimising detections of defects that are not there. However, we now show that, while the winding number may detect a slightly higher proportion of defects, the higher overall performance of our model can manifest itself in a wider improvement to experimental results.
Superior performance leads to improved capturing of defect properties
While the results thus far point to the effectiveness of our model, to show that this realizes itself in tangible improvements to wider results, we look at the ability of our model, and of the winding number, to ascertain the properties of defects. Experimental studies often seek not only to detect defects but also to examine tissue properties around them [12,23]. To this end, we calculate the average velocity field around +1/2 defects detected using each method. This observable is particularly pertinent as one can infer global system properties from the velocity of +1/2 defects. The velocity direction indicates whether the system is behaving as an extensile (the net force on cells is pushing out along its long axis) or contractile (pulling in along its long axis) nematic, with tail-to-head motion indicating extensile forces and head-to-tail motion indicating contractile forces [11]. Epithelial layers have been shown to exhibit both forward and backward motion in experiments [23]. It is therefore valuable to be able to distinguish defect motion accurately and efficiently. Previous work using this AVM has determined that +1/2 defects move in a tail-to-head direction in this system, indicating extensile behavior [18]. However, obtaining the characteristic extensile flow field required averaging the velocity field over many simulations and several thousand defects. While this was achievable in a numerical model, time and cost constraints could make the vast amounts of data required to understand the properties of these defects prohibitive in an experimental setting. Due to the difficulty in collecting experimental data, it is therefore crucial that defect properties can be discerned using a minimal amount of information.
The average velocity fields for manually labelled, winding number detected and neural network detected defects, using 150 +1/2 defects detected from the test data set, can be seen in Fig. 4. 150 defects were used as this was the number of +1/2 defects manually labelled in the test data set and so the largest number we could use to compare the different methods. The manually labelled defects demonstrate the clearest tail-to-head, vortical flow fields characteristic of extensile systems [11]. The flow field around CNN-detected defects clearly shows better agreement with the manually labelled flow field than the flow field found using the winding number, reflected in the higher correlation between the two fields. The improvement between the two methods is even more stark when looking at the difference in velocity magnitudes between each method and the manually labelled flow field (Fig. 5). This illustrates the impact of the reduced performance, particularly the reduced precision, of the winding number and confirms the primacy of our model in detecting 'better' defects, as the anticipated mean-field behavior is clearer.
Additionally, we compared the velocity field around defects detected using our model and defects detected on the same data using the 'on-lattice' winding number method used previously [18] (S2 Fig). The difference between the two is even starker than between our model and the off-lattice winding number method used in the present study. As well as highlighting the improvement that using an off-lattice method can bring, it further underlines the benefits of our model compared to techniques used currently.
Discussion
In this study, we have developed a new method for detecting nematic defects in confluent tissues which, crucially, is readily implementable on experimental data. Our model can therefore aid the expanding effort to characterise cellular layers as active nematic systems, as active nematic defects are increasingly found to play functional roles in these systems [6]. Importantly, we demonstrate that our method displays superior performance to the current standard use of the winding number in detecting defects and in capturing the mean-field properties of these defects. This reduces the amount of data required to obtain these properties, potentially improving experimental data interpretation. Interestingly, although the overall performance of our model is better, the winding number is slightly more sensitive to detecting defects. This means there could be applications where using the winding number would be more suitable, if the cost of missing a defect in the domain is very high. However, the improved performance in finding mean defect flow fields demonstrates that, in practice, the increase in overall performance of our model makes it more advantageous. This improved performance is likely due to the winding number only using information around the edge of the ROI, whereas our CNN can take advantage of spatial information and correlations across the whole region.
In contrast to previous studies on using machine learning to detect nematic defects [31], our method is specifically designed for noisy experimental systems where the nematic field may not be well defined everywhere, and where low nematic order consequently may not guarantee the presence of a defect. However, we anticipate that our method will work well with any system whose nematic field can easily be interpolated to a 2D grid. Indeed, applying our method to active nematic systems, such as microtubule systems [40], would be an interesting future application of our technique. Another interesting future avenue of research is extending the model to detect integer +1 defects, such as spiral- or aster-shaped singularities, as these have been engineered to arise in cellular systems [41,42] and have also been linked to morphogenetic processes [14,43].
The trained CNN model along with training data and Python scripts to detect defects can be found at https://github.com/KilleenA/ML_DefectDetection.
AVM IMPLEMENTATION AND PARAMETERS
We simulate N = 400 cells in a fluid-like state with parameters A_0 = 1, P_0 = 3.7, f_0 = 0.5, ζ = 1 and D_r = 1. We numerically integrate the equations of motion using the Euler-Maruyama method with time step ∆t = 0.01.
We initialize the simulation by arranging the N cells on a hexagonal lattice, with lattice spacing d = √(2/√3), in a periodic domain with dimensions d√N × (√3/2)d√N. We then draw a Voronoi diagram from the seeded points to obtain the initial positions of the vertices. This choice of lattice spacing gives edges of length a = d/√3 and ensures that all cells initially have unit area. This means the average cell area throughout the simulation is Ā = 1, and we use √Ā as our unit of length. To ensure that different realisations of the system were independent, cells have random initial polarities and we integrate through at least 2 × 10³ time units to initialise the system.
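The initial condition described here can be generated with a few lines of NumPy/SciPy. The sketch below is illustrative only: the function names are ours, the default spacing is chosen so that the hexagonal Voronoi cells have unit area, and the periodic tiling of seed points needed for a properly wrapped Voronoi diagram is omitted for brevity. The second helper shows the Euler-Maruyama step for the rotational diffusion of the polarity angles.

import numpy as np
from scipy.spatial import Voronoi

def initialise_vertices(n_cells=400, d=np.sqrt(2 / np.sqrt(3)), rng=None):
    """Hypothetical sketch: seed cell centres on a triangular lattice so the
    Voronoi cells are regular unit-area hexagons, and draw random polarities."""
    rng = np.random.default_rng() if rng is None else rng
    n_side = int(np.sqrt(n_cells))
    ii, jj = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
    x = d * (jj + 0.5 * (ii % 2))             # offset every other row
    y = d * np.sqrt(3) / 2 * ii
    seeds = np.column_stack([x.ravel(), y.ravel()])

    vor = Voronoi(seeds)                      # vertices of the initial tiling
    theta = rng.uniform(0, 2 * np.pi, size=n_cells)   # random initial polarities
    return vor.vertices, theta

def rotate_polarities(theta, d_r=1.0, dt=0.01, rng=None):
    """Euler-Maruyama step for rotational diffusion of the polarity angles."""
    rng = np.random.default_rng() if rng is None else rng
    return theta + np.sqrt(2 * d_r * dt) * rng.standard_normal(theta.shape)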
MACHINE LEARNING MODEL IMPLEMENTATION AND PARAMETERS
To identify ROIs we interpolate our input data to a fine grid with grid spacing ∆x = ∆y = 0.2, where the average cell length is approximately 1 length unit. We then smooth the data by passing a sliding window of size 9 × 9 over the data. We use a window of the same size to calculate the scalar order parameter at each point. Our threshold value of S for identifying ROIs is S_th = 0.15. Our ROIs are then also 9 × 9 in size, meaning the inputs to our model are 9 × 9 grids.
We implement our model in Python using the TensorFlow library. Our convolutional neural network (CNN) has two layers, both detecting 32 features; the first has feature detectors of size 6 × 6 and the second 3 × 3. Our 100-neuron fully-connected layer uses L2 regularisation with strength λ = 0.01. We chose this architecture as it achieved the highest classification accuracy on the training data, although we note that our results are not sensitive to the particular architecture used and similar architectures achieved comparable, albeit slightly lower, accuracies. To avoid overfitting when training the model, we use dropout on the fully-connected layer, leaving out a random 50% of the neurons in each training batch. All layers use rectified linear units as their output function, with the exception of the output layer, which uses softmax.
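Putting these choices together, one possible Keras definition is sketched below. It is a hedged reconstruction, not the released code: the single input channel and the use of 'same' padding are assumptions (with 'valid' padding the 6 × 6 and 3 × 3 kernels would shrink the 9 × 9 input before flattening), and only the hyperparameters quoted in the text are fixed.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_classifier(input_shape=(9, 9, 1)):
    """Hypothetical sketch of the CNN described above: two 32-feature
    convolutional layers (6x6 then 3x3 kernels, no pooling), a 100-neuron
    fully-connected layer with L2 regularisation (0.01) and 50% dropout, and
    a 3-way softmax output (+1/2, -1/2, no defect), Glorot-normal initialised."""
    init = tf.keras.initializers.GlorotNormal()
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (6, 6), activation="relu", padding="same",
                      kernel_initializer=init),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                      kernel_initializer=init),
        layers.Flatten(),
        layers.Dense(100, activation="relu", kernel_initializer=init,
                     kernel_regularizer=regularizers.l2(0.01)),
        layers.Dropout(0.5),
        layers.Dense(3, activation="softmax", kernel_initializer=init),
    ])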
SENSITIVITY TO GRID SIZE
To further evidence the suitability of our method for experimental data analysis, we examine the sensitivity of the model to the size of the fine grid to which we interpolate the input data. Different systems will have different sized cells and different sized defects relative to the size of these cells. So, while we tune our window size to the size of defects in our system, it may be less clear what the correct size is in experimental systems, and our model should therefore be able to accommodate different grid sizes without its performance suffering. To investigate this, we assess the accuracy of our trained model and the winding number method in classifying the test data at different grid sizes. We do not retrain our model with input data at the new grid size; instead we use the original model whose weights were trained on data interpolated to the original grid size (∆x = 0.2). To enable comparisons with our ground truth, for each ROI we take the coordinates of the centre and interpolate the cell data to a new grid size about this central point. As our model takes as its input a 9 × 9 grid, inputting data at a new grid size is the same as changing the size of our ROIs around the defect centre. We look at a range of grid sizes from ∆x = 0.1, which gives an ROI with a window length approximately one cell across, to ∆x = 0.8, meaning our interpolated grid is of the same order as the typical cell length and the ROI window lengths are approximately four times larger than the typical defect size. When smoothing the interpolated field, we scale the size of our sliding window such that the area over which we average is approximately constant.
The classification accuracy as a function of grid size can be seen in Fig. 1. With the exception of the smallest grid size, our CNN consistently outperforms the winding number. Along with being more accurate, our method is also less sensitive to the grid size than the winding number, whose performance drops sharply when the grid size varies from that which gives the highest accuracy, and which is no better than random selection when the grid size is greater than 0.5. The CNN, however, is able to maintain a level of accuracy greater than, or similar to, the winding number's best performance even when the ROI is approximately four times larger than the size of the defect. This demonstrates the robustness of our approach and the ability of the method to detect defects even when the parameters of the model are not perfectly tuned to the system.

FIG. 1. Machine learning model is less sensitive to system parameters than the winding number. Classification accuracy vs. grid size for the CNN and the winding number method.
Fig 1 .
Fig 1. Topological defects in confluent tissues.Examples of (a) comet-shaped +1/2 and (b) trefoil-shaped −1/2 defects in a confluent cell layer with the orientation of the long axis of each cell plotted in red.
Fig 2 .
Fig 2. Defect identification and classification procedure. (a) Example of an active vertex model configuration. We find the x and y coordinates of each cell's centre of mass; the orientation of each cell's long axis is then plotted at these points. (b) This information is then interpolated to a finer grid to form the nematic field of the system, where the average local scalar nematic order parameter S can be calculated at each grid point using a sliding window. Areas of low order (S < S_th = 0.15) are identified as possible defect regions and the centres of mass of these regions are identified (blue dots). The nematic field around these points is then cropped to form a region of interest (ROI) (blue box). (c) These ROIs are then input into a machine learning model which classifies them as a +1/2 defect, a −1/2 defect or not a defect.
Fig 3 .
Fig 3. Machine learning model outperforms winding number classifier. (a) The loss and validation accuracy of the neural network as it is trained. The dashed line represents the accuracy of the winding number classification on the training data (0.812). Error bars represent the standard error in the mean over 50 realisations. (b) An example domain with defects detected using each method: our ground truth (GT), neural network (NN) and winding number (W).
Fig 4 .Fig 5 .
Fig 4. Less data is needed to characterise defect properties. Average velocity fields around +1/2 defects for (a) manually labelled defects, (b) defects detected using the neural network and (c) defects detected using the winding number. The single-point correlation function ⟨v_GT · v⟩ between the ground truth (GT) field and the neural network (NN) and winding number (W) fields is also shown.
S1 Appendix. Parameter values used in the AVM, hyperparameters used in the machine learning model and details of the grid size study.

S1 Fig. Calculating a defect's winding number. (a) Example of a trefoil-shaped −1/2 defect in a confluent cell layer with the orientation of the long axis of each cell plotted in red. (b) Characterising this defect by its winding number. As a closed loop is traversed around the −1/2 defect, the orientation of the cells rotates by π radians (half a full rotation), hence the defect is half-integer. The sign of the defect is negative as the cells rotate in the opposite direction to the direction of travel around the loop.

S2 Fig. Flow fields around +1/2 defects. Mean tissue velocity fields around 150 +1/2 defects detected using (a) our CNN model and (b) the winding number at predefined points in the domain, used previously [18].
"Computer Science",
"Biology",
"Materials Science"
] |
Visualizing the distribution of flavonoids in litchi (Litchi chinensis) seeds through matrix-assisted laser desorption/ionization mass spectrometry imaging
Flavonoids are one of the most important bioactive components in litchi (Litchi chinensis Sonn.) seeds and have broad-spectrum antiviral and antitumor activities. Litchi seeds have been shown to inhibit the proliferation of cancer cells and induce apoptosis, particularly effective against breast and liver cancers. Elucidating the distribution of flavonoids is important for understanding their physiological and biochemical functions and facilitating their efficient extraction and utilization. However, the spatial distribution patterns and expression states of flavonoids in litchi seeds remain unclear. Herein, matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) was used for in situ detection and imaging of the distribution of flavonoids in litchi seed tissue sections for the first time. Fifteen flavonoid ion signals, including liquiritigenin, apigenin, naringenin, luteolin, dihydrokaempferol, daidzein, quercetin, taxifolin, kaempferol, isorhamnetin, myricetin, catechin, quercetin 3-β-d-glucoside, baicalin, and rutin, were successfully detected and imaged in situ through MALDI-MSI in the positive ion mode using 2-mercaptobenzothiazole as a matrix. The results clearly showed the heterogeneous distribution of flavonoids, indicating the potential of litchi seeds for flavonoid compound extraction. MALDI-MS-based multi-imaging enhanced the visualization of spatial distribution and expression states of flavonoids. Thus, apart from improving our understanding of the spatial distribution of flavonoids in litchi seeds, our findings also facilitate the development of MALDI-MSI-based metabolomics as a novel effective molecular imaging tool for evaluating the spatial distribution of endogenous compounds.
Introduction
Litchi (Litchi chinensis Sonn.; order Sapindales, family Sapindaceae), also known as Lizhi, Danli, and Liguo, is a subtropical fruit tree with a cultivation history in China of more than 2,300 years. It is the only species in the genus Litchi (Yao et al., 2021). Litchi is an important fruit crop in southern China and is planted on more than 550,000 ha with an annual output of more than 2.2 million tons. The cultivation area and output of litchi in China account for more than 60% of global production. Litchi seeds are a major by-product, but only a small portion is processed for biological utilization, and many litchi seeds are discarded as waste. The abandonment of fruit seed residues is not only a considerable problem for the environment but also a waste of global resources. Litchi seeds are rich in various bioactive compounds, such as flavonoids, saponins, volatile oils, polyols, alkaloids, steroids, coumarins, fatty acids, amino acids, and sugars (Dong et al., 2019;Punia and Kumar, 2021), resulting in a variety of biological functions, including antiviral and anti-oxidation activities, reducing the degree of liver damage and lowering blood glucose levels (Choi et al., 2017;Dong et al., 2019;Punia and Kumar, 2021). Accumulating evidence has confirmed the antitumor/anticancer effects of litchi seed extracts (Emanuele et al., 2017;Tang et al., 2018;Zhao et al., 2020).
Flavonoids are polyphenolic compounds and endogenous bioactive components, which act as secondary metabolites with extensive pharmacological activities. Flavonoids exert important pharmacological properties, including cardioprotective, anticancer, anti-inflammatory, and anti-allergic activities (Maleki et al., 2019;Ciumaȓnean et al., 2020;Liskova et al., 2021;Rakha et al., 2022). Regarding anticancer activity, many preclinical studies indicated the antiproliferative effects of flavonoids on lung (Berk et al., 2022), prostate (Vue et al., 2016), colorectal (Park et al., 2012;Li et al., 2018b), and breast (Pan et al., 2012) cancers. Furthermore, flavonoids have anticancer effects on breast tumors through multiple mechanisms (Martinez-Perez et al., 2014;Magne Nde et al., 2015;Zhang et al., 2018;Sudhakaran et al., 2019). Flavonoids can inhibit procarcinogen bioactivation and estrogenproducing and estrogen-metabolizing enzymes (Surichan et al., 2012;Miron et al., 2017), as well as breast cancer resistance protein (BCRP) (Fan et al., 2019). Administering flavonoids could inhibit inflammation, proliferation, tumor growth, and metastasis (Peluso et al., 2013;Khan et al., 2021;Guo et al., 2022). Although many studies have shown the pharmacological effects of flavonoids widely distributed in litchi seeds, almost all such studies were based on the extraction, enrichment, and separation of bioactive components, and few have focused on the spatial distribution and expression states of flavonoids. In fact, the precise reveal of the distribution of these flavonoids in litchi seeds is important for understanding the physiological and biochemical functions of these compounds and facilitating their extraction and utilization.
Matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI) has emerged as a molecular-imaging tool for simultaneously detecting and characterizing the spatial distribution and relative abundance of endogenous and exogenous compounds, such as lipids, proteins, metabolites, peptides, and drugs (Van De Plas et al., 2015;Qin et al., 2018;Piehowski et al., 2020). Although MALDI-MSI has been used in plant science with endogenous molecular profiling to determine the spatial distribution of small molecules in plant tissues (Zaima et al., 2010;Taira et al., 2015;Huang et al., 2016;Li et al., 2018a), to the best of our knowledge, no previous study has utilized MALDI-MSI to characterize the spatial distribution of flavonoids in litchi seeds.
This study is the first to use MALDI-MSI for the in situ detection and imaging of flavonoids in litchi seed tissues. The results clearly showed the heterogeneous distribution of flavonoids in litchi seeds, indicating the potential of litchi seeds as a source for flavonoid extraction. MALDI-MS-based multi-imaging enhanced the visualization of spatial distribution and expression states of flavonoids. Our findings provide insights into the spatial distribution of flavonoids in litchi seeds and support the development of MALDI-MSI-based metabolomics as an appealing and credible molecular imaging technique for evaluating the spatial distribution of endogenous compounds.
Materials and reagents
Fresh litchi fruit was collected from the Yongfuda litchi orchard (Haikou, Hainan, China) in June 2022. Haikou is located on Hainan Island in China. It has a typical tropical marine climate and annual sunshine duration of over 2,000 h. The climate is humid, the temperature rises fast, and the average annual precipitation is approximately 260 mm. The Yongfuda litchi orchard is located in a volcanic rock soil planting area. Once harvested, the peel and flesh of the litchi were immediately removed, and the litchi seeds were flash-frozen with liquid nitrogen by slow immersion to prevent seed shattering and endogenous compound changes. The commonly used MALDI matrix, 2-mercaptobenzothiazole (2-MBT), was obtained from Sigma-Aldrich (St. Louis, MO, USA). Amino acid and oligopeptide standards, including His, Gly-Gly-Leu (tripeptide), Ala-His-Lys (tripeptide), Leu-Leu-Tyr (tripeptide), and Arg-Gly-Asp-dTyr-Lys (pentapeptide), were purchased from Bankpeptide Biological Technology Co., Ltd. (Hefei, Anhui, China). Trifluoroacetic acid (TFA) and liquid chromatography-mass spectrometry (LC-MS)-grade methanol and ethanol were obtained from Merck & Co., Inc. (Darmstadt, Germany). Ultrapure water in the whole process of the experiments was prepared using a Millipore Milli-Q system (Bedford, MA, USA). All other reagents and chemicals were purchased from Merck, unless otherwise noted.
Tissue sectioning
For tissue sectioning, a Leica CM1860 cryostat (Leica Microsystems Inc., Wetzlar, Germany) was used. The frozen litchi seeds were cryo-sectioned into 12-µm-thick slices at a temperature of −20°C, and the cryo-sectioned samples were then thaw-mounted immediately on the conductive indium tin oxide films of microscope glass slides purchased from Bruker Daltonics (Bremen, Germany) (Figures 1A, B).
Matrix coating
After being air-dried, the serial litchi seed tissue sections were used for MALDI matrix coating (Figure 1C). A 2-MBT matrix solution was prepared at an optimal concentration of 15 mg/ml in methanol/water/TFA (80:20:0.2, v/v/v). The air-dried tissue sections were coated with the 2-MBT matrix solution using a GET-Sprayer (III) (HIT Co., Ltd, Beijing, China). Briefly, the 2-MBT matrix solution was first sprayed onto the surface of the tissue sections for 15 cycles (5 s spray, 10 s incubation, and 20 s drying time) to pre-seed a thin layer of the 2-MBT matrix. After the tissue sections were completely air-dried in a vented fume hood, the matrix solution was evenly sprayed for 50 more of the same cycles.
Histological staining
In order to obtain the histological images of litchi seed tissue sections, a slightly modified hematoxylin and eosin staining method was carried out based on an established procedure (Casadonte and Caprioli, 2011). Briefly, the tissue sections were washed in a series of ethanol solutions (100%, 95%, 80%, and 70% aqueous ethanol; 15 s/ wash). After 10-s ultrapure water washing, tissue sections were stained with hematoxylin solution for 2 min and then washed with ultrapure water and 70% and 95% aqueous ethanol for 30 s each. The eosin solution was applied for another 1 min. Then, all tissue sections were washed with 95% and 100% ethanol and xylene for 30-s dehydration.
Optimal image acquisition
Optical images of the tissue sections were acquired using an Epson Perfection V550 photo scanner (Seiko Epson Corp, Suwa, Japan) according to previous studies (Shi et al., 2022).
MALDI-MS
An Autoflex Speed MALDI time-of-flight (TOF)/TOF mass spectrometer (Bruker Daltonics) with a MALDI source equipped with a 2,000-Hz solid-state Smartbeam Nd:YAG UV laser (355 nm, Azura Laser AG, Berlin, Germany) was used for profiling and imaging (Figure 1D).
To acquire in situ (+) MS profiling data of flavonoids from the tissue sections, all mass spectra were obtained over the m/z range of 100 to 700; each mass spectrum included an accumulation of 50 laser scans, and each scan was amassed from 500 laser shots. Three biological replicates of the sample and three technical replicates of each biological replicate were performed for MALDI-MS data acquisition (n = 3 × 3). To acquire the images of flavonoids, a 250-µm laser raster step-size was utilized for flavonoid in situ detection in tissues, and each pixel (scan spot) included 300 laser shots. With the use of FlexImaging 4.1 (Bruker Daltonics), the three "teaching points" for the correct positioning of the solid-state UV laser (Smartbeam Nd:YAG) for spectral acquisition were marked around a tissue section using a white ink correction pen. The compound ions of His and the oligopeptide standards were used for external mass calibration. MS/MS spectra were acquired in collision-induced dissociation (CID) mode, and argon was used as the collision gas. The flavonoid fragment ions were acquired under the following conditions: ion source 1, 19.0 kV; ion source 2, 17.4 kV; lens, 8.8 kV; reflector 1, 21.0 kV; reflector 2, 9.8 kV; and accelerating voltage, 20.0 kV. The UV laser power ranged from 65% to 90%. MS/MS spectra were recorded based on no less than 5,000 laser shots, with a sampling rate of 2.00 G/s, a detector gain of 9.5×, and an electronic gain of 100 mV.

Figure 1. Schematic diagram of the MALDI-MSI procedure for imaging flavonoids in litchi seeds. (A) Whole litchi seeds were transected into 12-µm-thick slices in a cryostat microtome. (B) Serial tissue sections were immediately thaw-mounted on the conductive sides of indium tin oxide (ITO)-coated microscope glass slides. Optical images of the litchi seed sections were obtained using a scanner. (C) To assist ionization, the sections were coated with the organic matrix. (D) MALDI-TOF-MS was used to detect analytes in situ on the surface of litchi seed tissue sections. The mass spectra of ionized analytes were acquired at each detected pixel point. (E) MS images of analytes were reconstructed from the MS spectra obtained at each laser spot using specific imaging reconstruction software. MALDI-MSI, matrix-assisted laser desorption/ionization mass spectrometry imaging; TOF, time of flight; MS, mass spectrometry.
Data analysis
For the MS profiling and MS/MS data analysis, Bruker FlexAnalysis 3.4 (Bruker Daltonics) was used for the preliminary viewing and processing of the mass spectra. Once the monoisotopic peak list was generated and exported, two metabolome databases (METLIN and HMDB) (Tautenhahn et al., 2012;Wishart et al., 2022) were used to search the detected m/z values of precursor ions and CID fragment ions against potential metabolite identities within an acceptable mass error of ±5 ppm. Three ion adduct forms (i.e., [M+H]+, [M+Na]+, and [M+K]+) were considered for the database search. For MALDI tissue imaging, Bruker FlexImaging 4.1 software was used for the reconstitution of the ion maps of the detected flavonoids (Figure 1E). For the generation of the ion images using FlexImaging, the mass filter width was set at 5 ppm.
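As a concrete illustration of this matching step, the sketch below screens an observed m/z against a small user-supplied table of neutral monoisotopic masses using the ±5 ppm tolerance and the three adduct forms considered. The compound dictionary and function names are hypothetical; the adduct mass shifts are the standard values for protonated, sodiated and potassiated ions.

ADDUCTS = {"[M+H]+": 1.007276, "[M+Na]+": 22.989218, "[M+K]+": 38.963158}

def match_peak(mz_observed, database, tol_ppm=5.0):
    """Return (name, adduct, ppm error) for every database entry whose adduct
    ion lies within tol_ppm of the observed m/z. `database` maps compound
    names to neutral monoisotopic masses."""
    hits = []
    for name, neutral_mass in database.items():
        for adduct, delta in ADDUCTS.items():
            mz_theory = neutral_mass + delta
            ppm = (mz_observed - mz_theory) / mz_theory * 1e6
            if abs(ppm) <= tol_ppm:
                hits.append((name, adduct, ppm))
    return hits

# Example: match_peak(303.050, {"quercetin": 302.0427}) flags the peak as
# consistent with the [M+H]+ ion of quercetin (within a fraction of a ppm).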
Morphological characteristics of litchi seeds
As shown in Figures 2A, B, under a light microscope the litchi seed showed the following structures: testa, micropyle, embryo, cotyledon, and cotyledon gap. Among these structures, the testa was dark coffee-colored, the embryo was brown, and the cotyledon was oyster white. In addition, a gap was observed in the middle of the cotyledon. After hematoxylin and eosin staining, the litchi seeds were observed again under a light microscope (Figure 2C). The anatomical structure of the litchi seeds is illustrated in Figure 2D.

Naringenin and catechin were concentrated throughout the litchi seed, their distribution being relatively homogeneous and without obvious tissue specificity. Quercetin, quercetin 3-β-D-glucoside, and apigenin were distributed at the periphery of the cotyledons and in the embryo. Daidzein was uniformly distributed, whereas isorhamnetin was more concentrated in the apical part of the cotyledons. Finally, the taxifolin (m/z 305.065, [M+H]+) content was low and mainly distributed in the inner seed testa.
Four compounds were mainly distributed in the embryo: liquiritigenin, luteolin, dihydrokaempferol, and kaempferol. As the embryo is the most important part of the seed in plant development, these flavonoids may provide essential substances for growth and development and improve seed resistance. Luteolin was highly concentrated in the embryo and less concentrated in other parts. Luteolin, through inducing root nodulation, plays an important role in nitrogen metabolism in nitrogen-fixing plants and enhances plant stress tolerance by promoting nitrogen enrichment (Peters et al., 1986). Liquiritigenin was also mainly concentrated in the embryo and, to a lesser extent, in the cotyledons close to the embryo. Liquiritigenin increases under ultraviolet irradiation, indicating its anti-radiation function (Sun et al., 2012). Dihydrokaempferol and kaempferol are interconvertible; therefore, both had similar distribution characteristics and are distributed in the cotyledons as well as the embryo. Many studies have demonstrated that kaempferol, as a precursor of ubiquinone (coenzyme Q) biosynthesis, is an atypical node between primary and specialized metabolism (Soubeyrand et al., 2018;Berger et al., 2022). Kaempferol is involved in plant defense and signaling in response to stressful conditions (Soubeyrand et al., 2018;Jan et al., 2022). Dihydrokaempferol is involved in plant growth and development. As a precursor of orange pelargonidin-type anthocyanins, dihydrokaempferol plays a role in flower coloring (Johnson et al., 2001). Liquiritigenin rapidly inactivates the PI3K/AKT/mTOR pathway. In vivo studies demonstrated that liquiritigenin can significantly inhibit tumor growth, increase cell autophagy, and accelerate cell apoptosis. In addition, it attenuates the malignant-like biological behaviors of triple-negative breast cancer cells through its induction of autophagy-related apoptosis via the PI3K/AKT/mTOR pathway (Ji et al., 2021), decreased DNMT activity, and elevated BRCA1 expression and transcriptional activity. Dihydrokaempferol has strong anti-inflammatory and antioxidant activities, which can improve the inflammatory performance and oxidative stress state of acute pancreatitis (Liang et al., 2020;Zhang et al., 2021). In contrast, kaempferol shows more pharmacological activities, such as antibacterial (Yeon et al., 2019), anti-inflammatory (Yeon et al., 2019), anti-oxidant (Chen and Chen, 2013), antitumor (Calderón-Montaño et al., 2011), and anti-diabetic activities (Yang et al., 2021b), and is cardio-protective (Chen et al., 2022b) and neuroprotective. Currently, kaempferol is also commonly used in cancer chemotherapy (Ren et al., 2019). The mechanisms of kaempferol's anticancer activity include apoptosis, cell cycle arrest at the G2/M phase, downregulation of epithelial-mesenchymal transition-related markers, and repression of overactivation of the phosphatidylinositol 3-kinase/protein kinase B signaling pathway (Imran et al., 2019;Wang et al., 2019). Luteolin sensitizes cancer cells to treatment-induced cytotoxicity by suppressing cell survival pathways and enhancing apoptosis pathways, including the apoptosis pathway of the tumor suppressor protein p53 (Lin et al., 2008). These compounds can be extracted from the embryo of litchi seeds, which is convenient for obtaining a higher content of the target substances for pharmaceutical applications and mass production in the future.
Myricetin, baicalin, and rutin were mainly found in the cotyledons of litchi seeds. Myricetin was mainly concentrated on one side of the cotyledon gap, while rutin and baicalin were mainly distributed at the periphery (Figure 4). From a physiological point of view, flavonoids such as myricetin and baicalin assist in the reinforcement of plant tissues, maintenance of seed dormancy, and longevity of seeds during storage (Shirley, 1998). Rutin may participate in strengthening the plant's defense system against environmental stresses, including UV exposure, low-temperature stress, drought stress, and bacterial pathogen infection (Suzuki et al., 2015;Yang et al., 2016). Myricetin has therapeutic effects on a variety of diseases, such as inflammation, cerebral ischemia, Alzheimer's disease (AD), cancer, diabetes, pathogenic microorganism infection, thrombosis, and atherosclerosis (Song et al., 2021). Furthermore, myricetin has been reported to regulate the expression of STAT3, PI3K/AKT/mTOR, AChE, IκB/NF-κB, BrdU/NeuN, Hippo, eNOS/NO, ACE, MAPK, Nrf2/HO-1, TLR, and GSK-3β (Song et al., 2021). Rutin shows clear antioxidant and anticancer effects, including a strong ability to inhibit tumors in breast cancer, especially triple-negative breast cancer (Iriti et al., 2017;Liang et al., 2021). Baicalin, similar to rutin and myricetin, has inhibitory effects on lung, breast, and bladder cancers, through different signaling pathways and mechanisms (Ge et al., 2021;Kong et al., 2021;Zhao et al., 2021). Owing to their important pharmacological effects, our study of their spatial distribution provides a basis for the precise extraction of flavonoids for developing drugs.
Seven flavonoids, i.e., naringenin, apigenin, daidzein, quercetin, isorhamnetin, catechin, and quercetin-3-β-D-glucoside, were mainly found in both the cotyledon and embryo of litchi seeds. Among these compounds, catechin, naringenin, daidzein, apigenin, and quercetin-3-β-D-glucoside have homogeneous distributions with relatively high abundance. Isorhamnetin was mainly distributed in the radicle and tip of the cotyledon, while quercetin was distributed at the periphery of the cotyledon. Flavonoids are secondary metabolites in plants that play a critical role in protecting against ultraviolet irradiation, regulating the oxidative stress response, and influencing the transport of plant hormones, flower coloring, and pathogen resistance (Buer et al., 2010;Chen et al., 2022a). Naringenin plays various roles in plant-microbe interactions. Lignin biosynthesis and 4-coumarate:coenzyme A ligase (4CL) are involved in plant growth, and naringenin is one of the metabolites in this pathway that inhibit enzymes such as 4CL (Deng et al., 2004). Apigenin (4′,5,7-trihydroxyflavone) is a bioactive compound that belongs to the flavone class and is the aglycone of many naturally occurring glycosides. It ameliorates the damaging effects of salinity on rice seedlings, presumably by regulating selective ion uptake by roots and translocation to shoots, thus maintaining the higher K+/Na+ ratio critical for normal plant growth under salinity stress (Mekawy et al., 2018). Daidzein, as an isoflavonoid, plays crucial roles in the expression of the nod genes of rhizobial bacteria. The expression of this compound in roots will increase the synthesis and secretion of nodulation factors, promoting a series of physiological changes in plant cells and initiating the formation of nodules (Bosse et al., 2021). Quercetin promotes a series of physiological and biochemical processes in plants, including seed germination, pollen growth, photosynthesis, and antioxidant machinery, thus facilitating proper plant growth and development (Singh et al., 2021). In addition, quercetin is an antioxidant that enhances plant resistance to some biotic and abiotic stresses. Quercetin-3-β-D-glucoside is a quercetin-derived compound with glucose attached in place of the 3-OH group of quercetin. Isorhamnetin is a methylated flavonoid derived from quercetin. Catechins, as a type of flavonoid, also belong to the phenolic compounds. Making up more than 70% of polyphenols, catechins consist of ester and non-ester catechins. The multifunctional catechins contribute to decreased reactive oxygen species and better adaptability of plants to the environment (Jiang et al., 2020). Some of these flavonoids have been previously extracted from litchi seeds, for example, catechin and naringenin (Zhu et al., 2019). Similar to other flavonoids, most of these compounds have many pharmacological effects, including anti-inflammatory, antioxidant, and antidiabetic activities.

Figure: Ion images of 15 detectable flavonoids in litchi seed tissue sections acquired by MALDI-TOF-MS using 2-MBT as the matrix in positive ion mode. MS imaging was acquired at 250-µm spatial resolution. MALDI, matrix-assisted laser desorption/ionization; TOF, time of flight; MS, mass spectrometry; 2-MBT, 2-mercaptobenzothiazole.

In particular, since
the start of the COVID-19 epidemic, antiviral activity has been reported for catechin (Mishra et al., 2021) and quercetin (Bernini and Velotti, 2021). The antitumor effects of flavonoids have also been extensively studied, with the following mechanisms reported: inducing oxidative stress (Souza et al., 2017), enhancing chemotherapy drug effect (Yang et al., 2021a), and regulating signaling pathways (Amado et al., 2014). Notably, daidzein is a phytohormone similar to estrogens and thus may have a therapeutic effect on estrogen-dependent diseases (Meng et al., 2017). Therefore, flavonoid compounds are useful for developing drugbased therapies, and exploring the distribution of flavonoids will facilitate efficient extraction and utilization.
Although taxifolin was successfully detected in sections in situ using MALDI-MSI, the abundance of this compound was low. As shown in Figure 4, taxifolin was mainly found in the testa and the peripheral part of the cotyledons, indicating that the compound can protect seed embryos from external biotic and abiotic factors, such as soil microbes (e.g., fungi and bacteria) and saline-alkali abiotic stress, thus improving seed vitality and germination rate (Ninfali et al., 2020;Wan et al., 2020). By regulating the aromatic hydrocarbon receptor/cytochrome P450 1A1 (CYP1A1) signaling pathway, taxifolin can significantly inhibit the proliferation, migration, invasion, and viability of gastric cancer cells (Xie et al., 2021). A similar effect of taxifolin has been observed in breast cancer, where it promotes mesenchymal-to-epithelial transition through β-catenin signaling (Von Minckwitz et al., 2019).
Conclusion
MALDI-MSI was used for in situ detection and imaging of flavonoid distribution in litchi seeds for the first time. Overall, 15 flavonoids were successfully imaged. Among them, four (dihydrokaempferol, liquiritigenin, luteolin, and kaempferol) were distributed in the seed embryo, three (rutin, baicalin, and myricetin) were mainly found in the cotyledons, seven (quercetin, naringenin, isorhamnetin, daidzein, apigenin, catechin, and quercetin 3-β-D-glucoside) were enriched in both the embryo and cotyledons, and one (taxifolin) was mainly detected in the inner testa. Our MALDI-MSI results showed clear tissue distribution heterogeneity for the different flavonoid compounds in litchi seeds. Such information will be important for further studies to understand the physiological and chemical functions of these flavonoid compounds. Furthermore, our study provides a basis for further improving the efficiency of extracting and utilizing bioactive compounds from litchi seeds.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"Chemistry"
] |
Intergenerational Transmission of 'Religious Capital': Evidence from Spain
The paper examines intergenerational transmission of "religious capital" from parents to their offspring, within an economic framework of a production function of "religiosity" where parental inputs serve as factors of production. A sample of Catholic Spaniards who grew up in Catholic households is used for the empirical study. A rich unique data base is employed with data on several aspects of religiosity: two dimensions of the individual's religiosity - mass attendance (6 levels) and prayer (11 levels); information on the mother's and father's church attendance when the respondent was a child (9 levels) as well as the respondent's mass participation at the age of 12. The use of detailed religiosity measures (rather than one dichotomous variable, e.g., goes to church - yes/no; practicing Catholic - yes/no), facilitates a more sophisticated analysis with robust conclusions. A theoretical framework is followed by stylized facts on household composition. Then the effect of the parents' input on respondent's religiosity is examined - first using cross-tabulation and then using Ordered Logit regression. The inputs of the parents are proxied by the mother's and father's intensity of church attendance when the respondent was a child. The output (respondent's religiosity) is measured using detailed data on mass attendance and prayer. Exposure to mass services during childhood and socio-economic variables are also considered. All in all we find that parental religious inputs significantly affect individuals' religiosity BUT the route of intergenerational transmission is from mother to daughter and from father to son. Women are not affected by paternal religiosity and men are unaffected by maternal religiosity. Current religiosity is also affected by own exposure to mass services during childhood - own experience has a more pronounced effect on the private/intimate activity of prayer than on the social/public activity of church attendance. Current mass participation is more affected by parental than by own mass attendance during childhood.
Motivation
This paper explores intergenerational transmission of 'religious capital' for a representative sample of Spanish Catholics. It extends a previous paper by the authors (Brañas-Garza and Neuman, 2004) that used the same sample to analyze religiosity patterns (expressed by church attendance and prayer) of Spaniards, within an economic framework 3 .
The basic idea of this study is that the accumulation of an individual's 'religious capital' starts in childhood, when he watches his parents' religious activities and is exposed to religious practice, such as mass attendance. The mother and father pass on religious knowledge and attitudes to their children (Hoge et al., 1982;Clark and Worthington, 1987;Ozorak, 1989;Thomson et al., 1992;Hayes and Pittelkow, 1993). The parents' religious behaviours are factors of production in the process of building the child's 'religious capital'. The more intensive the parents' practice, the more religious the person will be when he grows up. This investment of the parents in their offspring's religious capital forms the solid basis that might subsequently be extended when the person marries a practicing spouse 4 .
For the empirical analysis we are using a unique rich database that was collected in 1998 by the Centro de Investigaciones Sociológicas (Center for Sociological Research, Spain), under the International Social Survey Program: Religion II, supported by UNESCO. It is based on 2488 personal interviews that were carried out in all 47 Spanish provinces. It includes information on the respondent's religious denomination; his religious activity as evidenced by two dimensions of religiosity: mass attendance (a public religious activity with utilitarian/social motives; six alternative levels) and prayer (an intimate/private religious activity with pure religious motives; 11 levels); the religious denomination and church attendance of the mother and father when the respondent was a child (9 alternative levels); the church attendance of the individual when he was 12 years old (9 levels); and a battery of personal socio-economic background questions (e.g. age, education, marital status, number of children, personal income, household income) 5 . While most empirical studies employ one dichotomous variable to measure religiosity (e.g. goes to church-yes/no; practicing Catholic-yes/no), our database provides much more detail on the religious activities of respondents and their parents, thus facilitating a more sophisticated analysis with more robust conclusions. The relationship between the respondents' religiosity and the parental religious inputs is examined using two types of statistical analysis. One is a descriptive table that relates parental inputs to children's religiosity (measured by the mean, median and mode of religiosity levels). The second uses Ordered Logit regression analysis to present 'religiosity equations'. The estimated equations include, in addition to the variables that are the focus of our study (parental religious inputs), other socio-economic variables that affect religiosity, in order to control for their effects and to arrive at the net effects of the parental variables. The analysis is done for each of the genders separately (allowing for gender differences).
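To make the estimation step concrete, the sketch below shows how such an Ordered Logit 'religiosity equation' could be set up with the statsmodels library. The DataFrame and column names are hypothetical stand-ins for the survey variables described above, not the authors' actual code or variable coding.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def religiosity_equation(df, outcome="mass_attendance"):
    """Fit an Ordered Logit of an ordered religiosity measure (e.g. 6-level
    mass attendance or 11-level prayer, coded as ordered integers) on parental
    church attendance during childhood, own attendance at age 12 and
    socio-economic controls. Column names are hypothetical."""
    exog = df[["mother_attendance", "father_attendance",
               "attendance_age_12", "age", "education", "income"]]
    endog = df[outcome]
    model = OrderedModel(endog, exog, distr="logit")
    return model.fit(method="bfgs", disp=False)

# Usage sketch, run separately by gender:
# results_women = religiosity_equation(df[df["female"] == 1])
# print(results_women.summary())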
We restrict our study of intergenerational transmission of 'religious capital' to Catholic respondents who grew up in household of Catholic parents, in order to form a homogenous sample where all players belong to the same religion and are subject to the same rules of religious conduct.
The paper is structured as follows: The next section presents background information on the composition of our sample in terms of religious denomination of the respondents and their parents (that reflects the religious composition of the Spanish population). In the third section a formal framework of production of religiosity is suggested and testable hypotheses are presented. The theoretical framework is followed by an empirical analysis of the effect of parents' religiosity (proxied by their church attendance) on the respondents' religiosity (measured by church attendance and by prayer). The results facilitate the testing of our hypotheses. The last section summarizes and concludes.
Religious Denomination of Respondents and Parents
Our empirical study of the transmission of 'religious capital' from parents to children is restricted to Catholic households where the respondents and the two parents have the same Catholic denomination. This forms a more homogenous sample and avoids potential measurement and estimation problems that arise from different religious conduct in the various religious denominations.
Eighty-three percent of the respondents in our sample 6 define themselves as Catholic. One percent belong to other religions and the remaining 16% declare that they have no religion. This distribution reflects the share of Catholics in the Spanish population. According to data from the Spanish Bureau of Statistics, close to 90% of the population are Catholic, about 1.5% have other religious affiliations and around 8.5% claim to have no religion. These figures have been fairly stable since 1990 (Brañas-Garza and Neuman, 2004). An examination of the parents' religious affiliation of Catholic individuals is presented in Table 1. In some of the households the two parents were non-Catholic, and in 5.9% of the cases there was inter-marriage of a Catholic person with a non-Catholic spouse. In most of these 120 cases the non-Catholic spouse was the father. It is interesting that these 120 individuals, who lived in inter-marriage households, 'converted' to Catholicism.
On the other hand, our sample includes 270 individuals who were raised in homogenous Catholic families and who no longer define themselves as Catholic 8 . Our analysis will be restricted to the 1916 households where both parents are Catholic (and so is the respondent).
Framework
Men and women devote time inputs to time-intensive religious activities (such as church attendance) as an investment in their own religiosity and also in order to expose their kids to religious practice and hence invest in the children's 'religious capital' and transmit religious attitudes and values to the next generation. 9 This accumulation of 'religious capital' during childhood will result in a more religious adult (as reflected in devoting more time to activities such as mass attendance and prayer). It has been extensively documented that religious (and ethnic) traits are usually adopted in the early formative years of childhood and that family and other role models play a crucial role in this socialization process (Feldman, 1973, 1981;Clark and Worthington, 1987;Cornwall, 1988;Ozorak, 1989;Thomson et al., 1992;Hayes and Pittelkow, 1993;Bisin and Verdier, 2000, 2001).
Formally, let us denote by R_i the respondent's current religiosity level and by F(·) the production function of the individual's religiosity. The factors of production are the input of time devoted by the mother to religious activity when the respondent was a child (lm_i) and the time devoted by the father to religious practice when the individual was a child (ld_i), so that

R_i = F(lm_i, ld_i). (1)

Obviously there are more factors of production in the process of producing the individual's religiosity, such as the educational system, the social impact of the community and of friends, and the religiosity level of the spouse (for married individuals) 10 . As we focus on parental intergenerational transmission of 'religious capital', and due to data limitations, we will concentrate on lm_i and ld_i. A distinction will be made between the two genders.

9 Even when this is not done with the specific intention of affecting the kid's religiosity, this is most probably the outcome: children who are exposed to the religious practice of their parents accumulate religion-specific human capital. This accumulation is intensified if the child actively participates in religious practice (goes to church with the parents).

10 See Johnson (1980); Grossbard-Shechtman and Neuman (1986); Erickson (1992); and Bisin and Verdier (2000).
Based on economic and sociological literature on intergenerational transmission of cultural values and on gender roles and gender differences (cited above), the following testable hypotheses can be stated:

(a) Positive marginal products of the inputs of the mother and the father, i.e., the derivatives of R_i with respect to both lm_i and ld_i are positive (∂R_i/∂lm_i > 0 and ∂R_i/∂ld_i > 0): religious attitudes and practice are transmitted from parents to children; we therefore expect a significant positive effect of the mother's and the father's religious practice (lm_i and ld_i, respectively) on the respondent's religious practice (R_i).
(b) The effect of lm_i will be stronger in the case of female respondents (i.e., ∂R_i/∂lm_i > ∂R_i/∂ld_i), while the opposite will be true in the case of males, because mothers serve as role models for their daughters while boys look up to their fathers as their role models.
(c) The effect of parents' church attendance on the offspring's church attendance is more pronounced and more significant than its parallel effect on prayer (both ∂R_i/∂lm_i and ∂R_i/∂ld_i are expected to be larger and more significant in the 'mass participation equation' than in the 'prayer equation'): similar religious activities are supposed to be more closely related, mainly because children tend to simulate the parents' behaviour.
(d) Larger positive effects in the sample of female respondents compared to the sample of males: A production function of type (1) exists for both men and women; however, the coefficients that express the transformation of parental inputs into religiosity of the offspring might differ for the two genders. As women are more spiritual, we expect to find larger coefficients of parental inputs in women's 'religiosity equations'.
(e) Stronger (relative) effects of parents' inputs (lm i ,ld i ) when lm i =ld i : It is documented in the literature that homogamous families, in which parents share the same religion, enjoy a more efficient socialization technology than families composed of parents with mixed religions, and that children of mixed religious marriages are less likely to conform to any parental religious ideology or practice such as church attendance (Heaton, 1986; Hoge et al., 1982; Ozorak, 1989). Most of these studies relate to ethnic minorities and look at religious affiliation only and not at the intensity of religious practice (within the same religion). Obviously, we have a different setting: we are examining the effect of parents who share the same Catholic denomination and we focus on homogamy in the sense of the same intensity of religious practice. Also, our sample consists of Spanish natives and not of respondents who belong to minority groups. However, a similar rationale might lead to the hypothesis stated above, that parents who are more homogenous in terms of religiosity level (lm i =ld i ) will be more efficient in the transmission of religious traits.
We have no a priori assumptions on the second derivatives or on the cross derivatives with respect to the two factors of production. While in a standard production function maximization of profit implies decreasing marginal products of the factors of production, in the case of production of religiosity we might observe increasing marginal products (f″>0). The cross derivatives might be either negative (indicating substitution between the factors of production, ∂ 2 R i /∂lm i ∂ld i <0) or positive (indicating complementarity, ∂ 2 R i /∂lm i ∂ld i >0). There is also the possibility of independence between the two factors of production (∂ 2 R i /∂lm i ∂ld i =0, when the factors do not affect each other).
(f) Erosion of the effect of exposure to parental religious practice as time passes and the respondent gets older: Behavioral economists (e.g., Kahneman et al., 1997) claim that experience affects preferences but that the effect of experience erodes with time. If this is true also for religious experience and for preferences for religiosity, then we expect to observe stronger effects of parental inputs on young respondents. The effects will become weaker at advanced ages 11 .
(g) A negative relationship between the probability to 'convert out' of the Catholic faith and (lm i ,ld i ): Our statistical analysis is restricted to the sample of Catholic respondents (i.e. R>0), with two Catholic parents (lm i >0, ld i >0).
This restriction was imposed in order to have a homogenous sample in terms of religious rules of conduct. However, it is possible to extend the sample and include also non-Catholic respondents who grew up in Catholic families, in order to test the hypothesis that the tendency to leave the Catholic faith (R=0) is also related to parental inputs and is higher if parental religious inputs were lower.
11 An alternative explanation for an expected negative relationship between age and parental effect (everything else being equal) could be the following: a child tends to simulate and mimic his parents' behavior (e.g. mass attendance); as he grows up he updates his preferences/tastes, which might then deviate from those of his parents.
Measurement of input and output variables
The independent input variables lm i and ld i are proxied using data that relates to mass participation of the mother and father when the respondent was 12 years old.
For each of these variables there are data on a scale of 1 to 9 (1: never attended church services; to 9: attended several times a week) 12 .
The responses to the questions that relate to childhood are retrospective and might be inaccurate; we therefore created a variable with three broader categories by combining responses that are close (see Iannaccone, 2003, for justification).
The 9 original options are reduced to the 3 following categories: (1) lm i /ld i =1: for original values of 1 (she/he never attended), 2 (once a year), and 3 (one or two times a year). This category relates to low-practicing Catholic mothers/fathers.
(2) lm i /ld i =2: for original values of 4 (attended a few times a year), 5 (once a month), and 6 (two or three times a month). This category includes medium-level practicing Catholic mothers/fathers.
(3) lm i /ld i =3: for original values of 7 (attended almost every week), 8 (every week), and 9 (several times a week). This category is composed of intensively practicing Catholic mothers/fathers.
lm i and ld i therefore belong to L, where L={1,2,3}: l=1 for the case where the mother (father) rarely attended church services; l=2 if the mother (father) occasionally attended; and l=3 if they regularly attended. The pairs (lm i ,ld i ) ∈ L×L represent (mother, father) combinations of intensity of mass participation during the respondent's childhood. For example, (3,1) represents a household where the mother regularly attended mass services and the father rarely attended.
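To make the recoding concrete, the following short Python sketch shows one way to collapse the nine original response categories into the three levels of L and to cross-tabulate the resulting (lm i , ld i ) pairs. The column names mother_mass, father_mass, lm and ld, and the DataFrame df, are hypothetical placeholders; this is an illustration of the recoding rule stated above, not the authors' actual processing code.

import pandas as pd

def collapse_to_three_levels(mass_1_to_9: pd.Series) -> pd.Series:
    """Map the original 1-9 mass-attendance scale onto l = 1, 2, 3."""
    bins = {1: 1, 2: 1, 3: 1,      # rarely attended
            4: 2, 5: 2, 6: 2,      # attended occasionally
            7: 3, 8: 3, 9: 3}      # attended regularly
    return mass_1_to_9.map(bins)

# Hypothetical raw survey columns holding the original 1..9 responses:
# df["lm"] = collapse_to_three_levels(df["mother_mass"])
# df["ld"] = collapse_to_three_levels(df["father_mass"])
# Cross tabulation corresponding to Table 2:
# print(pd.crosstab(df["lm"], df["ld"], normalize=True))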
The dependent variable (R) -level of religiosity of the respondent -is estimated using two dimensions of religiosity: mass participation and prayer habits. Mass participation is measured on a scale from 1 to 6 (1: never participates; to 6: participates every week) 13 . Prayer is measured on a scale from 1 to 11 (1: never prays; to 11: prays several times every day) 14 . These two aspects of religiosity have different motives: while church attendance is a public activity that also has utilitarian/social/network motives, prayer is a private/intimate activity with a pure religious salvation motive. The costs of the two activities are also different: church attendance is more time consuming and therefore has higher alternative costs.
The values of R (either mass participation or prayer) refer to current practice and should not have any measurement errors. The full spectrum of values will be used for regression analysis and for the computation of central values.
Descriptive statistics of input (lm i , ld i ) and output (R) variables
Before we turn to the statistical analysis of the relationship between parents' inputs and the offspring's religiosity, it might be useful to have some descriptive statistics on the respondents' and parents' religious activities. Table 2 presents a cross tabulation of the mother's and father's mass participation levels (lm i , ld i ), where lm i , ld i ∈ L = {1, 2, 3}. Table 2 indicates that the modal combination is (lm i , ld i ) = (3,3). In more than 41% of households of origin, both the mother and the father intensively practiced religious activities.
The other two figures on the diagonal that represent homogenous households are significantly lower: in 11% of households both parents were rarely practicing mass, (lm i ,ld i ) = (1,1), and in about 15% of families the parents attended mass occasionally, (lm i ,ld i ) = (2,2).
Notes to Table 2: Sample of Catholics with Catholic parents. lm i /ld i =1 for a Catholic low-practicing mother/father; lm i /ld i =2 for a Catholic medium-level practicing mother/father; and lm i /ld i =3 for a Catholic intensively practicing mother/father (see page 8 for definition).
The figures above the diagonal represent a more active father (lm i < ld i ).
Interestingly, there is a negligible number of families of this type (40 out of 1735), which constitute a mere 2.4%. In about 30% of the households the mother was more active (figures below the diagonal, where lm i > ld i ). We can therefore summarize that most households in the sample are homogenous in terms of the parents' level of religious practice and that the great majority are intensive practitioners.
In non-homogenous families, it appears that the mother is the more religiously active parent. This is also reflected in the fact that about two thirds of mothers, compared to about 40% of fathers, have the highest level, l=3. On the other hand, the percentage of non-religious individuals (l=1) is more than double for men compared to women (28% and 12%, respectively). These gender differences in religiosity are documented in multiple studies (e.g. Beit-Hallahmi, 1997; Brañas-Garza and Neuman, 2004; Brañas-Garza, 2004).
insert Table 3 about here
The share of women who pray at least once a day is almost three times larger than the respective share of men (35% and 14%, respectively). At the other extreme, 11% of women and 25% of men never pray. These gender differences are also reflected in the mean, median and mode of the distributions (the respective means are 6.98 and 4.90; the median is 8 for women and 4 for men; the respective modes are 10 and 1). These major gender differences in prayer habits reflect gender differences in religious and spiritual attitudes and values. The narrowing of gender differences in attending mass services might be explained by the different nature of this religious practice: it has utilitarian motives as well. The church serves as a network and as a social club. Men, who value networking more than women, attend services in order to create and maintain social and business ties.
To give a more visual presentation, the distributions are also plotted as diagrams. The diagrams add visual reassurance that women are more religious than men, in particular in terms of prayer, which has a more private/intimate nature and is the better reflection of 'pure' religiosity.
Intergenerational Transmission of 'Religious Capital'
We are now acquainted with the religious performance of the respondents and their parents and are ready to examine the interrelationship between the two generations and test our hypotheses (Section 3.1, page 7). First, a descriptive statistical analysis will be presented and then regression analysis will be employed in order to arrive at more compact results and to control for socio-economic background variables that might also affect the respondents' religious behaviour and should therefore be considered. The dependent variable is categorical and therefore Ordered Logit will be used for estimation. The regression coefficients reflect the marginal productivity of inputs and can be used to test our hypotheses.
Descriptive analysis of parental effect on mass attendance and prayer
Given the small number of families where the father is the more active parent (see Table 2), we refer only to households where both parents have similar religiosity levels or where the mother is more active. Parental inputs are denoted by pairs of (lm i ,ld i ) and individuals' religiosity levels are measured using several central measures: the mean of the various categories (1-6 for church attendance and 1-11 for prayer), the modal category and the median. A distinction is made between the two genders.
insert Table 4 about here
As is evident from Table 4 there is a pronounced positive relationship between parental religious inputs (lm i ,ld i ) and individuals' religiosity levels. The individual's intensity of church attendance and of prayer is clearly increasing with parental inputs (in terms of their church attendance during the individual's childhood).
Interestingly, in households where the two parents rarely practiced, i.e., (lm i ,ld i ) = (1,1), the modal value for the kids (both women and men) is also the lowest possible: (1) never attends mass and (1) never prays. At the other extreme, when the two parents attended mass intensively, (lm i ,ld i ) = (3,3), the kids follow and the modal value (for women and for men) is 6 for church attendance (every week) and 10 for prayer (every day).
Women seem to be affected mainly by the mother, while men's religious behaviour seems to be more closely related to the father's religious activity (for instance, moving from (3,1) to (3,2) or from (2,1) to (2,2), where only the father's mass attendance increases, leads to a very small change in the mean of women's mass attendance and a much larger change in men's mass attendance; women's prayer habits even show a small decrease).
Our statistical analysis of the relationship between parental religious inputs and respondents' religiosity is restricted to Catholic individuals with two Catholic parents. However, it is interesting and informative to also examine the relationship between parental inputs and the absence of Catholic religious belief, i.e., the probability to 'convert out'. The figures (presented in Table 5) give another indication that women tend to be more religious than men. The share of individuals who 'converted out' is clearly related to parental levels of religiosity. This negative relationship is more pronounced for men. The effect of (lm i ,ld i ) on the probability to leave the Catholic faith is not linear and is gender specific.
If the two parents were rarely practicing, less than 80% of their daughters and less than half (48%) of their sons will stay Catholic. About 20% of daughters and over half(!) of sons will have no religion. In the case that the two parents were practicing occasionally, the percentages with 'no religion' drop to 14% for women and 5% (down from 50.3%!) for men. They further drop to 4% for women and, surprisingly, rise to 11% for men who grew up in households where the two parents were practicing intensively.
Parental religious inputs are therefore responsible for the tendency to stay Catholic and, furthermore, for the level of religiosity of those who are Catholic.
This descriptive presentation in Table 4 of the relationship between parental input and Catholic respondents' religiosity suffers from two methodological limitations: first, the results are somewhat diffuse and one has to consider the whole array of numbers in order to draw conclusions. Second, it does not control for other variables that might be responsible for the respondent's level of religiosity (e.g., education, age, marital status, number of children). Regression analysis solves these two problems.
Ordered Logit regression analysis
In the regressions, the respondent's own exposure to mass services when he was 12 years old is also included (in addition to the parental inputs lm i and ld i ) in order to net out the effects of parental inputs that are most probably correlated with this variable 18 .
Another set of independent variables is a series of socio-economic and geographical background variables: marital status; number of children; number of years of schooling; age group; population size in place of residence; type of place of residence (within the metropolitan area of a big city or not, typically for small cities around Madrid); region of residence (the so-called "Autonomías" in Spanish).
The two alternative dependent variables are categorical and ordered from low to high (mass participation: 'never participates' to 'every week', 6 categories; prayer: 'never prays' to 'several times a day', 11 categories). An Ordered Logit econometric model, which estimates relationships between an ordinal dependent variable and a set of independent variables, is therefore used for the estimation of the 'religiosity equations' 19 . Table 6 presents the results of the Ordered Logit regressions, whereby an underlying score is estimated as a linear function of the independent variables and a set of cut points. The probability of observing outcome i is the probability that the estimated linear function, plus a logistically distributed random error, falls within the range of the cut points estimated for that outcome. The coefficients of the Ordered Logit estimation cannot readily be interpreted as marginal effects on the observed outcomes; they can be viewed as estimated marginal effects of each variable on the unobserved latent variable from which the ordered outcomes are derived, but such effects depend on the normalization to one of the error variance, which is not identified.
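As an illustration of this estimation setup, the sketch below shows how such an Ordered Logit 'religiosity equation' could be fitted with the OrderedModel class available in recent versions of statsmodels. The variable names (mass, lm, ld, schooling, age_group) and the DataFrame df are hypothetical placeholders, and the dummy-variable coding is only one plausible reading of the specification described in the text, not the authors' actual code.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_religiosity_equation(df: pd.DataFrame):
    """Ordered Logit of current mass participation (1-6) on parental inputs."""
    # Dummy variables for medium- and high-practicing parents (l = 1 is the reference).
    exog = pd.DataFrame({
        "lm2": (df["lm"] == 2).astype(int),
        "lm3": (df["lm"] == 3).astype(int),
        "ld2": (df["ld"] == 2).astype(int),
        "ld3": (df["ld"] == 3).astype(int),
        "schooling": df["schooling"],   # years of schooling
        "age_group": df["age_group"],   # coded age group
    })
    endog = df["mass"]                  # ordered outcome, values 1..6
    model = OrderedModel(endog, exog, distr="logit")
    return model.fit(method="bfgs", disp=False)

# Usage (assuming df holds the survey sample for one gender):
# print(fit_religiosity_equation(df).summary())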
Regressions of each of the religious activities were estimated for men and women separately.
Effects of parental religious inputs
We now turn to the effects of parental religious inputs (proxied by their religiosity levels, in terms of mass participation, when the respondent was a child) 20 on the respondents' religiosity, as reflected by their mass participation and prayer activities. Regression results will also be related to our hypotheses (a) to (f) (see Section 3.1).
insert Table 6 about here
20 We also experimented with a continuous version of the parents' inputs (treating the parent's mass attendance as a continuous variable with 6 possible values). This version utilizes more information but has two problems: it assumes a linear relationship between the input and the output, and it suffers from potential measurement errors, as the responses to the questions on parental mass attendance are retrospective and relate to far-past experience. Nevertheless, the basic results were similar.
(c) Parental mass participation has a more pronounced effect on the respondent's mass attendance than on his prayer habits, as evidenced by the larger and more significant coefficients in the 'mass participation religiosity equation'. Moreover, as mentioned above, the effects on prayer are significant (at a 0.05 significance level) only if the (same gender) parent was practicing intensively (l=3). This is true for the two genders. These results support our third hypothesis.
(f) The effect of parental inputs does not weaken with the respondent's age (Table 6), indicating that the time distance between the religious experience during childhood and the current religious behaviour is irrelevant. In this sense, religious experience might be different from other experiences with a good/service/event. This indicates that the religious experience is profound and deeply rooted in the individual, and that its effect prevails over the individual's life time.
(g) Parental religious inputs have a positive significant effect on the individual's tendency to stay Catholic, or alternatively, a negative effect on the probability to 'convert out'. In order to test hypothesis (g) we extended the sample to all respondents who grew up in Catholic households. Both the cross-tabulation descriptive statistics (Table 5) and a logistic regression (dependent variable equal to 1 if the respondent stayed Catholic and 0 if not; results not reported) 22 show a clear positive relationship between parental levels of religiosity and the probability of the individual remaining Catholic. This is the case for both women and men. The effect is even more pronounced for men: boys who grew up in households where the two parents were rarely practicing have a higher probability of leaving the Catholic faith than of remaining Catholic (probabilities of 51.5% and 48.5%, respectively).
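A minimal sketch of the kind of logistic regression mentioned here (dependent variable equal to 1 if the respondent stayed Catholic, 0 otherwise) follows; the column names stayed_catholic, lm and ld are hypothetical, and the sketch illustrates the specification rather than reproducing the unreported results.

import pandas as pd
import statsmodels.api as sm

def fit_stay_catholic(df: pd.DataFrame):
    """Logit of the probability to remain Catholic on parental religious inputs."""
    X = pd.DataFrame({
        "lm2": (df["lm"] == 2).astype(int),
        "lm3": (df["lm"] == 3).astype(int),
        "ld2": (df["ld"] == 2).astype(int),
        "ld3": (df["ld"] == 3).astype(int),
    })
    X = sm.add_constant(X)
    y = df["stayed_catholic"].astype(int)
    return sm.Logit(y, X).fit(disp=False)

# result = fit_stay_catholic(df_extended)  # extended sample incl. those who 'converted out'
# print(result.summary())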
Effects of other variables
Respondent's exposure to mass services during childhood: This variable is included as one of the independent variables in order to arrive at net effects of the mother's and the father's inputs. It relates to the respondent's mass attendance (l=2,3) when he was 12 years old 23 . Excluding this variable does not change the basic results that relate to parental inputs (in terms of relative magnitude and of significance). The various coefficients are somewhat smaller when this variable is not included due to its positive correlation with parental inputs (for instance, in the female sample, the effect of lm i =2 on mass participation increases from 0.905 to 1.001 and the effect of lm i =3 increases from 1.332 to 1.536. In the male sample, the effect of ld i =2 on mass participation goes up from 0.759 to 0.802 and the coefficient of ld i =3 increases from 1.143 to 1.272).
22 Alternatively, it is possible to run the 'religiosity equations' on the extended sample and add the option of R=0 to the dependent variable (to mass attendance and to prayer). However, this will result in a less homogenous sample and will not yield a clear distinction between R=0 and R>0.
23 Based on question #30: "When you were 11-12 years old, how often did you attend mass services at the church?". The options are: never (1); once a year (2); one or two times a year (3); a few times a year (4); once a month (5); two or three times a month (6); almost every week (7); every week (8); several times a week (9). The alternatives are identical to those related to parental mass attendance. Therefore, here too the nine options are reduced to three categories (l=1, 2, 3) that are identical to the three categories for parental mass attendance.
The effect of childhood religious exposure is also interesting in itself (in addition to its role in netting out the effects of parental inputs): exposure to mass attendance during childhood has a positive significant effect on the respondent's current mass participation, but only if he was intensively exposed to mass services (l=3).
Regression coefficients of l=2 are not significant in mass participation equations of the two genders. Also, the effect of own exposure is less pronounced than the effect of parental mass attendance (a coefficient of 0.886 versus 1.332 in the female sample and respective coefficients of 0.716 and 1.143 in the male sample).
It appears that the same gender parent serves as a role model, and his participation in mass services is more influential on future mass participation than own exposure.
Interestingly, own exposure to mass services has a stronger effect on current prayer habits than on current mass participation (respective coefficients of 1.048 and 0.886, for l=3 for women; coefficients of 1.114 and 0.716, for l=3, for men).
Also, prayer is more affected by own childhood experience of mass participation than by watching the parent attending mass services. This whole set of findings indicates that the intimate/private activity of prayer is more closely related to private religious experience during childhood than the more social/public activity of mass participation.
The effects of socio-economic and geographical variables (see Appendix)
Most pronounced is the effect of advanced age on religious behavior. This reflects both cohort effects and age effects that are related to the salvation motive. Age effects are much more pronounced in the female sample. Marital status and number of children do not affect women's religiosity. Married men tend to go to church more often, and the number of kids has a negative effect on male prayer habits. Schooling has a positive significant effect on the intensity of religious behaviour of the two genders 25 .
Three socio-geographical variables have also been included: the population size of the city of residence; its location (whether it is within the metropolitan area of a big city); and the geographical region. The size of the city was included using several dummy variables that relate to different sizes of city population (10,000 or less; 10,001-to-100,000; 100,001-to-1,000,000; over one million inhabitants). The number of dummies was then reduced to one, 10,000 inhabitants or less, with the reference group of more than 10,000, due to insignificant differences among all the rest. As indicated by Table 6, women in small rural cities go to church more often, reflecting socializing motives of church attendance. They also have a slight tendency to pray more (at a significance level of 10%). The effects of the small city are not prevalent for men. Men (but not women) are affected by the metropolitan location of their place of residence: those living within metropolitan areas of big cities go to church more, reflecting social networking motives. They are also more active in praying. We do not have a reasonable explanation for this finding.
In order to control for regional differences, 16 region dummy variables have been added (not reported in Table 6; Cantabria is the reference group). All region dummies are insignificant in the women's religiosity regressions, indicating no significant effect of the regional location. In the male equations, only residents of Castilla la Mancha go to church significantly more than all others (coefficient of 1.376, z=2.11). The size of the coefficient is quite impressive, larger even than that of ld i =3. Here too, this finding is a reflection of social/cultural motives of church attendance: Castilla la Mancha is the most traditional, old-style, rural region of Spain and going to church is an integral component of tradition and culture.
Summary and Discussion
This paper addresses a fundamental question of the parental role in shaping the individual's religiosity. The basic statement that is formulated and tested empirically is that parents transmit 'religious capital' to their offspring via a process of serving as role models and exposing him during his childhood to mass attendance. This exposure serves as an input that helps him to produce his stock of 'religious capital' that is reflected in his current church attendance and prayer habits.
The analysis of the intergenerational transmission of 'religious capital' from parents to their offspring is presented within a setting of an economic framework of a production function of 'religiosity' where parental inputs serve as factors of production. Several testable hypotheses are derived and presented.
To test the hypotheses we use a large representative Spanish database of Catholics. The parental inputs are the mother's and father's intensity of church attendance during the individual's formative years of childhood. The output is the respondent's current religiosity level as reflected by two different dimensions: mass attendance and prayer. Socio-economic background variables, that might affect religiosity, are also considered.
The paper has several innovative features: -Unlike most empirical papers that proxy the individual's religiosity using church attendance, we have data on two different dimensions of religiosity, namely, mass attendance and prayer habits 26 . These two facets of religiosity have different motives and a comparison of the production processes of the two, adds to our understanding of religious behaviour and in particular of the inter-generational transmission of 'religious capital'.
-Moreover, in most empirical studies church attendance is a dichotomous variable (yes or no) while we have information on the intensity of church attendance that has six alternative levels. For prayer we have eleven alternative levels that reflect the intensity of prayer. This information is most valuable and enables the estimation of Ordered Logit 'religiosity equations' and the derivation of more robust conclusions.
-Separate information on maternal and paternal religious inputs and on religiosity of female and male respondents facilitates a comparison of the differential effects of the mother and father on daughters and sons, thus improving our understanding of gender roles and gender differences in the process of transmitting religious values and attitudes.
-Information on the respondent's own exposure to mass services when he was a child, facilitates the netting out of the parental effects and also leads to a more comprehensive understanding of the inter-generational transmission process.
The main and most interesting findings are the following: -There is clear evidence of inter-generational transmission of 'religious capital' BUT only from the same gender parent: the mother has a significant impact only on the daughters' religiosity, while the father significantly affects only the sons' religious behaviour. The effects are not linear -in some cases only an intensively practicing mother/father (l=3) has an impact on the individual's religiosity. Parents of the same gender serve as role models and play a crucial role in the process of building and forming the individual's stock of religiosity. Parental religious inputs also positively affect the tendency to stay Catholic and not 'convert out'.
-There is a closer relationship between parental input and the respondent's current mass attendance compared to the link between the former and his prayer habits.
-Prayer is more closely related to the respondent's own exposure to church services when he was a child than to the parental example of church attendance.
This might indicate that the private/intimate practice of prayer is transmitted mainly through own experience rather than via 'simulation' of parental religious practice.
-There are no interactions between the effects of the two parents. Moreover, homogenous parental inputs (lm i = ld i ) do not add to the separate effects of the mother and the father. This also follows from the insignificant effect of the other gender parent on the respondent.
-The effect of the experience of exposure to parental mass attendance during the individual's childhood is persistent and does not erode as time passes by and the respondent gets older and more distant from this experience.
-Parental impact is, generally, larger for women. This is another example of the specialization of women towards religious tasks. These findings also comply with theories of the Sociology of Religion that claim that women have a larger taste for religiosity compared to men.
Religion within the European Union is one of the focal topics on the research agenda of the Union. We believe that the study presented in this paper forms one of the building blocks of this line of research and hope that more studies will follow and improve our understanding of the multi-cultural religious patterns in Europe.
APPENDIX
The men are slightly more educated than the women. The average number of years of schooling is 10.3 for men and 9.5 for women, with a standard deviation of around 5 for both groups. This is also reflected in the distribution of the level of formal education: 17% of women and 13% of men have not completed primary school, while 34% of women, compared to 41% of men, have some academic education (including college, polytechnic and university). The percentages of primary- and secondary-school graduates are similar for men and women (around 25% of the men and women in each group) i .
About two thirds of women and of men in the survey are married and the average number of children at home is 1.8, ranging from 0 to 12 ii .
As is the case in many other countries, women earn less than men. Women and men in our sample have a similar age distribution and the men are only slightly more educated than the women. Yet we find more men than women in the higher monthly income intervals: 9.4% of men and 6.7% of women have monthly incomes between 200 and 500 thousand pesetas. A mere 0.7% of men and 0.3% of women earn more than 500 thousand pesetas. The great majority of women (70%) earned less than 100 thousand pesetas, compared to 37% of men. This group includes respondents who did not participate in the labor force. The majority of men (53%) have a monthly income ranging from 100-to-200 thousand pesetas. The parallel figure for women is 23%.
The monthly family income distribution is more similar for women and men. The majority of respondents have a household income in the 100-to-200-thousand peseta range. Less than 4% (2.4% of female respondents and 1.6% of male respondents) enjoy a household income of over 500 thousand pesetas. Around one quarter are in each of the under-100 and the 200-to-500 thousand peseta ranges. Comparing the distribution of personal versus family income shows that women 'moved up', reflecting the fact that a significant proportion either work part-time or not at all.
About one quarter of women and of men live in small rural towns (with a population of 10,000 inhabitants or less). Around one third of our respondents live in medium-size cities of 10,001-to-100,000 residents, close to 30% reside in large cities (population of 100,001-to-1,000,000) and around 10% have their homes in very large cities of over one million inhabitants. Fifteen percent live in metropolitan areas.
The regional distribution reflects the population sizes of the 17 Spanish regions: the largest are the regions of Andalucia, Cataluña, Madrid and Valencia (with 11-to-18 percent of the population) and the smallest is La Rioja with less than 1% of the population.
End Notes (for Appendix):
i Among 15 European countries, Spain ranked second from last, and Portugal last (at 37.7%), in the percentage of population (aged 25-to-59) with at least a secondary-school education. Germany ranked first with 81.6%.
ii Lehrer (1996) predicted that spouses with the same religious affiliation would have lower divorce rates and more children than couples with different religious affiliations. This hypothesis is not supported by our data: while in the great majority of couples (over 95%) both spouses are Catholic, they have quite low fertility rates.
"Economics",
"Sociology"
] |
Knot Energy, Complexity, and Mobility of Knotted Polymers
The Coulomb energy E C is defined as the energy required to charge a conductive object and scales inversely with the self-capacity C, a basic measure of object size and shape. It is known that C is minimized by a sphere among all objects having the same volume, and that C increases as the symmetry of an object is reduced at fixed volume. Mathematically similar energy functionals have been related to the average knot crossing number 〈m〉, a natural measure of knot complexity, and, correspondingly, we find E C to be directly related to 〈m〉 of knotted DNA. To establish this relation, we employ molecular dynamics simulations to generate knotted polymeric configurations having different length and stiffness, and minimum knot crossing number values m, for a wide class of knot types relevant to real DNA. We then compute E C for all these knotted polymers using the program ZENO and find that the average Coulomb energy 〈E C〉 is directly proportional to 〈m〉. Finally, we calculate estimates of the ratios of the hydrodynamic radius, radius of gyration, and intrinsic viscosity of semi-flexible knotted polymers relative to linear polymeric chains, since these ratios should be useful in characterizing knotted polymers experimentally.
Experiments have shown a remarkable correlation between the migration speed of knotted DNA in gel electrophoresis and the average knot crossing number, the number of places where a knotted polymer crosses itself when projected onto a surface 1,2 . An early study indicated a correlation of DNA mobility in gels with the minimum knot crossing number m 1 , but Stasiak et al. 2 later found a better correlation of knotted DNA electrophoretic mobility with the crossing number averaged over all polymer conformations, 〈m〉. The minimum crossing number m is a topological invariant used in knot classification 3,4 , but numerical studies have established that the configurationally averaged 〈m〉 varies with chain length, chain stiffness, and the strength of the excluded volume interaction 5 . These previous experimental and computational studies raise questions 6 about which property of knotted polymers dominates the DNA separation process by electrophoresis and about the accurate computation of the translational friction coefficient of knotted polymers for comparison to both sedimentation and electrophoresis measurements.
The utilization of energy functionals to classify object shape has a long history. For example, it has been appreciated since the time of the ancient Greeks that a sphere has the minimum surface area of all objects having a given volume, and it is common to classify particle shape in terms of the relative surface area of a particle to a sphere having the same volume, i.e., "sphericity" 7,8 . In many applications, minimum surface area directly corresponds to minimizing an energy; e.g., the interfacial energy of a droplet defines an "energy functional" and fluid droplets of ordinary fluids are accordingly spherical in order to minimize their interfacial energy and thus their surface area. Poincaré first proved that the electrostatic capacity C of a finite volume region is similarly minimized by a spherical shape, in connection with his study of the rotation of liquid droplets 9 , and Szegö later proved this "isoperimetric" relation rigorously 10 . Pólya and Szegö embarked on an ambitious program of object shape classification in terms of C and other energy functionals related to Laplace's equation (hydrodynamic virtual mass, magnetic and electric polarizability) [11][12][13] . In each case, a scalar energy functional can be defined which is minimized by a sphere among all objects having a finite volume. Garboczi et al. have illustrated the dependence of these functionals on shape in the case of ellipsoidal particles as part of a study of the shape dependence of the percolation threshold of overlapping objects 14 .
Capacity is an especially important energy functional because of its many physical applications. It governs the rate of heat transfer from an object based on Newton's law of cooling, the rate of diffusion-limited reactions, and scattering lengths in acoustic and quantum theory, in addition to its well known interpretation in electrostatics 12,14,15 . Historically, the use of this functional for shape classification has been limited by the difficulty of calculating C for complicated shaped objects. For example, the calculation of C for a cube is still an unsolved analytic problem 12 . Numerical path-integration methods, however, have recently allowed the accurate numerical computation of C for regions having essentially arbitrary complexity 16,17 . Tests against exactly solvable cases show that these path-integral calculations provide accurate estimates of C in reasonable computational times 15,16 . We are now in a position to calculate C for the purpose of shape and topology classification, and in the present paper we are interested in classifying knotted polymers in terms of C. This enables an extension of the shape classification program initiated by Pólya and Szegö to describe the topological properties of objects using basic energy functionals 12 .
Since C is important in our discussion below, it is worth recalling its mathematical definition. Consider a conductive object Γ having a fixed charge q that is distributed at equilibrium on the object surface ∂Γ. The "Coulomb energy" E C of the equilibrium charge distribution on a conductive object equals E C = (1/4πε) q 2 /(2C) (Eq. (1)), where ε is the dielectric constant of the medium in which the charged object is placed. The "Coulomb constant" (1/4πε) in this relationship defines the proportionality factor of the Coulomb potential, and, following mathematical conventions, we take this quantity, along with q, to be equal to 1 so that E C = 1/(2C). The Coulomb energy is familiar in a physical chemistry context as the basis of the Born theory for calculating ion solvation energies 18,19 , where ions are modeled as charged spheres. Duhr and Brown 20 have argued that the solvation energy of duplex DNA can be estimated from an extended Born model where the effective radius of the charged DNA molecule is estimated from the DNA hydrodynamic radius, which, as we will see below, is related to C. The Coulomb energy can also be defined through an energy functional (Kelvin's principle) 12,17 , E C [ρ] = (1/2) ∫ ∂Γ ∫ ∂Γ ρ(R)ρ(R′) |R − R′| −1 dR dR′ (Eq. (2)), which is minimized by the equilibrium charge density ρ(R), normalized so that ∫ ∂Γ ρ(R)dR = 1, where R is a point on ∂Γ; the minimum value equals 1/(2C). Alternatively, C can also be defined through the minimum energy of the potential field gradient exterior to the conductive object (Dirichlet's principle). This complementary definition shows that C describes the asymptotic decay of the solution of Laplace's equation on the exterior of a region, at distances far from the object, when the solution is constant on the boundary. This is the classic exterior Dirichlet problem 17 . This last interpretation forms the basis of a probabilistic understanding of C involving the hitting of the region by random walk trajectories launched from the exterior of the particle 21 . Numerical methods utilizing this idea have been developed in recent years 22,23 , and presently we can compute C with an accuracy easily better than 1% for particles of essentially any shape 16,22,23 .
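The probabilistic interpretation just mentioned can be illustrated with a toy 'walk-on-spheres' calculation. The sketch below estimates C for a sphere of radius a, for which the exact answer is C = a, by launching random walkers from an enclosing launch sphere and recording the fraction that hit the object before escaping to infinity. It is a simplified illustration of the idea behind path-integration programs such as ZENO, under the stated assumptions, and not the algorithm or code actually used by that program.

import numpy as np

def random_unit_vectors(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def capacity_of_sphere(a=1.0, b=3.0, n_walkers=20000, eps=1e-4, seed=0):
    """Walk-on-spheres estimate of the capacity C of a sphere of radius a.

    Walkers start on a launch sphere of radius b; C ~ b * (fraction that hit the
    object before escaping to infinity).  For this centred-sphere target the
    re-insertion of returning walkers at a uniform point on the launch sphere is
    exact by symmetry; general shapes require a proper return distribution."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_walkers):
        r = b * random_unit_vectors(1, rng)[0]
        while True:
            d = np.linalg.norm(r) - a                       # distance to the object surface
            if d < eps:                                     # walker hit the object
                hits += 1
                break
            r = r + d * random_unit_vectors(1, rng)[0]      # walk-on-spheres jump
            rr = np.linalg.norm(r)
            if rr > b:
                if rng.random() > b / rr:                   # escapes to infinity
                    break
                r = b * random_unit_vectors(1, rng)[0]      # returns to the launch sphere
    return b * hits / n_walkers

# print(capacity_of_sphere())   # should be close to the exact value C = a = 1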
Returning to our discussion of knots, there has been some previous interest in classifying knots in terms of energy concepts similar to E C . These definitions usually involve unphysical force laws or mathematical devices that ensure that E C remains finite for smooth curves (see discussion below). It is known that C for any smooth curve, or any finite collection of smooth curves, equals zero in three dimensions, so that E C is then formally infinite 24 . This property evidently makes E C unsuitable for discussing the topology of closed smooth curves 25 , but this limitation disappears as soon as the curve is endowed with a finite thickness or becomes fractal, as in the case of trajectories describing Brownian motion.
Hubbard and Douglas 15,26 have recently shown that the translational friction coefficient f t of Brownian particles having general shape is directly related to C, f t = 6πηC (Eq. (3)), to a high degree of approximation (about 1%). In Eq. (3), η is the viscosity of the liquid in which the particles are immersed, and the units of C are chosen so that the capacity of a sphere equals its radius. Eq. (3) has a simple physical interpretation: f t describes the steady-state diffusion of momentum away from the diffusing object, since η is the momentum diffusion constant. The generalized Stokes-Einstein law, Eq. (3), provides a direct relation between knot shape, as measured by C, and the mobility μ of knotted DNA undergoing diffusion in solution, μ = 1/f t = 1/(6πηC) (Eq. (4)). The sedimentation coefficient of a Brownian particle of general shape is proportional to μ 27 . The general hydrodynamic-electrostatic relation, Eq. (3), is restricted to uncharged and rigid objects with a stick hydrodynamic boundary condition. Eq. (3) is based on the simple observation that angular averaging of the Oseen tensor gives rise to the Green's function for the Laplacian 26 . Kholodenko and Rolfson employed an angular, and an additional configurational, preaveraging approximation to relate the average knot crossing number 〈m〉 to an ensemble-averaged "knot energy" 28 , and this work, in part, stimulated the present study. Our calculations of the knot energy using ZENO do not require a configurational preaveraging approximation, reducing the uncertainty in the analysis of the relation between E C and 〈m〉.
In the present work, we argue for an approximate relationship between the Coulomb energy E C of a curve and 〈m〉, thus giving a relation between chain mobility μ and 〈m〉. We first describe the coarse-grained molecular model used to generate polymeric knot configurations having different knot complexity, and we use these configurations to test the aforementioned approximation, as well as to determine other shape descriptors that are related to knot complexity. We then explore the energy and shape properties of knotted rings having fixed length and different rigidity, as well as the properties of knotted rings having fixed rigidity and different length. We summarize our findings in the conclusion section.
Knot Energies and Crossing Number
The Coulomb energy functional in Eq. (2) is a natural functional for defining the complexity of knotted DNA since DNA is a highly charged macromolecule. It is well known that E C is infinite for any smooth curve in 3 dimensions, so this functional has not been much considered in relation to quantifying knot complexity. A generalized knot energy based on the potential |R| −2 , the "Mobius energy", has been much considered because this energy is invariant to reparametrization of the arc length, suggesting that this quantity might be a useful measure of knot complexity [28][29][30] . In this direction, Freedman et al. 30 showed that the "regularized Mobius energy" E F , corresponding to β = 2, is directly related to the average crossing number of knotted curves. Kholodenko and Rolfson 28 considered an extension of the Mobius energy to the generalized potential |R| −β and achieved further simplification by ensemble averaging E F over random walk paths. This simplification is associated with the fact that 〈E C 〉 is finite for Brownian paths in 3 dimensions, so that the E C divergence issues no longer exist when the paths defining the knotted structures are highly irregular 31,32,33 . Evidently, we need to consider the shape regularity (i.e., differentiability) of knotted curves in connection with the determination of 〈E C 〉.
Based on the combined arguments of Freedman et al. 30 and Kholodenko and Rolfson 28 , we conjecture that the Coulomb knot energy E C should be nearly linear in the average crossing number 〈m〉, 〈E C 〉 ∝ 〈m〉 (Eq. (7)), where the constant of proportionality in this scaling relation is unspecified. In the next sections, we explore the validity of this theoretically motivated approximation through a consideration of knotted polymer chains generated by molecular dynamics simulations, and we use ZENO to determine 〈E C 〉. Below, we find evidence supporting Eq. (7) for a selected family of knots of significance for the characterization of real DNA molecules and determine the prefactor in Eq. (7).
Molecular Dynamics Simulations of Knotted Rings
Generation of Semi-flexible Knotted Rings. In this section, we describe a coarse-grained molecular model utilized previously in modeling DNA in solution 36,37 to generate the knotted polymeric configurations (Fig. 1). In this molecular model, each polymeric knot is represented by L = (63, 126, 200, or 252) connected beads (bead-spring model 38 ). To generate the steric interaction among the beads, we use a Weeks-Chandler-Andersen (WCA) potential, U WCA (r) = 4ε[(σ/r) 12 − (σ/r) 6 ] + ε for r ≤ 2 1/6 σ and U WCA (r) = 0 otherwise. Here, r is the radial distance between the centers of two beads, and σ and ε are the Lennard-Jones length and energy parameters, respectively. Neighboring beads along the chain are connected via a finitely extensible, nonlinear elastic (FENE) anharmonic spring potential, U FENE (r) = −(1/2) k R 0 2 ln[1 − (r/R 0 ) 2 ], with the bond strength k = 30 ε/σ 2 and maximum bond extension R 0 = 1.5 σ. Since we are interested in particular in the properties of double-stranded DNA (dsDNA), we relate σ ≈ 2.8 nm to the diameter of dsDNA, a representative value for dsDNA in ≈1 mol l −1 NaCl solution 35 , so that the knotted polymer lengths analyzed in this study correspond to L = (176.4, 352.8, 560.0, 705.6) nm. We include a three-body bending potential U bend among every three neighboring beads forming an angle θ, where k bend is the bending constant, and we consider k bend = (1, 3, 5, 10, 20) ε to vary the stiffness of the knotted polymers. We characterize the rigidity of the polymer by computing the persistence length l p for the linear polymeric chains, where l p is defined as the average projection of the chain end-to-end distance R e on the first bond of the chain l 1 39 . The values of k bend indicated above lead to l p = (5.8, 9.2, 13.7, 25.5, 50.2) nm for polymers having a linear topology.
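For concreteness, a small Python sketch of the WCA and FENE terms in reduced Lennard-Jones units is given below, using the parameter values quoted in the text (k = 30 ε/σ 2 , R 0 = 1.5 σ). It is only an illustrative implementation of these standard textbook forms, not the simulation code used in this work.

import numpy as np

def wca_energy(r, epsilon=1.0, sigma=1.0):
    """Purely repulsive Weeks-Chandler-Andersen pair energy (standard form)."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon   # shifted so u(r_cut) = 0
    return np.where(r < r_cut, u, 0.0)

def fene_energy(r, k=30.0, r0=1.5):
    """FENE bond energy -0.5 k R0^2 ln(1 - (r/R0)^2), valid for r < R0."""
    r = np.asarray(r, dtype=float)
    return -0.5 * k * r0 ** 2 * np.log(1.0 - (r / r0) ** 2)

# Example: bonded energy of one bond near its typical Kremer-Grest length ~0.97 sigma
# print(wca_energy(0.97) + fene_energy(0.97))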
Molecular dynamics (MD) simulations of this coarse-grained polymer model were performed to generate large ensembles of knot configurations. All simulations were performed at fixed number of particles, volume, and temperature (NVT ensemble). We chose temperatures in the range 0.2 ≤ T ≤ 3.0 ε/k B and a Nosé-Hoover thermostat 40,41 to generate and equilibrate the chain ensembles. Here k B is the Boltzmann constant. We first carry out our simulations for periods of time ≥ 10 7 time steps δt, where δt = 0.006 σ(m/ε) 1/2 , to achieve thermal equilibrium for each system. We performed our MD simulations using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) 42 . We report the average property for each system resulting from 4000 different configurations after it has reached thermal equilibrium. The property calculations were obtained using the path-integration program ZENO 22,23,43 , based on a sampling of 10 6 random walks. We use Visual Molecular Dynamics (VMD) 44 to render representative configurations of the different knotted polymers.
Generation of Canonical Knotted Rings.
The characterization of knotted polymers is evidently complicated by the vast number of configurations that these polymers can adopt, subject to the topological constraints that define the knotted polymer type. This is a general problem in the recognition of classes of objects sharing common topological or geometrical properties. A human, for example, normally has a head, two arms, two legs, and articulation points, or joints, that allow a large number of possible configurations that humans explore in the course of their daily activities. The objective identification of humans, and of other objects in data bases, has been facilitated by the identification of unique "canonical forms" associated with the entire class of objects that can serve to identify the object class [45][46][47] . In image recognition algorithms, canonical forms have been defined by associating an energy functional with a schematic representation of a member of the shape ensemble; the shape is then adjusted incrementally until the energy functional is extremized, subject to the geometrical invariants that define the class of objects.
[Figure 1 caption: We follow the Alexander-Briggs knot classification notation 34 , where the main number is the minimal crossing number m and the subscript is an arbitrary number specifying sub-classes of knots having the same m. Knots having a subscript equal to unity are usually referred to as "prime" knots and tend to be relatively "symmetric" in shape as a class. All of the polymers in this figure are formed by 126 beads, corresponding to a length L = 352.8 nm, and a diameter d = 2.8 nm, appropriate for dsDNA 35 . The bead size has been scaled in the figure so that the whole polymer can be visualized on a common scale.]
We follow this approach to classify knot types based on the Coulomb knot energy. In particular, we take any representative knotted polymer configuration, where the polymer chain is considered to be a conductor, each bead has a fixed charge, and the beads are connected. We then allow the knotted rings to relax to the equilibrium configuration that minimizes the ensemble-average Coulomb knot energy. By iterating this process and progressively increasing the charge magnitude on the beads of the polymer chain, we find that the knotted polymers approach an apparently unique "canonical" knot form for each class of knotted polymers. In particular, we achieve the generation of canonical knots by adding an electrostatic repulsive interaction U coul among all the beads that form the polymer, where the bead charge is Z = 10. The qualitative idea of generating knots having minimal energy has been explored before 25,[48][49][50] , usually based on knot energies motivated by mathematical convenience rather than physical concerns. Figure 2 shows representative images for a 4 1 knot 34 before and after introducing the repulsive charge interaction.
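A toy sketch of this charging-and-relaxation idea is given below: a closed bead ring held together by harmonic springs is relaxed by plain gradient descent while all bead pairs repel through a Coulomb-like potential with bead charge Z. The function name, the use of harmonic rather than FENE bonds, and the step-size values are illustrative assumptions, and the sketch does not reproduce the iterative charge-increase protocol described in the text.

import numpy as np

def relax_charged_ring(positions, n_steps=2000, step=1e-4, z=10.0,
                       k_bond=200.0, bond_length=1.0):
    """Gradient-descent relaxation of a charged, closed bead ring (toy model).

    positions: (N, 3) array of bead coordinates of a knotted ring.  Consecutive
    beads (including the last-to-first closure) are held by harmonic springs,
    while every bead pair repels through a z^2 / r potential, so the ring
    swells toward a smooth, low-energy shape.  Excluded volume is not enforced,
    so very large steps could in principle let strands pass through each other."""
    x = np.array(positions, dtype=float)
    n = len(x)
    nxt = np.roll(np.arange(n), -1)
    for _ in range(n_steps):
        grad = np.zeros_like(x)
        # Bond (spring) gradient for each bond (i, i+1).
        d = x[nxt] - x
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        g = k_bond * (dist - bond_length) * d / dist
        grad[nxt] += g          # dE/dx_{i+1}
        grad -= g               # equal and opposite contribution on bead i
        # Pairwise Coulomb repulsion gradient.
        diff = x[:, None, :] - x[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)
        grad += np.sum(-z ** 2 * diff / r[..., None] ** 3, axis=1)
        x -= step * grad        # small step; may need tuning for stability
    return x

# relaxed = relax_charged_ring(initial_knot_coordinates)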
The application of this charging procedure reveals a particular knot invariant as being of primary significance in this classification scheme of knots: the minimal or essential crossing number m. The average crossing number 〈m〉 is obtained by averaging over the whole ensemble of possible knot configurations, and this quantity is evidently larger than the minimal crossing number. In particular, 〈m〉 is larger for flexible chains than for rigid ones. Since m and 〈m〉 are important in our discussion below, we illustrate in greater detail below how they are calculated. Figure 3 illustrates ideal canonical forms for knotted polymers having fixed m. It is apparent that the Coulomb canonical knots closely resemble "ideal" knots generated by increasing the polymer diameter incrementally rather than by charging. In each case the knotted rings adopt an apparently unique "swollen" configuration 3 , although we are not aware of any rigorous proof of the uniqueness of this structure. Coulomb canonical knots provide a reference point in our discussion below of semi-flexible knotted polymers, which exhibit highly complex and diverse configurations that are better characterized by 〈m〉 than by m.
Properties of Knotted Rings: Effect of Polymer Stiffness
In this section, we define the average crossing number 〈m〉 and explore its relation to 〈E C 〉 and m. Additionally, we report the shape and size properties of knotted rings having fixed length L = 352.8 nm and fixed diameter d = 2.8 nm, and variable knot complexity and rigidity. The classification of the polymeric chains based on their rigidity is given by the calculation of their persistence length 39 . In our discussion below, we refer to the persistence length l p of linear polymeric chains having the same bending rigidity parameter k bend as our measure of polymer rigidity.
Average Crossing Number of Semi-Flexible Knotted Rings.
We first describe the methodology used to compute the average crossing number 〈m〉 for the thermally equilibrated knot configurations. For simplicity, we illustrate the calculation of 〈m〉 for a simple knot geometry. Figure 4 shows an example calculation of 〈m〉 for an initial configuration of a 3 1 knotted polymer in the conventional knot classification scheme 34 . Here, a knot having a crossing number m = 3 is projected onto the xy, xz, and yz planes. We count the crossing points in each plane, given by the intersections of the monomers (e.g., 3 red circles shown on the projected polymer in the xy plane), and then we average over the crossing values determined in each plane to determine 〈m〉. The calculation of 〈m〉 for an individual knotted polymer formally involves projecting the polymer onto an infinite number of planes whose relative angular orientations are uniformly distributed. This type of averaging is simplified by our sampling procedure, which explores all angular orientations of the knotted polymer while the planes remain fixed. Having three orthogonal planes improves the sampling, but is not required in our method of angular averaging through our molecular dynamics based exploration of polymer conformational space. For the thermally equilibrated knotted polymers, 〈m〉 is obtained by averaging over the 4000 configurations.
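The projection-and-count procedure described above can be sketched in a few lines of Python. The code below counts the self-crossings of a closed bead chain projected onto the three coordinate planes and averages over an ensemble of configurations; the function names are hypothetical, degenerate (collinear) projections are ignored, and the sketch is an illustration of the procedure rather than the analysis code used in this study.

import numpy as np

def _segments_cross(p1, p2, p3, p4):
    """True if the 2D segments p1-p2 and p3-p4 properly intersect."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def crossing_number_projection(coords, axes=(0, 1)):
    """Count self-crossings of a closed bead chain projected onto two axes."""
    pts = np.asarray(coords)[:, list(axes)]
    n = len(pts)
    crossings = 0
    for i in range(n):
        a1, a2 = pts[i], pts[(i + 1) % n]
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:        # skip the two bonds sharing bead 0
                continue
            b1, b2 = pts[j], pts[(j + 1) % n]
            crossings += _segments_cross(a1, a2, b1, b2)
    return crossings

def average_crossing_number(configurations):
    """Average the crossings over the xy, xz and yz projections and over configurations."""
    planes = [(0, 1), (0, 2), (1, 2)]
    return np.mean([[crossing_number_projection(c, p) for p in planes]
                    for c in configurations])

# m_avg = average_crossing_number(list_of_equilibrated_ring_configurations)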
For flexible knotted polymers it is more challenging to visualize the minimal crossing number m, but using the procedure described above we can compute 〈m〉. Figure 5 shows 〈m〉 as a function of m for knotted polymers having different stiffness but a fixed length, L = 352.8 nm. The images in the figure correspond to representative knot configurations for polymers interacting with a bending energy amplitude k bend = 10 ε (blue triangles). We find an approximately proportional relation between 〈m〉 and m, where increasing polymer stiffness shifts the curves downwards, reflecting the fact that rigid polymers have a smaller average number of crossing points.
Relations between 〈E C 〉, m, and 〈m〉. We next compute the average Coulomb energy 〈E C 〉, m, and 〈m〉 for the knotted polymer configurations generated using the coarse-grained model described in the previous section, for a selected family of knot types relevant to the characterization of real DNA. Figure 6 shows the average Coulomb energy 〈E C 〉 as a function of the minimum crossing number m (a) and the average crossing number 〈m〉 (b) for polymers having the same length L = 352.8 nm and different degrees of stiffness. All our data have been normalized by the Coulomb energy of the canonical unknotted polymer, E 0 , which corresponds to the lowest energy structure among all knotted rings. We find a linear relationship between 〈E C 〉 and m, where the intercept evidently depends on the rigidity of the polymer chain. However, 〈E C 〉 normalized by E 0 is nearly a universal function of 〈m〉, as indicated in Fig. 6(b). We also see from Fig. 6(b) that the average mobility 〈μ(m)〉 of knotted polymers having a fixed m is directly proportional to the Coulomb knot energy to within a good approximation; see Eq. (4). The mobilities of knotted polymers in sedimentation measurements have been observed to exhibit the same linear scaling between 〈μ(m)〉 and 〈m〉 49 . Recent sedimentation measurements on knotted polymers having ideal knot configurations (defined by a strong repulsive excluded volume interaction rather than a charge interaction) also follow this scaling relation to a reasonably good approximation, although the data are somewhat noisy. Figure 6(b) confirms the proposed relation between 〈E C 〉 and 〈m〉 in Eq. (7). We have not made an exhaustive sampling of all knot types in our analysis here, but rather have focused on knots that seem to be of practical significance in the characterization of DNA.
Basic Measurement of Size and Shape of Knotted Polymers Relevant to Experimental Characterization.
To characterize the shape of polymers and particles, it is common to determine the radius of gyration tensor, R g 2 , which can be obtained experimentally by scattering techniques and is formed by 9 components; Λ i denote the eigenvalues of R g 2 , with Λ 1 ≤ Λ 2 ≤ Λ 3 . The ratios Λ 3 /Λ 1 and Λ 2 /Λ 1 constitute shape descriptors that represent each knot as an ellipsoid whose main axes correspond to Λ 1 , Λ 2 , Λ 3 , and they also define the average anisotropy of the particle or polymer 52 . The ratio C/R g is another important shape descriptor, which indicates the change from an open structure (small values) to a more closed, compact one (higher values) 53 . For instance, C/R g approaches 0 for a needle, while for a solid spherical particle C/R g = 1.29 54 . Figure 7 shows these shape descriptors for knotted polymers having fixed length (L = 352.8 nm) and different polymer rigidities.
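As an illustration of these descriptors, the short sketch below computes the gyration tensor of a single configuration, its sorted eigenvalues, the eigenvalue ratios Λ 3 /Λ 1 and Λ 2 /Λ 1 , and the radius of gyration. The function name is hypothetical, and a capacity value C (for the C/R g ratio) would have to be supplied separately, for example from a ZENO calculation.

import numpy as np

def gyration_shape_descriptors(coords):
    """Eigenvalue ratios of the gyration tensor and the radius of gyration.

    coords: (N, 3) bead coordinates of one configuration.  Returns
    (Rg, lam3_over_lam1, lam2_over_lam1) with eigenvalues sorted ascending."""
    r = np.asarray(coords, dtype=float)
    dr = r - r.mean(axis=0)
    s = dr.T @ dr / len(r)                 # 3x3 gyration tensor
    lam = np.sort(np.linalg.eigvalsh(s))   # lam1 <= lam2 <= lam3
    rg = np.sqrt(lam.sum())                # Rg^2 equals the trace of the tensor
    return rg, lam[2] / lam[0], lam[1] / lam[0]

# rg, l3_l1, l2_l1 = gyration_shape_descriptors(configuration)
# A capacity value C (e.g. from ZENO) then gives the ratio C / rg.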
We next consider basic measures that are commonly used to determine the topological structure of macromolecules 21 . In particular, we directly compare the knotted polymer properties to those of a linear polymer having the same molecular mass.
(Figure 6 caption: 〈E C 〉 is normalized by the energy of the canonical form of the unknotted polymer, E 0 , which corresponds to the lowest energy configuration for a polymer of fixed length, diameter, and rigidity. A linear relationship is found between 〈E C 〉 and m; however, when 〈E C 〉/E 0 is plotted as a function of 〈m〉, all the data collapse onto a universal curve (lower panel).)
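The dimensionless knot-to-linear ratios discussed next are not explicitly defined in this excerpt; under the standard convention for such comparisons (an assumption on our part), they would read:

```latex
% Presumed definitions of the knot-to-linear ratios (standard convention, assumed here):
g_h(m) = \frac{R_h^{\mathrm{knot}}(m)}{R_h^{\mathrm{linear}}}, \qquad
g_s(m) = \frac{R_g^{2,\,\mathrm{knot}}(m)}{R_g^{2,\,\mathrm{linear}}}, \qquad
g_\eta(m) = \frac{[\eta]^{\mathrm{knot}}(m)}{[\eta]^{\mathrm{linear}}},
```

where the linear reference chain has the same molecular mass, stiffness, and diameter as the knotted ring.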
[η] is proportional to the average electric polarizability tensor and describes how the addition of the polymer alters the viscosity of the polymer solution in the low polymer concentration limit 14 . We plot these transport property ratios as a function of the minimum crossing number m in Fig. 8. We see that these ratios generally decrease with increasing m and increasing chain stiffness. Comparison between our g h , g s , and g η calculations and experiments on synthetic polymers is complicated by the fact that the knot complexity of ring polymers is normally "locked in" at the time of synthesis, leading to structures that are topologically polydisperse, meaning that many different types of knotted polymers can be generated in the synthesis process. Specifically, the end-linking of the linear polymer chain precursor molecules is often performed in a poor solvent, where there are appreciable self-attractive polymer-polymer interactions, in order to increase the probability that the chain ends react to form a ring. While these thermodynamic conditions do enhance ring formation, they can also be expected to greatly influence the probability of the rings being knotted. This general trend can be appreciated by considering how 〈m〉 varies with T for rings having a fixed m. Figure 9 shows 〈m〉 as a function of the reduced temperature T/ε, where ε is the well depth parameter of the Lennard-Jones interaction potential. We see that 〈m〉 varies from a large value at low temperature, where the knotted rings are in a collapsed configuration, towards a value that gradually approaches the minimal crossing number m with increasing T. Unexpectedly, 〈m〉 becomes insensitive to m for a specific value of T. This variation of 〈m〉 with T is remarkably similar to the variation of the number of nearest-neighbor contacts of self-avoiding walks with an attractive nearest-neighbor interaction 55,56 , which is natural since the projected structure of the knotted polymers on a planar surface (see Fig. 4) has the form of a branched polymeric structure.
(Figure 7 caption: The upper panel shows the ratio between the largest, Λ 3 , and the smallest, Λ 1 , eigenvalues of the radius of gyration tensor; the inset shows Λ 2 /Λ 1 to complete the shape description of the knots as objects embedded in a spheroid whose principal axes are Λ 1 , Λ 2 , Λ 3 . Higher ratios indicate higher anisotropy of the polymers. The lower panel shows the ratio R h /R g , where R h is simply equal to C; this ratio indicates the change from an open structure (small values) to a more closed, compact one (higher values).)
This "memory effect" of the knot complexity of ring polymers in solutions on the thermodynamic conditions of cross-linking means that the g-ratios are defined as a weighted average, where P(i, m) is the probability of the ring polymer is in a topological state having a minimal cross-linking number m and subclass i. For polymeric rings of moderate length L that is synthesized in a good solvent, we can expect almost all the rings to be in the unknotted state (m = 0) so that g ≈ g 0 (m = 0), but knots of increasing complexity should arise if the synthesis is performed under poor solvent conditions. The g-ratios for molecules synthesized in this way are inherently non-universal, even in the limit, L → ∞. This type of memory effect also arises in the cross-linking of macroscopic polymer networks 57 and individual polymers 58 .
The fact that the knot complexity depends on chain length, even for self-repelling chains, adds to the variation of these g-ratios. Future work will evidently need to focus on the dependence of P(i, m) on solvent quality and chain flexibility to enable the computation of g-ratios for quantitative comparison of our calculations with experiment.
Properties of Knotted Rings: Effect of Polymer Length
The calculations above were made for knotted polymers having a fixed length and diameter. Short chains are inherently "stiffer", so we might expect that varying the chain length should have an effect similar to that of varying the chain rigidity. In this section, we confirm this expectation through direct computation of the properties of knotted rings having different lengths, a fixed chain stiffness, l p = 50.2 nm, and chain diameter d = 2.8 nm. This choice of chain parameters is appropriate for describing double-stranded DNA in solution at salt concentrations sufficiently high for charge interactions to be screened (1 M NaCl).
Dependence of 〈E C 〉 and 〈m〉 on Chain Length. Figure 10 shows the dependence of 〈E C 〉 and 〈m〉 on chain length; these quantities are shown in Fig. 10(c,d), respectively. Longer chains exhibit a larger average number of crossing points 〈m〉 for all m. Correspondingly, Fig. 5 shows that more flexible chains having a fixed length have a greater average number of crossing points.
Size and Shape of Knotted Polymers Having Different Chain Lengths. The size ratios g h (m), g s (m), and g η (m) also depend on the chain length, these ratios being larger for longer chains in the case of polymers having a stiffness and diameter compatible with double-stranded DNA, as illustrated in Fig. 11. These basic size ratios become progressively smaller with increasing m, a trend similar to that of star-branched polymers having an increasing number of arms 59 . Again, we note that this is a natural trend, since the projection of a knotted polymer onto a plane is a branched polymer and the g i ratios for branched polymers tend to decrease with the degree of branching 59 .
The chain length range in our study is rather limited, and our uncertainties in estimating the asymptotic power-law scaling of the size measurements (R h , R g , and [η]), and correspondingly of the dimensionless ratios g h (m), g s (m), and g η (m), in the long chain limit are probably large. Previous studies have compared ZENO model estimates of R h , R g , and [η] for linear chain dsDNA over a very large mass range, where quantitative agreement of the modeling with a worm-like chain model having an appropriate diameter and persistence length was found 60 . The scaling properties of knotted polymers relating to size in the long chain limit were also investigated extensively in previous computational studies 61,62 . However, this former work did not emphasize the ratios g h (m), g s (m), and g η (m), the importance of the knot complexity m for the properties of knotted polymers having fixed length, or the influence of chain rigidity on the properties of knotted polymers. Figure 12(a), and its inset, shows the eigenvalue ratios Λ 3 /Λ 1 and Λ 2 /Λ 1 , respectively, of the radius of gyration tensor as a function of the minimum crossing number m for polymeric knots having fixed rigidity l p = 50.2 nm and diameter d = 2.8 nm. The variation of the average shape of the knotted polymers with chain length is rather complex, exhibiting a relatively sharp change near m = 4, similar to star polymers having 5 to 6 arms 63 . With increasing crossing number m, the knotted polymer becomes more spherical when the polymer is stiff. An examination of the resulting knotted stiff polymers for m > 4 indicates that these structures are more like woven sheets than spheres, a phenomenon that we did not expect.
Conclusions
Some important conclusions can now be drawn from the above relations. First, we have confirmed that the average Coulomb energy 〈E C 〉 provides a natural measure of knot complexity that is directly related to measurements on DNA and other knotted structures. Simple knotted chains should have a lower average knot energy E C , and the "unknot" (m = 0) should have the lowest or "ground state" knot energy (conjecture). From the discussion above, the average crossing number 〈m〉 is a related measure of knot complexity, and indeed these functionals are proportional to a good approximation, Eq. (7), for the knots considered in this study. Knots with higher complexity and less symmetry will be tested in a follow-up study. The hierarchy of knot complexities, as reflected by the energy 〈E C 〉, is directly reflected in the translational mobilities of knotted polymers through the generalized Stokes law, Eq. (3). The Coulomb energy thus directly pertains to an understanding of the mobility of knotted DNA. We emphasize, however, that the proportionality relation between 〈E C 〉 and m does not provide an obvious explanation of the proportionality between the electrophoretic migration speed and 〈m〉. While Stokes law is clearly appropriate for dilute polymer solutions, the factors that govern chain mobility in gels are still not completely understood 64,65 .
(Figure 11 caption: The knot-to-linear polymer ratios as a function of m and polymer length L for chains having fixed chain rigidity (l p = 50.2 nm) and diameter (d = 2.8 nm). The ratios decrease with knot complexity and increase with the knotted ring length.)
We also draw on ideas introduced in the field of image recognition to define "ideal knots" characterizing distinct families of knots sharing a knot complexity defined by the crossing number m, along with other indices prescribed by convention. In particular, we define canonical knots as the polymer configurations that result from charging the beads of our knotted polymer. Increasing this charge progressively, and then letting the system relax after each step, leads to an apparently unique knotted ring conformation that maximizes C and minimizes the Coulomb energy. In the image processing context, the energy function is normally taken to be more complex than the Coulomb energy, whereas this energy functional is quite natural for DNA, a highly charged polyelectrolyte in the absence of much added salt in solution. These "canonical knots" and their properties provide a natural point of comparison to rings of variable flexibility and no charge, whose conformations take numerous complex forms with statistically defined average properties. We note that there has been a previous definition of ideal knots obtained by progressively increasing the size of the beads in the chain while excluded volume repulsion is enforced 3 . Those conformations appear to be rather geometrically similar to our canonical form knots, but we prefer our definition since it has a more physical motivation in relation to DNA.
One of the other problems arising in characterizing knotted ring polymers, and in predicting the properties that derive from such topologically defined polymers, is that it is often difficult to control the knot complexity in their synthesis. Nature has evolved enzymes to regulate chain topology in DNA, but synthetic chemists have much more limited control of knot complexity, e.g., by controlling the solvent quality conditions under which the chains are linked together. Under such conditions, it is imperative to have measurement methods and validated theoretical models that quantify how knot complexity influences average molecular shape, along with the standard measurements of hydrodynamic properties (R h , [η], S) and static size (R g ) that are normally used to characterize polymers in solution 27 . We calculate all these basic polymer solution characterization properties as a function of the knot complexity m and chain stiffness l p , and over a range of chain lengths, allowing an estimation of the long chain limit values of g h , g η , and g s . The range of knot complexities explored is not exhaustive, but rather is representative of the classes of knots found in the characterization of real DNA and, presumably, of real synthetic macromolecules. We expect these results to be of great use in characterizing knotted polymers in solution. We emphasize that the analytic calculation of the hydrodynamic properties of even flexible linear polymers, beyond a mean-field theory approximation, has long eluded theoretical description. The errors in existing theories in the case of flexible polymers can be as large as 20% 15 , creating a significant uncertainty in analytic theoretical modeling of how chain topology influences polymer solution hydrodynamic properties. Our numerical treatment of these problems does not alleviate the inherent errors in this type of analytic calculation, but we hope that our precise numerical estimates of knot energy functionals will provide impetus for further theoretical efforts aimed at calculating the hydrodynamic properties of polymers in solution. The main problem here is that relatively rare, more extended conformations can give a disproportionate contribution to the hydrodynamic properties, so that the properties calculated for "typical" configurations do not describe ensemble average properties. The problem is especially great for flexible polymers, for which these fluctuation effects are large. Now that we have characterized many of these basic solution properties of knotted polymers over a range of knot complexities, chain stiffness, and chain length, we plan to extend this work to molecular dynamics simulations of knotted polymers in the melt state. Since many of the properties of knotted polymers in solution are altered as the knot crossing number m is varied, which is similar to prior findings for star polymers having a variable number of arms, we expect to see similar trends relating the configurational properties of knotted rings and star polymers in the melt state to the properties of the resulting materials when the topological structure of the molecules causes them to have similar average molecular shapes. Simulations of knotted ring and star polymers in the melt state are currently in progress. | 8,965.6 | 2017-10-17T00:00:00.000 | [
"Materials Science"
] |
BET bromodomain inhibition potentiates radiosensitivity in models of H3K27-altered diffuse midline glioma
Diffuse midline glioma (DMG) H3K27-altered is one of the most malignant childhood cancers. Radiation therapy remains the only effective treatment yet provides a 5-year survival rate of only 1%. Several clinical trials have attempted to enhance radiation antitumor activity using radiosensitizing agents, although none have been successful. Given this, there is a critical need for identifying effective therapeutics to enhance radiation sensitivity for the treatment of DMG. Using high-throughput radiosensitivity screening, we identified bromo- and extraterminal domain (BET) protein inhibitors as potent radiosensitizers in DMG cells. Genetic and pharmacologic inhibition of BET bromodomain activity reduced DMG cell proliferation and enhanced radiation-induced DNA damage by inhibiting DNA repair pathways. RNA-Seq and CUT&RUN (cleavage under targets and release using nuclease) analyses showed that BET bromodomain inhibitors regulated the expression of DNA repair genes mediated by H3K27 acetylation at enhancers. BET bromodomain inhibitors enhanced DMG radiation response in patient-derived xenografts as well as genetically engineered mouse models. Together, our results highlight BET bromodomain inhibitors as potential radiosensitizers and provide a rationale for developing combination therapy with radiation for the treatment of DMG.
Introduction
Diffuse midline gliomas (DMGs) with H3K27M mutation (histone H3 lysine 27 replaced with methionine) are diffusely infiltrating glial neoplasms affecting midline structures of the CNS (1). DMG is one of the most malignant childhood tumors, with a median survival of 9-12 months from diagnosis (2). Factors that contribute to the dismal prognosis include the infiltrative nature and anatomic location of the tumor within the pons, which precludes surgical resection. The identification of effective therapies has been extremely challenging, with more than 250 clinical trials involving different combinations of chemotherapeutic agents commonly used in adult glioma proving ineffective in treating DMG (3). Fractionated focal radiation to a total dose of 54-60 Gy over a 6-week period remains the only standard treatment modality that can provide transient symptom relief and a delay in tumor progression in about 70%-80% of patients. However, radiation-treated children with DMG show evidence of disease progression within the first year of completing radiation therapy (4,5). Given this reality, the identification of efficacious therapeutic agents that enhance the antitumor effects of radiation is urgently needed for improving treatment outcomes for this patient population.
In contrast to adult gliomas, DMG is uniquely dependent on the H3K27M mutation for its initiation and maintenance (6)(7)(8)(9). H3K27M mutation occurs in the H3F3A and HIST1H3B/C genes, encoding histone H3 variants H3.3 and H3.1, respectively, in as many as 80% of DMGs and is associated with shorter survival among patients with DMG (6,7,10). We and others have identified a key functional consequence of H3K27M mutation: mutant protein sequestration of the polycomb repressive complex 2 (PRC2) methyltransferase resulting in functional inactivation of PRC2 (8,9,11,12). This inactivation leads to a global reduction of H3K27 di-methylation (K27me2) and tri-methylation (K27me3), which, in turn, leads to extensive transcriptional reprogramming of mutant cells and promotes a stem cell-like, therapy-resistant phenotype.
These BET bromodomain inhibitors were subsequently validated for their antiproliferative effects in DMG cells. AZD5153 and molibresib (I-BET762) in combination with radiation showed strong additive cytotoxic effects relative to each monotherapy (white dotted line, Figure 1C). However, methotrexate and temozolomide, which have been used in combination with radiation in adult glioblastoma (GBM), did not show additive radiosensitizing effects or monotherapy cytotoxic effects in DMG cells. Our results are consistent with the results from clinical trials, which show that DMG responds transiently to the combination of temozolomide and radiation, but with no survival benefit from the combination therapy (21).
Targeted inhibition of BET bromodomain activity reduces cell proliferation and induces apoptosis in K27M-mutant DMG cells. To address whether BET bromodomain activity is required for K27M-mutant DMG cell growth, we studied the effects of depletion of BRDs (BRD2, -3, -4) on DMG cell proliferation using CRISPR/Cas9 KO (Supplemental Figure 2). KO effects were confirmed at the protein level (Supplemental Figure 2A), and the effects of BRD depletion on cell proliferation were analyzed by use of the MTS assay (Supplemental Figure 2B). BRD4 depletion reduced H3K27ac and reciprocally increased H3K27me3 protein expression, whereas depletion of BRD2 and BRD3 did not affect the expression levels of H3K27ac and K27me3 (Supplemental Figure 2A). In addition, only BRD4 depletion suppressed the growth of DMG cells (Supplemental Figure 2B). We further analyzed the effects of BRD4 depletion on DMG cell growth (Figure 2 and Supplemental Figure 3). BRD4 KO and shRNA knockdown (KD) were confirmed at the protein level (Figure 2A and Supplemental Figure 3A), and the effects of BRD4 depletion on cell proliferation were analyzed by the MTS assay (Figure 2B and Supplemental Figure 3B), colony formation assays (Figure 2C and Supplemental Figure 3C), and BrdU incorporation assay (Figure 2D) in 2 K27M-mutant DMG cell lines (SF8628 and DIPG007). BRD4 depletion significantly reduced DMG cell growth relative to scramble control (Figure 2B and Supplemental Figure 3B). BRD4 depletion also suppressed colony formation in DMG cells (Figure 2C and Supplemental Figure 3C). The BrdU-positive cell population in S phase was decreased by BRD4 depletion in DMG cells (Figure 2D). These results indicate that BRD4 activity is required for DMG cell proliferation and suggest that BRD4 is a rational therapeutic target in DMG.
Through an unbiased high-throughput radiosensitivity screen, we found the BET bromodomain inhibitors to be potent radiosensitizers of H3K27M-mutant DMG cells. The depletion of BRD using shRNAs and sgRNAs, and BET bromodomain inhibition using small molecule inhibitors, reduced DMG cell proliferation and enhanced radiation-induced DNA damage by inhibiting DNA repair pathways. Moreover, BET bromodomain inhibition downregulated expression of DNA repair genes associated with H3K27 acetylation (H3K27ac) occupancy and enhanced DMG radiation response in vitro and in vivo. Together, these results highlight BET bromodomain inhibition as a potential radiosensitizer and provide a rationale for developing combination therapy with radiation for the treatment of this deadly pediatric brain cancer.
Results
BET bromodomain inhibitors are identified as potent radiosensitizers by high-throughput drug screening. We first performed an unbiased high-throughput radiosensitivity screen in H3.3 WT and K27M-mutant DMG neurosphere cells using a total of 2,880 compounds, including 1,280 FDA-approved drugs and 1,600 clinical compounds (mainly small molecule inhibitors of epigenetic processes), in the presence or absence of 10 Gy irradiation. Radiosensitizing effects were quantified by the number of dead cells using confocal image analysis combined with Hoechst nuclear staining and propidium iodide (PI) DNA staining (Figure 1A). We identified several clinical-grade BET bromodomain inhibitors as potent radiosensitizers in the screen (Supplemental Table 1; supplemental material available online with this article; https://doi.org/10.1172/JCI174794DS1), which increased cell death in combination with radiation (Figure 1B and Supplemental Figure 1). H3.3 K27M-mutant DMG neurosphere cells were more sensitive to BET bromodomain inhibitors in combination with radiation than H3.3 WT DMG neurosphere cells (Supplemental Figure 1).
However, the preclinical efficacy of BMS-986158, a BET bromodomain inhibitor in clinical trials (NCT04817007, NCT02419417, NCT05372354), was disappointing in orthotopic (brainstem) DMG PDX models (Supplemental Figure 3D) due to its poor penetrance across the blood-brain barrier (BBB), with a brain penetration ratio of 5.02% ± 1.32% (Supplemental Table 2). AZD5153 showed a brain penetration ratio of 12.9% ± 1.25%, higher than that of BMS-986158. Thus, we tested the efficacy of AZD5153 for cytotoxicity and radiosensitivity in DMG in vitro and in vivo. AZD5153 treatment induced dose-dependent inhibition of cell growth in 5 K27M-mutant DMG cell lines as well as in human astrocytes expressing the K27M H3F3A transgene (Astro-KM cells), with IC 50 values of 0.41 μM (SF8628), 0.053 μM (DIPG007), 0.022 μM (SU-DIPG36), 0.063 μM (SU-DIPG4), 0.013 μM (genetically engineered mouse model-DMG [GEMM-DMG]), and 0.020 μM (Astro-KM) (Figure 3B). Normal human astrocytes (NHAs) and Astro-WT cells showed less sensitivity to AZD5153, with IC 50 values of 1.35 μM and 1.47 μM, respectively (Figure 3B). AZD5153 at its IC 50 concentration also inhibited DMG cell growth in a time-dependent manner (Figure 3C) and reduced colony formation in DMG cells (Figure 3D).
(Figure 1 caption: High-throughput drug screening with radiation identified BET bromodomain inhibitors as radiosensitizers in DMG cells. Tumor cells isolated from GEMM-DMG (Ntv-a; p53fl/fl; PDGFB; H3.3K27M; Cre) mice were cultured ex vivo as neurospheres and used in a drug screen for radiosensitizers. (A) Representative image of neurospheres in transmitted light, Hoechst staining (nuclear), PI staining (dead cells), and Hoechst/PI overlay; evaluation of the number, area, and dead-cell intensity of neurospheres. 4× magnification, with each pixel being 3.4156 mm in size, for a raw image of 3.5 mm; images were subsequently enlarged to enhance visibility. (B) A library of 1,280 FDA-approved drugs and 1,600 clinical candidates was screened in the presence or absence of 10 Gy radiation. Left: compounds to the right of the light green diagonal line are identified as having an additive effect on neurospheres in the presence of 10 Gy radiation; compounds to the right of the dark green diagonal line are identified as those that radiosensitized neurospheres beyond the additive drug effect when combined with 10 Gy radiation (at ≥3σ). Right: representative images of neurospheres treated with BET bromodomain inhibitor (BRDi) in the presence or absence of 10 Gy radiation. norm., normalized. (C) Radiosensitizing effect (orange) and cytotoxic effect (gray) with methotrexate, AZD5153, temozolomide, and molibresib (I-BET762).)
BET bromodomain inhibition sensitizes DMG cells to radiation. To verify the radiosensitizing effect of BET bromodomain inhibition in K27M-mutant DMG cells, we conducted a clonogenic survival assay in 3 DMG cell lines (SF8628, DIPG007, GEMM-DMG). Cells were treated with AZD5153 (Figure 4A) or JQ1 (Supplemental Figure 4A), or depleted of BRD4 with shBRD4 and sgBRD4 (Supplemental Figure 4B), concurrently with being subjected to ionizing radiation (IR). To quantify the radiosensitizing effect, we calculated the dose enhancement factor (DEF), which represents the ratio of the dose with IR alone to the dose with the combination of IR plus BRD4 inhibition, both at 10% survival. If the DEF is greater than 1, the BRD4 inhibition is functioning as a radiosensitizer. AZD5153 treatment showed a radiation-enhancing effect, with DEFs of 1.22 (SF8628), 1.32 (DIPG007), and 1.10 (GEMM-DMG) (Figure 4A). JQ1 had similar effects on radiation response in DMG cells, with DEFs of 1.25 (SF8628), 1.40 (DIPG007), and 1.14 (GEMM-DMG) (Supplemental Figure 4A). shBRD4 KD and sgBRD4 KO also increased the radiation response of DMG cells, with DEFs of 1.22 (shBRD4-484 and -487, SF8628), 1.34 (shBRD4-484 and -487, DIPG007), 1.18 (sgBRD4-1, SF8628), 1.20 (sgBRD4-2, SF8628), and 1.28 (sgBRD4-1 and -2, DIPG007) (Supplemental Figure 4B). Furthermore, we conducted BrdU incorporation (Figure 4B), apoptosis (Figure 4C), senescence (Supplemental Figure 5, A and B), and sphere formation (Supplemental Figure 5, C and D) assays. AZD5153 treatment resulted in a decreased BrdU-positive S-phase cell population relative to control (Figure 4B). Combination treatment with AZD5153 plus IR further decreased the S-phase cell population when compared with AZD5153 alone (Figure 4B). The annexin V apoptosis assay showed that either AZD5153 or IR monotherapy increased the level of annexin V-positive cells compared with control (Figure 4C). Combination treatment with AZD5153 plus IR increased the level of annexin V-positive cells, outperforming each monotherapy. The β-galactosidase assay revealed increased senescence-associated β-galactosidase staining in DMG cells treated with either AZD5153 or IR monotherapy (Supplemental Figure 5A). Combination treatment with AZD5153 and IR further increased β-galactosidase-positive DMG cells. Cell size is known to be associated with senescence. To quantify cell size, we gated DMG cells for G 1 DNA content and sorted them with the side scatter parameter (SSC) using flow cytometry (Supplemental Figure 5B). Similar to the results with β-galactosidase staining, combination treatment with AZD5153 and IR further increased cell size relative to either monotherapy. Combination treatment also reduced self-renewal activity (Supplemental Figure 5C) and neurosphere formation compared with either monotherapy (Supplemental Figure 5D). These results suggest that, when compared with monotherapy, combination treatment with AZD5153 plus IR further increased radiosensitivity in DMG cells by decreasing the population of radioresistant S-phase cells and stemness and increasing apoptosis and senescence.
BET bromodomain inhibition downregulates the genes involved in DNA repair and cell cycle in K27M-mutant DMG cells.We have previously shown that JQ1 treatment causes a change in the expression of genes that promote tumor growth in K27M-mutant DMG (15).In our current RNA-Seq analysis, we performed unsupervised principal component analysis of SF8628 DMG cells treated with DMSO and BET bromodomain inhibitors (AZD5153, JQ1) for 24 and 48 hours.We found a global gene expression shift in AZD5153-treated DMG cells compared with DMSO-treated samples (Figure 5A).We compared the RNA-Seq data from the samples treated with DMSO or AZD5153, and the previous RNA-Seq data from the samples treated with JQ1.The differentially expressed genes were highly correlated between the samples treated with JQ1 and AZD5153 (Figure 5B), including 3,301 genes upregulated and 3,591 downregulated in response to the BET bromodomain inhibitor.Gene set enrichment analysis (GSEA) (Figure 5, C and D, and Supplemental Figure 6, A and B) and gene ontology (GO) pathway analysis (Figure 6, A and B, and Supplemental Figure 6, C and D) showed that cell cycle (e.g., CDK6, CDCA7, and UHRF1) and DNA double-strand breaks (DSB) repair pathways (e.g., BRCA1, RAD51, XRCC1, XRCC4, and POLQ1) were among the most significantly downregulated with the BET bromodomain inhibitor treatment.AZD5153 and JQ1 treatments also upregulated gene pathways involved in autophagy (e.g., ATGA4, MAP1LC3B) and catabolism pathways including glycolysis and protein/macromolecule catabolic pathways (e.g., SIRT1, MTOR) (Figure 5, C and D, and Supplemental Figure 6, A and B).The senescence-associated genes CDKN1A and HMGA1 were upregulated by AZD5153 treatment (Supplemental Figure 7, A and B).However, CDKN2A was downregulated by AZD5153 treatment.This could be due to increased H3K27me3, which repressed the PRC2 targets, including CDKN2A (Supplemental Figure 7, A and B).
BET bromodomain inhibition is known to suppress gene expression by dissociating BRD proteins from the active chromatin mark histone H3K27ac (26). We have shown that genomic occupancy of H3K27ac and BRD is required for enhancer activity and gene expression in DMG cells (15). To determine the effects of BET bromodomain inhibition on gene expression associated with H3K27ac occupancy, we performed CUT&RUN followed by next-generation sequencing in DMG cells treated with AZD5153 (Figure 7). The CUT&RUN data showed that the majority of H3K27ac enrichments occurred in introns (first intron: 13.47%; other introns: 28.97%) and intergenic regions (38.8%) (Figure 7A). Metaplots and heatmaps showed enrichment of the H3K27ac signal near the previously defined enhancer regions in DMG cells (Figure 7B). AZD5153 treatment dramatically reduced H3K27ac occupancy at enhancer regions. To investigate the enrichment of transcription factors among the H3K27ac DNA-binding sites, we used the DiffBind R package to determine differential peaks between DMSO- and AZD5153-treated samples (FDR < 0.05). We performed simple enrichment motif analysis in SF8628 DMG cells and found a significant enrichment of DNA sequence motifs involving neuronal developmental transcription factors such as LHX1-3 and HOX13 (e-value < 0.05; Supplemental Table 3). Interestingly, the H3K27ac peaks of 2 representative DNA repair genes, BRCA1 and RAD51, were diminished at enhancer regions in SF8628 cells treated with AZD5153 (Figure 7C). Expression of these DNA repair genes was significantly downregulated in AZD5153-treated samples in the RNA-Seq analysis (Figure 5, B and C). Senescence-associated gene expression was not correlated with H3K27ac occupation (Supplemental Figure 7C). Taken together, our results suggest that BET bromodomain inhibition promotes a transcriptionally silent chromatin state by reducing H3K27ac occupancy and represses the expression of the genes involved in DSB repair in K27M-mutant DMG cells.
DNA damage is repaired by 2 major pathways: homologous recombination (HR) and nonhomologous end-joining (NHEJ) repair (27,28).HR repairs DNA DSB during S and G 2 phases and provides a template for error-free repair.In contrast, NHEJ is active throughout the cell cycle and directly involves ligation of DNA ends without homology.To analyze the DNA damage repair pathways in DMG cells, we transfected GFP reconstitution reporter cassettes for HR and NHEJ (29) into SF8628 DMG cells in the presence or absence of AZD5153.AZD5153 treatment reduced DNA repair ability through both the HR and NHEJ DNA repair pathways (Figure 9C), which is consistent with the RNA-Seq results showing that AZD5153 downregulated the genes involved in both HR and NHEJ repair pathways (Figure 5).Collectively, our results indicate that BET bromodomain inhibition increased radiation-induced DNA damage by inhibiting the HR and/or NHEJ DNA repair pathway in K27M-mutated DMG cells.
BET bromodomain inhibitors enhance radiation-mediated antitumor effects in patient-derived and genetically engineered DMG animal models. Based on the radiosensitizing effect of BET bromodomain inhibition on the growth of K27M-mutant DMG cells, we hypothesized that BET bromodomain inhibition increases radiation-mediated antitumor activity and survival benefit in DMG mouse models. To address this, we implanted SF8628 or GEMM-DMG cells into the mouse pons and treated the mice with AZD5153 (50 mg/kg) or JQ1 (30 mg/kg) for 2 weeks in the presence or absence of radiation at a total dose of 9 Gy (1.5 Gy per day, 3 days a week, for 2 weeks) (Figure 10A). BET bromodomain inhibitor monotherapy inhibited tumor growth and extended the survival of mice with SF8628 DMG PDX (Figure 10, B and C) as well as GEMM-DMG models (Supplemental Figure 8). Similarly, radiation monotherapy provided a significant therapeutic benefit (Figure 10, B and C, and Supplemental Figure 8). We found that combination treatment with AZD5153 and radiation therapy significantly prolonged animal survival (Figure 10B and Supplemental Figure 8). Similarly, the combination treatment with JQ1 and radiation showed a significant survival benefit (Figure 10C). These in vivo efficacy studies included euthanizing the mice at the end of treatment to obtain brainstem tumor samples for analysis of tumor cell proliferation (Ki-67; Figure 10D), apoptosis (TUNEL; Figure 10D), senescence (p21, p16; Supplemental Figure 9), and migration (normal human nuclear antigen [NHNA]; Supplemental Figure 9, bottom). Analysis of intratumor Ki-67 staining showed that all therapies significantly reduced SF8628 DMG cell proliferation relative to the control group (Figure 10D). There were significantly fewer Ki-67-positive cells in the samples treated with combination therapy compared with either monotherapy. TUNEL staining results showed that the proportion of positive cells was highest in tumors derived from mice receiving combination therapy of AZD5153 and radiation relative to those receiving either monotherapy (Figure 10D). No TUNEL positivity was evident in the normal brain surrounding the tumor in mice receiving any of the combination treatments. Staining for the senescence marker p21 revealed an increase in positive cells in tumors treated with AZD5153 alone and in combination with radiation (Supplemental Figure 9). However, p16-positive cells were decreased by the treatment. This could be due to downregulation of the expression of the CDKN2A gene, which encodes the p16 protein. NHNA staining revealed decreasing NHNA-positive cells in the tumors of mice treated with either AZD5153 or radiation (Supplemental Figure 9). Combination treatment further decreased NHNA-positive cells relative to each monotherapy.
BET bromodomain inhibition enhances radiation-induced DNA damage. We next analyzed the effects of BET bromodomain inhibition on radiation-induced DNA damage and repair pathways in DMG cells. We examined fluorescence immunocytochemistry of the DNA DSB marker γH2AX and the repair marker 53BP1 to quantify the extent of DNA damage and repair in irradiated SF8628 DMG cells in either the presence or absence of the BET bromodomain inhibitors AZD5153 (Figure 8A) and JQ1 (Supplemental Figure 4C). The number of γH2AX and 53BP1 foci increased 1 hour following IR, indicating increased DNA DSB damage and repair induced by IR. At 24 hours after IR, γH2AX and 53BP1 foci were largely reduced in those cells due to successful repair of the DNA damage. However, irradiated DMG cells treated with BET bromodomain inhibitors sustained high levels of γH2AX at 24 hours compared with cells treated with IR alone, while 53BP1 foci were decreased (Figure 8A and Supplemental Figure 4C). Similarly, comet assays showed that IR increased comet tail formation in SF8628 and DIPG007 DMG cell lines, indicating increased unrepaired DNA damage (Figure 8B). Comet tail formation was further increased in irradiated DMG cells treated with AZD5153 compared with cells treated with IR alone. These results suggest that BET bromodomain inhibition may interfere with the DNA repair process and thereby enhance radiation-induced DNA damage. Western blotting showed that AZD5153 treatment decreased expression of the DNA repair markers BRCA1, RAD51, and XRCC1 in DMG cell lines (Figure 9A). H3K27ac was also decreased by AZD5153 in a dose-dependent manner (Figure 9A). Radiation-induced γH2AX expression peaked at 1 hour following radiation (Figure 9B and Supplemental Figure 4D). Expression of BRCA1, RAD51, and RAD50 was also induced by radiation and peaked at 3-6 hours following radiation (Figure 9B). BET bromodomain inhibitors extended radiation-induced γH2AX expression beyond 6 hours following radiation (Figure 9B and Supplemental Figure 4D). In contrast, expression of BRCA1, RAD51, and RAD50 was decreased by BET bromodomain inhibitors over the time course following radiation. These results suggest that BET bromodomain inhibition extends radiation-induced DNA damage signaling by suppressing the DNA repair pathway.
BET bromodomain inhibitors disrupt the binding between acetylated histones and BRD proteins and inhibit active transcription, leading to enhancement of the radiation effect in DMG (Figure 11B) (14-16). K27M-mutant DMG cells are vulnerable to BET bromodomain inhibition due to transcriptional dysregulation resulting from the mutation (14,15,20,32). We demonstrated that BET bromodomain inhibition, by shRNA- or sgRNA-mediated BRD depletion (Figure 2, Supplemental Figure 2, and Supplemental Figure 3) and by treatment with small molecule inhibitors (AZD5153 and JQ1) (Figure 3), suppressed the growth of human and mouse K27M-mutant DMG cells. Importantly, BET bromodomain inhibition in combination with radiation further increased the radiosensitivity of DMG cells by reducing the radioresistant S-phase cell population and stemness, and increasing apoptosis and senescence (Figure 4 and Supplemental Figure 5).
Discussion
DMG is one of the most malignant childhood cancers, with a limited response to radiation therapy, resulting in a dismal prognosis: median overall survival is less than 12 months. There is a critical need for new therapeutics that enhance the effect of radiation in the treatment of DMG. Here, we identified BET bromodomain inhibitors as potent radiosensitizers in DMG using unbiased high-throughput radiosensitivity library screening (Figure 11A). High-throughput screening (HTS) is a useful tool for identifying candidate compounds from a large chemical library (30). We successfully integrated the HTS with neurosphere-based assays using automated fluorescence live-cell imaging in the presence or absence of radiation and identified several clinical-grade BET bromodomain inhibitors as top candidates for radiosensitization (Figure 1, Supplemental Figure 1, and Supplemental Table 1). In this assay, we used PI to detect the dead cell population. However, PI staining may not capture the long-term mechanisms of radiation-induced cell death. Further evaluation of proliferative cell death caused by radiation, such as mitotic catastrophe (31), would be needed to understand the mechanism of radiation-induced cell death.
DNA damage is thought to be the most important consequence of the radiation effect, and genetic alterations of DNA repair pathways are frequently detected in pediatric high-grade glioma, including DMG (6,10,33,34). We and others have demonstrated that the majority of DNA DSB caused by radiation are repaired within 24 hours of completing radiation (35)(36)(37). Thus, DNA repair is a key factor in radiosensitivity and can be a therapeutic target to enhance radiation-mediated antitumor activity in K27M-mutant DMG. Our gene expression profiling of K27M-mutant DMG cells treated with BET bromodomain inhibitors (AZD5153 and JQ1) revealed significant decreases in transcripts from the genes involved in DNA repair pathways for both HR and NHEJ, including BRCA1, RAD51, XRCC1, and XRCC4 (Figures 5 and 6, and Supplemental Figure 6). To advance understanding of the transcriptional regulation of DNA repair gene pathways by BET bromodomain inhibition, we mapped genome-wide occupancy of H3K27ac in K27M-mutant DMG cells using cleavage under targets and release using nuclease (CUT&RUN; Figure 7). We have previously profiled the epigenome of K27M-mutant DMG cells and shown that K27M mutation associates with increased H3K27ac and that the heterotypic H3K27M-K27ac nucleosomes colocalize with BET BRD2 and BRD4 at the loci of actively transcribed genes (15). We analyzed the specific loci with H3K27M-K27ac occupation from the previous study (15) and found that BET bromodomain inhibition diminished the genome-wide distribution of H3K27ac at enhancer regions, including those of 2 representative DNA repair genes, BRCA1 and RAD51 (Figure 7). DNA damage induces cellular senescence (38). We found that the senescence-associated genes CDKN1A and HMGA1 were upregulated by BET bromodomain inhibition (Supplemental Figure 7, A and B). However, CDKN2A was downregulated by BET bromodomain inhibition. There was no association between H3K27ac occupation and senescence-associated gene expression (Supplemental Figure 7C). It is possible that senescence-associated genes are controlled by different epigenetic regulation mechanisms, such as H3K27me3. Indeed, BET bromodomain inhibition reciprocally increased H3K27me3 (Supplemental Figure 2A), which resulted in silencing of PRC2 target genes such as CDKN2A. Our results indicate that BET bromodomain inhibitors downregulate the genes involved in DNA damage repair mediated by H3K27ac at enhancers, providing a basis for the possibility that BET bromodomain inhibitors act as radiation enhancers in DMG (Figure 7). In fact, the BET bromodomain inhibitors AZD5153 and JQ1 inhibited HR and NHEJ DNA repair pathways and prolonged radiation-induced DNA damage in K27M-mutant DMG cells (Figures 8 and 9, and Supplemental Figure 4).
The molecular mechanisms of BET bromodomain inhibition in transcription and chromatin machinery for DNA damage repair in DMG are not fully understood.Upon binding to the chromatin, BET BRDs are known to function in the assembly of complexes that facilitate chromatin accessibility to transcription factors, allowing for the recruitment of RNA polymerases II (RNAPII) (39)(40)(41)(42).In particular, BRD4 is required for subsequent progression of RNAPII through hyperacetylated nucleosomes during transcription elongation through interactions of its bromodomains with acetylated histones in order to prevent transcriptional stalling (40)(41)(42).Yaffe's group demonstrated that deregulated transcription following inhibition or loss of BRD4 in cancer cells leads to the accumulation of RNA:DNA hybrids (R-loops) and collisions with the replication machinery causing replication stress, DNA damage, and apoptotic cell death during S phase (43).We observed that BET bromodomain inhibition decreased the radioresistant S-phase cell population, diminished self-renewal activity, and resulted in increased apoptotic cell death and cellular senescence (Figure 4 and Supplemental Figure 5).In a study of H3K27me3-deficient medulloblastoma cells (44), JQ1 inhibition sensitized medulloblastoma cells to radiation by enhancing the apoptotic response through suppression of Bcl-xL and upregulation of Bim.Loss of H3K27me3 caused an epigenetic switch from H3K27me3 to H3K27ac at specific genomic loci, altering the transcriptional profile, which associated with a radioresistant phenotype in H3K27me3-deficient medulloblastoma.Stemness is a key characteristic of radioresistance in glioma (45).BET bromodomain inhibition may sensitize H3K27me3-deficient tumors to radiation by diminishing the radioresistance phenotype and enhancing apoptotic response and cellular senescence.We will further investigate the role of BET bromodomain inhibition in the transcription machinery associated with histone modification of H3K27me3 and H3K27ac to understand the radiation-induced DNA damage response in DMG.
Consistent with in vitro experiments, our animal studies demonstrated that the combination therapy with BET bromodomain inhibitor and radiation inhibited tumor growth and increased survival in human and murine DMG mouse models compared with either therapy alone (Figure 10).The improvement resulting from the combination therapy proved to be modest.One limitation to the in vivo efficacy of BET bromodomain inhibitors is poor brain penetration (Supplemental Table 2).To increase drug concentrations in the brain, we would further investigate new drug delivery systems such as disrupting the BBB using focused ultrasound (46,47) or bypassing the BBB using convection-enhanced delivery (48) and intranasal delivery (49).Nevertheless, our findings support the possible use of BET bromodomain inhibitor to increase the radiation-mediated antitumor effect for the treatment of DMG.
Methods
Further information can be found in Supplemental Methods.
Sex as a biological variable. Our study examined 6-week-old female athymic mice (rnu/rnu genotype, BALB/c background). The animals were purchased from Envigo and housed under aseptic conditions. The animals are well established and were used to develop DMG PDXs in our published studies (15,35,37,50,51). There are no reported sex differences among DMG patients.
… the identity of the cell lines. All cells were cultured in an incubator at 37°C in a humidified atmosphere containing 95% O 2 and 5% CO 2 and were mycoplasma-free at the time of testing with a Mycoplasma Detection Kit (InvivoGen).
shRNAs and sgRNA treatments.BRD4 and scrambled control shRNAs (BRD4 shRNAs: V3THS_378004, V3THS_326487, V3THS_326484, Control shRNA: RHS4346; Dharmacon IDs) were used to generate lentivirus and infected tumor cells according to the manufacturer's instructions.At 24 hours after lentiviral infection, cells were selected using 2 mg/mL puromycin for 5 days prior to in vitro assays.We also generated sgRNAs to knock out BRD2, BRD3, and BRD4 expression.sgRNA for the ROSA26 gene was used as control (see Supporting Data Values).The lentiCRISPRv2 vector (a gift from F. Zhang; Addgene plasmid 52961) was digested with BsmBI and inserted the sgRNAs into the vector (53).The ligation reactions were transfected into Stbl3 cells.Positive clones were confirmed with Sanger sequence.These plasmids were cotransfected into HEK293T cells with psPAX2, pMD2.G with PEI reagents (23966, Polysciences).Supernatants containing virus particles were collected at 48 and 72 hours and were used to infect DMG cells.After 48 hours of lentiviral infection, cells were selected with 2 μg/mL puromycin for 5 days prior to the in vitro assay.
Clonogenic survival assay. Six-well tissue culture plates were seeded with 400-10,000 cells and allowed to adhere for 12 hours. The modified cells with BRD4 shRNAs or sgRNAs, or unmodified cells treated with 50 nM AZD5153 or 50-100 nM JQ1 alone, were irradiated at doses of 0.5, 1, 2, 3, 4, 6, and 8 Gy. Radiation was delivered by a gamma irradiator. Cells were incubated at 37°C for 2 weeks, after which colonies were counted following staining with 0.05% crystal violet. Plating efficiencies were calculated as the ratio of the number of colonies formed to the number of cells seeded. Colonies of more than 50 cells were used to indicate surviving fractions. Surviving fractions were calculated as the plating efficiency of treated cells divided by the plating efficiency of control cells. DEFs were calculated as the ratio of the dose with radiation alone to the dose with radiation and BRD4 inhibition at 10% survival.
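As a worked illustration of the DEF calculation described above, the following sketch fits a linear-quadratic survival model to clonogenic data and takes the ratio of the doses giving 10% survival; the linear-quadratic form and the helper names are our assumptions, since the text states only the ratio definition.

```python
import numpy as np
from scipy.optimize import curve_fit

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model for the clonogenic surviving fraction."""
    return np.exp(-(alpha * dose + beta * dose**2))

def dose_at_survival(alpha, beta, sf=0.10):
    """Dose giving the target surviving fraction under the LQ model (positive root)."""
    c = -np.log(sf)  # solve alpha*D + beta*D^2 = -ln(sf)
    return (-alpha + np.sqrt(alpha**2 + 4 * beta * c)) / (2 * beta)

def dose_enhancement_factor(doses, sf_ir_alone, sf_ir_plus_drug, sf=0.10):
    """DEF = dose for 10% survival with IR alone / dose with IR plus BRD4 inhibition."""
    (a0, b0), _ = curve_fit(lq_survival, doses, sf_ir_alone, p0=(0.2, 0.02))
    (a1, b1), _ = curve_fit(lq_survival, doses, sf_ir_plus_drug, p0=(0.2, 0.02))
    return dose_at_survival(a0, b0, sf) / dose_at_survival(a1, b1, sf)
```

A DEF greater than 1 computed this way corresponds to the radiosensitizing interpretation given in the Results.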
DNA repair assays.GFP reconstitution reporter cassettes for detection of HR and NHEJ have been previously reported (29,34).Plasmids containing HR or NHEJ reporter cassettes were linearized and transfected into cells to measure HR or NHEJ as a function of GFP expression.Transfections were performed using Lipofectamine 2000 (ThermoFisher Scientific 11668027).Cells with integrated reporter constructs were selected by adding 1 mg/mL geneticin (ThermoFisher Scientific 10131-035).HR or NHEJ cassette-expressing cells were treated with 1 μM AZD5153 for 72 hours, then transfected with a mixture of 5 μg ISceI-expressing plasmid and 2 μg pDsRed2-N1 (Clonetech 632406).Four days following transfection, cells were harvested, suspended in PBS, and placed on ice.Cells were then analyzed by FACS using LSRFortessa cell analyzer (BD Biosciences).Cells expressing GFP, pDsRed2-N1, or no fluorescent protein were used as calibration controls.Data were analyzed using FlowJo software.DNA repair efficiency was determined as a ratio of GFP + to DsRed + cells normalized to 100% of vehicle control (DMSO).
Comet assay. Cells were treated with 500 nM AZD5153 or 0.5% DMSO, followed by 4 Gy irradiation, and alkaline comet assays were performed (54). Briefly, 10,000 cells were resuspended in 75 μL of 0.5% (wt/vol) low-melting-point agarose and pipetted onto pre-coated slides.
Statistics. Survival plots were generated and analyzed using the Kaplan-Meier method and GraphPad Prism v9.5 software. Differences between survival plots were estimated using a log-rank test with Holm's adjustment. For other analyses, 1-way ANOVA was applied for multiple-group comparison with a post hoc Tukey's test, and a 2-tailed unpaired t test was used for comparisons between 2 groups using the Prism software.
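The survival comparison described here can also be reproduced outside Prism; the sketch below is our own, using the lifelines and statsmodels Python packages rather than the software named in the text, and assumes a data frame with columns time, event, and arm.

```python
import pandas as pd
from lifelines.statistics import logrank_test
from statsmodels.stats.multitest import multipletests

def pairwise_logrank_holm(df, groups):
    """Pairwise log-rank tests between treatment arms with Holm adjustment."""
    pairs, pvals = [], []
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a = df[df["arm"] == groups[i]]
            b = df[df["arm"] == groups[j]]
            res = logrank_test(a["time"], b["time"],
                               event_observed_A=a["event"], event_observed_B=b["event"])
            pairs.append((groups[i], groups[j]))
            pvals.append(res.p_value)
    adjusted = multipletests(pvals, method="holm")[1]  # Holm-adjusted p-values
    return pd.DataFrame({"pair": pairs, "p_raw": pvals, "p_holm": adjusted})
```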
Study approval.All animal protocols were approved by the Northwestern University Institutional Animal Care and Use Committee.
… Histology and Phenotypic Core, and NUSeq Core facilities. The initial drug screen was supported by a developmental research project awarded to OJB, RH, and MRC through the NU Brain SPORE award mechanism (5P50CA221747).
Address correspondence to: Oren J. Becher, Department of Pediatrics, Icahn School of Medicine at Mount Sinai, 1468 Madison Avenue, 4th Floor, New York, New York 10029, USA. Phone: 212.241.7022; Email: <EMAIL_ADDRESS>. Or to: Rintaro Hashizume, Department of Pediatrics, University of Alabama at Birmingham, 1670 University Boulevard, G019A, Birmingham, Alabama 35294, USA. Phone: 205.975.0285; Email: <EMAIL_ADDRESS>. … of data. All authors commented on the manuscript and approved the included data.
Figure 7. BET bromodomain inhibition altered genome-wide H3K27ac occupancy and transcription in DMG cells. CUT&RUN was performed using an H3K27ac antibody in SF8628 DMG cells treated with 1 μM AZD5153 or 0.5% DMSO for 48 hours. (A) Pie chart showing the distribution of H3K27ac across the DMG genome. (B) Heatmaps showing H3K27ac occupancy with DMSO versus AZD5153 treatment. Metaplots above indicate corresponding H3K27ac occupancy. Each plot is centered on the summit of the average occupancy and extended 5 kb upstream and downstream (-5 kb and +5 kb, respectively). Corresponding gene expression at the H3K27ac binding sites generated from RNA-Seq is shown to the right. (C) Gene annotation tracks showing H3K27ac occupancy and gene expression for the BRCA1 and RAD51 loci. The enhancer region is highlighted with a square for each gene.
Figure 9. BET bromodomain inhibition induced DNA damage and suppressed DNA repair in DMG cells. (A) Western blotting showing the effect of AZD5153 (0-10 μM) on expression of the DNA repair markers BRCA1, RAD51, and XRCC1, the DNA damage marker γH2AX, and H3K27ac. (B) Western blot showing effects of AZD5153 (5 μM) on expression changes over time after 6 Gy IR in SF8628 DMG cells. (C) DNA repair assay showing the effect of AZD5153 (500 μM) on HR and NHEJ pathways in SF8628 DMG cells. Flow plots represent fluorescent signals from HR and NHEJ reporter cassettes. Repair efficiency represents the ratio of GFP + to DsRed + cells normalized to 100% of vehicle control (0.5% DMSO). Values (mean ± SEM) shown are based on averages from quadruplicate samples. Unpaired t test values for comparisons between control and AZD5153 samples: ***P = 0.0004 (HR), ****P < 0.0001 (NHEJ).
Figure 11. Working model. (A) High-throughput drug screening. BET bromodomain inhibitors (BRDi) were identified as radiosensitizers using high-throughput drug screening in DMG cells treated with radiation. BRDi decreased H3K27ac occupancy at enhancer regions, which led to suppressed transcription of genes involved in DNA repair in DMG cells. (B) Epigenetic inhibition of DNA repair genes. BRDi disrupts the interaction between acetylated histones (Ac) and BRDs, inhibiting active transcription of the genes involved in radiation-induced DNA damage repair, which results in enhancement of the radiation effect in DMG. | 7,936.6 | 2024-05-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Preliminary Results in Innovative Solutions for Soil Carbon Estimation: Integrating Remote Sensing, Machine Learning, and Proximal Sensing Spectroscopy
: This paper explores the application and advantages of remote sensing, machine learning, and mid-infrared spectroscopy (MIR) as a popular proximal sensing spectroscopy tool in the estimation of soil organic carbon (SOC). It underscores the practical implications and benefits of the integrated approach combining machine learning, remote sensing, and proximal sensing for SOC estimation and prediction across a range of applications, including comprehensive soil health mapping and carbon credit assessment. These advanced technologies offer a promising pathway, reducing costs and resource utilization while improving the precision of SOC estimation. We conducted a comparative analysis between MIR-predicted SOC values and laboratory-measured SOC values using 36 soil samples. The results demonstrate a strong fit (R 2 = 0.83), underscoring the potential of this integrated approach. While acknowledging that our analysis is based on a limited sample size, these initial findings offer promise and serve as a foundation for future research. We will be providing updates when we obtain more data. Furthermore, this paper explores the potential for commercialising these technologies in Australia, with the aim of helping farmers harness the advantages of carbon markets. Based on our study’s findings, coupled with insights from the existing literature, we suggest that adopting this integrated SOC measurement approach could significantly benefit local economies, enhance farmers’ ability to monitor changes in soil health, and promote sustainable agricultural practices. These outcomes align with global climate change mitigation efforts. Furthermore, our study’s approach, supported by other research, offers a potential template for regions worldwide seeking similar solutions.
Introduction
Soil organic carbon (SOC) is a key component of the global carbon cycle [1][2][3] and stores more carbon than the atmosphere and biosphere combined [4]. Soil organic carbon (SOC) is not only composed of plant residues and animal wastes, but also includes organic matter that has been transformed by a diverse array of soil microorganisms, such as fungi and bacteria. While cyanobacteria are an exception, most of these organisms primarily utilise carbon derived from plants. They play a vital role in the soil ecosystem, contributing their living biomass and the products of decomposition to the SOC pool, thereby facilitating the recycling and modification of plant-originated carbon [5][6][7][8][9]. The global carbon balance is significantly affected by even small changes in SOC [3]. SOC profoundly affects the physical, chemical, and biological properties of the soil [7] and therefore has a direct contribution to agricultural productivity and soil fertility [8]. For example, increasing concentrations of SOC enhance the soil's water-holding capacity and promote the formation of a stable soil structure. Furthermore, the decomposition of SOC by soil microorganisms releases essential nutrients, which are then available for plant uptake. SOC also serves as a vital food source for these soil micro-organisms [9].
Despite widespread recognition of the importance of SOC, challenges remain in measuring and monitoring its concentration in soil [10,11]. Traditional methods of measuring SOC, such as wet chemical analysis and dry combustion [12,13], require extensive laboratory analysis of numerous soil samples, which is resource-intensive [14]. In addition, a limitation of the traditional approach is its inability to assess and monitor changes in SOC concentrations over spatio-temporal scales [15]. With advances in remote sensing techniques and machine learning, researchers have begun to explore integrated methods of SOC measurement [16][17][18][19][20]. These new methods of SOC estimation facilitate the development of spatial strata, thereby reducing the number of sampling profiles needed to produce high-resolution SOC maps and effectively reducing the cost of measuring SOC [16,21]. Additionally, researchers have used mid-infrared spectroscopy (MIR) as a popular proximal sensing tool for SOC prediction [22,23], and integrating MIR with remote sensing and machine learning provides further gains in resource efficiency.
The integration of techniques aims to overcome the limitations of traditional methods and provide cost-effective, accurate, and scalable solutions for SOC measurement [24,25]. Remote sensing allows for the collection of large-scale, spatially explicit data on soil properties, including SOC, by utilizing various sensors mounted on satellites or aircraft [20,25]. Machine learning algorithms, such as random forests and support vector machines, effectively analyze and interpret these data, enabling the prediction and mapping of SOC with high precision [26]. MIR spectra provide valuable information about the physical structure and chemical composition of soils, including SOC [27,28]. By analyzing MIR spectra collected from soil sample profiles, machine learning models can be trained to accurately predict SOC, eliminating the need for costly and time-consuming laboratory analysis [29]. These approaches not only reduce the cost of SOC measurement but also allow for rapid and non-destructive assessment of SOC over large areas [30].
To address the current limitations in accurately measuring SOC within Australian soils, we propose an innovative approach that combines remote sensing, machine learning, and MIR spectroscopy [16,23,27]. This integration aims to refine SOC estimation methods, achieving higher-resolution (10 m × 10 m) SOC maps at a reduced cost of AUD 3/ha/yr. The current carbon project in Australia will validate these two complementary SOC estimation methods, which have not yet been fully validated for Australian conditions.
In summary, this report aimed to shed light on the integration of remote sensing, machine learning, and MIR in SOC measurement. By addressing the limitations of traditional methods, these technologies offer resource-efficient and scalable solutions for assessing and monitoring SOC across spatio-temporal scales. The potential for commercial application in Australia, combined with the benefits for sustainable farming and climate change mitigation, makes the approach a promising avenue for future research and implementation. Embracing these novel approaches allows for a better understanding and management of SOC, contributing to sustainable agriculture and aligning with the Sustainable Development Goals (SDGs).
Remote Sensing and Soil Organic Carbon
Remote sensing and machine learning have emerged as important tools in modern environmental sciences and have shown great promise in the field of SOC measurement [20]. Remote sensing presents the advantage of collecting information for a wider spatial coverage and frequent temporal repeatability [31]. Various satellites such as the Landsat program [32,33], the Sentinel series [34,35] and the Moderate Resolution Imaging Spectroradiometer (MODIS) offer a wide range of spectral data that are being used for SOC estimation [19,36]. Spectral indices derived from these remote sensing data, such as the Normalised Difference Vegetation Index (NDVI) and the Soil-Adjusted Vegetation Index (SAVI), have been found to correlate with SOC concentration [20,37], as vegetation cover is often directly related to the SOC concentration [38]. Thus, these indices provide a very useful clue for predicting SOC concentrations in soils. In addition, multispectral and hyperspectral imaging has the ability to capture soil reflectance information that can be used to predict SOC [39]. These techniques not only provide data with high spatial resolution, but also provide more detailed information for the estimation of SOC concentrations. In summary, remote sensing techniques provide powerful tools for SOC concentration estimation through their broad spatial coverage and temporal repeatability, as well as the availability of multiple spectral data. These tools, such as NDVI, SAVI, and multispectral and hyperspectral imaging, not only contribute to the understanding of the distribution and variability of SOC in soil, but also have potentially important applications in sustainable land management and environmental protection.
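As a concrete illustration of the spectral indices mentioned above, the sketch below computes NDVI and SAVI from red and near-infrared surface reflectance; the band values and the soil-brightness factor L = 0.5 are illustrative assumptions rather than values used in this project.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness correction factor L."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Two hypothetical surface-reflectance pixels
print(ndvi([0.45, 0.30], [0.10, 0.20]))   # ~[0.64, 0.20]
print(savi([0.45, 0.30], [0.10, 0.20]))   # ~[0.50, 0.15]
```

In practice such indices would be computed per pixel from the relevant satellite bands and then supplied as covariates to the SOC prediction models discussed below.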
Remote sensing can provide valuable insights into SOC concentrations, but the accuracy of predictions depends on several factors, including the specific remote sensing technology, spatial resolution, and the use of robust machine learning models [17,40]. Ground validation is crucial to ensure the reliability of SOC estimates obtained through remote sensing. SOC concentration affects the spectral properties of the soil, and spectral reflectance at certain wavelengths of light can be correlated with SOC concentration. The variances among machine learning models also contribute to the discrimination of SOC values [41]. Machine learning algorithms, including regression models and deep learning models, can be trained on spectral data along with ground-truth SOC measurements. These models can then predict SOC concentrations across large areas based on the spectral information. The spatial resolution of remote sensing data is crucial: higher-resolution data can provide more accurate predictions of SOC concentrations at a finer scale [42]. For example, data with a spatial resolution of 10 m × 10 m can provide more detailed information compared to coarser-resolution data [43]. However, remote sensing also has a minimum detection limit (MDL) [44]. The MDL varies depending on the specific sensor and spectral bands used. It is essential to understand the MDL of the remote sensing system being employed to assess its suitability for a given application. To ensure the reliability of SOC predictions, it is essential to validate remote sensing-derived estimates with ground-based measurements of SOC [45]. This involves collecting soil samples from various locations and measuring SOC concentrations using laboratory methods. The remote sensing estimates can then be compared to these ground-truth measurements to assess accuracy. Remote sensing data can capture changes in SOC concentrations over time. Repeated data acquisition allows for monitoring changes in SOC due to factors such as land use, climate, and management practices [46,47].
Machine Learning in Soil Organic Carbon Measurement
Machine learning techniques have revolutionised the interpretation and analysis of large data sets [48]. In the context of SOC estimation, machine learning algorithms such as Random Forests [49][50][51], Support Vector Machines [51] and Artificial Neural Networks [52] have been used extensively. These algorithms can model complex and non-linear relationships between remotely sensed spectral data and SOC [51]. Recently, more advanced machine learning techniques such as deep learning have also been applied [16]. Convolutional Neural Networks (CNNs), a type of deep learning model, have demonstrated their ability to handle high-dimensional spectral data with increased precision of SOC prediction [53].
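To make the modelling step concrete, the following sketch fits a random forest regressor to a synthetic table of spectral features and reports a cross-validated R²; the feature set, sample size, and simulated SOC values are hypothetical stand-ins for real remote sensing covariates and laboratory measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 6))                 # e.g. NDVI, SAVI and band reflectances per sample
soc = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)   # synthetic SOC (%)

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, soc, cv=5, scoring="r2")
print("cross-validated R2:", scores.mean().round(2))

model.fit(X, soc)
print("feature importances:", model.feature_importances_.round(2))
```

The feature importances give a first indication of which spectral covariates drive the prediction, analogous to the variable-importance analyses reported in the literature cited above.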
Integration of Remote Sensing and Machine Learning
The synergy between remote sensing and machine learning leverages the distinct strengths of each technique [17]. Remote sensing provides a wide-reaching, high-resolution view of the Earth's surface, capturing valuable spectral data. Meanwhile, machine learning algorithms excel at processing vast datasets and extracting intricate patterns and insights. When combined, remote sensing data enrich machine learning models with comprehensive environmental information, enabling precise analysis and prediction [17]. The multiple, spatially extensive spectral data from remote sensing serve as input to the machine learning models, allowing the complex relationships underlying SOC estimation to be modelled [24]. This approach can reduce the cost of measuring SOC by reducing the number of sampling profiles required for SOC estimation, therefore providing a more cost-effective and scalable solution for large-scale SOC mapping [17,18,51,54].
Limitation of Remote Sensing and Machine Learning Model
Remote sensing and machine learning offer powerful tools for estimating SOC levels, but they are accompanied by several potential limitations and constraints [17]. One primary limitation is the spatial resolution, which may impede their ability to capture fine-scale variations in SOC, particularly in regions with complex terrain or significant land use changes. Additionally, the presence of cloud cover and atmospheric conditions can adversely affect the quality and availability of remote sensing data. Clouds can obstruct the view of the ground, while atmospheric conditions may introduce errors. Various data preprocessing and correction steps, such as atmospheric correction and land cover classification, can add uncertainty and require specialised expertise. The ground validation of SOC estimates poses a challenge, involving labor-intensive and costly soil sampling and laboratory analysis. Furthermore, inconsistencies can exist between different remote sensing sensors and data sources, and the cost of high-resolution remote sensing data may be a limiting factor in certain applications. Importantly, the performance of machine learning models hinges on the quality and quantity of training data, as well as the selection of features and algorithms. Inappropriate models or training data can lead to inaccurate estimates. Considering these challenges and limitations, it is imperative to complement remote sensing with ground-based observations to enhance the accuracy and reliability of SOC estimation.
Principle and Advantages of MIR in Soil Organic Carbon Prediction
MIR has shown high potential in the prediction of SOC due to its cost-effectiveness and high predictive accuracy. MIR is a spectroscopic method that involves the use of mid-infrared light, typically in the range of 2.5-25 µm. The interaction of MIR light with the molecules in the soil sample causes the molecules to vibrate at specific frequencies, generating a unique spectral pattern or "fingerprint" [28,55]. While MIR remote sensing has a longer wavelength compared to traditional optical remote sensing, it is important to note that ground penetration capabilities depend on the wavelength and soil conditions. Generally, the penetration depth is about half the wavelength. Therefore, while MIR can provide some level of subsurface information, its penetration depth may not be as significant as that of longer wavelengths, such as those in the microwave range. This spectral information is used to predict various soil properties, including SOC concentration [56] and bulk density [57], and hence the estimation of SOC stocks (SOC concentration × bulk density × soil depth). There are numerous advantages of using MIR to estimate SOC concentrations. Firstly, it is a rapid and non-destructive method [30] and requires minimal sample preparation [22,58]. Secondly, it is capable of providing continuous and real-time measurements, which can be particularly useful in monitoring changes in SOC over time [57,59]. Thirdly, MIR can analyse multiple soil properties simultaneously [58,60], making it a versatile tool for soil analysis.
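The SOC stock relation quoted above (SOC concentration × bulk density × soil depth) can be written as a small helper function; the unit conversion to t C/ha and the example inputs below are illustrative assumptions.

```python
def soc_stock_t_per_ha(soc_percent, bulk_density_g_cm3, depth_cm):
    """SOC stock (t C/ha) = SOC concentration x bulk density x layer depth.

    soc_percent: SOC concentration in % by mass
    bulk_density_g_cm3: soil bulk density in g/cm^3
    depth_cm: thickness of the soil layer in cm
    """
    # (% / 100) * (g/cm^3) * cm gives g C per cm^2; multiplying by 100 converts to t C per ha
    return soc_percent / 100.0 * bulk_density_g_cm3 * depth_cm * 100.0

# e.g. 1.5 % SOC, bulk density 1.3 g/cm^3, 0-30 cm layer -> 58.5 t C/ha
print(soc_stock_t_per_ha(1.5, 1.3, 30))
```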
MIR and Machine Learning in Soil Organic Carbon Prediction
The spectral data derived from MIR are typically high-dimensional and complex. Therefore, advanced statistical or machine learning models are required to extract meaningful information from the spectral data and predict the SOC concentration [61]. Partial Least Squares Regression (PLSR), Support Vector Machines (SVM), and Artificial Neural Networks (ANN) are commonly used for modelling the complex and non-linear relationships between MIR spectra and SOC concentration [62]. Deep learning techniques such as Convolutional Neural Networks (CNNs) have also been applied to MIR data for SOC prediction [60]. These methods have improved the accuracy and precision of SOC prediction compared with traditional machine learning methods [63]. This demonstrates the potential of applying MIR in combination with advanced machine learning techniques for SOC concentration and stock prediction.
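A minimal sketch of the PLSR step described above is given below, using scikit-learn on simulated MIR spectra; the spectrum dimensions, number of latent components, and synthetic SOC values are assumptions for illustration only and do not reproduce the cited models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Hypothetical MIR spectra: 120 samples x 1,000 wavenumbers, plus synthetic SOC values
spectra = rng.normal(size=(120, 1000))
soc = spectra[:, 100:110].mean(axis=1) * 3.0 + 1.5 + rng.normal(0, 0.05, 120)

pls = PLSRegression(n_components=8)
soc_pred = cross_val_predict(pls, spectra, soc, cv=10).ravel()
print("10-fold CV R2:", round(r2_score(soc, soc_pred), 2))
```

The number of latent components is the main tuning parameter of PLSR and is usually chosen by cross-validation, as discussed later for the pre-trained model used in this study.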
Integration of MIR, Remote Sensing, and Machine Learning
The combination of MIR, remote sensing and machine learning techniques provides a comprehensive approach to SOC prediction and mapping. MIR can provide highly accurate, local-scale SOC predictions [30], while remote sensing data can provide broader, landscape-scale information [17]. Machine learning techniques, on the other hand, can integrate these different types of data and handle their complex relationships to provide more accurate and spatially comprehensive SOC predictions [54]. Overall, the approach offers a rapid, precise, and low-resource pathway to SOC estimation.
To delve deeper into the integration of these methods, let us examine some successful research examples. Forkuor et al. (2017) [64] conducted high-resolution SOC measurements of soil samples using MIR. Subsequently, they employed satellite remote sensing data to acquire information about land cover and vegetation indices in the corresponding regions [64]. Finally, they applied machine learning algorithms, such as Random Forest, to amalgamate these datasets and generate high-resolution SOC maps. Similarly, Tziolas et al. (2020) [65] adopted a comprehensive approach, combining ground-based MIR measurements with satellite remote sensing data to predict SOC in different regions. They harnessed deep learning techniques, including Convolutional Neural Networks (CNNs), to process remote sensing imagery while leveraging the high accuracy of MIR for calibration and improved model precision [17,66]. This integrated approach not only aids in precise SOC estimation but can also be employed for decision support in sustainable land management [66]. For instance, agricultural sectors can utilize these high-resolution SOC maps to guide land-use planning, thereby enhancing crop productivity, reducing soil erosion, and promoting sustainable agricultural practices. By delving into these case studies of integrated methods, we gain a clearer understanding of how they combine MIR, remote sensing, and machine learning to achieve SOC prediction and mapping, offering valuable insights and inspiration for future research.
Limitation of MIR
MIR holds significant promise in predicting SOC due to its cost-effectiveness and high predictive accuracy. However, along with its advantages, it is important to recognise several potential limitations in its application [22,23]. MIR spectral data can be complex and high-dimensional, necessitating advanced statistical or machine learning models for accurate SOC prediction. The technique's accuracy is highly dependent on sample quality, microscale representation, and soil water content. MIR often involves near-distance soil sample collection, limiting its application in remote or challenging-to-access areas. Lastly, costs associated with instrument acquisition and sample analysis should not be overlooked.
To address these limitations, the integration of remote sensing and machine learning techniques, alongside ground validation, offers a comprehensive approach to SOC prediction and mapping. This combined approach can yield precise and cost-effective results, making it valuable for various applications in soil science and environmental management.
The Application of Integrating Remote Sensing, Machine Learning, and MIR Techniques to the Precise Assessment of Soil Organic Carbon in Australia
In this section, we present an in-depth overview of a pivotal project aimed at optimising the measurement of SOC concentrations and the estimation of SOC stocks in Australia. The project is funded by the Commonwealth Department of Industry, Science, Energy and Resources (Australia) and constitutes a collaborative effort involving multiple scientific disciplines. Key partners in this endeavour include the University of Queensland, FarmLab, AgriCircle, the University of Aberdeen, and Ziltek (Figure 1). The overarching objective of this initiative was to substantially reduce the cost associated with SOC measurement while concurrently enhancing the accuracy and efficiency of estimating SOC in Australian soils. To achieve this, the project deploys an innovative fusion of remote sensing, machine learning, and MIR. By synergising remote sensing with advanced machine learning algorithms, the project aims to refine the stratification of carbon estimation zones, ultimately reducing the number of sampling points needed to generate high-resolution SOC maps. Moreover, the integration of MIR with the remote sensing and machine learning methodology provides a more cost-effective alternative to traditional laboratory analysis, thereby further reducing the overall analysis expenditure, including the effort of scientific manpower. This approach empowers stakeholders to make more accurate predictions regarding SOC concentration and stock in soil.
In addition, we have conducted a comprehensive examination of the uncertainty of SOC estimation in Australia, as depicted in Figure 2. This figure provides a striking visual representation of the prevailing challenge, portraying an uncertainty map of SOC for the country, with varying colour intensities denoting the uncertainty level. Notably, the map uncovers alarmingly high uncertainty levels across Australia, particularly in the central and western regions. Given this pressing international context and the critical need to address these challenges, our research aims not only to improve the accuracy of SOC estimation but also to significantly reduce the associated costs and resources.
Precision sampling was used in this project. AgriCircle, one of the participating units (Figure 3), has developed a novel methodology that utilises multi-dimensional statistical methods to process remote sensing data and recommends representative sampling locations, which are then combined with the outputs of the machine learning models to increase the predictive accuracy of SOC estimation. This method primarily utilises multispectral (MS) and synthetic aperture radar (SAR) images from the Copernicus mission, predominantly collected during periods of sparse vegetation in agricultural fields, capturing high-resolution satellite imagery containing soil-related data [67]. The central focus of the research lies in predicting the spatial distribution of soil zoning and topsoil properties, such as Soil Organic Matter (SOM), within the agricultural fields using a random forest algorithm. To achieve this goal, a comprehensive survey was conducted on samples collected from 120 different fields. Following model training, the prediction accuracy for SOM was 83%, supported by a high level of agreement with observations made by farmers [67]. This approach significantly reduces the number of required sampling points compared to traditional methods while maintaining high prediction accuracy. The optimisation and validation of this methodology will be conducted based on the results obtained from extensive site identification and sampling activities across different regions in Australia.
Ziltek, another participating unit, specialises in MIR and has developed custom calibrations for SOC prediction (Figure 4). The current SOC model is built exclusively on South Australian agricultural soils. All of these samples are surface measurements (0-30 cm) and were analysed using the Walkley-Black method, which differs significantly from the dry combustion method used in the current study. As such, we expect there to be some systematic differences in the predictions. The calibrations have shown promising precision levels in initial testing. However, further data collection and calibration are necessary to test a broader range of soils and land uses, ensuring the applicability and robustness of the MIR method under Australian conditions.
Furthermore, the project also includes activities related to scaling and commercialising the measurement services developed. AgriCircle will collaborate with FarmLab and other partners, such as Nestlé, to build an open platform that connects farmers with carbon schemes. This platform will facilitate data collection, credit allocation, and the engagement of landholders in SOC monitoring. Throughout the project, the University of Queensland and the University of Aberdeen provide expertise in project management, the validation of methodologies and platforms, and the communication of project findings to the scientific community. Overall, this project represents a significant effort to optimise SOC measurement in Australia by leveraging the potential of remote sensing, machine learning, and MIR (Figure 3). By reducing measurement costs and technical manpower requirements, improving prediction accuracy, and facilitating the integration of farmers into carbon schemes, the project aims to enhance knowledge of Australian soils and improve their productivity.
In addition, the project will identify two SOC estimation areas per site at 430 locations spanning Western Australia (WA), South Australia (SA), Tasmania (TAS), Victoria (VIC), New South Wales (NSW), and Queensland (QLD). These sites will encompass major soil types found in cropping and grazing regions. At each site, we will provide landholders with the opportunity to assess SOC levels in two distinct areas characterised by different land management practices or land uses. Figure 5 demonstrates the process of soil sample collection during the course of the project. This approach allows us to examine the impact of varying management strategies on SOC and serves as an effective tool to engage landholders and promote active SOC monitoring.
Preliminary Results and Discussion
A total of 36 samples were collected from two distinct soil depth intervals (0-30 cm and 30-60 cm) to capture a representative profile of the site's SOC variability. The sampling locations, as depicted in Figure 6, were chosen based on the precision sampling method by AgriCircle to ensure comprehensive coverage of the geographical heterogeneity present at the site. We have conducted a detailed analysis of soil samples from the Turretfield research site (farm) using MIR. All samples were promptly scanned in situ using the RemScan (a portable MIR spectrometer manufactured by Ziltek, Adelaide, Australia), which offers a spectral range of 2 µm to 13 µm at a resolution of 8 cm⁻¹, providing immediate spectral data indicative of SOC content. Subsequently, these samples were also sent to an accredited laboratory for SOC analysis using the dry combustion method, which is a standard procedure for SOC quantification due to its accuracy and reliability. This dual approach allows for a robust comparison between rapid, field-based MIR readings and conventional laboratory measurements.
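The agreement between field MIR predictions and laboratory dry combustion values can be summarised with standard statistics, as in the sketch below; the paired values shown are hypothetical placeholders, not the project's 36 measurements.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical paired values (% SOC): laboratory dry combustion vs RemScan MIR predictions
lab_soc = np.array([1.10, 0.85, 1.60, 2.05, 0.60, 1.30])
mir_soc = np.array([1.00, 0.90, 1.50, 1.90, 0.70, 1.45])

slope, intercept = np.polyfit(lab_soc, mir_soc, 1)    # line of best fit
r2 = r2_score(lab_soc, mir_soc)
rmse = np.sqrt(mean_squared_error(lab_soc, mir_soc))
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```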
In Figure 7, we present the comparison between RemScan-predicted SOC values and laboratory-measured SOC values. The line of best fit is also overlaid and indicates a good fit (R² = 0.83). Table 1 shows that SOC density varies with depth: higher (1.44-2.15 g/cm³) at 0-30 cm with more variation, and lower (0.767-1.36 g/cm³) at 30-60 cm with less variation. We want to clarify that the PLSR model used in our study is a pre-trained model. However, we applied additional steps to optimize its performance, as outlined below. The pre-trained model was constructed using an agronomy database consisting of approximately 246 training samples and 106 testing samples. The division of data into training and testing sets was carried out using the Kennard-Stone algorithm. Here, we need to point out that the current preliminary results do not separate samples based on depth. The current SOC model is built exclusively on South Australian agricultural soils. All of these samples are surface measurements (0-30 cm) and were analysed using the Walkley-Black method, which is significantly different from the dry combustion method used in the current study. As such, we expect there to be some systematic differences in the predictions.
With this in mind, the Supplementary Materials include some additional information about the Ziltek Walkley-Black SA SOC model currently used for SOC predictions (R² = 0.91).
To enhance the model's predictive accuracy, we applied a linear baseline correction to remove any underlying trends in the data. The training data were then subjected to a leave-one-out cross-validation approach within the PLSR framework. This iterative process helped us evaluate the model's performance and select the optimal configuration that minimizes the mean square error.
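A minimal sketch of the preprocessing and model selection procedure described above (linear baseline correction followed by leave-one-out cross-validation within the PLSR framework) is given below; the helper names and the maximum number of components are assumptions, and the exact pre-trained Ziltek model is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_squared_error

def remove_linear_baseline(spectra):
    """Subtract a straight line fitted to each spectrum (linear baseline correction)."""
    x = np.arange(spectra.shape[1])
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(x, s, 1)
        corrected[i] = s - (slope * x + intercept)
    return corrected

def select_components(spectra, soc, max_components=15):
    """Pick the PLSR component count that minimises the leave-one-out mean squared error."""
    X = remove_linear_baseline(spectra)
    errors = {}
    for n in range(1, max_components + 1):
        pred = cross_val_predict(PLSRegression(n_components=n), X, soc,
                                 cv=LeaveOneOut()).ravel()
        errors[n] = mean_squared_error(soc, pred)
    return min(errors, key=errors.get), errors
```

The function returns the component count with the smallest cross-validated error together with the full error curve, which mirrors the selection criterion (minimum mean square error) stated above.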
Figure 8 illustrates the dominant SOC region employed by the PLSR model for predicting SOC concentrations. This information is derived from the Variable Importance in Projection (VIP) scores. The VIP scores represent the importance of each variable (wavelength) in the model: higher VIP scores indicate variables that contribute significantly to the prediction, while lower scores are less influential. To optimise the performance of our PLS regression model, we employed leave-one-out cross-validation, ensuring that each data point contributed to validating the model's accuracy. The VIP scores presented in Figure 8 highlight the variables that play a crucial role in the model, thus providing insights into the key drivers of SOC variability.
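For readers who wish to reproduce the VIP calculation, the following sketch computes VIP scores from a fitted scikit-learn PLSRegression model using the standard formula; it assumes a single response variable and is not the exact implementation used for Figure 8.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    """Variable Importance in Projection for a fitted scikit-learn PLSRegression model."""
    t = pls.x_scores_            # (n_samples, n_components) latent scores
    w = pls.x_weights_           # (n_features, n_components) weights
    q = pls.y_loadings_          # (n_targets, n_components) response loadings
    p = w.shape[0]
    # Variance in y explained by each latent component
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Wavelengths with VIP > 1 are conventionally treated as influential predictors.
```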
Commercialization and Potential Application in Australia
The commercialization of SOC measurement technology based on an integrated methodology approach holds substantial promise for Australian agriculture, particularly in the context of carbon markets and sustainable farming. This promise stems from several key factors. Firstly, precise SOC measurement is essential for participation in carbon markets, allowing farmers to quantify and verify their carbon sequestration efforts, potentially increasing their revenue through carbon credits or offsets. Secondly, integrated SOC measurement aids in implementing sustainable farming practices by providing accurate assessments of SOC concentration and stock and its spatial distribution. This informs decisions on soil management, crop rotation, and organic matter turnover, resulting in more sustainable and productive agriculture. Additionally, it helps farmers comply with environmental regulations and demonstrates their commitment to environmental stewardship. Furthermore, the commercialization of this technology can drive research and development, fostering innovations in remote sensing, machine learning, and data analytics.
These innovations benefit not only Australian agriculture but also global climate change mitigation efforts. Finally, the integrated approach's economic viability makes it accessible to a wide range of farmers, promoting its adoption and supporting the transition to sustainable and carbon-conscious agricultural practices. In essence, the commercialization of integrated SOC measurement technology aligns with carbon markets, enhances sustainability, ensures compliance, fosters innovation, and offers cost-effective solutions, benefiting both individual farmers and broader environmental and economic goals. By integrating remote sensing, machine learning, and MIR, the cost of SOC measurement can be significantly reduced, making it more accessible to farmers and potentially revolutionizing carbon trading markets.
Decreasing the Cost of Carbon Monitoring
The decreased cost of carbon monitoring and a systematic approach to measuring SOC over time are essential for Australian farmers [68]. By optimizing AgriCircle's precision sampling techniques across a broad spectrum of soil types and land uses in Australia, and coupling this with MIR, the cost of SOC measurement can be decreased significantly [67]. Precision sampling and MIR, once calibrated for a site, can also significantly reduce the cost of sampling over time, thus enabling more frequent monitoring at a significantly reduced cost.
Facilitating Entry to Carbon Markets
This technology also has the potential to decrease barriers to farmer entry into carbon markets and increase economic opportunities by building confidence among farmers with science-based technological information [69]. With further validation and testing to reach a high Technology Readiness Level (TRL), the plan is to run trials with carbon developers and large food and beverage corporations like Nestlé Australia. Similar projects in Europe are in advanced stages, with Nestlé compensating farmers for carbon sequestered in their soils [70]. There is a clear opportunity to adapt this platform to the Australian emissions trading system, utilising cost-effective soil sampling solutions to help farmers obtain credits while assisting companies in meeting their sustainability targets.
Increasing Farmer Engagement
SOC measurement technology can also drive farmer engagement and interest in SOC measurement with less effort and fewer technological requirements, building high confidence. As part of the project, farmers will be encouraged to test soils from contrasting management and land uses on their farms, exploring the impact on SOC stocks [71]. This will engage their interest in SOC measurement and drive demand for tools to monitor soil health and participate in carbon farming initiatives [72], thereby strengthening their economic capabilities.
Economic Stimulation of Regional Communities
Regional communities across Australia stand to benefit economically from the soil sampling needed for this project. Contracting regional companies for soil sampling creates employment opportunities, thus stimulating regional economies. The project sites will also serve as demonstration plots, increasing opportunities for technical training for farmers and strengthening their knowledge and ability to protect stored SOC [73]. In addition, by measuring SOC stocks in diverse climatic regions, soil types, and land uses, this project will contribute valuable data to national soil databases, aligning with the National Soils Strategy and providing long-term public benefits [74].
Formation of Collaborative Partnerships
The project fosters collaborative partnerships between research organizations, agricultural solutions companies, industry suppliers of carbon measurement services, companies seeking to purchase carbon credits, and farmers. This broad collaboration will ensure the development of scientifically rigorous, innovative, and practical measurement solutions that are well designed and will have high acceptance by their intended end users. It offers a cost-effective solution for carbon monitoring, facilitating entry to carbon markets, fostering sustainable farming, stimulating regional economies, and encouraging collaborative efforts in the agricultural sector. In short, the commercial application of integrated, technology-based SOC measurement has significant potential in Australia.
Conclusions and Outlook
The integration of remote sensing, machine learning, and MIR represents a promising approach to measuring SOC concentration and estimating SOC stocks with improved precision and reduced costs. This triad of advanced technologies offers cost-effective, rapid, and scalable solutions, addressing the need for accurate SOC measurement techniques as we strive towards sustainable agricultural practices. In Australia, the potential of these technologies has already been demonstrated preliminarily, with optimized precision sampling techniques and the utilization of MIR showing significant reductions in SOC measurement costs for farmers. Moreover, these technologies enable more frequent and precise monitoring of SOC, allowing farmers to effectively track changes in soil health over time. This integration not only enhances our understanding of SOC dynamics but also promotes sustainable land management practices, contributing to both environmental stewardship and agricultural productivity. However, in this paper, we present an analysis of 36 sample datasets, which should be considered preliminary. As more data become available, we intend to update our findings accordingly. At this stage, the limited sample size precludes the development of distinct models for varying depths and does not yet support a detailed per-farm analysis.
Preliminary analysis of the data and the project proposal show that the commercialisation of SOC measurement technologies can lower barriers for farmers to enter carbon markets and create new economic opportunities by building confidence in a technology-based measurement technique. Initial collaborations with carbon developers and large corporations have shown a strong interest in these technologies, indicating promising prospects for their application within Australia's emissions trading system. The regional economic stimulation resulting from employment opportunities and the incorporation of valuable data into national soil databases further highlights the wide-ranging benefits of these technologies. By fostering collaborations among research organisations, agricultural solutions companies, farmers, and carbon credit purchasers, innovative and practical SOC measurement solutions can be developed. Ultimately, the integration of remote sensing, machine learning, and MIR holds great potential to revolutionise SOC measurement, drive sustainable farming practices, enhance economic opportunities, and contribute to the global fight against climate change.
Figure 1. Project participating units and designated tasks.
Figure 3. Project process flow for SOC estimation and monitoring with integration of farmers.
Figure 5. Schematic diagram of the main process of soil organic carbon sample collection and analysis.
Figure 6. The location of the MIR-scanned samples overlaid on the geographical map of the site. The RemScan-predicted SOC values are also reported for each measurement. Left (a): measurements for the top layer (0-30 cm); right (b): measurements for the bottom layer (30-60 cm). The yellow dots in the figure represent the sampling points. The numbers represent the organic carbon content values for each point. In addition, due to the small font size of the black values, we purposely enlarged them using blue font.
Figure 7. Comparison between RemScan-predicted and laboratory-measured SOC values, with the line of best fit overlaid (R² = 0.83).
Figure 8. MIR spectrum of the sample with the largest lab (dry combustion)-measured SOC concentration. The two bands indicate the regions used by the PLS model to predict the SOC concentration; the blue highlighted regions in the spectrum indicate the dominant SOC region.
Table 1. SOC content statistics for the 36 samples. | 9,955.4 | 2023-11-30T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Bi-Force: large-scale bicluster editing and its application to gene expression data biclustering
Abstract The explosion of biological data has dramatically reformed today's biological research. The need to integrate and analyze high-dimensional biological data on a large scale is driving the development of novel bioinformatics approaches. Biclustering, also known as ‘simultaneous clustering’ or ‘co-clustering’, has been successfully utilized to discover local patterns in gene expression data and similar biomedical data types. Here, we contribute a new heuristic: ‘Bi-Force’. It is based on the weighted bicluster editing model and performs biclustering on arbitrary sets of biological entities, given any kind of pairwise similarities. We first evaluated the power of Bi-Force to solve dedicated bicluster editing problems by comparing Bi-Force with two existing algorithms in the BiCluE software package. We then followed a biclustering evaluation protocol in a recent review paper from Eren et al. (2013) (A comparative analysis of biclustering algorithms for gene expression data. Brief. Bioinform., 14:279–292.) and compared Bi-Force against eight existing tools: FABIA, QUBIC, Cheng and Church, Plaid, BiMax, Spectral, xMOTIFs and ISA. To this end, a suite of synthetic datasets as well as nine large gene expression datasets from Gene Expression Omnibus were analyzed. All resulting biclusters were subsequently investigated by Gene Ontology enrichment analysis to evaluate their biological relevance. The distinct theoretical foundation of Bi-Force (bicluster editing) is more powerful than strict biclustering. We thus outperformed existing tools with Bi-Force, at least when following the evaluation protocols from Eren et al. Bi-Force is implemented in Java and integrated into the open source software package BiCluE. The software as well as all used datasets are publicly available at http://biclue.mpi-inf.mpg.de.
Bi-Force only requires the threshold used to model the edges in the bipartite graphs generated from the matrices. For each synthetic data set, we tried 10 thresholds, from (19/20)e to (1/2)e, where e is the difference between the maximum and the minimum values in the matrix, decreasing by (1/20)e each time. The thresholds with the best performance were then used. For the expression data sets, where no gold-standard result is present, t0 was set to (9/10)e. For the Cheng and Church algorithm, δ controls the maximum variance in the biclusters and α regulates the speed of the algorithm. We implemented a grid-search strategy with δ ranging from 0.1 to 2.5 and α from 1.5 to 3. Based on the performance, we chose δ = 0.1 and α = 1.5 for the synthetic data sets. However, for the gene expression data sets, which are much larger, δ = 0.1 seems to be over-stringent, so we took an empirically beneficial value of δ = e/2000. α was decreased from 1.5 to 1.1 to avoid over-slimming the biclusters.
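The threshold sweep for Bi-Force and the Cheng and Church grid search described above can be enumerated as follows; the step sizes used for δ and α are assumptions, since only the ranges are stated in the text.

```python
import numpy as np

def biforce_thresholds(matrix):
    """Candidate edge thresholds: 19/20*e down to 1/2*e in steps of 1/20*e,
    where e is the range (max - min) of the data matrix."""
    e = matrix.max() - matrix.min()
    return [k / 20.0 * e for k in range(19, 9, -1)]   # 10 thresholds

def cheng_church_grid(deltas=np.arange(0.1, 2.6, 0.2), alphas=np.arange(1.5, 3.1, 0.5)):
    """Enumerate (delta, alpha) pairs for a grid search over the Cheng & Church parameters."""
    return [(round(d, 2), round(a, 2)) for d in deltas for a in alphas]

m = np.random.default_rng(0).random((50, 100))
print(len(biforce_thresholds(m)), "thresholds;", len(cheng_church_grid()), "grid points")
```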
For Bimax and Spectral which require minimum row and column sizes, a grid-search was conducted in the ranges from 2 to 20 for both rows and columns and finally 10 was chosen to be the minimum sizes for rows and columns in a bicluster. Moreover, for Spectral algorithm, we compared the performances of different normalization methods and finally chose "logarithmic normalization" for both synthetic and gene expression data sets.
Two important parameters largely influenced the performance of QUBIC: the range of the possible ranks r and the percentage of regulating conditions for each gene q. As suggested by the authors, we conducted a grid search for r, starting from a relatively small value of 1 and going up to half the number of columns in the matrix (100). For q, we set the range from 0.02 to 0.08, centred on the default value of 0.06. We found that the default values (1 for r and 0.06 for q) worked best on the synthetic data sets. The values of both parameters were kept the same for the gene expression data sets.
Note that two algorithms (Bimax and xMOTIFS) require discretized data, thus the input matrices were all binarized into 0s and 1s, using the means of the corresponding matrices as thresholds.
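A minimal sketch of this binarisation step is shown below; whether values exactly equal to the mean map to 0 or 1 is an assumption, as the text only specifies the mean as the threshold.

```python
import numpy as np

def binarize_by_mean(matrix):
    """Binarise an expression matrix using its mean as the threshold (for BiMax / xMOTIFs input)."""
    matrix = np.asarray(matrix, dtype=float)
    return (matrix > matrix.mean()).astype(int)

m = np.array([[0.2, 1.8], [0.9, 1.1]])
print(binarize_by_mean(m))   # mean = 1.0 -> [[0, 1], [0, 1]]
```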
For the ISA algorithm, we tested different numbers of seeds, from 100 to 400, and chose 200 seeds for the synthetic data. For the gene expression data sets, which are expected to be more complex, we increased the number of seeds to 400.
SUPPLEMENTARY FILE 2.
FABIA performed best on the constant-upregulated data sets, followed by its performance on the shift-scale and plaid data sets. For the other data sets, less than half of the real biclusters were found, because FABIA is optimized to perform better if the distribution of the data set is highly asymmetric. If the values in the data sets are symmetrically distributed or have a Gaussian-like distribution, the performance of FABIA suffers. QUBIC recovered most of the constant-upregulated biclusters. It also successfully recovered part of the biclusters of the scale, shift-scale and plaid data sets.
The Cheng and Church algorithm was expected to find biclusters with low mean square residues. It performed well on the constant data set. With over 80% of the pre-defined biclusters recovered it is the best among the nine algorithms on the constant model. However, for all the other models with data shifted from the background, the qualities of the results of Cheng and Church decrease significantly.
Plaid successfully identified most of the biclusters within constant-upregulated, shift, shift-scale, scale and plaid model. It achieved recovery and relevance scores almost as good as Bi-Force. However, no bicluster was found for the constant model, indicating a poor performance of Plaid to extract constant biclusters.
BiMax bi-discretizes the data elements in the matrix by using a given threshold. This over-simplifies many scenarios. Thus BiMax performed well only on constant-upregulated data where biclusters were largely shifted away from the background. For all the other models, BiMax's performances were relatively poor.
Similarly, xMOTIFs discretizes the data and thus only biclusters for the constant model were well recovered.
Spectral clustering, though the fastest tool, has a comparatively weak overall performance. Even for the constant-upregulated data sets, only about 60% of the true biclusters were recovered.
ISA recovered most of the biclusters in all the models but constant model. However, ISA generated a number of redundant biclusters that lowered overall relevance scores. A post-running filter merging the highly overlapping biclusters might be beneficial for ISA. | 1,450.4 | 2014-03-20T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Full Information H2 Control of Borel-Measurable Markov Jump Systems with Multiplicative Noises
This paper addresses an H2 optimal control problem for a class of discrete-time stochastic systems with Markov jump parameter and multiplicative noises. The involved Markov jump parameter is a uniform ergodic Markov chain taking values in a Borel-measurable set. In the presence of exogenous white noise disturbance, Gramian characterization is derived for the H2 norm, which quantifies the stationary variance of output response for the considered systems. Moreover, under the condition that full information of the system state is accessible to measurement, an H2 dynamic optimal control problem is shown to be solved by a zero-order stabilizing feedback controller, which can be represented in terms of the stabilizing solution to a set of coupled stochastic algebraic Riccati equations. Finally, an iterative algorithm is provided to get the approximate solution of the obtained Riccati equations, and a numerical example illustrates the effectiveness of the proposed algorithm.
Introduction
Markov jump systems belong to a class of multimodal stochastic dynamical models with regime switching governed by a Markov chain, and have found widespread applications ranging from manipulator robots [1], the biology of viruses [2] and portfolio selection [3] to transmission control of wired/wireless networks [4]. According to the number of states included in the state space of the Markov chain, finite and infinite Markov jump systems are classified and investigated, respectively. After half a century of research, the control theory for finite Markov jump systems has reached a remarkable degree of maturity, covering stability analysis [5], observability and detectability [6], linear quadratic (LQ) optimal control [7], filter design [8] and finite-time control [9]. When the state space of the Markov chain is expanded to an infinite set, some properties arise that are essentially different from the finite case. For example, it was pointed out in [10] that asymptotic mean square stability, stochastic L2-stability and exponential mean square stability are no longer equivalent for countably infinite Markov jump systems, while these notions were proven in [11] to be equivalent for finite Markov jump systems. The technical difficulty of handling infinite Markov jumps was also exhibited in [12], which is concerned with the stability of a class of stochastic time-delay differential dynamic systems with infinite Markovian switchings. Besides the intrinsic theoretical merits, it is often more appropriate to characterize real scenarios by an infinite state space. An illustrative example is presented in [13], where the change of atmospheric conditions is modeled as a Markov chain taking values in a Borel-measurable set. By now, increasing interest has been attracted to the control issues of Markov jump systems with a Markov chain taking values in a Borel set [14][15][16]. However, compared to the existing literature on finite Markov jump systems, there remain many gaps in the study of countably-infinite/Borel-measurable Markov jump systems, which deserve more attention.
H 2 control, based on the seminal work on state-space descriptions [17], has been at the core of robust control theory. In contrast to H ∞ control (another popular robust control approach), H 2 control aims to minimize the influence on the output response of an exogenous additive white noise with known distribution, whereas the H ∞ method deals with disturbance attenuation in the presence of a random disturbance whose statistical law is unknown but whose total energy is finite. For finite Markov jump systems, the H 2 control problem was elaborately addressed in [18][19][20] and the references therein. In a recent paper [21], an H 2 control problem was studied for discrete-time Markov jump systems where the Markov chain has a Borel state space. Based on the accessible information of the output variable, the optimal H 2 controller is obtained via an optimal filter, which produces a separation principle for the concerned dynamics. Different from the problem formulation of [21], this study focuses on a class of discrete-time stochastic systems subject to both a Borel-measurable Markov jump parameter and multiplicative noises. The importance of studying this type of system lies in the following two aspects. On one hand, it is recognized that multiplicative noise aptly depicts the stochastic fluctuation of physical parameters caused by an uncertain environment; on the other hand, a Markov chain with a Borel-measurable state space can provide substantial benefit in real applications, e.g., the networked control systems analyzed in [22]. Besides, the information of the system state is assumed to be fully accessible to measurement, and the optimal H 2 controller will be selected from the admissible control set with a prescribed dynamic structure but unfixed dimensions so as to minimize the H 2 norm of the resulting closed-loop system.
The main contributions of this paper are as follows. Firstly, we present the characterization of the H 2 norm in terms of controllability and observability Gramians, which quantify the stationary variance of the perturbed output response. This result can be regarded as an extension of Proposition 3.1 of [21] to the considered systems. It is remarkable that this formula is not only necessary for handling the considered H 2 optimal control problem, but also paves the way for further studying H 2 filtering for the concerned systems. Different from the method used in [21], our technique takes full advantage of internal stability and avoids analyzing the asymptotic behavior of the covariance of the augmented state variable, which is generally very challenging, especially considering that the model studied here is more complex than that of [21]. Secondly, among all n c -dimensional dynamic stabilizing controllers, the optimal H 2 control strategy is achieved by a zero-order controller with feedback gain determined by the stabilizing solution to a set of coupled stochastic algebraic Riccati equations. This control design allows off-line computation and hence can be readily realized in practice. In particular, the obtained Riccati equations involve integrals over a continuous interval, which makes them completely different from the Riccati equations associated with finite Markov jump systems. Hence, an iterative algorithm is proposed to seek the numerical solution of these coupled Riccati equations, which can approximate the exact solution to any prescribed accuracy.
The rest of this paper is organized as follows. In Section 2, we give some preliminaries and then derive the Gramian representation of the H 2 norm for the considered Borel-measurable Markov jump systems. Section 3 proceeds with the discussion of the H 2 optimal control problem. In Section 4, a numerical algorithm is proposed for solving the coupled stochastic algebraic Riccati equations. Section 5 ends this paper with a concluding remark.
The notations adopted in this paper are standard. R n : n-dimensional real Euclidean space; R m×n : the normed linear space of all m × n real matrices; ‖·‖: the Euclidean norm on R n or the operator norm on R m×n ; A′: the transpose of a matrix (or vector) A; Tr(A): the trace of a square matrix A; S n : the set of n × n real symmetric matrices; A > 0 (≥ 0): A is a positive (semi-)definite symmetric matrix; I n : the n × n identity matrix; δ(·): the Kronecker delta function; Z + := {0, 1, 2, · · · }.
H 2 Norm and Gramian
On a complete probability space (Ω, F , P ), we consider the following discrete-time Markov jump system with multiplicative noises: where x(t) ∈ R n , z(t) ∈ R n z and v(t) ∈ R n v represent the system state, measurement output and exogenous random disturbance, respectively. The multiplicative noise {w(t)|w(t) = (w 1 (t), · · · , w d (t)) , t ∈ Z + } is a stationary process satisfying Ew(t) = 0 and E[w(t)w(s) ] = I d δ (t−s) . In (1), {r t } t∈Z+ is a Markov chain taking values in a Borel set S and having a transition probability kernel G(·, ·) with respect to a measure µ defined on S. For any given B ∈ σ(S) (Borel σ-algebra of S), there exists a uniform bounded probability density g(·, ·) such that Moreover, the initial distribution of Markov chain is described by where ν > 0 is absolutely integrable with respect to µ. Thus, the probability density function of {r t } t∈Z + is formulated as follows: In the subsequent discussion, there is some constant k > 1 such that g(s, ) < k for arbitrary s, ∈ S, and {r t } t∈Z + is a uniform ergodic Markov chain [23]. The disturbance signal {v(t)} t∈Z + ∈ R n v is a sequence of zero mean white noises satisfying Ev(t) = 0 and Throughout this paper, we assume that three stochastic processes {r t } t∈Z + , {w(t)} t∈Z + and {v(t)} t∈Z + are mutually independent. All the coefficients of system (1) belong to H m×n ∞ with suitable dimensions m and n, where H m×n To make the presentation more concise, the following linear operators will be used: where X ∈ H n ∞ and Y ∈ H n 1 . It can be verified that L maps H n 1 to H n 1 , while E and T map H n ∞ to H n ∞ . Moreover, L and T are adjoint with respect to the following bilinear operator: Definition 1 ([16]). The following discrete-time stochastic system with Borel-measurable Markov jump parameter (denoted by (A; P)): is said to be strongly exponentially mean square stable (SEMSS) if r(L) < 1, where r(L) indicates the spectral radius of the operator L.
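The displayed state-space equations of system (1) did not survive extraction above. As a hedged reconstruction only, a standard form of a discrete-time Markov jump linear system with multiplicative noises that is consistent with the notation used here (state x, output z, disturbance v, multiplicative noises w_k, jump parameter r_t) is written below in LaTeX; the exact matrices and indexing of the paper's own equation (1) may differ.

\begin{aligned}
x(t+1) &= \Big[ A_0(r_t) + \sum_{k=1}^{d} w_k(t)\, A_k(r_t) \Big] x(t) + G(r_t)\, v(t), \\
z(t)   &= C(r_t)\, x(t), \qquad x(0) = x_0 \in \mathbb{R}^{n}, \quad t \in \mathbb{Z}_{+}.
\end{aligned}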
Since all the coefficients of L are real matrices, by the Krein-Rutman theorem [24], there must exist a real eigenvalue λ of L and a corresponding real eigenvector 0 ≠ X ∈ S n ∩ H n 1 such that r(L) = λ and L(X) = λX. The following Lyapunov-type stability theorem follows directly from Theorems 2.4, 2.5 and 2.6 of [25].
Lemma 2. Let x 0 (t) be the state trajectory of system (1) corresponding to the initial state x(0) = 0; then for any ∈ S and t ∈ Z + , (8) holds. It is obvious that F t ⊂ F̃ t . In terms of the state equation of (1), it can be computed that (9) holds. Because x 0 (t), w k (t) and v(t) are all F̃ t -measurable and mutually independent, it follows from (9) that (10) holds. By Fubini's theorem, Equation (10) can be rewritten as (11). According to formula (7) of [26], we have E[g(r t−1 , s)] = π(t, s), which together with (11) justifies the validity of (8). This completes the proof.
, then Lemma 2 implies that Ỹ(t, ) satisfies the following Lyapunov equation (12). By induction, it can be derived that the solution to (12) admits an explicit representation. Next, we present the main result of this section, which leads to the Gramian characterization of the H 2 norm of system (1).
Furthermore, for the output response z(t) of system (1) corresponding to an arbitrary initial state x(0) = x 0 ∈ R n , we have Proof. First of all, since {r t } t∈Z+ is a uniform ergodic Markov chain, by Theorem 16.2.1 of [23], there exists an invariant probability density π with the property π(s) = lim k→∞ π(k, s) and π( ) = ∫ S π(s)g(s, )µ(ds). Hence, X is well defined in (14). Further, the existence and uniqueness of the solutions to (13) are guaranteed by Lemma 1. To proceed, it can be derived from the linearity of system (1) that z(t) decomposes into the sum of z(t; 0, v) and z(t; x 0 , 0), where z(t; 0, v) is the zero-initial-state output response driven by the exogenous disturbance v and z(t; x 0 , 0) is the unforced output response arising from the initial state x 0 .
Noticing that By Remark 1, it follows from (18) that where X is given by (14). Let Y( ) = ∞ ∑ t=0 L t (X)( ), then it can be verified that Y satisfies the second equation of (13). Now, taking the limit t → ∞ in the first term of the last equality of (19), we get By the assumption that (A; P) is SEMSS (i.e., r(L) < 1), we can infer from [27] that where ζ = C(r t )C(r t ) ∞ > 0, λ > 0 and 0 < α < 1. In view of π(s) = lim k→∞ π(k, s), we have lim k→∞ X (k) − X ∞ = 0. Thus, taking the limit t → ∞ in the second term of the last equality of (19) leads to that lim t→∞ S Tr C(s) Combining (19), (20) and (22), we can conclude that
Tr[C(s)Y(s)C(s)′]µ(ds).
Moreover, because (A; P) is SEMSS, it is deduced from (17) that there exist γ > 0 and 0 < q < 1 such that which implies that Hence, Equation (15) is validated. Next, let us show (16). To this end, we make use of the bilinear operator (4) to reformulate (15) as follows: where the third and seventh equalities follow from the first and second equations of (13), respectively. The proof is completed.
Remark 2.
In the literature on H 2 control theory, the stationary variance of the output response caused by the exogenous white noise can serve as an H 2 norm of the perturbed stochastic dynamics. That is, the H 2 norm is defined through this stationary variance. Hence, Theorem 1 supplies a Gramian characterization for the H 2 norm of system (1) in the presence of the additive white noise disturbance v, and makes it possible to explore the H 2 optimal control problem for the considered systems.
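The displayed definition is missing above. As a hedged sketch, the standard stationary-variance definition of the H 2 norm consistent with this remark (assuming, as is usual, a zero-mean unit-covariance white disturbance v and zero initial state) reads:

\|\mathcal{G}\|_{2}^{2} \;=\; \lim_{t\to\infty} \mathbb{E}\,\|z(t)\|^{2},

where z(t) is the output of system (1) driven by v. Theorem 1 then expresses this quantity through the controllability and observability Gramians associated with (13) and (14).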
H 2 Optimal Control
In this section, we concentrate on an H 2 optimal control problem for the controlled discrete-time Borel-measurable Markov jump system (28), where u(t) ∈ R n u is the control input. Without loss of generality, we assume that D( )′D( ) > εI n u ( ∈ S) for some ε > 0. In what follows, let us consider n c -dimensional dynamic controllers of the form (29). By taking u(t) = y(t) and û(t) = x(t), the dynamic controller (29) is incorporated into the perturbed system (28) and the augmented system (30) is obtained, in which the closed-loop state and the coefficients are defined accordingly. If system (30) is SEMSS when v(t) ≡ 0, we call (29) a stabilizing dynamic controller. In the sequel, the set of all stabilizing controllers of the form (29) will be denoted by K. In the special case n c = 0, G turns out to be a zero-order dynamic controller, i.e., a state-feedback controller u(t) = F(r t )x(t).
Our objective is to find a stabilizing controller in the admissible control set K such that the H 2 norm of (30) is minimized. Below, we introduce a standard notion which will be used in the subsequent discussion.
Definition 2. For the following system: if there is u(t) = F(r t )x(t) such that the resulting closed-loop system of (32) is SEMSS, then we say (A, G) is stochastically stabilizable and F(r t ) ∈ R n u ×n is called a stabilizing feedback gain. Moreover, if there is H(r t ) ∈ R n×n z such that the following system is SEMSS: then we say (C, A) is stochastically detectable.
The following result is an extension of Proposition 3.3 of [21]; it addresses the existence and uniqueness of the stabilizing solution to a set of coupled stochastic algebraic Riccati equations. The detailed proof can be carried out along the same lines as in [21].
Moreover, F(r t ) = −[Π(P)(r t )] −1 Φ(P)(r t ) is a stabilizing feedback gain (i.e., P is a stabilizing solution to (34)). Now, we are prepared to give the main result of this section, which provides an H 2 optimal control design for the considered systems. Theorem 2. If (A, G) is stochastically stabilizable and (C, A) is stochastically detectable, then the H 2 optimal controller in the set K is the state-feedback law u * (t) = F * (r t )x(t), where P is the stabilizing solution of (34). Moreover, the minimal H 2 norm of G c is attained by G u * , where G u * denotes the system (28) driven by u * (t) = F * (r t )x(t).
Proof. By Lemma 3, Equation (34) admits a unique stabilizing solution 0 ≤ P ∈ H n ∞ and u * (t) is a stabilizing state-feedback controller. Furthermore, we can rewrite (34) as (37). Via simple calculations, it can be verified that the solution P of (37) is indeed the observability Gramian of G u * . From Theorem 1 and Remark 2, we derive (38). Next, it remains to show that u * (t) achieves the minimal H 2 norm among the set K. To this end, for any dynamic controller G ∈ K, the corresponding closed-loop system is given by (30). It is easy to obtain the observability Gramian equation (39) for (30). Making use of Theorem 1 and Remark 2 again, we have (40). Now, according to the structure of A c k ( ), we can separate U c ( ) into four blocks as in (41). Then, by combining (37) and (39), it can be verified that the block matrix (42) satisfies the Lyapunov equation (43), where F( ) and Ĉ( ) are the coefficients of (29). Because G ∈ K is a stabilizing controller, system (30) is SEMSS when v(t) ≡ 0. Hence, by Lemma 1 (i), we can derive from Π(P)( ) > 0 (see (34)) that the Lyapunov equation (43) admits a unique solution 0 ≤ Ū( ) ∈ H 2n ∞ . Therefore, from (40) and (42), it follows that the H 2 norm achieved by G is no smaller than that achieved by u * (t), which completes the proof.
Remark 3.
The H 2 optimal control design proposed in Theorem 2 can be regarded as a generalization of Theorem 7.6 of [25]. More specifically, if the state space of {r t } t∈Z + reduces to a finite set, then the above result produces an H 2 optimal controller for discrete-time stochastic systems with finite Markov jump parameters and multiplicative noises, as considered in [25].
Numerical Algorithm
From the preceding analysis, the applicability of the H 2 optimal controller depends on obtaining the solution of the coupled stochastic algebraic Riccati equations (34). Based on a grid of the state space S and the iterative algorithm proposed in [28], we present the following procedure (Algorithm 1) to compute the approximate solution of (34).
Thus, the coupled stochastic algebraic Riccati equations (34) can be written in the form (47). Now, we divide the interval [0, 1] into 100 equal segments to obtain a discretized state space. Further, we utilize the trapezoidal rule to compute the approximate value of the integral in (47). Hence, for each node (i, m ) (i = 1, 2; m = m/100, m = 0, 1, · · · , 100), the numerical solution of (34) can be approximated accordingly. At this stage, we have transformed (47) into a set of discrete-time algebraic Riccati equations as considered in [28]. By applying the iterative algorithm of [28], we can calculate the approximate solution P(i, m ) for every node.
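The displayed grid and update formulas are not reproduced above. As an illustration only, the following Python sketch shows the general structure of such a procedure: a uniform grid on [0, 1], composite trapezoidal-rule weights for the integral over the Markov state, and a fixed-point iteration of a generic LQ-type Riccati update. The matrices, the coupling term and the update rule used here are assumptions of this sketch and are not the paper's equations (34) or (47).

import numpy as np

def trapezoid_weights(n_nodes):
    # Composite trapezoidal-rule weights for a uniform grid on [0, 1].
    h = 1.0 / (n_nodes - 1)
    w = np.full(n_nodes, h)
    w[0] = w[-1] = h / 2.0
    return w

def iterate_riccati(A, B, Q, R, g, n_iter=200):
    # A, B, Q, R: arrays of shape (n_nodes, ., .) giving node-wise system,
    # input, state-weight and input-weight matrices; g[j, m] is an assumed
    # transition density between grid nodes. The recursion below is a generic
    # coupled LQ-type update, not the paper's equation (34).
    n_nodes = A.shape[0]
    w = trapezoid_weights(n_nodes)
    P = np.zeros((n_nodes, A.shape[1], A.shape[1]))
    for _ in range(n_iter):
        P_new = np.empty_like(P)
        for j in range(n_nodes):
            # Trapezoidal approximation of the integral of g(j, s) P(s) over S.
            EP = np.einsum('m,m,mab->ab', w, g[j], P)
            S = R[j] + B[j].T @ EP @ B[j]
            K = np.linalg.solve(S, B[j].T @ EP @ A[j])
            P_new[j] = A[j].T @ EP @ A[j] + Q[j] - (B[j].T @ EP @ A[j]).T @ K
        P = P_new
    return P

# Toy run on a coarse grid with scalar state and input.
nodes = 11
A = np.full((nodes, 1, 1), 0.9); B = np.ones((nodes, 1, 1))
Q = np.ones((nodes, 1, 1)); R = np.ones((nodes, 1, 1))
g = np.ones((nodes, nodes))  # flat (uninformative) transition density
print(iterate_riccati(A, B, Q, R, g)[0])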
Conclusions
In this paper, we studied the H 2 optimal control problem for Borel-measurable Markov jump systems with multiplicative noises. The H 2 norm is introduced via the stationary variance of the perturbed output response, which can be quantified by the controllability and observability Gramians of the considered systems. Moreover, by means of the stabilizing solution to a set of coupled stochastic algebraic Riccati equations, the H 2 optimal controller is obtained. The current study raises some interesting open topics. For example, when the information of the system state is only partially accessible to measurement, as considered in [21], it remains open how to solve the H 2 optimal control problem for the considered systems. This issue no doubt deserves further research.
Conflicts of Interest:
The authors declare no conflict of interest. | 4,620 | 2021-12-23T00:00:00.000 | [
"Mathematics",
"Engineering"
] |
Clinical utility of a rapid two-dimensional balanced steady-state free precession sequence with deep learning reconstruction
Background Cardiovascular magnetic resonance (CMR) cine imaging is still limited by long acquisition times. This study evaluated the clinical utility of an accelerated two-dimensional (2D) cine sequence with deep learning reconstruction (Sonic DL) to decrease acquisition time without compromising quantitative volumetry or image quality. Methods A sub-study using 16 participants was performed using Sonic DL at two different acceleration factors (8× and 12×). Quantitative left-ventricular volumetry, function, and mass measurements were compared between the two acceleration factors against a standard cine method. Following this sub-study, 108 participants were prospectively recruited and imaged using a standard cine method and the Sonic DL method with the acceleration factor that more closely matched the reference method. Two experienced clinical readers rated images based on their diagnostic utility and performed all image contouring. Quantitative contrast difference and endocardial border sharpness were also assessed. Left- and right-ventricular volumetry, left-ventricular mass, and myocardial strain measurements were compared between cine methods using Bland-Altman plots, Pearson’s correlation, and paired t-tests. Comparative analysis of image quality was measured using Wilcoxon-signed-rank tests and visualized using bar graphs. Results Sonic DL at an acceleration factor of 8 more closely matched the reference cine method. There were no significant differences found across left ventricular volumetry, function, or mass measurements. In contrast, an acceleration factor of 12 resulted in a 6% (5.51/90.16) reduction of measured ejection fraction when compared to the standard cine method and a 4% (4.32/88.98) reduction of measured ejection fraction when compared to Sonic DL at an acceleration factor of 8. Thus, Sonic DL at an acceleration factor of 8 was chosen for downstream analysis. In the larger cohort, this accelerated cine sequence was successfully performed in all participants and significantly reduced the acquisition time of cine images compared to the standard 2D method (reduction of 37% (5.98/16) p < 0.0001). Diagnostic image quality ratings and quantitative image quality evaluations were statistically not different between the two methods (p > 0.05). Left- and right-ventricular volumetry and circumferential and radial strain were also similar between methods (p > 0.05) but left-ventricular mass and longitudinal strain were over-estimated using the proposed accelerated cine method (mass over-estimated by 3.36 g/m2, p < 0.0001; longitudinal strain over-estimated by 1.97%, p = 0.001). Conclusion This study found that an accelerated 2D cine method with DL reconstruction at an acceleration factor of 8 can reduce CMR cine acquisition time by 37% (5.98/16) without significantly affecting volumetry or image quality. Given the increase of scan time efficiency, this undersampled acquisition method using deep learning reconstruction should be considered for routine clinical CMR.
Introduction
Cardiovascular magnetic resonance (CMR) cine imaging is widely accepted as the gold-standard, non-invasive modality for visualization of cardiovascular anatomy and quantification of left-ventricular (LV) and right-ventricular (RV) function and volume measurements [1]. This is primarily due to its superior image quality (IQ) and high reproducibility compared to other imaging modalities [2]. The current method of choice to acquire these images is a breath-held balanced steady-state free precession sequence (bSSFP). This method allows for high temporal resolution and an optimal blood-pool-to-myocardium contrast [3]. Such contrast enhancement is crucial for clearly delineating cardiac structures, including trabecular tissue and the endocardial border. This reinforces the method's highly reproducible and accurate capability for cardiac function and volume assessments [3].
While highly accurate, the current bSSFP method has several disadvantages.First, the method is lengthy.It requires patients to breathhold (BH) for approximately 10-12 s-with intermediate periods of rest-for each of the two-dimensional (2D) slices required to achieve full heart coverage (11-12 short axis [SAx] slices and 3 long axis [LAx] slices) [3].Second, BH-ing relies on patient compliance which may be compromised for several reasons.These reasons include anxiety, claustrophobia, age, respiratory complications, or other medical conditions [4].This inconsistent BH-ing may result in misalignment of cardiac anatomy which may further complicate the future planning of slices, add scanning time, or challenge the clinical interpretation of images [5].Finally, the bSSFP sequence is sensitive to magnetic field inhomogeneities.This sensitivity can lead to banding artifacts and/or signal loss near adjacent tissue areas with variations in magnetic susceptibility, such as the lung-heart interface or around metallic implants.These artifacts can potentially degrade the IQ of the heart structures and measurement accuracy in critical diagnostic regions.
Simultaneous multi-slice (SMS) acquisitions enable the simultaneous capture of separate anatomical slices of the heart, increasing myocardial coverage in less time with fewer BHs [7]. When the optimal acceleration factor is employed, typically capturing three slices simultaneously, SMS acquisitions preserve signal-to-noise ratio (SNR) comparably to conventional bSSFP methods [7]. Although additional in-plane acceleration is achievable through varying coil sensitivities, cross-talk from simultaneous slice excitation presents a limitation [6]. Moreover, at higher magnetic fields, SMS acquisitions encounter constraints due to an increased specific absorption rate, further limiting the potential for acceleration [8].
Compressed sensing is another widely used technique to accelerate cine acquisitions.It exploits the inherent sparsity of CMR images in a transform domain to reconstruct images from fewer data points [12].This method randomly under-samples magnetic resonance data to reduce structured artifacts through non-uniform sampling while maintaining essential structural image information.The reconstruction process is non-linear and iterative, which, while effective in reducing scan times, demands significant computational resources [12].Compressed sensing may slightly reduce spatial resolution and risks missing the end-systolic phase.This could potentially underestimate end-systolic volume (ESV) and ejection fraction (EF) [12].Thus, to achieve comparable IQ with reference methods without affecting quantitative volumetry measurements, conservative acceleration factors (2.5-3.5) are often used [10].
This study explores the clinical efficacy of Sonic DL, a GE Healthcare accelerated 2D bSSFP cine sequence. Sonic DL incorporates variable density k-t sampling [15] and deep learning (DL) reconstruction [16] for accelerated cine image acquisition. The variable density k-t undersampling is conducted over the phase encoding dimension. Different random shifts of phase encoding are added to different cardiac phases to create an incoherent sampling scheme across the cardiac phase dimension [16]. In the reconstruction, an unrolled convolutional neural network (CNN) architecture consisting of 12 unrolls is used. High IQ and fidelity are ensured by including a data consistency term and a CNN-based regularization term in each unroll [16]. The data consistency term uses coil sensitivities with assumptions derived from training data sets ("learned priors") to inform image reconstruction from highly undersampled 2D data [16].
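To make the sampling scheme concrete, the following Python sketch builds a Cartesian variable-density k-t undersampling mask in which each cardiac phase keeps a fully sampled central band of phase-encode lines plus a randomly chosen subset of outer lines, so that aliasing is incoherent along the cardiac-phase dimension. All parameter names and values (matrix size, number of phases, fraction of fully sampled centre) are illustrative assumptions, not the vendor's implementation.

import numpy as np

def variable_density_kt_mask(n_pe=160, n_phases=20, accel=8, center_frac=0.08, seed=0):
    # Boolean mask of shape (n_phases, n_pe): True = acquired phase-encode line.
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_phases, n_pe), dtype=bool)
    n_keep = max(1, n_pe // accel)            # lines kept per cardiac phase
    center = int(center_frac * n_pe)          # fully sampled centre of k-space
    c0 = n_pe // 2 - center // 2
    for t in range(n_phases):
        mask[t, c0:c0 + center] = True
        outer = np.setdiff1d(np.arange(n_pe), np.arange(c0, c0 + center))
        # Density weighting: favour lines closer to the k-space centre.
        w = 1.0 / (1.0 + np.abs(outer - n_pe / 2))
        w /= w.sum()
        extra = max(0, n_keep - center)
        picks = rng.choice(outer, size=min(extra, outer.size), replace=False, p=w)
        mask[t, picks] = True                 # different random picks per phase
    return mask

mask = variable_density_kt_mask()
print(mask.shape, mask.mean())                # fraction of sampled k-t points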
Each training dataset is acquired using a retrospectively triggered bSSFP cine sequence without any acceleration and using the Sonic DL sequence with variable density k-t undersampling.The undersampled cine data are used as training input, while the fully sampled cine data are used as the training label.
In preliminary studies, this approach was tested in a pediatric population [17] and in a small cohort of adult patients [18].In the pediatric population, the Sonic DL sequence was significantly faster than the standard bSSFP sequence (0.9 min vs 3.0 min; p < 0.001) [17].The IQ was only minimally lower for Sonic DL (3.8 ± 0.6) than for bSSFP (4.3 ± 0.6; p < 0.001) [17].In the adult population, measurements of LV and RV volumes showed good agreement with standard images (p > 0.05) (r ≥ 0.76).However, LV mass (LVM) was underestimated in the Sonic DL images (109.8 ± 34.6 g) compared to bSSFP (116.2 ± 40.2 g; p = 0.0291) [18].The authors found that the IQ scores of endocardial edge definition and motion artifacts were significantly impaired in the Sonic DL images.They attributed the difference in LVM to the technical limitations of the accelerated sequence [18].The impact of using DL-processed undersampled images on diagnostic decision-making was not studied.
In this study, we aimed to 1) determine the optimal acceleration factor of Sonic DL using a small cohort of volunteers, and 2) assess IQ, accuracy, and diagnostic confidence of LV and RV volume and mass as measured in 2D Sonic DL images acquired using this optimal acceleration factor compared to a standard array coil spatial sensitivity encoding (ASSET) bSSFP sequence in a larger cohort of adult patients with various cardiac diseases.
Study population
For our sub-study, 16 healthy volunteers were prospectively recruited.Each participant gave written informed consent before their CMR scan.Participants were ineligible for recruitment if they had contraindications to CMR, including claustrophobia, pregnancy, noncompatible pacemaker/defibrillator devices, or intraocular/intracranial metallic materials.Volunteers were considered healthy if they were non-smoking, had a body mass index (BMI) of less than 30, were not taking any medications, and had no significant past medical history.
For our larger study, 93 patients with clinical indications for a CMR exam (72 [67%] men, age 53.3 ± 15.3 years) and 15 healthy volunteers were prospectively recruited.All subjects gave written informed consent.Ineligibility and healthy volunteer recruitment followed the same conditions as in our sub-study.Patients with the following clinical indication were enrolled: atrial fibrillation (n = 38), suspected myocarditis (n = 23), suspected or known coronary artery disease (n = 13), hypertrophic cardiomyopathy (n = 9), and other non-ischemic cardiomyopathies (n = 10).
CMR protocol
CMR images were acquired with a clinical MRI system (Premier™ 3T, GE Healthcare, Milwaukee, Wisconsin, USA) using high-channel-count phased-array coils, AIR™ surface coil (30 anterior coil channels + 60 posterior coil channels). For patients, the scans were performed according to indication-specific protocols (Supplementary Table 1). In our sub-study, all participants underwent a conventional 2D array coil spatial sensitivity encoding (ASSET) bSSFP sequence and the proposed 2D Sonic DL bSSFP cine sequences at two acceleration factors (8× and 12×). These were performed using identical positioning and orientation. In our larger study, both a conventional 2D ASSET bSSFP sequence and the 2D Sonic DL sequence at an acceleration factor of 8× were performed using identical positioning and orientation. Images were acquired in two-chamber (2Ch), three-chamber (3Ch), four-chamber (4Ch) views, and as a SAx stack through both ventricles (11-12 slices) at end-expiration during several BHs. In cases where susceptibility artifacts at the lung-myocardium interface were present, frequency scouting was used. This involved conducting a series of preliminary scans at different frequency offsets. The setting that best reduced this artifact was selected and applied to the bSSFP sequence.
The 2D ASSET bSSFP cine with one BH per slice was acquired using the following imaging parameters: in-plane resolution 1.8 mm × 1.8 mm; slice thickness 8 mm; repetition time/echo time (TR/TE) 3.1 ms/1.2 ms; flip angle 55°; bandwidth 125 Hz; slice gap 2 mm; acceleration factor 2. The 2D Sonic DL bSSFP cine sequence allowed for the acquisition of 3-5 slices per BH, using the following imaging parameters: in-plane resolution 1.8 mm × 1.8 mm; slice thickness 8 mm; TR/TE 2.9 ms/1.1 ms; flip angle 49°; bandwidth 125 Hz; slice gap 2 mm; acceleration factor 8 or 12. The Sonic DL bSSFP cine data were reconstructed inline with an unrolled (combining iterative and DL techniques) neural network reconstruction prototype [16]. This prototype included a data consistency update and a CNN-based regularization term on spatiotemporal-split convolutions. The network was trained on 6480 fully sampled 2D bSSFP cine images for approximately 3 days on an NVIDIA V100 graphics processing unit (GPU). The average inline reconstruction time for the entire Sonic DL acquisition was 340 s. The acquisition time was recorded for each of the two cine sequences in the larger study.
Image analysis
All CMR images were analyzed offline by two blinded readers at a core lab (McGill University Health Centre, Montreal, Quebec, Canada) using commercial software (cvi42™, Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada). The cine LAx and SAx views were used for quantitative LV/RV functional and volumetric measurements. Endocardial contours were semi-automatically traced using the built-in "threshold tool." This tool separates the high signal-intensity blood-pool pixels from the lower-intensity myocardial pixels. Often, manual adjustments were used with particular attention to anatomical details to ensure that the trabecular tissue and papillary muscles were excluded from the blood-pool area [1]. In contrast, epicardial contours were manually traced. Both these contours were drawn at end-diastole and end-systole. In basal slices, contours were carefully drawn to include the LV outflow tract into the LV volume up to the level of the aortic valve cusps. LV and RV volumes at end-diastole and end-systole, EF, and LVM, including their respective indexes normalized to body surface area (BSA) [19] and height, were calculated using the Simpson method for the SAx stack, and a biplane method for the LAx views [20]. End-diastolic and end-systolic phases were defined by the largest and smallest area measured in a mid-ventricular slice, respectively [21]. LVM was measured in the end-systolic phase to reduce partial volume effects in trabecular layers [22]. In 24 patients, we also measured global peak radial and circumferential strain using the feature tracking method as previously described [23].
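As a simple illustration of the slice-summation (Simpson, or method-of-discs) volumetry described above, the Python sketch below sums contoured slice areas over the short-axis stack. The 8 mm slice thickness and 2 mm gap mirror the protocol in the text, but the handling of the gap and of the basal slice is an assumption of this sketch rather than the exact rule used by the analysis software.

import numpy as np

def sax_stack_volume(slice_areas_mm2, slice_thickness_mm=8.0, slice_gap_mm=2.0):
    # Each contoured slice contributes area x (thickness + gap); returns mL.
    areas = np.asarray(slice_areas_mm2, dtype=float)
    volume_mm3 = np.sum(areas * (slice_thickness_mm + slice_gap_mm))
    return volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# e.g. eleven end-diastolic endocardial areas (mm^2) from base to apex
print(sax_stack_volume([2200, 2100, 2000, 1900, 1750, 1600, 1400, 1150, 900, 600, 300]))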
The IQ between the two techniques was compared using both qualitative and quantitative metrics.
The qualitative IQ assessment was performed in LAx and SAx views. Two blinded clinical readers (one radiologist with 9 years of experience reading clinical CMR and one cardiologist with 6 years of experience reading clinical CMR) were asked to rate IQ regarding their diagnostic confidence using a 4-point ordinal scale: 1: no diagnostic confidence (non-interpretable); 2: low diagnostic confidence (poor IQ, significant artifacts); 3: moderate diagnostic confidence (good overall IQ with one or two views with poorer IQ); 4: high diagnostic confidence (high IQ, no views with significantly impaired IQ).
For the quantitative IQ assessment, the contrast difference between the blood pool and the myocardium, as well as endocardial border sharpness, was evaluated in a mid-ventricular SAx slice at end-diastole. The contrast difference was calculated by subtracting the average myocardial signal intensity from the average blood-pool signal intensity:
Contrast difference = Average blood-pool signal intensity − Average myocardial signal intensity.
To measure endocardial edge sharpness, a three-step procedure was used. First, masks for blood pool and myocardium were generated manually using MATLAB R2021b (MathWorks, Natick, Massachusetts, USA). Then, several line segments were drawn orthogonal to the blood pool and myocardial boundary, allowing for a signal intensity profile to be computed (Fig. 1). Edge sharpness was calculated by taking the average slope of the sigmoid functions that were fit to the signal intensity profile of the line segment (Fig. 1). For this assessment, we chose to evaluate a mid-ventricular slice below the level of the papillary muscles so that the papillary muscles would not interfere with our calculations. All subjects had the same number of orthogonal lines computed for analysis.
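A minimal sketch of the sigmoid-fitting step is given below: it fits a logistic transition to one intensity profile drawn across the blood-pool/myocardium border and reports the maximum slope of the fitted curve as the sharpness value. The parameterisation of the sigmoid and the use of the midpoint slope k(hi − lo)/4 are assumptions of this sketch; the original analysis was done in MATLAB and its exact fitting routine is not specified in the text.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, lo, hi, x0, k):
    # Logistic transition from myocardial (lo) to blood-pool (hi) intensity.
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

def edge_sharpness(profile):
    # Fit a sigmoid to one intensity profile and return its maximum slope.
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.min(), profile.max(), profile.size / 2.0, 1.0]
    (lo, hi, x0, k), _ = curve_fit(sigmoid, x, profile, p0=p0, maxfev=5000)
    return abs(k * (hi - lo) / 4.0)

# Example with a synthetic profile crossing from myocardium into blood pool
profile = sigmoid(np.arange(30, dtype=float), 100, 400, 15, 0.8)
print(edge_sharpness(profile))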
Inter-observer quality assurance
The evaluation of inter-observer reliability was performed for LV and RV volumetry and LVM by certified core lab readers. Eighty randomly selected CMR studies were used. Intraclass correlation coefficients (ICC) and bias in measurements were used to assess the inter-observer variability [24].
Statistical analysis
Continuous variables were presented as means with standard deviations (SD) or as medians with interquartile range, while categorical variables were presented as numbers or percentages. Normality was verified using the Shapiro-Wilk test. Differences between means were evaluated using paired Student t-tests for parametric data, or the Mann-Whitney test or Wilcoxon signed-rank tests for non-parametric data. A repeated measures analysis of variance (ANOVA) was used to compare LV volumetry, function, and mass measurements between ASSET and Sonic DL at the two different acceleration factors. A Bland-Altman analysis was performed to compare LV and RV volumetric and LVM measurements between 2D ASSET bSSFP and 2D Sonic DL bSSFP cine methods. Correlation between parameters was also assessed using Pearson's correlation analysis. Statistical significance was set at p < 0.05. All statistical analyses were performed using R (version 3.6.3, R Foundation for Statistical Computing, Vienna, Austria).
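For readers unfamiliar with the Bland-Altman analysis used here, the following sketch computes the bias and 95% limits of agreement for a set of paired measurements (for example, LVEDV from the standard and the accelerated cine). It is the textbook calculation, not code from this study, and the numbers in the example are made up.

import numpy as np

def bland_altman(a, b):
    # Bias and 95% limits of agreement between two paired measurement sets.
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

bias, lo, hi = bland_altman([150, 160, 145, 170], [148, 163, 143, 172])
print(f"bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f})")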
Sub-study
The cine CMR protocol was successfully performed in all 16 subjects. No significant differences were found between the three methods in LV end-diastolic volume (LVEDV), LVESV, or LVM measurements (Fig. 2). However, Sonic DL with an acceleration factor of 12 reduced the LVEF measurements by 6% (5.51/90.16) compared to our standard ASSET method (p = 0.004) and by 4% (4.32/88.98) compared to Sonic DL with an acceleration factor of 8 (p = 0.015). LVEF measurements by Sonic DL with an acceleration factor of 8 showed no significant differences compared to ASSET.
Population
Between June 2020 and June 2022, 93 patients (72 [67%] men, age 53.3 ± 15.3 years) with clinical indications for a CMR and 15 healthy volunteers were prospectively recruited.The protocol was successfully performed in all subjects.Details of the baseline demographics and clinical CMR indications for the participants recruited are presented in Table 1.
Comparison in scan time and image quality
The Sonic DL bSSFP cine sequence significantly reduced the acquisition time of cine images when compared to ASSET bSSFP (37% (5.98/16) decrease in total acquisition time, p < 0.0001) (Fig. 3). The acquisition time reduction was particularly evident for the SAx stack (mean acquisition time of SAx using Sonic DL bSSFP: 3.66 ± 1.1 min vs ASSET bSSFP: 8.95 ± 1.97 min). The acquisition of 2Ch, 3Ch, and 4Ch LAx images was also significantly shorter when using Sonic DL bSSFP cine (mean acquisition time of 3 LAx views using Sonic DL bSSFP: 4.26 ± 1.86 min vs ASSET bSSFP: 6.56 ± 2.23 min).
Quantitative and qualitative IQ were similar between methods (p > 0.05) (Table 2, Fig. 4).A greater number of standard ASSET cases were rated with a high diagnostic confidence score compared to Sonic DL (58 vs 39 cases) (Fig. 4).However, the ASSET method contained three cases that were rated as non-interpretable, while Sonic DL had no such cases (Fig. 4).Fig. 5 shows representative cine images using both methods in a patient with atrial fibrillation.
For strain measurements, global peak radial and circumferential strain measurements were highly correlated (r > 0.9) and were statistically not different (p > 0.05) between both cine methods (Table 4).
Inter-observer reliability
A summary of the inter-observer reliability results is listed in Table 5. Moderate to excellent inter-observer agreement was demonstrated for all measured parameters. Measurements between observers tended to differ slightly more with the Sonic DL bSSFP sequence compared to ASSET, and when measuring right ventricular volumes.
Discussion
Our results indicate that an accelerated 2D cine method with DL reconstruction may reduce CMR cine acquisition time.This method does not significantly affect volumetry or IQ and does not compromise diagnostic confidence.These results suggest that Sonic DL may have a direct clinical application for CMR cine imaging.
CMR is generally perceived as a high-cost investigational tool [25], limited by long exam times.These lengthy exam times limit the efficiency of clinical CMR, often resulting in reduced access to scanners and long waiting lists [26].Since cine sequences are an essential component of CMR evaluation [3], their lengthy acquisition contributes to this problem.ASSET bSSFP as a clinical standard method for obtaining CMR cine data requires multiple sequential BHs to acquire 14-15 different 2D views, encompassing both SAx and LAx views of the heart [3].Effective acquisition relies on patient cooperation for BH-ing and accurate electrocardiogram (ECG) signal capture to synchronize with the heart rhythm.However, patients requiring cardiac MRI often present with conditions that challenge their BH-ing capacity, exhibit high heart rate variability, or suffer from severe arrhythmias.These factors complicate image acquisition and extend the time needed to comprehensively image the heart.
While free-breathing, three-dimensional (3D), SMS, or other highly accelerated acquisitions [6][7][8][27][28][29] may address issues related to BH-ing, anatomical 2D slice misalignment, or even ECG-triggering, they are not yet widely adopted in clinical settings due to their own set of complications. Free-breathing techniques, though alleviating BH issues, may not necessarily shorten acquisition times and may require patients to remain immobile for extended periods. SMS methods may offer further scan efficiency compared to Sonic DL by capturing multiple slices simultaneously. However, they introduce the potential for slice cross-talk, which may negatively affect IQ and slice alignment [7]. Even if the acquisition is accelerated, the often significant computational demands for data reconstruction, potentially extending over hours or even days, delay clinical decision-making for patients. In these cases, if
IQ is compromised, this may even necessitate a repeat exam, further reducing clinical efficiency. Additionally, the reconstruction algorithms used may oversimplify cardiac dynamics or require tedious manual tuning of regularization parameters [27,30], limiting their practicality compared to the established efficiency and reliability of 2D Cartesian cine methods. The 2D Sonic DL sequence used in this study may not achieve the same level of time efficiency as other accelerated methods [31,32]. However, it may offer a preferable balance between speed and IQ without introducing lengthy reconstruction times in the clinical setting. In our study, scan time was reduced by 40% compared to ASSET (Fig. 3) with an average reconstruction time of 5-6 min for the entire imaging series. This acceleration offers patients a shorter BH time per slice or fewer BHs overall for the complete cine acquisition. In addition, image reconstruction is complete before the patient's examination is finished, reducing the potential for repeat examinations due to poor IQ. Further,
ASSET array spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, SI signal intensity, BP blood pool.this study employed a prototype reconstruction implemented on a CPU.With further model optimization and GPU implementation, we anticipate that reconstruction times could be reduced to mere seconds.This improvement could imply that cine images are reconstructed before the next series is acquired, meeting current clinical demands efficiently.The acceleration offered by Sonic DL is achieved by leveraging a Cartesian variable density k-t-space sampling pattern [15].This distinguishes it from ASSET's uniform under-sampling approach.By focusing on denser sampling in the center of k-t space-where the raw data contributing to image contrast and structure definition reside [34]-Sonic DL minimizes the loss of important image information, with under-sampling artifacts manifesting as noise [15].This strategy, coupled with sophisticated reconstruction techniques, can preserve the diagnostic integrity of images.Conversely, applying an equivalent acceleration factor to ASSET might lead to fold-over artifacts, challenging to eliminate even with multi-channel receiver coil technology.
To recover the missing data from under-sampling, Sonic DL uses a data-driven unrolled CNN reconstruction [11,16], mirroring techniques such as CINENet [35]. The data-driven approach uses information from coil sensitivities and "learned priors" from network training to balance data consistency with network regularization [16]. The incorporation of DL into the reconstruction facilitates faster reconstruction times and allows further image acceleration, as DL may learn better priors for image recovery [16,[35][36][37][38][39][40].
While DL may indeed learn optimal regularization weights and imaging priors for image recovery, our sub-study found that Sonic DL with an acceleration factor of 12 underestimated LV volumetry and LVEF measurements. This suggests that compression of cardiac motion may occur with Sonic DL at higher acceleration factors. This issue is common with regularized reconstruction methods [9,16,33]. Over-regularization, used to enforce sparsity, can smooth out important anatomical details, leading to a blurred appearance in the reconstructed image. Our findings indicate that while Sonic DL at 8× acceleration is effective, further investigation into the issue of temporal blurring may be valuable to optimize this imaging sequence and reconstruction algorithm, potentially allowing for even higher acceleration rates to be achieved.
Techniques, such as CINENet, support 3D acquisition through an additional phase encoding dimension, offering greater acceleration potential, yet 2D cine remains the clinical preference.This is due to shorter BH requirements and superior blood-pool-to-myocardium contrast.Despite needing BHs, Sonic DL reduces the overall number of BHs needed to achieve full anatomical coverage.This lowers the risk of slice misalignment due to inconsistent BH-ing and reduces the need for repetitions.This technique may also be easier for patients who struggle to adhere to the breathing maneuvers.
From a clinical perspective, this study found that Sonic DL did not impair the diagnostic integrity of images, as rated by two expert clinical readers. Even though more ASSET cases were rated as having near-perfect IQ by clinical readers (Fig. 4), all Sonic DL cases were rated as diagnostic. This was not the case for ASSET, suggesting that Sonic DL, while not improving subjective IQ under perfect imaging conditions, appears to improve IQ under suboptimal imaging conditions. This is especially important in high-throughput settings, where time restrictions often do not allow for individually optimizing scanner settings.
Clinicians require the diagnostic integrity of images to remain uncompromised.This can be determined through their satisfaction with images as well as through specific quantitative metrics of IQ.Precise quantification of blood volumes and mass necessitates accurate demarcation of the endocardial and epicardial borders, assessable by edge sharpness or contrast differentiation metrics.The use of regularization terms in Sonic DL's reconstruction process, aimed at noise reduction, renders direct comparisons of contrast-to-noise ratio (CNR) or SNR infeasible in this study.Instead, we evaluated the contrast differences between tissues by measuring the signal intensity variance between the myocardium and the blood pool as endocardial edge sharpness or contrast difference.We chose to evaluate a mid-ventricular slice below the level of the papillary muscles to avoid interference of papillary muscles in any calculations.Our study observed no significant discrepancies in endocardial edge sharpness or myocardium-to-blood-pool contrast between ASSET and Sonic DL.These conclusions were corroborated by quantitative volumetry and functional assessments, which also showed no statistically significant differences between the methods.
Although LV and RV volumetry measurements were similar between ASSET and Sonic DL, our study found that LVM was overestimated using Sonic DL.Given LVM's association with severe cardiovascular conditions and its significance in diagnosing hypertrophic cardiomyopathies and other infiltrative diseases, accuracy in its measurement is crucial [41,42].Despite observing a statistically significant difference, the magnitude of this variance (∼3 g/m 2 ) is unlikely to misclassify a patient's myocardial mass as normal or abnormal.Although the variation in LVM measured may not have clinical relevance, it suggests caution when using Sonic DL in scenarios where precise mass measurements are critical [3].
Our findings with LVM measurements are in contrast to previous results, which found that LVM was under-estimated by Sonic DL [18]. The difference in findings may be attributed to a difference in contouring methods, as the previous study excluded trabeculations and papillary muscles from myocardial mass while this study included them [18].
Limitations
This study must be interpreted in the context of its limitations. This study was conducted at a single center with a medium sample size. These results have yet to be replicated in larger cohorts. We studied a population that may not be representative of those in other centers, and results may not be generalizable to pathologies not included here, such as diseases with complex anatomy (i.e., congenital) or thin myocardial
walls (i.e., excessive LV trabeculation). ASSET bSSFP cine is itself still prone to measurement errors, so the lack of a third measurement tool as a reference limits the interpretability of the results. Quantitative IQ was only measured in one mid-ventricular SAx slice at end-diastole, arguably the imaging area with the fewest problems. Therefore, these results may not accurately depict IQ for more problematic regions of the heart, such as the apex with its increased amount of trabeculation, or the basal region. Finally, only the border between the blood pool and myocardium was measured in terms of its border sharpness and CNR.
Of note, we found that Sonic DL may overestimate LVM without affecting volumetry. Thus, the difference may be due to an altered visual appearance of epicardial borders in Sonic DL images. This should be further investigated.
In this study, Sonic DL was unable to reach acceleration rates as high as similar methods cited in literature [31,32].The reasons for this may be explained by an introduction of temporal blurring at higher acceleration rates.The spatiotemporal-split convolutions used in the reconstruction framework, while useful for managing the complexity of dynamic imaging data, can contribute to temporal blurring if they inadequately capture temporal dependencies or overly prioritize spatial features.While this potential issue did not prevent Sonic DL from ultimately reducing scan time without affecting quantitative LV volumetry or IQ, it suggests that the framework may be further improved with a more thorough investigation into this topic.
Similarly, we focused on exploring the performance of Sonic DL at a specific acceleration factor of 8. Notably, previous research indicates that lower acceleration factors, such as 4, can yield results more closely aligned with standard cine sequences while still offering the benefits of accelerated acquisitions [43].The choice of acceleration factor is not merely a technical consideration but also a clinical one, as it may provide varied advantages depending on the disease context.For conditions requiring precise volumetry, function, and mass measurements, lower acceleration factors are preferable to ensure accuracy.Conversely, in scenarios where patients face challenges with arrhythmia or limited BH capacity, and where precise measurements are less critical for diagnosis, higher acceleration factors could be more suitable.This differentiation underscores the need to tailor the acceleration factor based on specific diagnostic requirements, although this study did not extend to evaluating Sonic DL's generalizability across these different settings.
Conclusion
Undersampled k-space sampling methods combined with DL reconstruction can be considered an efficient tool to reduce CMR scan times [33].This would increase the efficiency of CMR scanning without compromising its clinical utility.Shorter scan times may also improve the patient experience.If these results can be replicated in larger, multicenter trials, Sonic DL has the potential to replace traditional, slower imaging techniques in routine CMR imaging.
Fig. 1 .
Fig. 1.Schematic depiction of the calculation of endocardial edge sharpness measurements.(A) Radial lines drawn in an orthogonal fashion from the center of the LV cavity to the subepicardial myocardial boundary to compute a signal intensity profile.(B) Endocardial edge sharpness was calculated by taking the average slope of the sigmoid functions that were fit to the signal intensity profile.LV left ventricle.
Fig. 2 .
Fig. 2. Comparison of left-ventricular (LV): (A) end-diastolic volume (EDV), (B) end-systolic volume (ESV), (C) ejection fraction (EF) and (D) mass between ASSET and Sonic DL at an acceleration factor of 8 and 12.A repeated measures ANOVA was used to compare means between methods.A p-value of < 0.05 was considered statistically significant.ASSET array spatial sensitivity encoding technique, DL deep learning, n.s.non-significant, LVM left ventricular mass, ANOVA analysis of variance.
Fig. 3 .
Fig. 3. Comparison of scan time between methods.(A) Comparison of scan time between acquisition methods of the complete short-axis (SAx) and long-axis (LAx) views.(B) Comparison of scan time between methods of the SAx stack.(C) Comparison of scan time between methods of the three LAx views.ASSET array coil spatial sensitivity encoding, bSSFP balanced steady-state free precession, DL deep learning.
Fig. 4 .
Fig. 4. Results from image quality assessment. (A) Diagnostic confidence scores were obtained from two experienced clinical readers. Images were anonymized to sequence type and randomized with respect to the order they were presented to the readers. 1: No diagnostic confidence (not interpretable); 2: low diagnostic confidence (poor image quality); 3: medium diagnostic confidence (good overall image quality with one or two views with poorer IQ); 4: high diagnostic confidence (perfect image quality). (B) Contrast difference measurements were taken between the blood pool and the myocardium by subtracting the signal intensity of the myocardium from that of the blood pool. (C) Endocardial edge sharpness was calculated by taking the average slope of the sigmoid functions that were fit to the signal intensity profile of the line segment that was drawn orthogonal to the myocardial and blood pool border. ASSET array coil spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, SI signal intensity, BP blood pool, Myo myocardium, IQ image quality.
Fig. 5 .
Fig. 5.A representative set of cine images at end-systole and end-diastole from a patient with atrial fibrillation.ASSET array coil spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, LAx long axis.
Fig. 6 .
Fig. 6. Bland-Altman plots displaying the similarity of measured ventricular function metrics using the ASSET bSSFP and Sonic DL bSSFP sequences. The solid blue line represents the average bias between measurements. The dotted lines represent the upper and lower 95% limits of agreement. Bland-Altman plots are displayed for (A) LVEDV, (B) LVESV, (C) LVEF, (D) LVM, (E) RVEDV, (F) RVESV, (G) RVEF. LV left ventricle, ASSET array coil spatial sensitivity encoding technique, DL deep learning, RV right ventricle, EDV end-diastolic volume, ESV end-systolic volume, EF ejection fraction, LVM left ventricular mass.
Table 1
Baseline demographics and clinical indications for subjects enrolled in this study (n = 108). SD standard deviation, BSA body surface area, BMI body mass index, CMR cardiovascular magnetic resonance, NICMP non-ischemic cardiomyopathy.
Table 2
Image quality assessment of short-axis images acquired with the ASSET bSSFP or the Sonic DL bSSFP sequence. User-rated image quality was scored on a 4-point ordinal scale by 2 experienced clinical readers, where 4 represented images with the best image quality. Contrast difference was measured between the blood pool and myocardium by taking the difference in myocardium signal intensity from the signal intensity of the blood pool. The blood pool to myocardial edge sharpness was calculated as the mean fitted slope of a sigmoid to the signal intensity profile of pixels in the blood pool as they transitioned to the myocardium. Paired statistical tests were used to compare means between the scores of the two methods. A p-value of < 0.05 was considered statistically significant. Values are presented as means ± standard deviation. ASSET array spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, SI signal intensity, BP blood pool.
Table 3
Comparison of measured functional parameters between ASSET bSSFP cine and Sonic DL bSSFP cine using SAx stack images.
All volumes were indexed to body surface area. Statistical significance was considered when p < 0.05. Values are presented as means ± standard deviation. ASSET array coil spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, SAx short axis, LVEDV left ventricular end-diastolic volume, LVESV left ventricular end-systolic volume, LVEF left ventricular ejection fraction, LVM left ventricular mass, RVEDV right ventricular end-diastolic volume, RVESV right ventricular end-systolic volume, RVEF right ventricular ejection fraction, r correlation coefficient.
Table 5
Inter-observer variability of ASSET and Sonic DL measurements.
Values are presented as means ± standard deviation. ASSET array coil spatial sensitivity encoding technique, DL deep learning, ICC intraclass correlation coefficient, LVEDV left ventricle end-diastolic volume, LVESV left ventricle end-systolic volume, LVEF left ventricle ejection fraction, LVM left ventricle mass, RVEDV right ventricle end-diastolic volume, RVESV right ventricle end-systolic volume, RVEF right ventricle ejection fraction, CI confidence interval.
Table 4
Comparison of measured strain values between ASSET bSSFP cine and Sonic DL bSSFP cine. Radial and circumferential strain were calculated using SAx views, while longitudinal strain was calculated using LAx views. Statistical significance was considered when p < 0.05. Values are presented as means ± standard deviation. ASSET array coil spatial sensitivity encoding technique, bSSFP balanced steady-state free precession, DL deep learning, LAx long axis, SAx short axis, r correlation coefficient.
"Medicine",
"Engineering"
] |
Requirement for Schizosaccharomyces pombe Top3 in the maintenance of chromosome integrity
In Schizosaccharomyces pombe, topoisomerase III is encoded by a single gene, top3+, which is essential for cell viability and proper chromosome segregation. Deletion of rqh1+, which encodes the sole RecQ family helicase in S. pombe, suppresses the lethality caused by loss of top3. Here, we provide evidence suggesting that the lethality in top3 mutants is due to accumulation of aberrant DNA structures that arise during S phase, as judged by pulsed-field gel electrophoresis. Using a top3 shut-off strain, we show here that depletion of Top3 activates the DNA damage checkpoint associated with phosphorylation of the checkpoint kinase Chk1. Despite activation of this checkpoint, top3 cells exit the arrest but fail to undergo faithful chromosome segregation. However, these mitotic defects are secondary to chromosomal abnormalities that lead to the lethality, because advance into mitosis did not adversely affect cell survival. Furthermore, top3 function is required for maintenance of nucleolar structure, possibly due to its ability to prevent recombination at the rDNA loci. Our data are consistent with the notion that Top3 has a key function in homologous recombinational repair during S phase that is essential for ensuring subsequent fidelity of chromosome segregation.
Introduction
DNA topoisomerases play important roles in DNA metabolism through their ability to catalyse the inter-conversion of topological isomers of DNA (Champoux, 2001;Wang, 1996;Watt and Hickson, 1994;Wigley, 1995). The fission yeast Schizosaccharomyces pombe expresses three topoisomerases, designated topoisomerases I, II and III. While the functions of topoisomerases I and II are quite well established, the role of topoisomerase III is not fully understood, in part because this class of enzyme possesses only weak DNA relaxation activity and is thought unlikely to participate in the maintenance of DNA supercoiling homeostasis (Goulaouic et al., 1999;Kim and Wang, 1992). The best-characterised top3 gene is from the budding yeast Saccharomyces cerevisiae. In this organism, deletion of Top3 results in hyper-recombination between repetitive DNA elements, slow growth due to a propensity to arrest at the G2-M DNA damage checkpoint, and defects in sporulation and S-phase responses to DNA damage (Chakraverty et al., 2001;Gangloff et al., 1999;Wallis et al., 1989). In contrast, deletion of top3 in S. pombe results in defective nuclear division and lethality (Goodwin et al., 1999;Maftahi et al., 1999). Two human topoisomerase III homologues, TOPOIIIα and TOPOIIIβ, have been identified (Hanai et al., 1996;Ng et al., 1999) and murine top3α has been shown to be essential for embryonic development (Li and Wang, 1998). Mice lacking top3β develop to maturity but show a reduced mean lifespan (Kwan and Wang, 2001).
Accumulating evidence suggests that RecQ helicases act in concert with topoisomerase III. Interestingly, mutation of SGS1 or rqh1 + , which encode the sole RecQ homologues found in budding and fission yeast, respectively, can suppress the deleterious effects of loss of top3 function (Gangloff et al., 1994;Goodwin et al., 1999;Maftahi et al., 1999). There are five human RecQ-like helicase proteins: BLM, WRN, RECQL, RECQ4 and RECQ5. WRN is mutated in the premature ageing disorder Werner's syndrome and RECQ4 is defective in Rothmund-Thomson syndrome (Kitao et al., 1999;Yu et al., 1996). Mutations in BLM cause Bloom syndrome (Ellis et al., 1995), the hallmark of which at the cellular level is an unusually high frequency of sister chromatid exchanges (Chaganti et al., 1974). RECQ5 physically interacts with TOP3α and TOP3β (Shimamoto et al., 2000) and the BLM protein binds to TOP3α (Wu et al., 2000), while overexpressed Caenorhabditis elegans TOP3α interacts physically with the RecQ homologue Him6 in vitro (Kim et al., 2002). Rqh1, the single S. pombe homologue, exists with Top3 in a high-molecular-weight complex (Laursen et al., 2003). The S. cerevisiae Sgs1 and Top3 proteins also interact physically, raising the possibility that Sgs1 may recruit Top3 to its site of action (Bennett and Wang, 2001). Several observations suggest that the function of the Top3-RecQ complex is required during S phase. S. cerevisiae sgs1 and top3 mutants are sensitive to hydroxyurea (HU), which blocks DNA replication by depletion of dNTP pools (Frei and Gasser, 2000;Mullen et al., 2000). In S. pombe, treatment of a temperature-sensitive top3 mutant with HU leads to increased chromosome segregation defects (Oh et al., 2002), and inactivation of rqh1 also causes HU sensitivity and a defect in recovery from S phase arrest (Enoch et al., 1992;Stewart et al., 1997). It has been suggested that rqh1 + is required to prevent recombination and that suppression of inappropriate recombination is essential for reversible S phase arrest. Consistent with this proposal, expression of a bacterial Holliday junction resolvase can partially suppress the HU and UV sensitivities of rqh1 mutants (Doe et al., 2000).
Moreover in vitro, Sgs1, BLM and WRN proteins can efficiently migrate synthetic four-way DNA structures that represent Holliday junctions (Bennett et al., 1998;Gray et al., 1997;Karow et al., 1997). Such structures can be detected during S phase and perturbation of replication leads to an elevation in their frequency (Zou and Rothstein, 1997). The idea that Top3-RecQ complexes play a role in homologous recombination is further supported by the demonstration that in budding and fission yeast these proteins act in a common pathway, and that inactivation of homologous recombination suppresses defects in top3 mutants (Laursen et al., 2003;Oakley et al., 2002;Shor et al., 2002). Furthermore, it has been shown that BLM and TOP3α act together in vitro in the resolution of a recombination intermediate containing a double Holliday junction, suggesting that in vivo they may suppress crossing over during homologous recombination (Wu and Hickson, 2003).
To understand the biological function of Top3, we have further characterised the phenotype of a top3 temperature-sensitive mutant. We show that in top3 mutants, chromosomes become intertwined during S phase, leading to DNA double-strand break accumulation, checkpoint activation, and chromosome mis-segregation. These data implicate Top3 function in processing aberrant chromosome structures during DNA replication.
Materials and Methods
Fission yeast strains and methods
Conditions for growth, maintenance and genetic manipulation of fission yeast were as described previously (Moreno et al., 1991). A complete list of the strains used in this study is given in Table 1. Except where stated otherwise, strains were grown at 30°C in YE5S or EMM2 medium with appropriate supplements. Where necessary, gene expression from the nmt1 promoter was repressed by the addition of 60 µM thiamine to the growth medium. Cell concentration was determined with a Sysmex F-800 cell counter (TOA Medical Electronic, Japan).
Immunochemistry
Cell extracts were prepared by trichloroacetic acid precipitation following glass bead disruption (Caspari et al., 2000). Immunoblotting was performed essentially as described elsewhere (Ausubel et al., 1995). The mouse anti-influenza haemagglutinin (HA) monoclonal HA-11 (Babco, Berkeley, CA) was used for detection of HA-tagged Top3 and Chk1 proteins. Cdc2 was detected using the mouse monoclonal antibody Y100 (generated by J. Gannon and kindly provided by H. Yamano). Horseradish peroxidase-conjugated anti-mouse antibodies (Sigma, Poole, UK) and enhanced chemiluminescence (ECL, Amersham) were used to detect bound antibody.
Microscopy and flow cytometry
Cells fixed in 70% ethanol were re-hydrated and stained with 4′,6-diamidino-2-phenylindole (DAPI) before examination by fluorescence microscopy. Visualisation of GFP protein in living cells, embedded in 0.6% LMP agarose after staining with Hoechst 33342 (5 µg/ml), was performed at room temperature as previously described (Wang et al., 2002). Images were acquired using a Zeiss Axioplan 2 microscope equipped with a Planapochromat 100× objective, an Axiocam cooled CCD camera and Axiovision software (Carl Zeiss, Welwyn Garden City, UK), and were assembled using Adobe PhotoShop. For flow cytometry, cells fixed with 70% ethanol were rehydrated in 10 mM EDTA, pH 8.0, 0.1 mg/ml RNase A, 1 µM sytox green, and incubated at 37°C for 2 hours. Cells were analysed using a Coulter Epics XL-MCL (Fullerton, CA).
Pulsed-field gel electrophoresis
DNA plugs were prepared according to the manufacturer's instructions (Bio-Rad, Hercules, CA) with the following modification. 2×10^7 cells for each plug were embedded in 1% low melting point agarose containing 1 mg/ml Zymolyase in suspension buffer (10 mM Tris, pH 7.2, 20 mM NaCl, 50 mM EDTA) and were digested for 1 hour at 37°C in Lyticase buffer (10 mM Tris, pH 7.2, 50 mM EDTA, 1 mg/ml Zymolyase). Plugs were incubated for 90 minutes at 55°C in 1% SDS, 50 mM Tris, pH 7.5, 0.25 mM EDTA before being treated with proteinase K in 1% lauroyl sarcosine, 0.5 M EDTA pH 8.0 for 48 hours at 55°C. Before electrophoresis, plugs were equilibrated in TE for at least 1 hour. Pulsed-field gel electrophoresis was carried out with a 0.8% chromosomal grade agarose gel in 1× TAE buffer (40 mM Tris-acetate, 2 mM EDTA) using a CHEF III apparatus (Bio-Rad, Hercules, CA). The settings were as follows: 2 V/cm; switch time, 30 minutes; angle, 106°; 14°C; 48 hours.
Results
Top3 mutants accumulate defects during S phase
In contrast to its homologue in S. cerevisiae, the S. pombe top3 + gene is essential for cell viability. Fission yeast cells deficient in top3 survive only a limited number of cell divisions before arresting as highly elongated cells with aberrant chromosome morphologies indicative of defects in chromosome segregation (Goodwin et al., 1999;Maftahi et al., 1999). To explore the role of top3 in chromosome segregation, top3-134 cells were synchronized in G1 by nitrogen starvation and then released from the arrest by transfer to nitrogen-rich medium at the non-permissive temperature. As shown in Fig. 1A, top3-134 cells entered and proceeded through S phase with kinetics similar to those of wild-type cells. Both strains initiated DNA replication by 2 hours after nitrogen re-feeding and completed S phase by 5 hours. These results indicate that top3 is not required for bulk DNA replication. However, we observed a significant delay in entry into mitosis in top3-134 cells (57% reduction in mitotic index as compared with wild-type cells at 6 hours; Fig. 1B). Moreover, top3-134 cells that underwent mitosis frequently mis-segregated their chromosomes (Fig. 1C), with chromosomes lagging between the separating daughter nuclei (Fig. 1D). After 8 hours, top3-134 cells became highly elongated and displayed aberrant chromosome morphologies. These cells often contained numerous DAPI-staining bodies indicative of fragmented chromosomes (data not shown). It has been proposed that Top3 functions together with RecQ helicase in processing DNA lesions that arise during S phase (Wu and Hickson, 2003). To determine whether passage through S phase elicited the defects in top3-134 cells, we performed analogous experiments in which the cells were allowed to complete S phase (5 hours in nitrogen-rich medium at 26°C) before inactivation of Top3 (Fig. 2A). In contrast to the results shown in Fig. 1B, top3-134 cells traversed mitosis with kinetics similar to those of wild-type cells (Fig. 2B). Furthermore, mis-segregation defects were dramatically reduced in top3-134 cells, occurring only as cells went through the next cell cycle (Fig. 2C). Taken together, these results indicate that some aspect of S phase is defective in top3-134 cells and that these defects manifest themselves only later in the cell cycle, during mitosis, as aberrant chromosome segregation.
Activation of the DNA damage checkpoint in top3 mutants
The observed mitotic delay and cell elongation suggest that inactivation of top3 leads to a checkpoint-mediated cell cycle arrest, as shown previously for S. cerevisiae top3 deletion strains (Chakraverty et al., 2001). To explore the relationship between the loss of top3 function and checkpoint pathways, we analysed the effects on the phenotype of top3-134 cells of deleting the genes encoding the effector kinases Chk1, which acts in the DNA damage checkpoint (Walworth et al., 1993), and Cds1, which acts in the S-M checkpoint (Murakami and Okayama, 1995). Interestingly, deletion of chk1 + but not cds1 + suppressed the cell cycle arrest in top3-134 cells. Whereas ∆cds1 top3-134 cells became highly elongated to the same extent as top3-134 cells at the restrictive temperature, ∆chk1 top3-134 cells were smaller and of a more uniform size (Fig. 3A). These data suggest that the elongation and cell cycle arrest are DNA damage checkpoint responses that signal through Chk1.
Fig. 1. Wild type (top3 +) and top3-134 cells were arrested in G1 by nitrogen starvation and released into nitrogen-rich medium to restart the cell cycle at the non-permissive temperature of 36°C. Cells harvested at hourly intervals were processed for flow cytometry (A), and mitotic index was assessed by scoring bi-nucleate cells (B). The percentage of cells displaying aberrant mitosis with chromosomes lagging between the separating daughter nuclei (indicated by arrowheads in D) was also determined (C). (D) Fluorescence micrographs of DAPI-stained wild type and top3-134 cells, 6 hours after release from nitrogen starvation at 36°C. Bar, 10 µm.
Recently, it was shown that Top3 exists in a high-molecular-weight complex even in the absence of Rqh1 (Laursen et al., 2003). It is therefore possible that the phenotype of top3 temperature-sensitive mutants reflects the dissociation of such a complex rather than the loss of Top3 activity per se. We therefore analysed further the effect of depletion of Top3 by generating a 'shut-off' strain top3-P41 containing top3 under the control of the thiamine-repressible attenuated nmt41 promoter, at the same time introducing an HA epitope tag sequence fused in-frame to the 5′ end of the top3 open reading frame. An HA epitope-tagged chk1 allele was also introduced into this strain. The growth defects of top3 shut-off cells were investigated by measuring their growth rate in liquid medium containing thiamine (Fig. 4). These cultures were incubated at 36°C to improve the synchrony of appearance of the top3 phenotype. Anti-HA immunoblotting showed that most of the Top3 was depleted by 3 hours after addition of thiamine (Fig. 4C). The generation time of cultures without thiamine was 4.24 hours. Depletion of Top3 inhibited cell proliferation such that the generation time measured 9 hours after addition of thiamine was 20 hours (Fig. 4A). By 18 hours after thiamine addition, these cells had completely ceased dividing. DAPI staining and microscopy revealed that cells depleted for Top3 became highly elongated 12 hours after addition of thiamine (Fig. 4B). After prolonged incubation a variety of nuclear defects were observed in these cells, including the 'cut' phenotype as well as extensive nuclear DNA fragmentation, identical to the phenotype of top3 temperature-sensitive mutants. In addition, deletion of chk1 but not cds1 suppressed the cell cycle arrest in these cells (data not shown). In line with these data, immunoblot analysis revealed the appearance of a slower-migrating band of phosphorylated Chk1, coincident with the disappearance of the Top3 signal following addition of thiamine (Fig. 4C). Phosphorylation of Chk1 is associated with activation of its protein kinase activity and is used as a surrogate marker of checkpoint activation (Walworth et al., 1993). These data suggest that the elongation and cell cycle arrest following depletion of Top3 (Fig. 4A,B) are DNA damage checkpoint responses.
Given the effects that deletion of checkpoint genes had on the cell cycle distribution in top3-134 cells, we investigated the consequences for cell viability. We reasoned that the G2 checkpoint arrest following inactivation of top3 would be important for cell survival and loss of this checkpoint control would lead to a more rapid loss of viability. However, as shown in Fig. 3B, shortening the period of G2 arrest by deletion of rad1 did not appear to affect survival adversely, because ∆rad1 top3-134 double mutants and top3-134 single mutants showed comparable levels of cell survival after shift to the restrictive temperature (Fig. 3C). These data suggest that the lethality in top3-134 cells results from the failure to resolve DNA structures rather than the defect in chromosome segregation or the failure to re-enter the cell cycle.
During the course of construction of double mutants, we identified a specific interaction between top3 and rad3 or rad26, but not with other checkpoint genes. As shown in Fig. 5, whereas deletion of rad3 or rad26 resulted in synthetic lethality in top3-134 cells, deletion of genes encoding the checkpoint sliding clamp protein Rad1 or clamp loader Rad17 had no effect on the growth of top3-134 cells at the permissive temperature. Consistent with the genetic interaction between top3 and rqh1, similar results have been described for strains combining ∆rqh1 and checkpoint mutations. Deletion of rqh1 is synthetically lethal in combination with either ∆rad3 or ∆rad26 but not with other checkpoint mutations (Murray et al., 1997). These data indicate that rad3 might have a function in addition to its role in checkpoint control that is required for the survival of rqh1 and top3 mutants.
Fig. 2. Chromosome segregation defects in top3-134 cells require passage through S phase. G1-arrested wild type (top3 +) and top3-134 cells were released into nitrogen-rich medium to restart the cell cycle at the permissive temperature of 26°C for 5 hours before shifting to the restrictive temperature of 36°C. Cells harvested at hourly intervals were processed for flow cytometry (A), and assessed for mitotic index (B) and aberrant mitosis (C) as in Fig. 1.
Top3 is required for maintenance of chromosome structure
The results presented above suggest that the lethality in top3-134 cells results from a failure to resolve DNA structures arising during S phase. To determine the nature of these abnormal structures, we used pulsed-field gel electrophoresis (PFGE) to assess the integrity of the three S. pombe chromosomes. Incomplete DNA replication and the accumulation of unresolved replication forks yield S. pombe DNA samples that cannot enter the gel (Waseem et al., 1992). If top3-134 cells accumulate abnormal structures during S phase, we would expect to see a reduction in the amount of DNA entering the gel. Indeed, we consistently observed that the total amount of DNA entering the gel was lower for top3-134 cells (from four independent isolates), even when grown at permissive temperature, than for wild-type cells (Fig. 6A). In addition, a diffuse zone of faster migrating fragmented DNA running below chromosome III, indicating DNA double-strand breaks, was also observed. These results are consistent with the chromosome segregation defects (see below), as attempts to segregate these entangled chromosomes would lead to chromosome fragmentation. Significantly, no obvious difference was seen between the intensity of chromosomes isolated from top3-134 and wild-type cells arrested in G1 (Fig. 6B). This suggests that the pattern of chromosomal abnormality seen in exponentially growing top3-134 cultures is specifically associated with the S and G2-M phases of the cell cycle. Furthermore, we consistently observed anomalous migration of chromosome III from top3-134 cells, which had a significantly faster mobility than the wild-type chromosome, and often showed a greater reduction in the intensity of signal compared with that of chromosome I or II (Fig. 6A, top3-134 numbers 2 and 4). A similar result was recently described in ∆rqh1 cells, suggesting that Rqh1 and Top3 might function together in the maintenance of the rDNA repeats located at the ends of chromosome III (Coulon et al., 2004).
To substantiate the link between DNA replication and aberrant chromosome structures in top3 mutants, top3-134 cells were treated with HU for 3 hours at 36°C. Consistent with previous findings (Oh et al., 2002), both top3-134 and wild-type cells responded to HU and arrested in S phase (Fig. 6D). As exponentially growing cells are mostly in G2, these data further support the idea that top3-134 cells must undergo S phase to accumulate chromosome segregation defects in the subsequent mitosis. This was further confirmed by PFGE, which showed that chromosomes from both strains remained in the well due to unresolved replication intermediates (Fig. 6C, time 0). After release from the HU block, both wild-type and top3-134 cells resumed the cell cycle and completed DNA replication with similar kinetics at 1 hour (Fig. 6D). However, a significant difference in chromosome integrity was observed on PFGE analysis. In contrast to the DNA from wild-type cells, which re-entered the gel and separated into three chromosomes, DNA from top3-134 cells failed to enter the gel even after 3 hours, when the cells had entered mitosis (Fig. 6C,E). These data suggest that top3-134 cells undergo mitosis with unresolved DNA structures and are not able to separate their chromosomes effectively (Fig. 6E). Similar observations have been made in rqh1 mutants upon treatment with HU (Stewart et al., 1997), in line with the idea that Top3 functions together with Rqh1 in processing DNA lesions that arise during S phase.
Nucleolar segregation is defective in top3-134 mutants
We explored the nature of the abnormal DNA structures further by using proteins tagged with green fluorescent protein (GFP). It has been shown that the organization of rDNA genes strongly influences the organization and localization of the nucleolus (Oakes et al., 1998). We therefore reasoned that the abnormal chromosome structure in top3-134 cells might affect nucleolar architecture. To address this point, nucleolar structure was visualized in living cells using a fusion protein between the nucleolar protein Gar2 and GFP (Shimada et al., 2003). In wild-type cells, Gar2-GFP occupied roughly half of the nucleus, in a discrete region distinct from the bulk chromosomal DNA, against a background of fainter nuclear signal (Fig. 7A,B). In contrast, a variety of abnormal morphologies were observed in top3-134 cells, even at the permissive temperature. Some cells showed a reduction or complete loss of the discrete signal (Fig. 7A, example 1), while others appeared to have increased and more diffuse Gar2-GFP fluorescence (examples 2 and 3). Dispersion of Gar2-GFP in the nucleoplasm was also observed (example 4). In addition, Gar2-GFP fluorescence that was largely or wholly separate from the bulk chromosomal DNA was frequently seen in binucleate cells, indicating segregation defects during mitosis (Fig. 7B; 17% in cells grown at the permissive temperature of 26°C, which increased to 49.6% 4 hours after shift to the restrictive temperature, 36°C). This phenomenon was further explored by time-lapse microscopy. As shown in Fig. 7C, in contrast to wild-type cells in which Gar2-GFP separated equally into the daughter cells at mitosis, in top3-134 cells an extended bridge of Gar2-GFP often persisted for some time between the nascent daughter nuclei. These data, although indirect, suggest that Top3 function is required at the rDNA loci, presumably to process aberrant chromosome structures arising as a result of DNA replication, to allow proper chromosome segregation during mitosis.
Fig. 5. top3-134 is synthetically lethal in combination with deletion of rad3/rad26 but not with deletion of other checkpoint genes. (A-D) Tetrads derived from diploid strains of the following genotypes were microdissected onto YES agar: (A) h+/h− rad3::ura4+/rad3+ top3-134/top3+; (B) h+/h− rad26::ura4+/rad26+ top3-134/top3+; (C) h+/h− rad1::ura4+/rad1+ top3-134/top3+; and (D) h+/h− rad17::ura4+/rad17+ top3-134/top3+. Colonies resulting from nine tetrads in each case were photographed after seven days growth at 26°C. The genotypes of the segregants were determined by replica plating and are indicated schematically [right: (A) W, top3+ rad1+ rad3+ rad17+ rad26+; T, top3-134; R, ∆rad3; (B) ∆rad26; (C) ∆rad1; or (D) ∆rad17; (A) RT, top3-134 ∆rad3; (B) top3-134 ∆rad26; (C) top3-134 ∆rad1; (D) or top3-134 ∆rad17]. Boxes (left) indicate the position of top3-134 double mutants with rad1, rad3, rad17 and rad26, respectively.
Discussion
In a previous study, we identified S. pombe top3 + as an essential gene, in contrast to the situation in S. cerevisiae, where top3 mutants are viable despite their slow growth compared with wild-type cells (Goodwin et al., 1999;Maftahi et al., 1999). To understand the growth defects and determine the cell cycle stage at which these problems arise, we have further characterised the phenotype of a top3 temperature-sensitive mutant (Oh et al., 2002). We have shown that top3 mutants accumulate defects during S phase. Like S. cerevisiae top3 mutants (Chakraverty et al., 2001), these cells arrest at G2/M in a manner that is dependent on the DNA damage checkpoint, as deletion of chk1 + abolishes the cell cycle arrest (Fig. 3). Despite activation of the DNA damage checkpoint, these cells can exit the arrest but fail to segregate their chromosomes effectively. However, these mitotic defects appear secondary to the fundamental defects in chromosome stability that lead to the lethality in these cells, as advance into mitosis does not adversely affect cell survival.
The phenotype of top3 mutants shares a striking similarity to that of the rqh1 mutants treated with HU (Stewart et al., 1997). In each case, the cells are not able to separate their chromosomes after a cell cycle delay: despite the chromosome segregation defect, the aberrant mitosis is not the primary cause of cell death in these cells. Moreover, the lethality of top3 mutation can be suppressed by inactivation of homologous recombination (Laursen et al., 2003;Oakley et al., 2002;Shor et al., 2002) and the HU sensitivity of rqh1 mutants is also suppressed by deletion of the homologous recombination gene rhp51 (our unpublished data). Given the connection between rqh1 and homologous recombination, we propose that the growth defects in top3 strains arise from the failure to resolve and process recombination intermediates rather than a direct role in chromosome segregation. Entering mitosis with sister chromatids entangled by unresolved recombination intermediates would make subsequent chromosome segregation difficult or impossible. Consistent with these data, pulsed-field gel electrophoresis showed that chromosomes isolated from top3 mutants contain elevated levels of DNA double-strand breaks, probably as a consequence of failed chromosome segregation.
The results presented here, together with our previous data on the interaction between rqh1 and top3, are consistent with an essential function of top3 during S phase. Top3, together with Rqh1, is presumably required to process or disrupt aberrant recombination structures that arise during S-phase, in a manner similar to that proposed for RecQ, the Escherichia coli equivalent of Rqh1 (Harmon et al., 1999). One interpretation of the conserved genetic interaction is that RecQ helicases act upstream of Topoisomerase III in a common biochemical pathway and generate a DNA structure that requires resolution by Top3. In the absence of Top3, this DNA structure would be toxic. However, in the absence of Rqh1 and Top3, the toxic structure would not arise. In line with this idea, we observed an accumulation of aberrant DNA structures in top3 mutants by PFGE analysis, which only occurred following DNA replication in the absence of top3 function. Determination of the exact nature of the aberrant DNA structures would be useful in understanding the function of Top3. Based on a recent biochemical analysis of a combined RecQ helicase/topoisomerase III reaction, the candidate for this toxic structure could be an unresolved double Holliday junction (Wu and Hickson, 2003). Intriguingly, the chromosome abnormality in top3 mutants appears to be more pronounced in chromosome III, where the rDNA loci are located, than in chromosome I or II. This interpretation is further supported by the aberrant nucleolar structures and segregation defects observed in top3-134 mutants (Fig. 7). The rDNA array in S. cerevisiae contains a high density of replication fork barriers (Brewer and Fangman, 1988), which represent a potential source of homologous recombination. Recently, Versini et al. (Versini et al., 2003) showed that DNA replication is specifically retarded at the rDNA locus in sgs1 cells and suggested that this could be due to their inability to prevent recombination at the abundant stalled forks in that region. Consistent with these data, loss of function of either SGS1 or TOP3 results in increased recombination in the multiple tandem rDNA array (Gangloff et al., 1994;Watt et al., 1996). Taken together, these data suggest a function for Top3-RecQ complexes in maintenance of the rDNA structure, presumably processing aberrant chromosome structures arising from DNA replication, to allow proper chromosome segregation during mitosis.
We have identified a specific interaction between top3 and rad3 or rad26, but not other checkpoint genes. Deletion of rad3 + , but not the downstream kinase genes cds1 + and/or chk1 + (Fig. 5 and data not shown), leads to a more severe growth defect in top3 cells, indicating that rad3 + might have a function in addition to checkpoint control that is required for the survival of top3 mutants. Similar results have been described in ∆rqh1 and ∆rhp51 mutants. Deletion of rad3 is synthetically lethal in combination with either ∆rqh1 or ∆rhp51 (Murray et al., 1997). In the model discussed above, we place homologous recombination upstream of the rqh1 and top3 pathway, as inactivation of homologous recombination, preventing the channelling of damaged DNA into this pathway, suppresses the lethality of top3 mutants. Clearly, in the absence of Top3-Rqh1 and homologous recombination, repair of DNA damage must proceed via an alternative route. Like its mammalian orthologue ATM, rad3 + might have an additional function in regulation of DNA damage repair (Foray et al., 1997). Further work will be required to identify the nature of this alternative pathway. In addition, top3 deletion mutants are defective in Rad53 phosphorylation following DNA damage specifically during S phase in S. cerevisiae (Chakraverty et al., 2001), suggesting a role of Top3 in DNA damage responses during but not outside of S phase. Consistent with this suggestion, Chk1 is phosphorylated following DNA damage in ∆rqh1 ∆top3 mutants (our unpublished data), suggesting that these cells are not checkpoint defective, at least at G2. Whether or not S. pombe top3 + is required for responses to exogenous DNA damage during S phase remains to be determined.
In summary, we have shown that accumulation of aberrant DNA structures in top3 mutants activates the DNA damage checkpoint, leading to a cell cycle delay at G2 and failure of these cells to segregate their chromosomes as they exit the arrest. Despite the chromosome segregation defects, the phenotype of S. pombe top3 bears a striking resemblance to its counterpart in S. cerevisiae. Together with the conserved relationship between Top3-Rqh1 and homologous recombination, these data strongly suggest that this class of enzyme executes a conserved function in these two highly divergent eukaryotic species and probably in higher eukaryotes.
A comparative study of forest methods for time-to-event data: variable selection and predictive performance
Background As a popular method in the machine learning field, the forests approach is an attractive alternative to the Cox model. Random survival forests (RSF) is the most popular survival forests method, but it has drawbacks such as a selection bias towards covariates with many possible split points. Conditional inference forests (CIF) are known to reduce this selection bias via a two-step split procedure implementing hypothesis tests, as they separate variable selection from splitting, but their computation is time-consuming. Random forests with maximally selected rank statistics (MSR-RF), proposed recently, appear to be a substantial improvement on RSF and CIF. Methods In this paper we used a simulation study and real data applications to compare prediction performance and variable selection performance among three survival forests methods: RSF, CIF and MSR-RF. To evaluate variable selection performance, we combined all simulations to calculate the frequency with which the correct variables ranked top of the variable importance measures, where a higher frequency means better selection ability. We used the integrated Brier score (IBS) and the c-index to measure the prediction accuracy of all three methods. The smaller the IBS value, the better the prediction. Results Simulations show that the three forests methods differ only slightly in prediction performance. MSR-RF and RSF might perform better than CIF when there are only continuous or binary variables in the datasets. For variable selection performance, when there are multiple categorical variables in the datasets, the selection frequency of RSF seems to be the lowest in most cases. MSR-RF and CIF have higher selection rates, and CIF perform well especially with an interaction term. The fact that the degree of correlation among the variables has little effect on the selection frequency indicates that all three forest methods can handle correlated data. When there are only continuous variables in the datasets, MSR-RF perform better. When there are only binary variables in the datasets, RSF and MSR-RF have more advantages than CIF. When the variable dimension increases, MSR-RF and RSF seem to be more robust than CIF. Conclusions All three methods show advantages in prediction performance and variable selection performance under different situations. The recently proposed MSR-RF methodology possesses practical value and is well worth popularizing. It is important to identify the appropriate method in real use according to the research aim and the nature of the covariates. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-021-01386-8.
Keywords: Survival analysis, Random survival forest, Conditional inference forest, Maximally selected rank statistics, Machine learning, Variable selection, Brier score
Background
Survival analysis, also known as time-to-event analysis, is a branch of statistics that investigates how long it takes for certain events to occur and estimates the relevant important factors. A key feature of these time-to-event datasets is that they contain either censored or truncated observations, of which right censoring is the most commonly encountered type [1]. The Cox proportional hazards regression model (Cox model) is the default choice for analyzing right-censored time-to-event data [2]. As a semi-parametric method, its flexibility derives from requiring no specification of the shape of the hazard function, which means no assumption is required on the overall shape of survival times [3]. However, its restrictive proportional hazards assumption is often not met in applications [4][5][6]; moreover, the covariates are assumed to have an additive effect on the log hazard ratio, which may be unsuitable for data containing nonlinearity or high-dimensional covariates [7,8]. Machine learning methods can deal with such data. Machine learning methods have received wide attention in the biomedical field because of their great abilities for self-learning, classification, prediction and feature identification, among which the forests approach is especially popular with scholars and researchers.
The random forests (RF) approach was first proposed by Breiman [9]. RF construct ensembles from tree base learners and then combine the results into a final decision. In RF, randomness is introduced in two forms: first, each of the randomly drawn bootstrap samples of the data is used to grow a tree [10]; second, at each node of the tree, a randomly selected subset of covariates is chosen as candidate variables for splitting [11]. With CART as the base learner, the original RF primarily focus on classification and regression problems [12]. Random survival forests (RSF) methodology proposed by Ishwaran et al. extends the RF method to right-censored time-to-event data [13,14]. Like RF, RSF can easily handle high-dimensional covariate data [15][16][17]. However, RSF also inherit the drawbacks of RF, especially the selection bias towards covariates with more possible split points, which may bias other estimates such as variable importance measures [18].
Conditional inference forests (CIF) methodology is known to reduce selection bias via a two-step split procedure implementing hypothesis tests [19]. Instead of maximizing a splitting criterion over all possible splits simultaneously as in RSF, CIF separate the algorithm for the best split variable search from the best split point search [20]. In the first step, a linear rank association test is performed to determine the optimal split variable. In the second step, the optimal split point is determined by comparing two-sample linear statistics for all possible partitions of the split variable. Although the two steps are both implemented within the theory of permutation tests, there is a change in the statistical approach between split variable and split point selection, which increases the time and storage required by CIF.
Random forests with maximally selected rank statistics (MSR-RF) methodology proposed by Wright et al. seems to be a substantial improvement over RSF and CIF [21]. Following the basic concept of CIF, MSR-RF use a two-step split procedure via hypothesis tests, which means MSR-RF also separate the variable selection and split point procedures. However, in contrast to CIF, a binary split via the maximal log-rank score is used consistently in both steps of MSR-RF, which saves time and reduces bias. The log-rank score is one of the most commonly used criterion statistics in RSF. What's more, the authors introduced a new package, ranger, which proved to be faster [22]. This package can be used from both C++ and R, which makes MSR-RF more feasible.
Despite the development of survival forests, only a few studies have compared the forest methods. The MSR-RF authors ran simulations to illustrate their method with RSF and CIF as references, including split variable selection performance for the null case of no association between covariates and survival outcome, prediction performance under several situations, and runtime performance [21]; Nasejje et al. conducted a simulation study to compare the prediction performance of RSF and CIF with all variables associated with the survival outcome, but split variable selection performance was not investigated [23]; Du et al. compared the prediction performance of RSF and CIF on a real cancer dataset without examining split variable selection performance [24]. Previous simulation studies mainly focused on the predictive performance of the methods without considering variable selection performance. Moreover, although proposed in 2016, the MSR-RF methodology has not been applied in those recent studies, while RSF and CIF remain in wide use. The main aim of this research is to popularize the MSR-RF methodology and to provide advice on using the survival forests methods with respect to variable selection and prediction. We think it is essential to study in depth and compare the survival forests under different situations. In this paper we used a simulation study and a real data study to compare prediction performance and variable selection performance among the three survival forests mentioned above: RSF, CIF and MSR-RF.
The article is structured as follows: section 2 "Methods" describes the three methods used. In section 3 "Simulation study", we present the simulation study together with the simulation results. Section 4 "Application study" introduces the two real datasets used in this study and also gives the corresponding real data analysis results. Lastly section 5 "Discussion and conclusion" presents the discussion and conclusions drawn from this study.
Random survival forests
Random survival forests (RSF) method is an extension of Breiman's RF method to right-censored time-to-event data [13]. Given original data with N subjects and M features, the RSF algorithm is described as follows:
1. Draw B bootstrap samples from the original data. Bagging generates B new training sets with replacement [10]. If the size of each training set equals N, each subject in the original data has a probability of (1 − 1/N)^N of not being selected. In this way, on average 36.8% of the data is excluded from each bootstrap sample; these excluded observations are called out-of-bag data (OOB data) [9].
2. Grow a binary survival tree for each bootstrapped sample. At each node of the tree, randomly select m (m << M) features for splitting. In practical settings m is usually set to m = √M or m = log2(M). A split is made using the candidate feature and its cut-off point that maximizes the survival difference between daughter nodes under a predetermined split rule [14].
3. Grow the tree to full size under the pre-specified constraints.
4. Calculate a cumulative hazard function (CHF) and a survival function (SF) for each tree. Average over all trees to obtain the ensemble CHF. In this way, one estimate for each individual in the data is calculated.
5. Using the OOB data, calculate the prediction error for the ensemble CHF and the variable importance measures (VIM) of the M features.
Researchers have come up with several splitting rules for RSF, among which four are representative [13]: a log-rank splitting rule that splits nodes by maximization of the log-rank test statistic; a log-rank score splitting rule that splits nodes by maximization of a standardized log-rank score statistic; a conservation-of-events splitting rule that splits nodes by finding daughters closest to the conservation-of-events principle; and a random log-rank splitting rule that splits nodes by the variable with the maximum log-rank statistic (at its predetermined random split point). The log-rank splitting rule and the log-rank score splitting rule are the most popular rules in practical use. The log-rank splitting rule is described as follows. Consider a parent node split at cut-off value c on predictor X_j. Let t_1 < t_2 < … < t_K be the distinct death times in the parent node, and let d_k and Y_k be the number of deaths and the number of individuals at risk at time t_k in the parent node, respectively, with Y_k = Y_k,l + Y_k,r and d_k = d_k,l + d_k,r, where d_k,l and Y_k,l denote those in the left daughter node, that is, Y_k,l = #{i : t_i ≥ t_k, X_ji ≤ c}. The value of |L(X_j, c)| is the measure of node separation: the larger the value of |L(X_j, c)|, the greater the survival difference between the two groups. The best split is determined by finding the predictor X_j* and split value c* with the maximum statistic value.
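The test statistic itself did not survive text extraction; a hedged reconstruction, following the standard log-rank split statistic of Ishwaran et al. in the notation just defined, is:

```latex
L(X_j, c) \;=\;
\frac{\displaystyle\sum_{k=1}^{K}\Bigl(d_{k,l} - Y_{k,l}\,\tfrac{d_k}{Y_k}\Bigr)}
     {\sqrt{\displaystyle\sum_{k=1}^{K}\frac{Y_{k,l}}{Y_k}\Bigl(1-\frac{Y_{k,l}}{Y_k}\Bigr)\Bigl(\frac{Y_k-d_k}{Y_k-1}\Bigr)d_k}}
```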
RSF naturally inherit many of RF's good properties [16]: they are non-parametric and flexible, and they can easily handle high-dimensional covariate data, which is essential in the genetics field; RSF are highly data adaptive and free of model assumptions, which is especially helpful when associations between predictors and outcome are complex, such as nonlinear effects or high-order interactions; moreover, VIM and OOB estimates can be obtained as the forest is grown. RSF can be performed through several packages. Here we use the R package randomForestSRC [25]; the log-rank splitting rule is implemented.
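As an illustration of this workflow (grow a forest with the log-rank splitting rule, then read off OOB error and VIMP), here is a minimal sketch using the veteran lung-cancer data bundled with randomForestSRC; the number of trees and other settings are illustrative, not those used in the paper.

```r
# Minimal sketch: RSF with the log-rank splitting rule (randomForestSRC)
library(randomForestSRC)                      # provides rfsrc() and the 'veteran' data

data(veteran, package = "randomForestSRC")

set.seed(1)
rsf_fit <- rfsrc(Surv(time, status) ~ ., data = veteran,
                 ntree = 200,                 # number of trees (illustrative)
                 splitrule = "logrank",       # log-rank splitting rule
                 importance = TRUE)           # permutation variable importance (VIMP)

print(rsf_fit)                                # OOB error rate and forest summary
sort(rsf_fit$importance, decreasing = TRUE)   # VIMP, largest first
```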
Conditional inference forests
Conditional inference forests (CIF) method is a tree ensemble method utilizing the theory of permutation tests [19,26]. Because CART serves as the base learner in RF, this kind of algorithm has a variable selection bias towards variables with many split points. This bias is induced by maximizing a splitting criterion over all possible splits, since the chance of finding a good split increases when a variable has more split points. The authors argued that even an uninformative variable could sit high up in the tree structure and thus result in biased estimates [18]. CIF are known to solve this problem by taking statistical significance into account [27].
CIF construct forests with conditional inference tree (CIT) as base learner [19]. Instead of maximizing a splitting criterion over all possible splits, CIT separates the algorithms for selecting the best split covariate from the best split point search. CIT first conducts association tests to determine the best split covariate, and then makes the best binary split based on standardized linear statistic.
As in RSF, assume original data with N subjects and M features, and denote a training set by L_n. At step 1, the variable selection step of the CIT splitting procedure, we need to decide whether there is any information about the response variable carried by covariate X_j, which corresponds to the partial hypothesis of independence H_0^j : D(Y | X_j) = D(Y). The association between Y and X_j is measured by a linear statistic T_j(L_n, w), where w is a case weight vector indicating each node and g_j is a non-random transformation of the covariate X_j. The influence function h depends on the responses (Y_1, …, Y_n) in a permutation-symmetric way. These functions may differ in practical settings; for time-to-event data, for example, the influence function may be chosen as the log-rank score or Savage score. The evaluation of T_j(L_n, w) is based on the joint distribution of Y and X_j, which usually remains unknown. However, at least under the null hypothesis, one can dispose of this dependency by fixing the covariates and conditioning on all possible permutations of the responses, which is known as the theory of permutation tests. Later in the algorithm, T_j(L_n, w) is standardized to a univariate test statistic for further comparison. If we are not able to reject H_0 at a pre-specified level α, we stop the recursion; otherwise we select the covariate X_j* with the strongest association (the smallest P value) as the best split variable.
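The linear statistic referred to above is also lost in extraction; in the notation of Hothorn et al., it presumably takes the form

```latex
T_j(\mathcal{L}_n, w) \;=\;
\operatorname{vec}\!\Bigl(\sum_{i=1}^{n} w_i\, g_j(X_{ji})\, h\bigl(Y_i, (Y_1,\ldots,Y_n)\bigr)^{\!\top}\Bigr)
\;\in\; \mathbb{R}^{p_j q}
```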
Once we have selected a covariate X_j* at step 1 of the algorithm, an optimal split point is determined at step 2. The goodness of a split is evaluated by a two-sample linear statistic, which is a special case of the linear statistic used at step 1 and is computed for all possible split points of X_j*. The two-sample statistic measures the discrepancy between the two daughter nodes. The split point c* whose standardized test statistic is maximal over all possible splits is selected. CIF differ from RF and RSF with respect not only to the base learner but also to the aggregation scheme applied. Instead of averaging predictions directly as in RF, the aggregation scheme works by averaging observation weights extracted from each tree. CIF are implemented in the R package called party [28].
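A minimal sketch of fitting a CIF as described here, using the party package on the veteran data from the survival package; the controls are illustrative, and it is assumed that varimp() supports the censored response in this setting.

```r
# Minimal sketch: conditional inference forest for right-censored data (party)
library(party)      # cforest(), cforest_unbiased(), varimp()
library(survival)   # Surv(); the 'veteran' data set is lazy-loaded with the package

set.seed(1)
cif_fit <- cforest(Surv(time, status) ~ ., data = veteran,
                   controls = cforest_unbiased(ntree = 200, mtry = 3))

# Permutation variable importance extracted from the fitted forest
vi <- varimp(cif_fit)
sort(vi, decreasing = TRUE)
```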
Survival forests with maximally selected rank statistics
Survival forests with maximally selected rank statistics (MSR-RF) method was proposed by Wright et al. in 2017 [21]. The authors considered an obvious disadvantage of standard CIF to be the change in the statistical approach between split variable and split point selection. As introduced above, the association test for selecting the split variable is based on a linear rank statistic, while the optimal split is a dichotomous threshold-based split. MSR-RF are designed to deal with these problems by a statistical test for binary splits using maximally selected rank statistics [29].
(Table 1 footnotes: a DiscreteU(1, k) is the discrete uniform distribution, a simple distribution that puts equal weight on the integers from 1 to k; b Σ is a square matrix with all diagonal elements equal to 1 and all off-diagonal elements equal to ρ; c M is the number of covariates.)
The forest algorithm of MSR-RF is identical to that of RSF. Randomness is induced both in the selection of samples and in the selection of covariate subsets. Finally, the results of the trees are aggregated through voting or averaging. The split procedure of MSR-RF follows the basic concept of CIF, which means a two-step procedure via hypothesis tests separating variable selection and split point search. Maximally selected rank statistics for survival endpoints are implemented through the log-rank score, which can also be used in RSF and CIF as mentioned above. Consider survival data (t_i, δ_i), where t_i is the survival time and δ_i is the censoring indicator. To describe the log-rank score splitting rule, assume the covariate X_j has been ordered so that X_j1 ≤ X_j2 ≤ … ≤ X_jn. The log-rank score a_i is defined as a 'rank' for each survival time t_i, where Γ_i is the number of observations with survival time up to t_i. The linear rank statistic for a split at point c is the sum of all log-rank scores in the left daughter node, Σ_{X_ji ≤ c} a_i. The null hypothesis H_0^j is that of no association between X_j and the survival outcome at any split point. Under the null hypothesis, the standardized log-rank score test statistic is S(X_j, c), where ā and S_a² are the sample mean and sample variance of the a_i, n_l = #{i : X_ji ≤ c} denotes the number of observations in the left daughter node, and n = n_l + n_r (both formulas are sketched below). Log-rank score splitting defines the measure of node separation by |S(X_j, c)|. The maximum statistic value yields the best split and is defined as the maximally selected rank statistic (MSR).
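The two formulas referenced in this paragraph are missing from the extracted text; hedged reconstructions, following the log-rank score of Hothorn and Lausen as used by Ishwaran et al. and Wright et al., are:

```latex
a_i \;=\; \delta_i \;-\; \sum_{k=1}^{\Gamma_i} \frac{\delta_k}{n - \Gamma_k + 1},
\qquad
S(X_j, c) \;=\; \frac{\displaystyle\sum_{X_{ji}\le c} a_i \;-\; n_l\,\bar a}
                     {\sqrt{\,n_l\, n_r\, S_a^2 / n\,}}
```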
In RSF, a split is established by maximizing a splitting criterion over all possible splits, where the values of the log-rank score test statistics are compared not only between cut-points on the same variable but also between different variables, which induces bias. MSR-RF deal with this problem with a two-step procedure. In the first step, for each candidate variable, the split point with the maximally selected rank statistic is selected. Therefore, for each variable, a P value is obtained for the best split point under the null hypothesis. The covariate with the smallest P value is selected as the splitting candidate. Only if the adjusted P value (correcting for multiple testing) of the candidate is smaller than the pre-specified type I error is the split made; otherwise no split is performed. In the second step, the MSR-RF procedure simplifies the CIF procedure, as the optimal split point is determined as a by-product of step 1, which means no new computation is needed in step 2. In this way, one procedure is used consistently in both steps of MSR-RF. The MSR-RF model is implemented in the R package called ranger [30].
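A minimal sketch of the corresponding ranger call; the maxstat split rule selects the maximally selected rank statistic described above, and the settings shown are illustrative rather than those of the paper.

```r
# Minimal sketch: survival forest with maximally selected rank statistics (ranger)
library(ranger)     # ranger() with splitrule = "maxstat"
library(survival)   # Surv(); the 'veteran' data set is lazy-loaded with the package

set.seed(1)
msr_fit <- ranger(Surv(time, status) ~ ., data = veteran,
                  num.trees = 200,
                  splitrule = "maxstat",       # maximally selected rank statistics
                  importance = "permutation")  # permutation variable importance

msr_fit$prediction.error                       # OOB prediction error
sort(msr_fit$variable.importance, decreasing = TRUE)
```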
Simulation design
In this section, we conducted a simulation study to evaluate the performance of the three survival forests described in the previous section, in terms of prediction and variable selection.
The number of Monte Carlo simulation replications was set to 1000. All forests were run with 200 trees, in which the number of candidate covariates m for splitting was set to the square root of the number of covariates M. The significance level of all hypothesis tests in this study was set to 0.05. To avoid the overfitting that arises from using the same dataset to train and test a model, in each simulation we randomly selected 80% of subjects as the training set and the other 20% as the test set.
Survival time T was generated by inverting the survival function of an exponential distribution:
T = −log(U) × exp(0.5 + β^T X_i),
where U follows the uniform distribution U(0, 1). Censoring times were generated from exponential distributions with different parameters to obtain different censoring rates.
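The data-generating step can be written down directly; below is a minimal sketch of one replication, assuming for illustration two informative standard-normal covariates, unit coefficients, and an independent exponential censoring time whose rate is tuned by hand — none of these specific values are taken from the paper.

```r
# Minimal sketch of one simulation replication (illustrative covariates and coefficients)
set.seed(1)
n <- 200; M <- 10
X <- matrix(rnorm(n * M), nrow = n,
            dimnames = list(NULL, paste0("x", 1:M)))
beta <- c(1, 1, rep(0, M - 2))               # only x1, x2 informative (assumed values)

U <- runif(n)
T_event <- -log(U) * exp(0.5 + X %*% beta)   # invert the exponential survival function
C_cens  <- rexp(n, rate = 0.10)              # tune 'rate' to reach a target censoring rate

time   <- pmin(T_event, C_cens)
status <- as.numeric(T_event <= C_cens)      # 1 = event observed, 0 = censored
mean(status == 0)                            # realised censoring proportion

dat <- data.frame(time = as.vector(time), status = status, X)

# 80% training / 20% test split, as in the simulation design
train_id <- sample(n, size = 0.8 * n)
train <- dat[train_id, ]
test  <- dat[-train_id, ]
```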
Fig. 1 Correct variable selection frequency for datasets A-D with RSF, CIF and MSR-RF. In subplots A-B, sample size was fixed at 200. Dataset A was set in a linear form, whereas dataset B was set with an interaction term. The unordered-categorical covariate associated with the outcome was (A1, B1) a covariate with 2 categories; (A2, B2) a covariate with 4 categories; (A3, B3) a covariate with 8 categories. Dataset C was set in a linear form with all ten variables generated from the multivariate normal distribution MVN(0, Σ). The subplot C(ρ) was fixed at N = 100 and 25% censoring; C(N) was fixed at ρ = 0 and 25% censoring; C(censoring) was fixed at N = 100 and ρ = 0. Dataset D was set in a linear form with all variables generated from the standard normal distribution for D1 and the binomial distribution with 0.5 probability for D2. The subplots D were fixed at N = 100 and 25% censoring with varying ratio M/N, i.e., the ratio of the number of covariates M to the sample size N.
The models that generated the datasets are listed in Table 1, where the form is described as β^T X_i. For each simulated dataset, only two covariates were set to be associated with the survival outcome, whereas the others served as noise covariates. We specified four types of models:
A. Multiple categorical covariates are included, and no interaction term exists.
B. Multiple categorical covariates are included, and one first-order interaction term exists.
C. Only continuous covariates generated from a multivariate normal distribution are included, and the degree of correlation among covariates changes.
D. Only independent and identically distributed covariates are included, and the dimension of covariates changes.
Model A was established in a linear form with ten covariates, including four continuous covariates x_1i–x_4i (two generated from the uniform distribution, x_1i, x_2i ~ U(0, 1), and two from the standard normal distribution, x_3i, x_4i ~ N(0, 1)) and six categorical covariates x_5i–x_10i (generated from discrete uniform distributions with different numbers of categories: two covariates with 2 categories, x_5i, x_6i ~ DiscreteU(1, 2); two covariates with 4 categories, x_7i, x_8i ~ DiscreteU(1, 4); and two covariates with 8 categories, x_9i, x_10i ~ DiscreteU(1, 8)). Only one continuous covariate, x_1i, and one unordered-categorical covariate were set to be associated with the outcome: (A1) covariate x_5i with 2 categories; (A2) covariate x_7i with 4 categories; (A3) covariate x_9i with 8 categories. For valid comparison, we controlled the categories in the indicator function I(·) so that 50% of subjects in each model would have a value of 1. Model A was simulated at different censoring rates of 0, 25, 50 and 75% and different sample sizes of 100, 200, 400 and 800 (training set).
Model B had the same covariate framework as model A. Model B was established in a first-order interaction form with a continuous covariate x_1i and an unordered-categorical covariate associated with the outcome, including (B1) covariate x_5i with 2 categories; (B2) covariate x_7i with 4 categories; (B3) covariate x_9i with 8 categories. Model B was simulated at different censoring rates of 0, 25, 50 and 75% and different sample sizes of 100, 200, 400 and 800 (training set).
Model C was established in a linear form with two continuous covariates x 1i and x 2i associated with the outcome. In this model all ten variables followed multivariate normal distribution MVΝ(0, Σ), where Σ is a squared matrix with all diagonal elements equal to 1 and all off-diagonal elements equal to ρ. We changed the parameter ρ to get different correlations between the covariates. Model C was simulated at different censoring rates of 0, 25, 50, 75%, different sample sizes of 40, 100, 200, 400, 800 (training set) and different correlation parameter ρ of 0, 0.2, 0.4, 0.6, 0.8.
Model D was established in a linear form with two covariates x 1i and x 2i . It was used to study the performance of the methods under different dimensions of covariates, including (D1) continuous covariates all generated from the standard normal distribution; (D2) binary covariates all generated from discrete uniform distribution. The sample size N was set to 100 (training set); the censoring rate was set to 0, 25, 50, 75%. The ratio M/N, which means the ratio of the number of covariates M to the sample size N, was set to 0.2, 0.5, 1, 2, 5.
Model evaluation
To evaluate the performance of variable selection, we ranked the VIM of each forest in each simulation and obtained the ranks of the two correct variables. Finally we combined all simulations to calculate the frequency of the correct variables ranking in the top by VIM.
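A minimal sketch of this bookkeeping, assuming a hypothetical list vim_list holding one named VIM vector per simulation replication and that x1 and x2 are the informative covariates:

```r
# Minimal sketch: frequency with which the two correct variables rank on top by VIM
# 'vim_list' is an assumed input: a list with one named VIM vector per replication
correct <- c("x1", "x2")

both_on_top <- vapply(vim_list, function(vim) {
  top2 <- names(sort(vim, decreasing = TRUE))[1:2]
  all(correct %in% top2)                 # TRUE if both informative variables rank top-2
}, logical(1))

mean(both_on_top)                        # selection frequency over all replications
```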
We used prediction error based on the Brier score and the c-index (the c-index is only exhibited in the supplement) to measure the prediction accuracy of all three models. The Brier score was originally applicable to multi-category forecasts, defined as the mean squared difference between the predicted probabilities and the actual observations [31].
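The formula referenced here is missing from the extracted text; the usual multi-category Brier score, in the notation defined just below, is presumably

```latex
BS \;=\; \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{R}\bigl(\mathrm{predict}_{ij} - \mathrm{observe}_{ij}\bigr)^{2}
```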
where N is the sample size and R is the number of categories, predict_ij is the predicted probability for individual i assigned to the possible category j, and observe_ij is the actual observation for individual i at category j (1 if it is the actual observation and 0 otherwise).
Fig. 2 Integrated Brier score for datasets A-D with RSF, CIF and MSR-RF. In subplots A-B, sample size was fixed at 200. Dataset A was set in a linear form, whereas dataset B was set with an interaction term. The unordered-categorical covariate associated with the outcome was (A1, B1) a covariate with 2 categories; (A2, B2) a covariate with 4 categories; (A3, B3) a covariate with 8 categories. Dataset C was set in a linear form with all ten variables generated from the multivariate normal distribution MVN(0, Σ). The subplot C(ρ) was fixed at N = 100 and 25% censoring; C(N) was fixed at ρ = 0 and 25% censoring; C(censoring) was fixed at N = 100 and ρ = 0. Dataset D was set in a linear form with all variables generated from the standard normal distribution for D1 and the binomial distribution with 0.5 probability for D2. The subplots D were fixed at N = 100 and 25% censoring with varying ratio M/N, i.e., the ratio of the number of covariates M to the sample size N.
The Brier score BS(t) for survival data is defined as a function of time:

BS(t) = (1/N) Σ_{i=1}^{N} [ Ŝ(t|x_i)² · I(T_i ≤ t, δ_i = 1)/Ĝ(T_i) + (1 − Ŝ(t|x_i))² · I(T_i > t)/Ĝ(t) ]

where Ĝ is the Kaplan-Meier estimate of the conditional survival function of the censoring time, Ŝ(t|x_i) is the predicted survival probability of individual i at time t, T_i is the observed time, and δ_i is the event indicator. The Brier score ranges between 0 and 1, and small values at time t denote good predictions. The integrated Brier score (IBS) introduced by Graf [32] is

IBS = (1/t_max) ∫_0^{t_max} BS(t) dt.
The smaller the IBS value, the better the prediction. Note that the IBS has gradually become a standard evaluation measure for survival prediction methods and is commonly used for survival forest prediction [33]. VIM was computed only from the training set, whereas IBS and c-index were estimated on the test set. IBS and c-index are implemented in the R package pec [34].
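A compact numerical sketch of this evaluation is given below. It assumes that the censoring survival function Ĝ is supplied as a callable accepting scalars or arrays (e.g., a Kaplan-Meier fit of the censoring times) and that predicted survival probabilities are available on a grid of time points; the authors used the R package pec, so this Python version is illustrative only.

```python
import numpy as np

def brier_score(t, times, events, surv_pred, G):
    """IPCW Brier score at a single time t (Graf-style weighting).
    times, events : observed times and event indicators (1 = death)
    surv_pred     : predicted S(t | x_i) for each subject, at this t
    G             : callable Kaplan-Meier estimate of the censoring survival function
    """
    died_before = (times <= t) & (events == 1)
    still_at_risk = times > t
    w1 = died_before / np.maximum(G(times), 1e-12)      # weight for observed events
    w2 = still_at_risk / max(G(t), 1e-12)               # weight for subjects still at risk
    return np.mean(surv_pred**2 * w1 + (1.0 - surv_pred)**2 * w2)

def integrated_brier_score(grid, times, events, surv_matrix, G):
    """Approximate IBS: average of BS(t) over the time grid via the trapezoidal rule.
    surv_matrix[i, j] = predicted S(grid[j] | x_i)."""
    bs = np.array([brier_score(t, times, events, surv_matrix[:, j], G)
                   for j, t in enumerate(grid)])
    return np.trapz(bs, grid) / (grid[-1] - grid[0])
```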
Simulation result
Variable selection

Figure 1 displays the proportion of simulations in which both correct variables were identified for models A-D.
Models A and B are presented as a function of censoring rate at a fixed sample size of N = 200. In subplots A1, A2, and A3, RSF have the lowest rate of correct variable selection. CIF and MSR-RF differ only slightly in A1 and A2: CIF are a little better in A1, whereas the opposite holds in A2. In A3, MSR-RF clearly take the lead. In subplots B1 and B2, CIF perform best, while the other two methods show equivalent performance. In B3, MSR-RF exhibit the best performance, closely followed by CIF, while RSF remain the poorest. More results are shown in Figs. S1 and S2. All three methods perform better as the sample size increases and reach nearly complete selection at a sample size of 800. RSF perform relatively poorly in all situations. As models A and B share the same covariate framework, these results indicate that RSF may have a relatively weak ability on this kind of data, whether in the linear or the interaction setting, which points to a variable selection bias.
Dataset C was set in a linear form with all ten variables generated from the multivariate normal distribution MVN(0, Σ), and we use three subplots to describe it under different conditions. The subplot C(ρ) plots the results as a function of the correlation parameter ρ between the covariates at N = 100 and 25% censoring. The selection rate varies only slightly when ρ < 0.6, and the selection rate at ρ = 0.8 is lower than at ρ = 0.6. C(N) plots the results for various sample sizes N at a fixed correlation parameter ρ = 0 and 25% censoring, which means the covariates are independent. C(N) shows that when the sample size N is as small as 40, RSF and CIF both exhibit poor results, with a selection rate below 20%. When N = 800, all three methods reach nearly complete variable selection. C(censoring) studies different censoring rates at N = 100 and ρ = 0. It shows that when the censoring rate is 75%, all three methods have a low selection rate of less than 20%, but MSR-RF have a clearly higher selection rate at the other censoring rates. Across the three subplots, the relative ordering of the methods is consistent with the additional results in Fig. S3, which verify the findings above. The variable selection frequency does not fluctuate much for small correlations. CIF do not perform well with this type of covariates.
The subplots D1 and D2 plot the results as a function of the number of covariates M at N = 100 and 25% censoring. In D1, with continuous covariates, MSR-RF offer clear advantages across the different covariate dimensions. RSF are slightly better than CIF when M/N is no more than 1, i.e., when the sample size N is no smaller than the number of covariates M. When M/N > 1, both RSF and CIF perform badly, with a selection rate of nearly 0. In D2, with binary covariates, both MSR-RF and RSF perform much better than CIF as M/N increases. More results are shown in Fig. S4. In D1, MSR-RF show clear advantages over the others, whereas CIF are obviously weak, even sustaining a selection rate of nearly 0 at a censoring rate of 75% regardless of M/N. In D2, MSR-RF closely follow RSF, and both methods have a selection rate of over 75% even when M/N = 5, while CIF perform increasingly poorly as M/N increases. As model D is aimed at studying different numbers of continuous or categorical covariates, the results show that CIF may not identify the correct covariates accurately in data with high-dimensional covariates.

Prediction performance

Figure 2 displays the mean IBS of models A-D, with the same parameter settings as Fig. 1. Models A and B were investigated as a function of censoring rate at a fixed sample size of N = 200. In subplots A1-A3, all three curves almost coincide. In subplots B1-B3, MSR-RF perform better under 75% censoring, whereas the three methods remain overlapped when the censoring rate is 50% or less. More results in Figs. S5 and S6 support these findings. The c-index results in Figs. S9 and S10 also indicate only slight differences, within 0.02, between the curves at the same settings, so it is hard to conclude which method performs best.
The subplot C(ρ) was fixed at N = 100 and 25% censoring, and IBS decreases as ρ increases. RSF perform slightly better, followed by MSR-RF and lastly CIF. C(N) was fixed at ρ = 0 and 25% censoring; it shows that MSR-RF perform best when N = 40, while RSF take over the lead when N > 40. C(censoring) was fixed at N = 100 and ρ = 0, where RSF maintain the best prediction. Overall, the three curves show only a small IBS gap, and Fig. S7 confirms this.

The subplots D1 and D2 plot the results as a function of the number of covariates M at N = 100 and 25% censoring. IBS increases as the ratio M/N increases. In D1, with continuous covariates, both MSR-RF and RSF have lower IBS than CIF. In D2, with binary covariates, RSF offer clear advantages across the different M. The same findings can be observed in Fig. S8, which contains more results. The c-index results in Fig. S12 indicate that MSR-RF are superior in model D, followed by RSF and lastly CIF; in D2, with binary covariates, RSF perform only slightly below MSR-RF. Overall, CIF still show the poorest performance.
Application study
To demonstrate the efficiency and the predictive performance of the three survival forest models, we analyzed two real datasets with MSR-RF, RSF and CIF. For forest construction, 200 survival trees were grown for each survival forest. In each repetition, we randomly selected 80% of subjects as the training set and the other 20% as the test set; this was repeated 100 times. For each repetition, the IBS of the test set was recorded, and the results are shown as boxplots. For comparison, a Cox model was also fitted with the same analysis as a benchmark.
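A minimal sketch of this repeated hold-out scheme is shown below; `fit_fn` and `ibs_fn` are placeholders for whichever survival forest or Cox implementation is used, and the data object is assumed to be a pandas DataFrame, which is our own assumption.

```python
import numpy as np

def repeated_holdout_ibs(data, fit_fn, ibs_fn, n_repeats=100, test_frac=0.2, seed=0):
    """Repeatedly split the data 80/20, fit a survival model on the training part,
    and record the integrated Brier score on the held-out part.

    fit_fn(train) -> fitted model; ibs_fn(model, test) -> IBS value.
    Both are placeholders for the survival forest / Cox implementation of choice.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(n)
        n_test = int(round(test_frac * n))
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        model = fit_fn(data.iloc[train_idx])
        scores.append(ibs_fn(model, data.iloc[test_idx]))
    return np.array(scores)   # e.g., feed these values into a boxplot
```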
The lung dataset recorded survival in patients with advanced lung cancer from the North Central Cancer Treatment Group (NCCTG) [35]. Subjects with missing values were excluded, so 167 subjects with 8 covariates were retained for analysis in our study. The median survival time is 268 (range: 5-1022) days. A total of 120 patients died, with a low censoring rate of 28.1%. Summary characteristics can be found in Table 2. The dataset has 5 continuous covariates and 3 categorical covariates, including one with 2 categories, one with 4 categories and one with 17 categories. The dataset is freely available in the R package survival [36].
The hnscc dataset is a high-dimensional breast cancer gene expression dataset with 565 subjects and 99 continuous covariates. The median survival time is 1671 (range: 2-6417) days. A total of 253 patients died, with a censoring rate of 55.2%. The dataset is freely available in the R package SurvHiDim [37]. Survival curves generated from the lung dataset and the hnscc dataset are shown in Fig. 3.
Besides overall survival, researchers may also be interested in survival at a specific time. Therefore, in addition to the overall survival cohort of each dataset, we also present the 1-year survival prediction for the lung dataset and the 4-year survival prediction for the hnscc dataset. In Figs. 4 and 5, we find that all three forests perform better than the benchmark Cox model. For the lung dataset, all three forests seem comparable in predicting 1-year survival, with CIF having the lowest median value. MSR-RF show a relatively low IBS range in overall survival prediction. Moreover, MSR-RF seem to be the most stable method here because of the smallest range and interquartile range. For the hnscc dataset, MSR-RF show the smallest range and interquartile range in both 4-year survival prediction and overall survival prediction, whereas CIF show the largest. Figure 6 presents the variable importance results for the lung dataset. CIF and MSR-RF give similar results in identifying the factors affecting the survival outcome, as ph.ecog, wt.loss, ph.karno, and meal.cal rank 1st, 4th, 5th, and 8th, respectively, in both forests. Sex and pat.karno rank 2nd-3rd, and inst and age rank 6th-7th in both forests, with slight differences in order. Despite the differences in RSF, ph.ecog, sex and pat.karno are the top three predictors among all methods, and meal.cal tends to have the lowest association with the outcome.
For the variable selection performance on the hnscc dataset, unlike in the simulation study, where the correct variables were specified by design, we have to use another variable selection method to obtain a reference set of variables.
Here we used backward stepwise selection based on the AIC criterion as a reference, and 38 covariates were selected. The VIM ranks of these 38 variables were calculated among all 99 variables. The median VIM ranks and the frequency with which the selected variables' VIM ranked in the top 38 across all repetitions are listed in Table 3. The three methods differ in the ranks: CIF have the largest range and interquartile range, indicating relatively dispersed results, whereas RSF and MSR-RF show relatively close and robust performance compared with CIF.
Discussion and conclusion
In this paper we used a simulation study and a real data study to compare the prediction performance and variable selection performance of the three survival forests mentioned above: RSF, CIF and MSR-RF. Prediction performance was evaluated through the prediction error IBS based on the Brier score, with the c-index as a supplement; the smaller the IBS value, the better the prediction. Variable selection performance was evaluated by calculating the frequency with which the correct variables ranked at the top by VIM.
The variable selection results of the simulation study show that CIF and MSR-RF both outperform RSF when there are multiple categorical variables in the dataset, with CIF showing advantages in dealing with interaction terms. When there are only continuous variables in the dataset, MSR-RF perform better. When there are only binary variables, RSF and MSR-RF are superior to CIF. The results also show that the three forest methods are not sensitive to correlation between covariates, as the degree of correlation among the variables has little effect on the selection frequency. When the variable dimension increases, MSR-RF and RSF appear more robust than CIF.
However, the IBS and c-index results for predictive performance show that the three methods are largely comparable, with only small variations observable in some situations. When there are only continuous variables or binary variables in the dataset, MSR-RF and RSF seem to perform better than CIF. The results of the application study are similar to those of the simulation study. All three forest methods outperform the benchmark Cox model in terms of IBS, but there are only small differences among the three methods. MSR-RF seem more stable, based on their smaller range and interquartile range.
Moreover, a higher correct-variable-selection frequency does not exactly correspond to a better IBS or c-index value in the simulation study, which indicates that prediction performance and variable selection performance are worth considering separately; that is the objective of this paper.
There are several limitations to our study. First, high-dimensional datasets were considered only in model D, which studied only continuous or binary variables. We acknowledge that we lack a deep investigation of high- or ultra-high-dimensional datasets in this paper, because we consider it a wide field deserving dedicated study, and we will pursue it in future work. Next, our paper studied variable selection performance and prediction performance separately. The prediction performance in simulation and application is easy to exhibit, as has been done in previous comparative studies. However, variable selection performance in applications is hard to evaluate because the correct variables associated with the outcome are unknown, whereas they can be set in simulations. For real datasets, we can only learn those variables from other variable selection methods, such as stepwise selection or LASSO, to serve as a reference. Nevertheless, no method can be regarded as a "gold standard" reference. In this paper we conducted backward stepwise selection and exhibited the distribution of the VIM ranks.
The main finding of this study is that RSF, CIF and MSR-RF all show advantages for different types of covariates. Hence it is important for researchers to choose an appropriate forest model according to the research aim (variable selection or better prediction) and the nature of the covariates. As shown in our study, MSR-RF exhibit a relatively good and stable performance in most situations. The proposers of MSR-RF previously studied its computational time and showed that its implementation is faster, which we also observed in our study. In this sense, MSR-RF are worth generalizing, and we hope the method will receive more attention in the biomedical field.
| 9,391.2 | 2021-04-28T00:00:00.000 | [
"Computer Science"
] |
Effect of Bismuth Ferrite Nanometer Filler Element Doping on the Surface Insulation Properties of Epoxy Resin Composites
In a direct-current electric field, the surface of epoxy resin (EP) insulating material is prone to charge accumulation, which leads to electric field distortion and damages the overall insulation of the equipment. Nano-doping is an effective method to improve the surface insulation strength and DC flashover voltage of epoxy resin composites. In this study, pure bismuth ferrite nanoparticles (BFO), as well as BFO nanofillers doped with La, with Cr, and co-doped with La + Cr, were prepared by the sol-gel method. Epoxy composites with various filler concentrations were prepared by blending the nano-fillers with epoxy resin. The morphology and crystal structure of the fillers were characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD). The effects of filler type and filler mass fraction on the surface flashover voltage, charge dissipation rate, and trap characteristics of the epoxy resin composites were studied. The results show that element doping of the bismuth ferrite nanofillers can further increase the flashover voltage of the composites: the flashover voltage of the La + Cr co-doped composite with a filler mass fraction of 4 wt% was 45.2% higher than that of pure epoxy resin. Data comparison shows that the surface charge dissipation rate is not the only determinant of the flashover voltage; appropriately reducing the surface charge dissipation rate of epoxy resin composites can increase the flashover voltage. Finally, the mechanism is explained in terms of the distribution characteristics of the traps on the material surface: doping with La and Cr increases the energy level depth and density of the deep traps of the composite materials, which effectively improves the surface flashover voltage of the epoxy resin.
Introduction
Composite insulators based on epoxy resin (EP) have excellent electrical insulation and thermodynamic properties and are widely used in high-voltage power transmission equipment, such as gas-insulated switchgear (GIS) and large-scale pulsed power devices [1][2][3]. Compared with AC GIS, as the voltage level continues to rise, the flashover voltage on the insulator surface under DC working conditions is significantly reduced [4,5]. Further improving the insulation level of EP composite materials has therefore become an important factor in ensuring the safe and stable operation of electrical equipment [6].
Studies have found that surface flashover discharge of epoxy resin composite insulators is a key factor leading to insulation failures of electrical equipment [7], but the mechanism of surface flashover discharge has not been fully revealed. A large amount of experimental data shows that the gas-solid interface of insulating materials accumulates a large number of charges under the action of a DC electric field or partial discharge at the electrode and gas-solid interface [8,9]. These charges easily escape from the material surface under the action of the electric field, inducing surface flashover [10,11], which reduces the insulation reliability of equipment and can even lead to insulation failure. For this reason, researchers have sought to improve the surface charge transport capacity of epoxy resin composites and accelerate the dissipation rate of surface charges, thereby increasing the surface flashover voltage [12]. However, an excessively high surface conductivity will cause discharge on the surface of the material, which seriously reduces the insulation characteristics and service life of epoxy resin composite insulation materials. Therefore, although a correlation between the material surface charge dissipation rate and the flashover voltage has been proposed in current studies, the theory is not yet complete, and experimental studies on the flashover characteristics of materials with different conductivities also need further verification. In addition, Li, S.T. et al. proposed that the trap energy level and trap distribution on the surface of polymer materials are also important factors affecting flashover [13]. Physical defects introduced during material synthesis and use produce shallow traps, which are helpful for carrier transport, while deep traps caused by chemical defects trap the charge and inhibit carrier movement. Yu, K.K. et al. found that shallow traps on the surface allow the surface charge of the dielectric to be easily excited out of the trap, resulting in a decrease of the surface flashover voltage [14]. However, Li, S.T. and Shao, T. et al. pointed out that if the dielectric traps are too deep, the charge accumulated in the traps will cause serious distortion of the local field strength and store a large amount of polarization energy; when subjected to external interference, the flashover voltage will be reduced [13,15]. Therefore, the trap-theory explanation of flashover still needs to be improved, and the method of increasing the flashover voltage by adjusting the trap distribution of the material requires in-depth study.
Lewis, T. J. proposed the concept of nano-dielectric in 1994 [16]. The doping modification of inorganic nanoparticles can change the trap energy level and conductivity of epoxy resin composites and effectively improve the charge dissipation ability and flashover voltage of epoxy resin composites [17,18]. He, J.L. et al. studied the interface effect between the filler and matrix, and the interaction between the nanoparticles and the polymer molecular chains made the interface have a particular structure and properties, which affected many electrical properties of the material [19].
Semiconductor nano-materials have a narrower bandgap than insulators. Preparing epoxy resin composites by doping with them can reduce the energy barrier that carriers must overcome to migrate out of traps, increase the rate of charge dissipation on the surface of the material, and reduce the internal electric field distortion [20]. At present, there are relatively few studies on the effect of semiconductor nanofillers with different morphologies on the surface insulation properties of epoxy resin composite materials. One-dimensional nanofibers have a high specific surface area and aspect ratio, as well as distinctive atomic and electronic structures and other properties, which affect the comprehensive properties of composites [21]. Chi, Q. studied the effect of nano-TiO2 morphology on the breakdown field strength and space charge characteristics of polyethylene composites. The results show that nano-TiO2 fiber doped composites have a higher breakdown field strength than those doped with nano-TiO2 particles. In addition, finite element simulation results show that the space charge accumulation and electric field distortion of the nanofiber-doped composite materials are significantly reduced [22]. Zhang, D. used multifunctional carbon nanofibers (CNF) to dope epoxy resin. The modified composites have good sand erosion resistance, and the flexural strength of the composites is increased by 23.8% by the carbon fiber [23]. Bismuth ferrite (BiFeO3, BFO), a semiconductor material, is widely used in digital storage devices, magnetoelectric induction equipment and other fields [24]. BFO materials theoretically have an unexpectedly large remanent polarization [25]. In the experimental preparation of BiFeO3, the Bi element volatilizes easily, and Fe3+ tends to transform into Fe2+. These factors increase the leakage current density of the material, making it difficult to polarize; the application of BFO materials is therefore limited [26]. Yan F et al. studied the effect of A-site doping with the lanthanide element La3+ on the performance of BiFeO3 films and prepared pure BiFeO3; the density of the doped film was increased, and the leakage current was decreased [27]. Zhang Y et al. studied the effect of B-site doping with Cr3+ on the properties of BiFeO3 materials. It was found that the dielectric constant of BiFeO3 was increased, the dielectric loss was decreased, and the leakage current density was decreased. However, as the proportion of element doping increases further, the properties of BiFeO3 materials decline [28]. Currently, there are relatively few studies on epoxy resin modified by bismuth ferrite nanofillers doped with other elements. The effect of element doping of bismuth ferrite nanofillers on the surface insulation properties of epoxy resin composites is therefore worthy of in-depth study.
In this study, pure BFO nanofillers and BFO nanofillers doped with La and Cr were prepared by the sol-gel method, and EP composite samples with different filler mass fractions were prepared. The modification effect was characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD) tests. Negative polarity DC flashover and charge dissipation rate were measured. The effect of doping of bismuth ferrite nano-filler elements on the surface insulation properties of epoxy resin composites was analyzed. Based on the trap distribution characteristics, the variation law of surface insulation properties of materials is analyzed. The results can provide a certain reference for further improving the surface insulation performance of the EP composite.
Preparation of BiFeO 3 Nanoparticles
In this paper, the pure bismuth ferrite (BiFeO3, BFO) nanofiller was prepared by the sol-gel method; the specific steps are as follows:

• Grind the dry gel into powder, place it in a quartz boat, and push it into a tubular furnace for heating at a rate of 5 °C/min. After holding at 120 °C for 1 h, 300 °C for 2 h, and 600 °C for 2 h, the dry gel is extracted and quickly cooled to room temperature;
• Wash twice with dilute nitric acid of 10% volume fraction (to remove the Bi oxide or Bi salt produced by the excess Bi(NO3)3), and then wash twice with deionized water (to remove the nitrate ions introduced by the nitric acid).
The pure BFO nanofillers were obtained by the above process. The preparation method for the element-doped BFO nanofillers was the same as that for the pure BFO nanofillers, except for differences in reagent dosage. For 5% La doping (A-site doping), part of the Bi(NO3)3·5H2O was replaced by La(NO3)3·6H2O at a molar ratio of 0.05:0.95. For 3% Cr doping (B-site doping), Cr(NO3)3·9H2O was used to replace part of the Fe(NO3)3·9H2O at a molar ratio of 0.03:0.97. For 5% La + 3% Cr doping (A-site and B-site co-doping), both substitutions were made at the same time, with a molar ratio of 0.05:0.95:0.03:0.97.
To compensate for the Bi element volatilized during the heating process, the Bi(NO3)3·5H2O was added in excess by 10% during the preparation of the nanoparticles. In addition, for convenience of description, the BFO nanofillers doped with 5% La, 3% Cr and 5% La + 3% Cr are denoted BL5FO, BFC3O, and BL5FC3O, respectively.
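As a worked example of what these ratios imply, the sketch below computes the nitrate amounts for the co-doped composition; the 0.01 mol batch size, the rounded molar masses, and the assumption that the 10% Bi excess applies to the 0.95 Bi fraction are our own illustrative choices, not values from the paper.

```python
# Approximate molar masses in g/mol (rounded; assumption for illustration).
M = {
    "Bi(NO3)3.5H2O": 485.07,
    "La(NO3)3.6H2O": 433.01,
    "Fe(NO3)3.9H2O": 404.00,
    "Cr(NO3)3.9H2O": 400.15,
}

def bl5fc3o_batch(n_formula_units=0.01, bi_excess=0.10):
    """Moles and masses for nominal Bi0.95La0.05Fe0.97Cr0.03O3 with 10% excess Bi salt."""
    moles = {
        "Bi(NO3)3.5H2O": 0.95 * n_formula_units * (1 + bi_excess),
        "La(NO3)3.6H2O": 0.05 * n_formula_units,
        "Fe(NO3)3.9H2O": 0.97 * n_formula_units,
        "Cr(NO3)3.9H2O": 0.03 * n_formula_units,
    }
    return {k: (n, n * M[k]) for k, n in moles.items()}  # (mol, grams)

for reagent, (mol, grams) in bl5fc3o_batch().items():
    print(f"{reagent}: {mol:.5f} mol = {grams:.3f} g")
```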
Preparation of Epoxy Resin Composites
The preparation method for the EP composite samples doped with BFO nano-filler is as follows:

• The prepared BFO nanofillers were placed in a beaker, and DGEBA was added. The mixture was mechanically stirred for 60 min in an oil bath at 60 °C and dispersed by ultrasonication for 30 min to disperse the fillers fully in the resin;
• The curing agent MTPHA and the accelerator DMP-30 were added at a DGEBA:MTPHA:DMP-30 mass ratio of 100:80:1, and the mixed suspension was stirred thoroughly for 30 min and degassed;
• The mixed suspension was poured into a polytetrafluoroethylene (PTFE) mold, and gradient curing was performed at 80 °C for 1 h followed by 100 °C for 10 h. After cooling to room temperature, the mold was stripped to obtain the BFO/EP epoxy resin composite samples.
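A short worked calculation of the component masses for a given filler loading is sketched below; the 50 g resin-system batch and the convention that the filler mass fraction is taken relative to the total composite mass are our assumptions, since the paper does not state them.

```python
def composite_masses(filler_wt_frac, resin_system_mass=50.0):
    """Masses (g) of DGEBA, MTPHA and DMP-30 (mass ratio 100:80:1) plus BFO filler.
    Assumes the filler fraction is relative to the total composite mass and a
    50 g resin-system batch (both are illustrative assumptions)."""
    ratio = {"DGEBA": 100.0, "MTPHA": 80.0, "DMP-30": 1.0}
    total_ratio = sum(ratio.values())
    resin = {k: resin_system_mass * v / total_ratio for k, v in ratio.items()}
    # filler / (filler + resin) = f  =>  filler = f * resin / (1 - f)
    filler = resin_system_mass * filler_wt_frac / (1.0 - filler_wt_frac)
    return {**resin, "BFO filler": filler}

print(composite_masses(0.04))   # 4 wt% filler loading
```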
Flashover Voltage and Surface Charge Dissipation Test
The flashover voltage of the prepared EP composite was tested. The flashover voltage test platform is shown in Figure 1. During the flashover test, the samples are placed in a cylindrical sealed experimental cavity. The outer diameter of the cavity is 300 mm, the height is 350 mm, and the thickness is 8 mm. The inside of the vessel is an air environment. A discharge electrode is installed inside the vessel, and a high-voltage DC power supply is connected externally through the high-voltage bushing. To facilitate the experiment, a two-dimensional mobile device is installed inside the cavity to realize the smooth movement of the sample table. The cavity is equipped with an observation hole, which is convenient for observing the internal discharge phenomenon. The discharge electrode adopts the needle-needle electrode form to simulate the extremely uneven electric field and make flashover easier. The total length of the electrode is 15 mm, the bottom diameter is 4 mm, and the tip inclination angle is 15°. The distance between the high voltage electrode and the ground electrode is 7 mm. The lower surface of the electrode is planar, and the electrode surface is smooth and free of burrs. In this way, the electrode and the sample surface can be completely attached so that the flashover occurs completely on the composite sample surface. In addition to the above equipment, the test platform also includes a high voltage DC power supply (0 to −50 kV, adjustable), a high voltage measurement probe (Tektronix, P6015A, 1000:1, San Francisco, CA, USA), a digital oscilloscope (Tektronix DPO 2002B, San Francisco, CA, USA), a protection resistance (1 MΩ), etc. The voltage variation can be obtained by the high voltage measuring probe and digital oscilloscope. The protection resistor can prevent excessive current during flashover, which may cause harm to persons and equipment.
The specific steps of the flashover test on the EP composite samples are as follows:

• Place the EP composite material in deionized water and clean it with an ultrasonic cleaner for 10 min;
• Place the sample in the drying oven, and wear disposable gloves to transfer the sample after it is completely dry;
• Place the sample on the sample table and operate the two-dimensional moving device to position the needle-needle electrode at the center of the sample with good contact;
• Start the high-voltage power supply and raise the voltage at a rate of 100 V/s until flashover occurs along the surface.
The occurrence of flashover is judged by observing the oscilloscope waveform and the arc between the electrodes. The voltage is adjusted to zero immediately after flashover, and the next surface flashover test is conducted after an interval of 60 s. Each group of samples is measured 10 times, and the average value is recorded as the surface flashover voltage of the sample.
The schematic diagram of the test platform for charge dissipation characteristics is shown in Figure 2. The measuring system uses a high-voltage DC power supply to make the charging pin generate corona discharge above the sample and inject charge into the surface of the test sample. The active attenuation probe (Trek 3455ET, Bavaria, Germany) was used to detect the surface point potential information. The potential was measured by an electrostatic potentiometer (Trek P0865, 10 kV, Bavaria, Germany), and the measured results were stored in a data acquisition card (Altai USB3202, Beijing, China). The specific steps for the charge dissipation characteristics test are as follows:
• Place the sample to be tested in deionized water, ultrasonically clean it for 10 min, and then dry it in a drying oven at 60 °C;
• Place the dried sample on the stage and perform a surface charge test on the sample surface. If the surface potential of the sample is not zero, keep the sample in the 60 °C drying oven until its surface potential is 0; this ensures that no charge is accumulated on the sample surface before the test;
• Move the charging pin to 2 mm above the surface of the sample, apply a negative DC voltage with an amplitude of 7 kV to the charging pin, and charge the sample for 1 min;
• Remove the charging pin immediately after charging is completed. Using the two-dimensional operation device, transfer the probe to directly above the charging position and open the acquisition card to collect potential data every 100 ms; the average of 10 potential readings is calculated and output [29]. The charge dissipation experiment lasts 30 min. After the test is completed, the exponential decay curve of potential versus time is obtained.

Characterization of BFO Nano-Filler and EP Composite

Figure 3 shows the SEM [30] results for the pure-phase BFO nanoparticles and the BL5FO, BFC3O, and BL5FC3O nanoparticles. According to Figure 3, the particle size distribution of the pure BFO nanoparticles is uneven, and the particle surfaces show an obvious prismatic morphology, which may be the result of preferred growth of certain crystal planes [31]. Element doping makes the filler particles more uniform and further reduces the particle size. The decrease in particle size may be caused by the substitution of Cr3+ and La3+ (with smaller ionic radii) for Fe3+ and Bi3+ (with larger ionic radii). When elements with a smaller ionic radius are incorporated into the lattice, the lattice deforms and contracts, resulting in a denser network [32][33][34]. Previous work found that the smaller the particle size of the filler, the lower the flashover voltage of BFO/EP composites; however, considering the overall performance of the composite material, the BFO filler particle size has little effect on the epoxy resin composite [35].
As shown in Figure 4, to study the lattice structure of the experimentally prepared BFO nanofillers, X-ray diffraction (XRD) tests [36] were performed on each nanofiller. Cu Kα radiation (λ = 1.5418 Å) was used as the source, with a scanning range of 20° to 80° and a scanning interval of 0.02°. From the results in Figure 4, it can be seen that the XRD diffractograms of each nanofiller are consistent with the standard diffractogram of BiFeO3. The characteristic diffraction peaks of the (012), (104), (110), (006), (024), (116), (018) and (220) crystal planes appear in the X-ray diffraction patterns, which confirms that the prepared samples have the rhombohedral perovskite structure with space group R3c [37], and all the diffraction peaks are relatively sharp, indicating that the obtained samples have good crystallinity [38]. There is no impurity peak in BL5FO, but there is a small amount of impurity phase in the BFC3O and BL5FC3O samples, as indicated by "*", which is mainly the impurity phase Bi2Fe4O9 [39]. However, some impurity phases are free of the unit cell, and their diffraction peaks are not regularly reflected in the XRD diffractogram (as indicated by the unlabeled markers in the figure). These diffraction peaks are difficult to match with the material library, so their nature cannot be identified. It is worth noting that most phases in the BL5FO, BFC3O and BL5FC3O compounds have the same structure as BFO, i.e., R3c, which means that the substitution of La and Cr does not affect the crystal structure of the parent BFO compound; this is significant if the ferroelectric properties of pure BFO are to be maintained [40].

Figure 5 shows the energy dispersive spectroscopy (EDS) [41] spectra of BL5FO, BFC3O, and BL5FC3O, which were used to analyze the elemental composition and content of the materials. The EDS results show that the La and Cr elements have been successfully doped into the lattice structure of BFO.

EP composites doped with BFO nanoparticles were prepared according to the method in Section 2.3. Figure 6 shows the SEM results for EP composites doped with pure-phase BFO nano-filler. As shown in Figure 6, with increasing filler mass fraction, the density of the filler in the matrix increases and agglomeration occurs. When the filler mass fraction is 2% and 8%, the nano-filler in the EP composite is relatively uniform. As shown by the red circle in Figure 6c, when the filler mass fraction is 16%, the nanoparticles in the EP composite show obvious agglomeration.

Figure 7 shows the flashover voltage test results. The ambient temperature of the flashover voltage test is 27 °C, and the relative humidity is 30%. According to the results in Figure 7a-d, the flashover voltage of the EP composite after doping with the nano-filler behaves in a complex way: with increasing filler mass fraction, the flashover voltage of the BFO/EP composite follows an "M"-shaped pattern, the flashover voltage of the BL5FO/EP composite changes in an inverted "V" shape, and the flashover voltage of the BFC3O/EP and BL5FC3O/EP composites follows an "N"-shaped pattern. The yellow dotted line in Figure 7e is the flashover voltage of the pure EP system. According to the figure, the flashover voltage of each EP composite material is higher than that of the pure EP system when the mass fraction of the nano-filler is less than 16%.
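As a side note on how such peak positions relate to lattice spacing, the sketch below applies Bragg's law to convert 2θ positions into d-spacings for Cu Kα radiation; the example 2θ values are illustrative placeholders, not the measured peak positions of this work.

```python
import numpy as np

def bragg_d_spacing(two_theta_deg, wavelength_angstrom=1.5418):
    """First-order Bragg condition: d = lambda / (2 sin(theta)), with theta = 2theta / 2."""
    theta = np.radians(np.asarray(two_theta_deg, dtype=float) / 2.0)
    return wavelength_angstrom / (2.0 * np.sin(theta))

# Illustrative 2-theta values only (not measured data from this study).
print(bragg_d_spacing([22.4, 31.8, 32.1]))   # corresponding d-spacings in angstroms
```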
The analysis shows that the introduction of nano-fillers can provide a path for the surface charge to dissipate, thereby reducing the degree of electric field distortion on the surface of the composite and increasing the value of the flashover voltage. The influence of nano-filler doping concentration on EP composites is more complicated: with the increase in doping concentration, the distance between nano-filler decreases, the number of charge dissipation channels increases, and the flashover voltage increases. However, increasing the doping concentration will reduce the uniformity of the dispersion of nanoparticles in the matrix, resulting in regional differences in parameters, such as surface conductance and traps on the EP composite surface. Increasing the doping concentration will also increase the distortion of the surface electric field and reduce the flashover voltage. In addition, as the doping concentration increases, the interface between the nanoparticles and the polymer begins to overlap, and the scattering of carriers is enhanced. Meanwhile, the agglomeration of nanoparticles causes new defects in the materials [38]. In turn, the surface flashover voltage of the EP composite decreases. Under the combined effect of the above-mentioned reasons, the flashover voltage of EP composites has a complicated change pattern, as shown in Figure 7.
Experimental Results of Flashover Voltage
Comparing the results in Figure 7, the flashover voltage of the EP composites filled with element-doped BFO nanofillers is higher than that of the composite filled with pure BFO nano-filler. At 4 wt%, the flashover voltage of the BL5FC3O/EP composite is increased by 45.2% compared with the pure EP system, which gives it good application prospects.
Effect of BFO Element Doping on Charge Dissipation Characteristics of EP Composites
Under DC conditions, a large amount of charge accumulates on the surface of the insulator, which causes electric field distortion at the gas-solid interface, leading to surface flashover failure [8]. Therefore, it is necessary to study the surface charge dissipation characteristics of EP composites in order to analyze their flashover process. As shown in Figure 8, the addition of a nano-filler can improve the charge dissipation rate of the composites. At lower filling concentrations, the charge dissipation rate of the composite is significantly higher than that of the pure EP system. The conductivity of the composite material increases with increasing nano-filler concentration, so the dissipation rate of the surface charge should also increase. However, a high concentration of nano-filler causes agglomeration of the filler in the matrix, which introduces new defects. Therefore, when the filling concentration is 16 wt%, the charge dissipation rate of the BFO/EP composite decreases. In addition, according to the results in Figures 7 and 8, for the BFO/EP composites, the charge dissipation rate of the composite with 16 wt% filling concentration is higher than that at 2 wt% and 4 wt%, but its flashover voltage is only 10.6 kV. For the BL5FO/EP composites, the charge dissipation rate increases with increasing filling concentration, but the flashover voltage follows an inverted "V" shape, increasing first and then decreasing. Therefore, a greater surface charge dissipation rate does not necessarily mean a higher flashover voltage; appropriately reducing the dissipation rate of the surface charge can help increase the flashover voltage.

Figure 9 shows the surface charge dissipation curves of the EP composites doped with the various nano-fillers at a filling concentration of 4 wt%. It can be seen that doping the BFO nanoparticles with elements reduces the surface charge dissipation rate of the EP composites. The analysis shows that pores and cracks in the material leave voids on its surface, and the reduction of Fe3+ to Fe2+ and the volatilization of Bi create oxygen vacancies in the material. These factors give the pure-phase BFO sample a higher conductivity, so the surface charge dissipation rate of the corresponding EP composite is higher at the same doping concentration [26]. The doping of La and Cr helps reduce the conductivity of the BFO nanoparticles, mainly for the following reasons. Firstly, doping the material with high-valence Cr controls the valence fluctuation of Fe and reduces the number of oxygen vacancies. Secondly, the La-O bond is more difficult to break than the Bi-O bond (dissociation energy of the La-O bond: 799 ± 13 kJ/mol; dissociation energy of the Bi-O bond: 343 ± 6 kJ/mol) [42]; replacing part of the Bi with La can therefore inhibit the volatilization of Bi and stabilize the perovskite structure. Finally, La and Cr doping can improve the surface morphology of BFO and increase the density of the material network, and the reduction of voids helps reduce the leakage current density. Therefore, the surface charge dissipation rate of the EP composite decreases after La doping, Cr doping, or La + Cr co-doping.
Effect of BFO Element Doping on the Trap Characteristics of EP Composites
The trap structure can affect the charge transport in the material system by consolidating and constraining the charge. If the charge breaks free of the trap, it can cause a large amount of energy release and induce flashover. Therefore, by studying the trap characteristics of composites, the mechanism of flashover voltage increase in EP composites can be explored [43,44].
Based on the basic theory of the ISPD method [48], the trap energy level can be expressed as a function of the surface potential decay (Equation (1)), where E_T is the trap level; k_B is the Boltzmann constant; T is the absolute temperature of the test environment, in K; h is the Planck constant; ν is the natural vibration frequency around the defect point in the plane orthogonal to the direction of motion; Q_S(t) is the trapped charge density; ε_0 is the vacuum permittivity; ε_r is the relative permittivity; d is the thickness of the sample; and V_s(t) is the surface potential of the sample. Studies have shown that the surface potential decay curve of the sample can generally be fitted with a double exponential function,

V_s(t) = A_1 exp(−t/τ_1) + A_2 exp(−t/τ_2),     (2)

where A_1, τ_1 and A_2, τ_2 are fitting parameters. After the fitting analysis, the trap energy level distribution can be obtained from Equations (1) and (2).

Figure 10 shows the trap energy level distribution characteristics of the pure EP system and of the EP composites filled with the various nano-fillers (filling concentration 4 wt%). The two peaks on each curve represent the shallow traps (Area A) and the deep traps (Area B), respectively. Figure 10 shows that the shallow trap energy level depth, shallow trap density, and deep trap density of the BFO/EP composite are close to those of the pure EP system, whereas the deep trap energy level depth of the BFO/EP composite is significantly higher than that of the pure EP composite. Compared with the BFO/EP composite, there is no significant change in the shallow trap energy level depth of the element-doped BL5FO/EP, BFC3O/EP, and BL5FC3O/EP composites; however, their shallow trap density decreases, while both the deep trap energy level depth and the deep trap density increase. The BL5FC3O/EP composite co-doped with La and Cr contains only deep trap peaks, and its deep trap energy level depth and density are the highest.
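The fitting step can be sketched as follows. The double-exponential form follows Eq. (2); the conversion of decay time to trap energy uses the commonly cited ISPD relation E_T = k_B·T·ln(ν·t), which we state here as an assumption because the exact form of Eq. (1) is not reproduced above, and the attempt-to-escape frequency ν = 10^12 s^-1 is likewise an illustrative value.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 1.380649e-23      # Boltzmann constant, J/K
NU = 1e12               # assumed attempt-to-escape frequency, 1/s (illustrative)

def double_exp(t, a1, tau1, a2, tau2):
    """Double-exponential surface-potential decay, Eq. (2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_potential_decay(t, v_s):
    """Fit measured surface potential V_s(t); p0 is only a rough starting guess."""
    p0 = [v_s[0] / 2, 100.0, v_s[0] / 2, 1000.0]
    params, _ = curve_fit(double_exp, t, v_s, p0=p0, maxfev=20000)
    return params

def trap_energy_eV(t, temperature=300.0, nu=NU):
    """Commonly used ISPD relation E_T = k_B * T * ln(nu * t), in eV (an assumption here)."""
    return K_B * temperature * np.log(nu * np.asarray(t, dtype=float)) / 1.602176634e-19

# Toy usage with synthetic data (for illustration only).
t = np.linspace(1, 1800, 200)                       # 30 min of decay, in seconds
v = double_exp(t, 3.5e3, 150, 2.0e3, 900) + np.random.default_rng(0).normal(0, 5, t.size)
print(fit_potential_decay(t, v))
print(trap_energy_eV([10, 100, 1000]))              # trap depths at representative times
```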
When a voltage is applied, the local field strength at the junction of the electrode, gas, and sample surface is high, and primary electrons are first generated there. Some primary electrons are easily absorbed by the polymer surface and form a space charge, which distorts the electric field. The other primary electrons cause the emission of secondary electrons under the action of the distorted electric field, which leads to an electron avalanche and eventually flashover [49][50][51].
Shallow traps have a lower energy level, and the bound charge escapes easily, whereas deep traps have a deeper level and the bound charge is difficult to release. Therefore, shallow traps are more effective than deep traps at improving the charge dissipation rate and the uniformity of the electric field. Compared with the BFO/EP composite, the shallow trap level depth of the EP composites does not change significantly after the BFO is doped with La, Cr, or co-doped with La and Cr. However, the depth and density of the deep trap energy level of the element-doped composites are both increased, and those of the BL5FC3O/EP composite are the highest, which is also consistent with the charge dissipation results in Figure 9.
However, according to the above analysis, although the charge dissipation characteristics affect the flashover voltage, they are not the key factor. As can be seen from Figure 10, each sample shows obvious deep trap characteristics. Further analysis shows that once electrons in shallow traps escape, they easily trigger secondary electron emission and promote the occurrence of flashover, whereas deep traps make it difficult for electrons to escape. Increasing the energy level depth and density of the deep traps can reduce the number of primary electrons and make it difficult for secondary electron emission to develop on the material surface, thereby increasing the flashover voltage [52]. Compared with the BFO/EP composite, both the depth and density of the deep trap energy levels and the flashover voltage of the EP composites are increased after the BFO is doped with La, Cr, or co-doped with La and Cr.
Conclusions
In this paper, bismuth ferrite (BFO) nanofillers doped with La, Cr, and La + Cr were prepared and blended into epoxy resin to prepare epoxy resin (EP) composites. The surface insulation characteristics of the EP composites were then tested and analyzed. The main conclusions are as follows:

The flashover voltage of the EP composite can be further improved by doping the BFO nano-filler with La and Cr. With 4 wt% of the La + Cr co-doped BFO nano-filler, the flashover voltage of the EP composite reaches 14.52 kV. Compared with the pure EP material, the flashover voltage of this composite is increased by 45.2%, which shows good application prospects.
Compared with the BFO nanofillers without element doping, the surface charge dissipation rate of EP composites is decreased after adding La and Cr doped BFO nanofillers. Therefore, the surface charge dissipation rate is not the only determinant of the flashover voltage. Appropriately reducing the surface charge dissipation rate of epoxy resin composites can increase the flashover voltage.
After the BFO nanofiller is doped with La and Cr, the deep trap energy level depth and density of the EP composite are further increased, which helps suppress the secondary electron emission process and increases the flashover voltage. At a filling concentration of 4 wt%, the EP composite filled with the La + Cr co-doped BFO nano-filler has the highest deep trap energy level depth and density, as well as the maximum flashover voltage.
| 7,404.6 | 2021-08-26T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Design considerations for the enhancement of human color vision by breaking binocular redundancy
To see color, the human visual system combines the response of three types of cone cells in the retina—a compressive process that discards a significant amount of spectral information. Here, we present designs based on thin-film optical filters with the goal of enhancing human color vision by breaking its inherent binocular redundancy, providing different spectral content to each eye. We fabricated a set of optical filters that “splits” the response of the short-wavelength cone between the two eyes in individuals with typical trichromatic vision, simulating the presence of approximately four distinct cone types. Such an increase in the number of effective cone types can reduce the prevalence of metamers—pairs of distinct spectra that resolve to the same tristimulus values. This technique may result in an enhancement of spectral perception, with applications ranging from camouflage detection and anti-counterfeiting to new types of artwork and data visualization.
The visual fields of the two eyes largely overlap, which makes it possible to provide different spectral content to each eye via a wearable passive multispectral device comprising two optical transmission filters (Fig. 2).
A number of existing vision-assistive devices or techniques operate by breaking binocular redundancy, though usually in the spatial rather than spectral domain. Examples include hemianopia (partial blindness in the left or right visual field) treatment using spectacles with a monocular sector prism that selectively relocates the visual field in one eye, leaving the other eye unaffected, and thus conferring an additional 20° of visual-field sensitivity for binocular vision 13,14 , and the treatment of presbyopia by correcting one eye for near vision and the other for distance vision 15 .
We break binocular redundancy spectrally by using filters that selectively attenuate different wavelength bands to yield effective cone sensitivities (i.e., the products of the cone sensitivities and the filter transmission spectra) that are different between the two eyes. This approach is intended to increase the number of effective cone types while preserving most spatial information. In this vein, the use of two simple band-pass filters was previously demonstrated to increase the dimensionality of color vision in dichromatic individuals (i.e., those with two functioning cone types) 16 . Conversely, the goal in the present work is to enhance the dimensionality of a trichromat's visual system to beyond that of a typical human. Such an approach was briefly suggested by Cornsweet in 1970 17 , but to our knowledge no specific design has been proposed or realized. We note that the use of even a single filter positioned in front of both eyes can help distinguish certain metamers 17 , with the caveat that previously distinguishable spectra can become metamers when viewed through the filter; that is, a similar number of metamers (usually more) are created as are destroyed. In contrast, the use of two filters might be used to decrease the overall number of possible metamers. For this work, if at least one of the two filters can be used to differentiate a pair of spectra, we consider the pair to no longer be metamers. We emphasize that no behavioral data is presented in this manuscript.
Results and Discussion
Filter Design and Construction. The filter pair was designed using a standard psychophysical model to determine the perceived (monocular) colors corresponding to particular spectra 2,18 . The perceived colors were calculated using the International Commission on Illumination (CIE) 1931 2° standard-observer matching functions, and monocular color differences (e.g., between colors 1 and 2) were calculated in the CIELAB color space using a standard color-difference metric (see Methods for further details) 2,19 :

ΔE = [(L_1 − L_2)² + (a_1 − a_2)² + (b_1 − b_2)²]^{1/2}

The filters were designed to enhance the ability of a typical trichromatic viewer to discriminate spectra while limiting adverse effects. For simplicity, we focused on a design that splits the response of the S cone, thus transforming the trichromatic visual system into one that simulates tetrachromatic vision. The S cone was chosen because its responsivity has relatively little overlap with those of the M and L cones (Fig. 1b(i)), so it can be attenuated while minimizing the impact on the effective responsivity (i.e., the product of cone responsivity and the filter transmission response) of the other two cone types. To provide approximate parity between eyes, we partitioned the S cone responsivity such that each eye retains approximately half of the original response spectrum (Fig. 1b(ii)). Our secondary design goal was to ensure that the transmission of broadband white light (defined using CIE illuminant D65) 20 through the two filters results in similar tristimulus values. This constraint was put in place to minimize potential baseline disparities (e.g., when viewing broadband white objects) between the eyes when the device is used in daylight. Though a particular implementation of this type of filter-based device generally depends on the illuminant chosen, the design presented here should work well for most illuminants along the Planckian locus 21 .
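A compact sketch of this colorimetric pipeline is shown below; it assumes the CIE 1931 2° matching functions and the illuminant are available as arrays sampled on the same wavelength grid as the spectra, and it uses the 1976 CIELAB ΔE*ab distance, which is our reading of the "standard color-difference metric" above.

```python
import numpy as np

def spectrum_to_xyz(spd, cmf):
    """Relative tristimulus values from a spectral power distribution.
    spd : array (n_wavelengths,)    light reaching the eye
    cmf : array (n_wavelengths, 3)  CIE 1931 x-bar, y-bar, z-bar on the same grid
    """
    return spd @ cmf

def xyz_to_lab(xyz, white):
    """CIELAB coordinates relative to a white-point XYZ (e.g., the illuminant)."""
    eps, kappa = 216.0 / 24389.0, 24389.0 / 27.0
    r = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(r > eps, np.cbrt(r), (kappa * r + 16.0) / 116.0)
    return np.array([116.0 * f[1] - 16.0,            # L*
                     500.0 * (f[0] - f[1]),          # a*
                     200.0 * (f[1] - f[2])])         # b*

def delta_e(lab1, lab2):
    """CIE76 color difference (Euclidean distance in LAB space)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Usage sketch (d65, cmf, refl_1, refl_2 assumed sampled on one wavelength grid):
# white = spectrum_to_xyz(d65, cmf)
# dE = delta_e(xyz_to_lab(spectrum_to_xyz(refl_1 * d65, cmf), white),
#              xyz_to_lab(spectrum_to_xyz(refl_2 * d65, cmf), white))
```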
The final device presented here comprises a 450 nm long-pass filter (Filter 1) and a 450-500 nm, 630-680 nm double-band-stop filter (Filter 2) (Fig. 2b,c); the filter designs were optimized by varying their stopband/passband positions and transmittances using simulated annealing to minimize the CIE ΔE color difference of D65 white light passing through Filters 1 and 2 (see Methods for further details). The 450 nm transition between Filters 1 and 2 is at the peak sensitivity of the S cone, and partitions it in half. However, due to the non-zero sensitivity of the M and L cones in the 450-500 nm region [Fig. 1b(i)], the M and L cones are also (unintentionally) attenuated by Filter 2. A second stopband at 630-680 nm was introduced to attenuate the effective responsivity of the M and L cones to broadband white light to preserve color balance. Though the filter designs were optimized for these constraints, we note that the design presented here is a proof of concept, and is not a unique or globally optimal solution.

[Displaced figure caption, partial: ... spectra (1,2) and D65 broadband white light (3), as perceived by a typical trichromat. The traces in each box are the underlying unfiltered spectra (arbitrary units), taken from our experiments, as described in Fig. 3. (e,f) Rendered monocular colors of the spectra in (d) after passing through Filter 1 and Filter 2, respectively. Note that e(1) and e(2), f(1) and f(2) are substantially distinct, while e(3) and f(3) are similar in color due to the white-balance constraint enforced in our design.]
To reduce cost and manufacturing time, an off-the-shelf component (450LP RapidEdge, Omega Optical), was used for Filter 1. The optimized transmission function of Filter 2 was realized using conventional thin-film technology 22 , with alternating layers of silicon oxide (SiO 2 , n = 1.46) and tantalum oxide (Ta 2 O 5 , n = 2.15) ( Fig. 2a,b), deposited on an NBK7 glass substrate (see Methods). The two filters were then characterized by angle-dependent transmission spectroscopy, demonstrating that the transmission spectra are robust to incidence angles up to 5° away from the normal (Fig. 2c). Following fabrication, the filters were constructed into a pair of glasses.
Experiments.
To test the performance of this design, we constructed a setup that generates metameric spectra using a liquid crystal display (LCD, True HD-IPS display on LG G3 smartphone) and a cathode ray tube (CRT, Dell E770P) monitor ( Fig. 3a,b). The displays use different emission mechanisms, and thus produce distinct spectra when displaying the same color (See Supplementary Information for further analysis) 23,24 . Blocks of color generated by the displays were presented side by side using a 50/50 beam splitter, and the colors were individually adjusted until no perceivable color difference was present. The emission spectra of each monitor were recorded using a free-space spectrometer, allowing for chromaticity and color-difference calculations to be made given a standard observer. A threshold value of 2.3 for the CIE ΔE "just noticeable difference" was taken to define perceptually indistinguishable spectra (i.e., ΔE < 2.3) 25 . See Methods for further details of the experimental setup.
One representative example from this dual-display setup, using a pair of metamers that appear purple, is shown in Fig. 3b-d. Without the use of either filter, the two different spectra appeared as identical patches of color. However, when observed through either of the filters, the two can be differentiated. Subjectively, we observed that, by looking at a particular patch through both filters simultaneously (i.e., Filter 1 over the left eye, Filter 2 over the right), a color percept arises that is different from the color perceived through either filter individually or with no filter. We note that a related study involving dichromats demonstrated an increase in color dimensionality using band-pass filters, an effect the authors suggest could be related to binocular lustre 16 . Our proposed capability to distinguish metamers by breaking binocular redundancy may be affected by binocular lustre and/or rivalry, which might be advantageous provided it can be used as a cue to a difference in hue. For example, a recent report has demonstrated that compression artifacts of pixels in virtual reality images can be easily detected due to lustre 26 . Note that lustre and rivalry are both dynamic phenomena, even for static stimuli [27][28][29] ; thus the use of lustre/rivalry may result in a tradeoff between temporal and spectral resolution. Though our current experiments do not directly investigate the effect of lustre or rivalry, or probe to what degree the neuronal processing system of a trichromat can take advantage of the extra spectral information resulting from binocular filtering, this can be explored in future work.
We note that substantial differences in luminance between the two eyes, such as for spectra that transmit chiefly through only one filter, might lead to the Pulfrich effect 30 ; however, our design was optimized to minimize differences in appearance of Illuminant D65, and the binocular luminance disparity required for the Pulfrich effect to occur is unlikely for most commonly occurring (i.e., smooth/broad) spectra.
Calculation of metamer reduction. Broadly stated, the number of cone types and their frequency-dependent responsivities determine the extent to which metamerism is a limitation to the visual system. Our method is meant to increase the number of effective cone types, which should decrease the number of potential metamers, provided the subsequent neuronal processing can adapt appropriately (which seems to occur in the case of spatial multiplexing used for vision-assistive devices 13,14 ). In general, quantitatively determining the decrease in the metamer frequency is difficult because the set of possible metamers is not bounded. Nevertheless, various metrics can be applied to roughly estimate this quantity. For this work, we developed two separate metrics that describe this decrease in metamer frequency given the following two conditions:
Condition 1: Without the use of filters, a metamer pair is defined by two spectra with a color difference ΔE < 2.3 25 .
Condition 2: With the use of binocular filters, such as those in Fig. 2b, a metamer pair is defined by two spectra with a monocular color difference ΔE < 2.3 in each eye. That is, a pair of spectra is a metamer if and only if it is a metamer in each eye individually. We do not consider the possibility of other perceptual effects such as binocular rivalry or dichoptic color mixing.
Our first metric uses a Monte Carlo simulation to probe the effect binocular filters have on the perception of pairs of spectra, given the conditions above (Fig. 4). To start, a pair of reflectance spectra is generated by stochastically sampling intensity values from a uniform distribution at regularly spaced intervals within the visible wavelength range (i.e., at λ 1 , λ 2 , …, λ Ns , where N s is the total number of sampling points). The sharpness of the reflectance spectra was adjusted by changing N s , with larger numbers leading to sharper features, and the spectra were interpolated at 10 nm intervals using a cubic spline to create smooth spectra. We assumed illuminant D65, and then filtered the reflected spectra by the filter transmission responses given in Fig. 2b. ΔE color differences were calculated between the pairs of spectra for the unfiltered case, and through Filter 1 and Filter 2. The method was performed for various numbers of iterations (N i ), which varied from 1,000 to 20,000,000, and the numbers of unfiltered (M u ) and filtered (M f ) metamers were recorded for each trial. We then defined a metric that represents the decrease in metamer frequency upon filtering as the ratio of unfiltered to filtered metamer counts, P m = M u /M f ; for example, P m = 2 represents a two-fold decrease in the number of metamers using the two filters. The results from this simulation, for several sampling values (N s ) and iteration numbers (N i ), are given in Fig. S5 of the Supplementary Information. Given the simulation conditions, the filters in this work result in up to a ~15× decrease in the number of metamers for randomly generated spectra; this effect appears to be greatest for moderately sharp spectral features (N s = 15), and drops off for very broad or very sharp spectra. We note that, though the given metric seems to converge for larger iteration numbers (See Methods and Supplementary Information), these measures are only meaningful to within a factor of ~2 due to the stochastic nature of this calculation.
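The structure of this simulation can be sketched as follows. The original analysis was carried out in Matlab; the Python sketch below uses crude Gaussian stand-ins for the CIE matching functions, a flat illuminant, and idealized versions of the two filter curves, so its numbers are not comparable to the paper's. Filtered metamers are counted among the unfiltered pairs, and the missing equation for P m is taken to be the ratio M u /M f, consistent with the surrounding text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
wl = np.arange(400.0, 721.0, 10.0)                 # 10 nm grid, 400-720 nm

# Crude stand-ins: Gaussian approximations of the CIE 1931 matching functions
# and a flat illuminant; a faithful calculation would use tabulated CIE data
# and the D65 spectral power distribution.
def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

xbar = 1.06 * gauss(wl, 599, 38) + 0.36 * gauss(wl, 446, 19)
ybar = gauss(wl, 556, 46)
zbar = 1.78 * gauss(wl, 449, 25)
illum = np.ones_like(wl)

# Idealized versions of the two filter curves described in the text.
f1 = np.where(wl >= 450, 0.9, 0.1)                                   # 450 nm longpass
stop = ((wl >= 450) & (wl <= 500)) | ((wl >= 630) & (wl <= 680))
f2 = np.where(stop, 0.1, 0.9)                                        # double bandstop

def to_lab(spec):
    X, Y, Z = (np.trapz(spec * illum * cmf, wl) for cmf in (xbar, ybar, zbar))
    Xn, Yn, Zn = (np.trapz(illum * cmf, wl) for cmf in (xbar, ybar, zbar))
    def f(t):
        return np.cbrt(t) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def random_spectrum(n_s):
    knots = np.linspace(400, 720, n_s)
    return np.clip(CubicSpline(knots, rng.uniform(0, 1, n_s))(wl), 0, 1)

de = lambda a, b: np.linalg.norm(a - b)
N_s, N_i, JND = 15, 20_000, 2.3
M_u = M_f = 0
for _ in range(N_i):
    s1, s2 = random_spectrum(N_s), random_spectrum(N_s)
    if de(to_lab(s1), to_lab(s2)) < JND:                      # unfiltered metamer
        M_u += 1
        if (de(to_lab(s1 * f1), to_lab(s2 * f1)) < JND and    # still a metamer
                de(to_lab(s1 * f2), to_lab(s2 * f2)) < JND):  # through both filters
            M_f += 1
P_m = M_u / max(M_f, 1)
print(f"unfiltered metamers: {M_u}, filtered: {M_f}, P_m ~ {P_m:.1f}")
```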
As further verification of the apparent decrease in metamer frequency, we also developed a more-abstract method (See Supplementary Information for a complete description of this calculation, abridged here for clarity). Rather than comparing stochastically generated spectra, as above, this method aims to calculate the overall number of spectra that map to perceptually indistinguishable tristimulus values (ΔE < 2.3). For a given reference point in LAB space, [L o , a o , b o ], the number of metamers (with respect to the reference point) was determined by counting the spectra, I(λ), that map to LAB coordinates within a sphere of radius 2.3 around the reference point. We determined the number of metamers by calculating the volume of spectra, represented by an ellipsoid in N-dimensional space, where N is the number of discrete wavelength bins that define a spectrum. However, calculating the exact volume of high dimensional ellipsoids in this case is difficult; instead, we calculate the volume of the max-inscribed ellipsoid subject to box constraints, which represents an upper bound of the true value 31 (see Supplementary Information for more details). The volume of this ellipsoid represents the number of metamers, for a given reference point, for the unfiltered case (V u ). For the filtered case, the union of two ellipsoids, corresponding to each filter individually, represents the number of metamers (V f ); this is equivalent to Condition 2 above, where we assume that monocular metamerism must be present in both eyes simultaneously to yield indistinguishable color percepts in the filtered case. Thus, the overall decrease in metamer frequency is given by the ratio of the two volumes, F m = V u /V f , where F m = 2, as an example, represents a two-fold decrease in metamer frequency. This process was repeated for 500 LAB reference points from randomly generated spectra to adequately sample the color space. The number of wavelength samples (N s ) was also varied to again explore the effect of spectral sharpness; as in the Monte-Carlo simulation, the largest decrease in metamer frequency occurs at around 12-16 bins. Using this metric, we estimate a decrease in metamer frequency by one-to-two orders of magnitude when using our thin-film filter pair (see Table S1 in the Supplementary Information). By the same metric, a single-filter system designed to improve vision in color-vision-deficient individuals 32 seems to provide no decrease in the frequency of apparent metamers.
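The core volume-bounding step can be sketched with cvxpy, the Python counterpart of the CVX package cited in the Methods (using it here is a substitution, not what the authors ran). The snippet only finds the maximum-volume ellipsoid inscribed in a box of unit reflectances in a toy three-dimensional spectrum space; the ΔE constraint around a reference point and the filtered/unfiltered ratio are omitted, so this illustrates the machinery rather than reproducing the paper's calculation.

```python
import numpy as np
import cvxpy as cp

n = 3                                    # toy number of wavelength bins
A = np.vstack([np.eye(n), -np.eye(n)])   # box 0 <= x <= 1 written as a_i^T x <= b_i
b = np.hstack([np.ones(n), np.zeros(n)])

B = cp.Variable((n, n), symmetric=True)  # ellipsoid shape matrix
d = cp.Variable(n)                       # ellipsoid center
# The ellipsoid E = {B u + d : ||u|| <= 1} lies inside the half-space
# a_i^T x <= b_i exactly when ||B a_i|| + a_i^T d <= b_i.
cons = [cp.norm(B @ A[i], 2) + A[i] @ d <= b[i] for i in range(A.shape[0])]
prob = cp.Problem(cp.Maximize(cp.log_det(B)), cons)
prob.solve()
print("log det B (proportional to the log-volume):", prob.value)
```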
Conclusion
By breaking the inherent chromatic redundancy in binocular vision, our method aims to provide the user with more spectral information than is otherwise available. In the present design, the S cone is partitioned using a pair of filters that results in photoreceptor responses consistent with a visual system that utilizes approximately four cone types (i.e., simulated tetrachromacy). The S cone is more sensitive to blue-colored objects, and this approach can, e.g., be used for differentiating structural color versus natural pigments (See Supplementary Information) 33 .
It is also possible to use similar methods to design filters that more strongly affect metamers that appear as green and red, which are more prevalent in nature 34 . While the possibility of natural tetrachromacy in a fraction of the population has received both academic and popular interest [10][11][12] , the technology demonstrated here has the potential to simulate tetrachromatic vision in anyone with typical, healthy trichromatic vision. The extent to which observers can (or can learn to) take advantage of the additional spectral information is yet to be determined, and requires behavioral/perception studies. Given two eyes and three types of cones, it should be possible to increase the number of effective cones up to six using our approach, and potentially even more with spatial or temporal multiplexing. It may also be possible to generate personalized designs to improve color discrimination for individuals with color-vision deficiencies. This technology can be integrated in a simple pair of eyeglasses or sunglasses, and could have immediate applications in camouflage detection, quality control, anti-counterfeiting, and more. More broadly, the ability to see many more colors offers intriguing opportunities for design and artwork, and for data representation with extra color channels.
Filter design. Tristimulus values are calculated by integrating the product of the CIE matching functions, the filter transmission, and the incident light over the visible range, e.g. X = ∫ x̄(λ) T(λ) I(λ) dλ (and analogously for Y and Z), where T(λ) is the transmission spectrum of the filter, and I(λ) is the spectral irradiance of light passing through the filter. The XYZ tristimulus values can be transformed to a different color space (e.g., RGB); here, we use the CIELAB color space because it is more perceptually uniform and allows for straightforward calculations of perceived color differences. The XYZ to LAB transformation is given by 19 L* = 116 f(Y/Y n ) − 16, a* = 500 [f(X/X n ) − f(Y/Y n )], b* = 200 [f(Y/Y n ) − f(Z/Z n )], where X n , Y n , and Z n are the tristimulus values of the reference white point and f is the standard cube-root compression function. The filters were designed iteratively, with the filter responses becoming more complex as our intuition grew between iterations. This approach was used to meet the primary design goal, splitting the spectral response of the S cone between eyes, while also enforcing other optimization conditions such as a perceptual color balance for D65 white light between eyes. The final filter designs comprise a 450 nm longpass filter (Filter 1) and a 450-500 nm, 630-680 nm double bandstop filter (Filter 2). The 450 nm transition region, where Filter 1 cuts on and the first bandstop of Filter 2 cuts off, occurs roughly at the peak sensitivity of the S cone. Therefore, Filter 1 transmits the long wavelength half of the S cone, while Filter 2, whose first stopband blocks 450-500 nm, transmits the short wavelength half of the S cone. However, due to the nonzero sensitivity of the M and L cones between 450-500 nm, their sensitivities are also inadvertently attenuated, impacting the D65 color balance between eyes. The second bandstop of Filter 2, between 630-680 nm, attenuates the long wavelength tails of the M and L cone sensitivities, restoring color balance between eyes. A 450 nm longpass filter (Omega Optical, 450LP RapidEdge) was chosen as Filter 1. Filter 2 was optimized using constrained optimization by linear approximation (COBYLA) to minimize the merit function ΔE/Width bandstop , where ΔE is the color difference between Filter 1 and Filter 2 when transmitting D65 white light and Width bandstop is the spectral width of the short-wavelength band-stop region of Filter 2. This merit function ensures satisfactory D65 color balance between eyes while also maximizing the difference between filters, enhancing their ability to distinguish spectra.
For Filter 2, the transmittance of the stop- and pass-bands was constrained between 5-15% and 80-95%, respectively, and the longer-wavelength stopband was constrained between 600-700 nm to prevent attenuation of the M and L cones at their peak sensitivities (~550 nm and ~580 nm, respectively). This procedure yielded an optimized response for Filter 2 with stopbands at 450-500 nm and 630-680 nm, and stopband/passband transmittances of 10% and 90%, respectively; the color difference for illuminant D65 between Filter 1 and Filter 2 is ΔE = 5.21, with chromaticities of (0.348, 0.406) and (0.35, 0.415), respectively.
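The corresponding optimization loop can be sketched with scipy's COBYLA implementation. The merit function below is a smooth toy stand-in for ΔE/Width bandstop (the real objective traces D65 through both candidate filters and converts to CIELAB), and none of the numerical values are those of the actual design run.

```python
import numpy as np
from scipy.optimize import minimize

def color_difference(params):
    """Toy stand-in for the D65 color difference between the two filters.

    The real design traces D65 through both candidate filters, converts to
    CIELAB and returns dE; a smooth placeholder keeps this sketch runnable.
    """
    stop1_lo, stop1_hi, stop2_lo, stop2_hi = params
    return abs((stop1_hi - stop1_lo) - 0.7 * (stop2_hi - stop2_lo)) + 1.0

def merit(params):
    width_bandstop = params[1] - params[0]      # short-wavelength stopband width (nm)
    return color_difference(params) / max(width_bandstop, 1e-6)

x0 = np.array([455.0, 495.0, 640.0, 670.0])     # initial band edges in nm
constraints = [
    {"type": "ineq", "fun": lambda p: p[1] - p[0] - 10.0},  # stopband >= 10 nm wide
    {"type": "ineq", "fun": lambda p: p[2] - 600.0},        # long stopband above 600 nm
    {"type": "ineq", "fun": lambda p: 700.0 - p[3]},        # ... and below 700 nm
]
result = minimize(merit, x0, method="COBYLA", constraints=constraints)
print(result.x)
```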
Thin-film optimization and construction. The required film thicknesses were determined by conventional thin-film optimization methods, including gradual evolution 35 and needle optimization 36 , to implement the target transmission function. The final stack was constrained to be less than 75 total layers, and each layer between 10 and 500 nm thick. The filter was optimized such that the transmission would not change significantly for incident angles up to 5° away from the normal. The films were deposited using ion-assisted sputtering onto an NBK7 glass substrate at a thin-film foundry (Iridian Spectral Technologies, Ontario, Canada). See Supplementary Information for more information about the thin-film design.
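To illustrate how such a SiO2/Ta2O5 stack is evaluated during optimization, a minimal transfer-matrix calculation of normal-incidence transmittance is sketched below. The quarter-wave thicknesses, the 21-layer count, and the lossless, dispersion-free indices are assumptions for the sketch, not the foundry design.

```python
import numpy as np

def stack_transmittance(wavelengths_nm, n_layers, d_layers_nm, n_in=1.0, n_sub=1.52):
    """Normal-incidence transmittance of a lossless thin-film stack (transfer matrices)."""
    T = np.empty(len(wavelengths_nm))
    for i, lam in enumerate(wavelengths_nm):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers_nm):
            delta = 2 * np.pi * n * d / lam          # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        T[i] = 4 * n_in * n_sub / abs(n_in * B + C) ** 2
    return T

# Example: 21 alternating quarter-wave layers of Ta2O5 / SiO2 centred at 475 nm,
# which produces a reflective (low-transmission) band near the blue stopband.
n_hi, n_lo, lam0 = 2.15, 1.46, 475.0
n_layers = [n_hi if k % 2 == 0 else n_lo for k in range(21)]
d_layers = [lam0 / (4 * n) for n in n_layers]
wl = np.linspace(400, 720, 9)
print(np.round(stack_transmittance(wl, n_layers, d_layers), 2))
```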
Metamer Generation.
An LCD (True HD-IPS on LG G3 smartphone) and CRT (Dell E770P) display were used to generate metameric pairs. The monitors were placed at a 90° angle from one another, and a large 50/50 beam splitter (Edmund Optics) was placed at 45° between the displays such that images from the two displays could be projected directly next to each other with no border. To find a metamer, a block of color was displayed on the CRT display, and a user-controlled 3-axis joystick was used to adjust the LCD image until no perceivable color difference was detected by the observer; the 3-axis joystick controlled colors in the HSV color space. The entire experimental setup was enclosed in a wooden box, painted black on the inside, and square apertures were placed on each display to ensure the images were displayed with black backgrounds to mitigate possible contextual perception effects 28 . Spectra from each monitor were acquired using a free-space spectrometer (Ocean Optics FLAME VIS-NIR with cosine corrector), normal to and adjacent to each display screen. Spectra for the white point of each monitor are shown in the Supplementary Information. Monte Carlo Simulation. Reflectance spectra were calculated by generating random values at a defined number of sampling points (N s ) within the visible wavelengths (400-720 nm) using Matlab's rand function; N s was varied between 4-35 points to define the sharpness of spectral features, and the spectra were interpolated using a cubic spline. CIE 1931 2° matching functions were used to calculate tristimulus values for illuminant D65 reflected from the objects. Illuminant D65 was used as the white point for conversion to the CIELAB space. A threshold of ΔE < 2.3 was used to define indistinguishable tristimulus values 25 . The simulation was performed for several numbers of iterations (N i ), from 1,000 to 20,000,000, to determine whether the defined metric P m converged for a given N s . For N i greater than 1,000,000, values of P m converged to within ~20% of each other for a given N s .
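For reference, a direct implementation of the XYZ to CIELAB conversion used in these calculations is sketched below; the D65 white-point values are the usual 2° observer numbers and are an assumption rather than values quoted in the paper.

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """CIE XYZ -> CIELAB; the default white point is the usual D65 / 2 deg observer."""
    x, y, z = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    def f(t):
        d = 6.0 / 29.0
        return np.cbrt(t) if t > d ** 3 else t / (3 * d ** 2) + 4.0 / 29.0
    fx, fy, fz = f(x), f(y), f(z)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

print(xyz_to_lab((41.24, 21.26, 1.93)))   # sRGB primary red -> roughly (53, 80, 67)
```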
More-abstract calculation of metamer frequency. The volume approximation ratios were computed using CVX, a package for specifying and solving convex programs 37 . Reference points [L 0 , a 0 , b 0 ] were chosen by uniformly sampling discretized spectra and mapping them to LAB tristimulus values. The computation was performed for various wavelength binning values (N b ), between 7-18, to vary the broadness/sharpness of spectra, and 500 reference points were computed for each binning value. A detailed summary of this calculation, including a formalized mathematical treatment, can be found in the Supplementary information. Data Availability Statement. All data generated or analyzed during this study are included in this published article (and its Supplementary Information files). | 5,439 | 2018-08-10T00:00:00.000 | [
"Physics",
"Engineering"
] |
Oxylipins as Biomarkers for Aromatase Inhibitor-Induced Arthralgia (AIA) in Breast Cancer Patients
Aromatase inhibitor-induced arthralgia (AIA) presents a major problem for patients with breast cancer but is poorly understood. This prospective study explored the inflammatory metabolomic changes in the development of AIA. This single-arm, prospective clinical trial enrolled 28 postmenopausal women with early-stage (0–3) ER+ breast cancer starting adjuvant anastrozole. Patients completed the Breast Cancer Prevention Trial (BCPT) Symptom Checklist and the Western Ontario and McMaster Universities Arthritis Index (WOMAC) at 0, 3, and 6 months. The plasma levels of four polyunsaturated fatty acids (PUFAs) and 48 oxylipins were quantified at each timepoint. The subscores for WOMAC-pain and stiffness as well as BCPT-total, hot flash, and musculoskeletal pain significantly increased from baseline to 6 months (all p < 0.05). PUFA and oxylipin levels were stable over time. The baseline levels of 8-HETE were positively associated with worsening BCPT-total, BCPT-hot flash, BCPT-musculoskeletal pain, WOMAC-pain, and WOMAC-stiffness at 6 months (all p < 0.05). Both 9-HOTrE and 13(S)-HOTrE were related to worsening hot flash, and 5-HETE was related to worsening stiffness (all p < 0.05). This is the first study to prospectively characterize oxylipin and PUFA levels in patients with breast cancer starting adjuvant anastrozole. The oxylipin 8-HETE should be investigated further as a potential biomarker for AIA.
Introduction
Adjuvant aromatase inhibitors (AIs) are the recommended endocrine treatment for postmenopausal women diagnosed with early-stage, estrogen receptor-positive (ER+) breast cancer. AIs are also used in premenopausal women in combination with gonadotropin-releasing hormone (GnRH) agonists. The three third-generation AIs in routine clinical use (anastrozole, letrozole, and exemestane) have similar efficacy and toxicity profiles when compared across studies. The standard recommended duration was five years until recent clinical trials showed that extended therapy (10 years) improves the disease-free survival rate in patients with high-risk ER+ breast cancer [1]. Despite these benefits, adherence remains a challenge, as AI therapy is associated with significant, activity-limiting musculoskeletal symptoms, including arthralgia, myalgia, and joint stiffness, collectively called AI-induced arthralgia (AIA). Symptoms can manifest early after the initiation of AI therapy and can worsen for up to two years. High rates of AI non-adherence (estimated at up to 50% by year three) due to an intolerance to the side effects, notably AIA, are now linked to a reduced benefit [2,3]. The majority of pharmacological and non-pharmacological intervention studies for AIA are negative. This is most likely due to the enrolment of patients who are on adjuvant AIs rather than the enrolment of those "at high risk" for developing AIA. Currently, there is a need for clinically validated biomarkers to predict who is at risk for AIA, explain AIA progression, and guide intervention studies to improve quality of life and reduce death from breast cancer by improving AI adherence.
While AIA is a well-known problem, the mechanism for its development is not well understood and grossly understudied. AIs block peripheral estrogen synthesis, thereby further decreasing estrogen levels [4]. Preclinical data suggest that AIA is directly related to loss of the anti-nociceptive action of estradiol; however, the level of estradiol depletion is not correlated with the degree of AIA symptoms [5]. Inflammation also plays a role in exacerbating AIA symptoms [6], and non-steroidal anti-inflammatory drugs (NSAIDs) provide relief for some women with AIA [7]. Our group previously reported that, in patients with breast cancer who are stable on an AI, intervention with the NSAID sulindac for six months resulted in improved pain, stiffness, and physical function as assessed by the Western Ontario and McMaster Universities Osteoarthritis (WOMAC) Index [8]. However, inflammation alone is unlikely to be the cause of AIA [9]. To date, there are no clear metabolic pathways identified to explain AIA etiology or to determine targets for interventions.
Oxylipins are produced via metabolism of ω-6 and ω-3 polyunsaturated fatty acids (PUFAs) by cyclooxygenase (COX), lipoxygenase (LOX), and cytochrome P450 (CYP450) enzymes. NSAIDs target COX, which metabolizes ω-6 and ω-3 PUFA to inflammatory prostaglandins [10]. Oxylipins have a spectrum of biological activity, including pro-and anti-inflammatory effects, as well as the induction and inhibition of pain. The presence of underlying inflammation and the nociceptive activity of oxylipins, may be a contributing factor for pain [11][12][13]. Our group previously published an overview of the oxylipin pathway and biological outcomes [14]. Oxylipin profiles are implicated in the development of inflammatory conditions including rheumatoid arthritis [15]. Preliminary evidence also implicates both the CYP450 and LOX pathways in the development of tendinopathy [16,17]. We previously published that tendon stiffness may play a role in the pain experienced by women taking AIs [18,19]. AIs are also involved in the upregulation of the CYP450 pathway [20] and cross-talk between the estrogen receptor (ER) and LOX-mediated oxylipins [21], suggesting a role of oxylipins in AIA. These findings suggest that the development or progression of AIA is likely attributed, in part, to an unfavorable oxylipin profile. Alterations in the entire oxylipin cascade that result in multiple biological effects, including inflammation, the development of tendon stiffness, and increased nociception [22], may play a role in the development of AIA in patients with breast cancer.
We conducted a prospective study to further explore these inflammatory metabolomic changes in the development of AIA. Women were enrolled after the completion of their definitive treatment at the initiation of their AI and were followed for six months. We previously reported in a subgroup of these patients that baseline stiffness in the abductor pollicis longus tendon evaluated using shear wave elastography could be used to predict the development of AIA [19]. Here we report the blood-based inflammatory biomarkers evaluated in these patients. To the best of our knowledge, this is the first study to report an inflammatory profile at baseline and the changes while on AI therapy.
Study Design
This single-arm, prospective clinical trial was conducted at the University of Arizona Cancer Center (NCT03665077). It was approved by the institutional review board, and all patients enrolled signed an informed consent. Postmenopausal women with early-stage (0-3) ER+ breast cancer who were candidates for adjuvant AI therapy and had completed their definitive treatment (surgery ± radiation) were enrolled into this study. The exclusion criteria included having received chemotherapy (adjuvant or neo-adjuvant), prior endocrine therapy (AI or tamoxifen), history of rheumatoid arthritis or other autoimmune arthritis, active daily NSAID use (other than low-dose aspirin), and active use of any corticosteroids or immunosuppressive therapies. Participants were recruited during their initial visit with their oncologist prior to the initiation of AI therapy. They completed blood draws and questionnaires at 0, 3, and 6 months after initiating AI therapy (Figure 1). To decrease confounding effects, adjuvant AI therapy was initiated 6 weeks after the completion of their definitive treatment. All women enrolled in this study were started on adjuvant anastrozole.
Arthralgia and Depression Outcome Measures
BCPT: The Breast Cancer Prevention Trial (BCPT) Symptom Checklist is a 42-item questionnaire validated in breast cancer survivors [23]. The BCPT total score comprises 8 subscores: hot flash (3 questions), nausea (3 questions), bladder control (2 questions), vaginal problems (3 questions), musculoskeletal pain (3 questions), cognitive problems (4 questions), weight problems (4 questions), and arm problems (2 questions). For each question, women indicate the presence or absence of symptoms and the extent to which they are bothered by those symptoms on a five-point Likert scale ranging from 0 (not at all) to 4 (extremely). The musculoskeletal pain (MS) subscale has been shown to be responsive to changes in AIA and is calculated as the mean of the responses to three questions addressing general aches and pains, joint pain, and muscle stiffness [24]. In the case of missing data for any question within the subscales, the entire subscale was considered missing.
WOMAC: The WOMAC is a 24-item instrument developed to assess pain (5 items), stiffness (2 items), and physical function (17 items) in participants with hip and/or knee osteoarthritis as well as for AIA [25,26]. Here, we evaluated the 3 subscales and the total score using the 5-point Likert format (0 = none to 4 = extreme). As discussed in Bellamy [25], for convenience and for comparison purposes to previous studies, the total scores and each subscale were normalized to a range of 0-100. In the case of missing data, the subscales were considered valid as long as no more than 1 item was missing for pain or stiffness, and no more than 4 items were missing for physical function.
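The subscale scoring and 0-100 normalization described above can be sketched as follows. How allowed missing items were actually handled (imputation vs. prorating) is not stated, so the mean-imputation below is an assumption, as are the example responses.

```python
import numpy as np

def womac_subscale(responses, max_missing):
    """Score one WOMAC subscale on a 0-100 scale.

    Items are scored 0 (none) to 4 (extreme); missing items are np.nan.
    The subscale is treated as invalid if more than `max_missing` items are
    missing; otherwise missing items are mean-imputed (an assumption, since
    the handling of allowed missing items is not specified in the text).
    """
    r = np.asarray(responses, dtype=float)
    if np.isnan(r).sum() > max_missing:
        return np.nan
    r = np.where(np.isnan(r), np.nanmean(r), r)
    return 100.0 * r.sum() / (4 * len(r))

pain = womac_subscale([2, 1, np.nan, 2, 3], max_missing=1)   # 5 pain items
stiffness = womac_subscale([3, 2], max_missing=1)            # 2 stiffness items
print(pain, stiffness)                                       # 50.0 62.5
```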
Patient Health Questionnaire (PHQ)-9: The Patient Health Questionnaire (PHQ)-9 is a validated multipurpose tool used for screening, diagnosing, monitoring, and measuring the severity of depression [27]. It has been reported that up to 50% of newly diagnosed patients with breast cancer have symptoms of depression or anxiety [28], and the perception of pain may be altered in individuals with symptoms of depression [29]. The PHQ-9 is a 9-item questionnaire that asks how often a person has been bothered by symptoms within the past 2 weeks. Responses are measured on a 4-point Likert scale, including "0 = not at all", "1 = several days", "2 = more than half of the days", and "3 = nearly every day".
Plasma Sample Collection and Preparation
At collection, triphenylphosphine (TPP) and butylated hydroxytoluene (BHT 0.2% w/w) (MilliporeSigma, Burlington, MA, USA) were added to plasma that was collected in EDTA tubes. TPP reduces peroxides to their monohydroxy equivalents, and BHT quenches radical catalyzed reactions [30]. Both reagents prevent peroxyl radical propagated transformations of fatty acids. Three 300 µL aliquots of plasma plus antioxidant were frozen immediately at −80 °C. Based on our experience, oxylipins are not stable through multiple freeze-thaw cycles. Therefore, all samples were thawed only once for batch analysis.
Plasma samples were prepared for ultra-performance liquid chromatography (UPLC-MS) analysis as described in detail by Liu et al. [31]. Briefly, 250 µL plasma was spiked with a set of odd chain length analogues and deuterated isomers of several target analytes, including hydroxyeicosatetraenoic acids, thromboxanes, epoxides, prostaglandins, and diols, contained in 10 µL methanol (Cayman Chemical, Ann Arbor, MI, USA). Samples were then subjected to solid phase extraction using Oasis Prime HLB 3 mL, 60 mg sorbent (Waters, Milford, MA, USA). Eluents were evaporated to dryness and reconstituted in 50 µL methanol. Spiked samples were then vortexed, centrifuged, and transferred to autosampler vials for analysis.
Reverse Phase Chromatography with UPLC-MS
Oxylipin profiling was performed using UPLC with an Agilent Ultivo QQQ MS system coupled to an Agilent 1290 Infinity II UPLC system (Agilent, Santa Clara, CA, USA). Chromatographic separation of the oxylipins was achieved using a gradient of water, methanol, and acetonitrile, all with 0.1% acetic acid (v/v). The acquisition parameters were as previously described [32] with minor modifications, and the MS data were used for quantification. Surrogate analytes and internal and external standards were used to monitor extraction efficiency and ensure accurate quantitation with standard curves. The acquired data were quantified using Quant-My-Way (Agilent, Santa Clara, CA, USA) using 9 isotope-labeled internal standards. Here we report the data for oxylipins with >80% of values above the limit of detection (43 of 62 oxylipins) and for 4 PUFAs: arachidonic acid (ARA), linoleic acid (LA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) (Cayman Chemical, Ann Arbor, MI, USA). UPLC was performed in 2 separate batches, ensuring that repeat measures across time for the same participant were all included in the same batch.
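The ">80% of values above the limit of detection" reporting rule can be applied with a short filtering step like the one below; the synthetic data, the per-analyte limits of detection, and the column names are all placeholders for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic example: 75 samples (participants x timepoints) x 62 oxylipins,
# with a hypothetical per-analyte limit of detection (LOD).
data = pd.DataFrame(rng.lognormal(0, 1, size=(75, 62)),
                    columns=[f"oxylipin_{i}" for i in range(62)])
lod = pd.Series(rng.uniform(0.2, 2.0, 62), index=data.columns)

detected_fraction = data.gt(lod, axis=1).mean()
reported = data.loc[:, detected_fraction > 0.80]   # keep analytes with >80% above LOD
print(f"{reported.shape[1]} of {data.shape[1]} oxylipins retained")
```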
Statistical Analysis
Baseline characteristics were summarized using the median [interquartile range (IQR)] for continuous variables and proportions for categorical variables. The symptom scores and oxylipin levels were summarized at each time point using the mean ± standard deviation (SD). The associations between the baseline PUFA/oxylipin levels and baseline symptom scores were tested using Spearman correlations. The changes in symptom scores across time were tested using linear mixed-effects models with time (interval since baseline) as a continuous variable, adjusted for baseline symptom score, and clustered on the participant. Additional models further adjusted for age at baseline, BMI at baseline, and definitive therapy (mastectomy versus lumpectomy). Similar mixed-effects models were constructed for changes in PUFAs and oxylipins across time and adjusted for baseline level and batch. The associations between the baseline PUFA/oxylipin levels and symptom scores across time were tested using linear mixed-effects models as described above. PUFAs and oxylipins were log-transformed in all models. The statistical analyses were conducted using Stata 17.0 (StataCorp, College Station, TX, USA), and no adjustments were made for multiple comparisons.
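The longitudinal models were run in Stata; a rough Python equivalent of one of them (a symptom score regressed on time since baseline, adjusted for the baseline score, with a random intercept per participant) might look like the sketch below. The data frame and column names are synthetic placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant per visit (0, 3, 6 months).
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(28), 3)
months = np.tile([0.0, 3.0, 6.0], 28)
baseline = np.repeat(rng.normal(20, 8, 28), 3)
score = baseline + 1.2 * months + rng.normal(0, 4, ids.size)   # synthetic outcome
df = pd.DataFrame({"pid": ids, "months": months,
                   "baseline_score": baseline, "womac_pain": score})

# Change across time, adjusted for the baseline score, clustered on the
# participant via a random intercept.
model = smf.mixedlm("womac_pain ~ months + baseline_score", data=df, groups=df["pid"])
print(model.fit().summary())
```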
Participants and Characteristics
Of the 30 patients recruited, one was ineligible due to prior therapies, and one withdrew on the day of enrollment due to difficulty with the blood draw, thus yielding a sample size of 28. The median (IQR) age was 66.0 (63.1-72.6) years at enrollment (Table 1). Median (IQR) time since diagnosis was 4.7 (3.6-5.9) months. Median (IQR) BMI was 25.1 (23.0-31.3) kg/m², and the cohort was 89.3% non-Hispanic white. For their definitive breast surgery, 78.6% received a lumpectomy, and 67.9% required radiation. There were eight (28.6%) participants with stage 0 breast cancer, seventeen (60.7%) with stage I, and three (10.7%) with stage II. There were eight participants regularly taking low-dose aspirin (81 mg) and one participant taking other (non-NSAID) pain medication.
Change in Symptom Scores
BCPT, WOMAC, and PHQ-9 mean ± SD scores at baseline, three, and six months are presented in Table 2. In the fully adjusted model, there was a significant increase in the BCPT-total score (p = 0.008) and BCPT-MS subscore (p < 0.001) by six months. The BCPT-MS subscale has been shown to be responsive to changes in AIA with scores > 1.5, indicating clinically relevant arthralgia [24,33]. At baseline, there were five of 28 (18%) women with a score > 1.5 on the BCPT-MS subscale, seven of 22 (32%) at three months, and nine of 24 (38%) at six months. The BCPT-hot flash subscore also significantly increased by six months (p = 0.005). There was no change in the other BCPT subscores (nausea, bladder control, vaginal problems, cognitive problems, weight problems, and arm problems) across the 6-month study period.
WOMAC-pain significantly increased across time (p = 0.047); however, only eight of 24 women experienced a worsening of their symptoms. The mean ± SD change in the pain score for these eight women was 18.8 ± 7.9. WOMAC-stiffness also significantly increased (p = 0.031), which was driven by 11 of 24 women who experienced a worsening of their symptoms (27.3 ± 12.3 point change from baseline to six months for those 11 participants). Changes in the physical function subscore or the total score were not statistically significant. However, 14 women experienced a worsening of the physical function subscore (9.7 ± 8.0 point change from baseline to six months), and 14 women experienced a worsening of the WOMAC-total score (10.2 ± 8.2 point change from baseline to six months). There were no significant changes in the PHQ-9 total score for depression.
Correlation between Oxylipins and Symptom Scores at Baseline
Plasma samples were not available for three participants, thus yielding a sample size of 25 for these analyses. There were four PUFAs (EPA, DHA, ARA, and LA) plus 62 of their oxygenated lipid metabolites (oxylipins) in the original analytical platform. Of the 62 oxylipins, 43 had >80% of samples with levels above the limit of detection [14]. Table S1 shows the mean ± SD at baseline, three, and six months for the four PUFAs that were quantified, and Table S2 shows the mean ± SD at baseline, three, and six months for the 43 oxylipins. To characterize the relationship between the oxylipins and symptoms, the oxylipin and PUFA levels in the plasma were correlated with the symptom scores at baseline. There were no significant correlations between any oxylipins or PUFAs and the BCPT-total score or the BCPT-hot flash, BCPT-MS, or BCPT-cognitive subscores. Significantly correlated oxylipins with BCPT subscores are as follows: nausea with 9-OxoODE (ρ = 0.41; p = 0.041), bladder control with 8 (9) (20)-EpDPA (ρ = −0.41; p = 0.042). The PHQ-9 total score was significantly correlated with 9-OxoODE (ρ = 0.50; p = 0.010) and negatively correlated with 8(9)-EpETE (ρ = −0.41; p = 0.039). There were no significant correlations between the WOMAC-total, stiffness, physical function, or pain subscores and any oxylipins or PUFAs (data not shown).
Given that 8-HETE was the only oxylipin related to several AIA outcomes, Figure 2 illustrates the box plots comparing the baseline batch-adjusted 8-HETE measures among the participants who did and did not experience worsening symptoms (BCPT total, BCPT hot flash, BCPT-MS, WOMAC-pain, and WOMAC-stiffness) over six months. All scores were higher at baseline among those women that went on to have worsening symptoms by six months.
Discussion
The primary purpose of this study was to determine whether any baseline oxylipins or PUFAs could predict who might develop symptoms related to AIA. In this preliminary study, we found that baseline levels of 8-HETE were significantly related to worsening symptoms of AIA from baseline to six months of adjuvant therapy with anastrozole. 8-HETE is produced primarily from arachidonic acid via 15-LOX [34]. Early work showed that 8-HETE is a strong activator of peroxisome proliferator-activated receptor (PPAR) alpha and a weak activator of PPAR gamma, regulators of lipid homeostasis [35], and induces differentiation of preadipocytes [36]. Compounds that induce differentiation of adipocytes have been shown to inhibit aromatase expression and, thus, estrogen synthesis by adipose tissue [37]. To our knowledge, no studies have yet determined whether there is a relationship between 8-HETE and estrogen levels in circulation or in tissues. Another study showed that 8-HETE levels were higher in patients that had experienced a myocardial infarction relative to matched controls, and 8-HETE was significantly positively correlated with the pro-inflammatory cytokine tumor necrosis factor-alpha (TNF-α) [38]. In cell culture, 12/15-LOX overexpression has been directly linked to increased TNF-α production. These data taken together suggest the possibility that additional suppression of estrogen via 8-HETE as well as an inflammatory profile related to overexpression of 15-LOX (and thus 8-HETE production) predisposes women to AIA and explains the relationship between baseline 8-HETE and AIA development observed in our study.
In addition to 8-HETE, two LA metabolites produced via the LOX pathway, 13(S)-HOTrE and 9-OxoODE, as well as the α-LA metabolite 9-HOTrE, also produced via LOX, were all significantly related to the development of hot flashes by six months. To our knowledge, this is the first study to show an association between these oxylipins and hot flashes. Along with other oxidized LA metabolites, 9-OxoODE has been shown to induce nociceptive hypersensitivity in a rat model [22]. Other LOX products of LA, HODEs, have previously been shown to have pro-nociceptive properties in rodent pain behavioral models [39][40][41] and to be involved in inflammatory pain [42] and Achilles tendinopathy [16]. However, in the current study, there was no association between these LOX metabolites and pain scores on treatment with anastrozole.
We also noted that four CYP450 metabolites of DHA, all epoxydocosapentaenoic acids [7(8)-EpDPA, 13(14)-EpDPA, 16(17)-EpDPA, and 19(20)-EpDPA], were significantly negatively correlated with arm pain as assessed with the BCPT arm subscore. To our knowledge, this is the first report to suggest an association between epoxydocosapentaenoic acids and pain. However, very few studies have investigated the relationship between these four epoxydocosapentaenoic acids and clinical outcomes. One clinical trial showed that they are elevated in hemodialysis patients [43]. Preclinical studies have shown that 19(20)-EpDPA increases the browning of white adipose tissue through the GPR120-AMPK signaling pathway [44]. ω-3 PUFAs have been shown to reduce inflammation through GPR120 [45]. Further, 19(20)-EpDPA is a potent vasodilator in microcirculatory vessels [46], and vasodilators have been shown to reduce different types of pain, including neuropathic [47,48]. Thus, women with higher circulating levels of epoxydocosapentaenoic acids may have reduced inflammation and increased vasodilation, which may explain the negative correlation with arm pain in the present study.
Interestingly, none of the PUFAs were associated with any symptom scores. Diets with a high ω-6:ω-3 ratio, associated with a Westernized eating pattern, have been associated with increased inflammatory profiles [49]. Conversely, diets rich in EPA and DHA have been associated with reduced pain and inflammation [50]. One study showed in a rat model that ω-6 fatty acids increased nociception related to nerve damage not inflammation, and dietary replacement with ω-3 PUFAs reverted the phenotype [51]. Here, the overall cohort had a 13.5:1 ω-6:ω-3 ratio, similar to the commonly reported 16:1 ratio seen in populations that consume a Western diet. Studies have shown that ratios below 5:1 are needed to have a beneficial effect on disease risk, and suppression of inflammation in rheumatoid arthritis patients was achieved at 3:1 [52]. In the current study, only two participants had an ω-6:ω-3 ratio less than 5:1. When comparing the ratios of the women that developed any symptoms relative to the women that did not in this study, there was no difference. One study in women on AI showed that supplementation with an ω-3 PUFA significantly reduced AIA; however, the reduction in pain was not different than that in the placebo [53]. Our study suggests that, while the presence of ω-3s is important, the underlying metabolism of PUFA may play a more profound role in the development of AIA, and more targeted prevention may be necessary, such as dual COX and LOX pathway inhibitors.
We also sought to characterize the change in oxylipins over time with anastrozole treatment. Overall, PUFAs and oxylipins did not change in the patients with breast cancer in response to administration of the AI anastrozole. Two EPA products, 8(9)-EpETE from the CYP450 pathway and 8(15)-DiHETE, an sEH product, significantly increased from baseline to six months; however, given the large number of statistical tests and the lack of relationship of these oxylipins with pain outcomes, we cannot draw any conclusions. To the best of our knowledge, this is the first study to prospectively characterize oxylipin and PUFA levels in women who started adjuvant anastrozole.
Our study also contributes to the literature by prospective longitudinal assessment of AIA symptoms with validated questionnaires over six months. The major limitations of our study are the small sample size and small proportion that developed symptoms of AIA, which limited our ability to interpret any changes in metabolomic profiles. Nonetheless, we are able to contribute data on baseline oxylipin and PUFA profiles in postmenopausal women, which should be explored in larger studies.
Conclusions
In conclusion, we found that the baseline level of the 15-LOX product of AA, 8-HETE, was related to worsening of several AIA symptoms. Epoxydocasapentaenoic acids may also play a role given their anti-inflammatory and vasodilating effects. Future studies should investigate 15-LOX and/or CYP450 as potential targetable pathways for AIA management.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to containing information that could compromise the privacy of research participants.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,642 | 2023-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Spin noise gradient echoes
Abstract Nuclear spin noise spectroscopy in the absence of radio frequency pulses was studied under the influence of pulsed field gradients (PFGs) on pure and mixed liquids. Under conditions where the radiation-damping-induced line broadening is smaller than the gradient-dependent inhomogeneous broadening, echo responses can be observed in difference spectra between experiments employing pulsed field gradient pairs of the same and opposite signs. These observed spin noise gradient echoes (SNGEs) were analyzed through a simple model to describe the effects of transient phenomena. Experiments performed on high-resolution nuclear magnetic resonance (NMR) probes demonstrate how refocused spin noise behaves and how it can be exploited to determine sample properties. In bulk liquids and their mixtures, transverse relaxation times and translational diffusion constants can be determined from SNGE spectra recorded following tailored sequences of magnetic field gradient pulses.
Simulations of spin-noise gradient echo
To assess the effect of SNGE, we perform simulations using Mathematica (Wolfram Research Inc., 2012) based on the Bloch equations (Bloch, 1946), neglecting any effect of radiation damping. This last assumption corresponds to the experimental conditions used, where the inhomogeneous broadening by the gradients exceeds the radiation-damping rate. In these simulations, spin noise originates from a series of random excitation events of random timing, random phase, and a random small flip angle assumed to be below 0.01°. For a single noise event, the evolution of magnetization in the rotating reference frame is simulated by assuming a weak randomized initial transverse magnetization M 0 starting at a random time within the first gradient δ 1 . Using x as the position coordinate along the gradient axis, one obtains the magnetization of a single isochromat with offset ω 0 at the end of δ 1 ; for a distribution of components along x it is necessary to integrate, which for a sample of length L yields the corresponding expression. The evolution during the second gradient then follows, assuming the delays indicated in Fig. 1 (including ∆) to be negligibly short, and after integration over the sample one obtains the echo expression. By numeric summation over 100 uniformly randomly distributed (with respect to starting times, phases, and amplitudes) initial magnetizations M 0 , we obtain the curves shown in Fig. S1. The case including T 2 relaxation is described by Eq. (S6). These simulations assume that, for a small radiation damping contribution (gradient broadening of the linewidth larger than the initial linewidth due to transverse relaxation and radiation damping), spin noise can be envisaged as a series of spontaneous excitation events of random phase. Because they rely on creating a series of random coherences, these assumptions cannot be applied to cases with weaker gradients or to strong-polarization situations, as was the case in (Desvaux, 2013; Pöschko et al., 2017). After these excitations, the individual coherences can be defocused and refocused by field-gradient pulses, as usual. As a consequence of these simulations, the noise power amplitude of the (+) experiments (Fig. 1) is due to transverse coherences excited during the second gradient only, whereas the additional contributions observed in the (−) experiments originate from events during the first gradient of length δ 1 that are refocused during the second gradient δ 2 . As in Fig. 5, Fig. S2 reveals the expected linear behaviour for not too small gradient values. This is particularly true for acetone, while for benzene, whose molecular diffusion coefficient is smaller, more scattered measurements are observed. From this SNGE attenuation vs. gradient experiment (Fig. S2), the ratio of the slope for the acetone component in this mixture to that for the benzene component can be determined. This ratio is comparable to the independent determination of the diffusion coefficients in the pure solvents (acetone and benzene) measured with a SNGE NMR-diffusion experiment (Fig. 3) with a small RF pulse of 0.2 µs added (2.4 × 10⁻⁹ m² s⁻¹ for benzene and 4.58 × 10⁻⁹ m² s⁻¹ for acetone). These self-diffusion coefficients agree well with literature values for benzene and acetone obtained by canonical RF NMR spin-echo experiments (Ertl and Dullien, 1973). This experiment (Fig. S2) illustrates that, in a two-component mixture with different chemical shifts, it is possible to acquire SNGE attenuation data (Fig. 4) and to extract SNGE attenuation curves separately for each diffusing component (Fig. 5 and Fig. S2).
These data are still not enough to obtain a complete data set for the calculation of the diffusion coefficient according to Eq. (3). We could not increase the measurable gradient values to carry out a reliable SNGE experiment at gradients higher than 75 mT/m, so the implementation of the diffusion experiment is limited by the fast gradient duty cycle, which is too demanding for our high-resolution probe. Therefore, additional experimental facilities could bring new advantages for the realization of spin-noise gradient-echo measurements.
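As a side note on the simulations described at the beginning of this section, the random-excitation picture can be sketched compactly in Python (the original simulations used Mathematica). All parameter values below are arbitrary illustration values rather than the experimental ones; the loop sums isochromats that dephase during the remainder of δ1 and refocus under the sign-reversed second gradient, with T2 decay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration values only (not the experimental parameters)
gamma_g = 2 * np.pi * 5e3        # gradient-induced offset per unit length (rad s^-1 m^-1)
L = 0.01                         # sample length along the gradient axis (m)
delta1 = delta2 = 5e-3           # durations of the two gradients (s)
T2 = 50e-3                       # transverse relaxation time (s)
n_events, n_iso = 100, 400       # random excitation events, isochromats in the sample

x = np.linspace(-L / 2, L / 2, n_iso)      # positions along the gradient
t2 = np.linspace(0, delta2, 500)           # time axis during the second gradient

signal = np.zeros_like(t2, dtype=complex)
for _ in range(n_events):
    t0 = rng.uniform(0, delta1)                                  # event time in gradient 1
    m0 = rng.uniform(0, 0.01) * np.exp(1j * rng.uniform(0, 2 * np.pi))
    # Phase acquired during the rest of gradient 1 is unwound by the
    # sign-reversed gradient 2; average over isochromats, decay with T2.
    phase = gamma_g * x[None, :] * ((delta1 - t0) - t2[:, None])
    signal += m0 * np.exp(1j * phase).mean(axis=1) * np.exp(-((delta1 - t0) + t2) / T2)

echo = np.abs(signal)
print("refocused-noise maximum at t2 ~ %.2f ms" % (1e3 * t2[echo.argmax()]))
```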
Molecular diffusion and relaxation measurements
Figure S3 illustrates how the three-gradient-pulse scheme (Fig. 3) produces SNGE data sets in the 1:1 acetone-benzene mixture as a function of ∆. The difference between the relaxation times (T 2 *) determined from the spin noise line shape and from the decay of the spin noise amplitude during the inter-gradient delay is ascribed to different radiation damping times, which are subject to the influence of preamplifier feedback (see main text) (Pöschko et al., 2017)
Figure S1. Numerical simulation of the time course of refocused spin noise during the second gradient of a gradient echo sequence, using Eq. (S6): (a) neglecting transverse relaxation and diffusion, (b) including transverse relaxation. Experimental parameters corresponding to Fig. 1(b) are used in the simulation, with M0 = 1. No additional non-spin random noise was added, as it would exceed the echo amplitude without extensive averaging.
Fig. S1 only shows the contribution from the refocused noise initiated during δ 1 . The signal envelope indicates an echo centre, which is modified by the presence of transverse relaxation (Fig. S1(b)); the location of the maximum gradient echo is otherwise typically that observed with coherent excitation. Figure S2 presents the results of SNGE attenuation vs gradient values for a mixture composed of acetone and benzene in 1:1 concentration. The associated spectra are reported in Fig. 4.
Figure S2. SNGE attenuation vs. increasing gradient strength for the separate signals of acetone (top) and benzene (bottom) in a 1:1 mixture of acetone and benzene.
Figure S3. SNGE experiments as a function of the delay ∆ with a triple gradient sequence (Fig. 3) on a 1:1 mixture of acetone and benzene (with 10% of acetone-d6 for locking). The respective lower traces correspond to the (+) sub-experiment, the higher ones to the (−) measurements. SNGE spectra are shown for two values of the delay, increasing from top to bottom.
Figure S5. (Blue) Spin noise spectrum of a 1:1 mixture of benzene and acetone (as in Figure 6), but recorded without any gradients or pulses. (Red) Lorentzian best fit of the experimental peaks, amounting to FWHM = 37.6 Hz / 39.8 Hz and hence T 2 * = 1/(π · FWHM) = 8.5 ms / 8.0 ms for benzene / acetone, respectively. (Black) Simulated Lorentzian resonance peaks assuming the T 2 * values determined from the three-gradient experiments (cf. Fig. 3), 92 ms / 62.5 ms for benzene / acetone, respectively. The asterisks (*) indicate signals of 13 C satellites as described in (Pöschko et al., 2017). This allows a qualitative comparison of how the SNGE signal is attenuated in a diffusion experiment on mixtures of differently diffusing (and chemically shifted) molecules. The issue resides in obtaining enough measurements in the range of gradient values (i.e., in such linear | 1,680 | 2021-07-12T00:00:00.000 | [
"Physics",
"Chemistry"
] |
BMS symmetry of celestial OPE
In this paper we study the BMS symmetry of the celestial OPE of two positive helicity gravitons in Einstein theory in four dimensions. The celestial OPE is obtained by Mellin transforming the scattering amplitude in the (holomorphic) collinear limit. The collinear limit at leading order gives the singular term of the celestial OPE. We compute the first subleading correction to the OPE by analysing the four graviton scattering amplitude directly in Mellin space. The subleading term can be written as a linear combination of BMS descendants with the OPE coefficients determined by BMS algebra and the coefficient of the leading term in the OPE. This can be done by defining a suitable BMS primary state. We find that among the descendants, which appear at the first subleading order, there is one which is created by holomorphic supertranslation with simple pole on the celestial sphere.
Introduction
Operator product expansion (OPE) plays a central role in the non-perturbative formulation of conformal field theory. OPE is the statement that when two primary operators φ i and φ j come close to each other (inside a correlation function) we can replace the product φ i φ j by a sum over conformal families each of which contains a primary operator, say φ k and its descendants. In general, the OPE coefficient C k ij which multiplies the primary operator φ k , cannot be determined by conformal symmetry alone. But, once C k ij is specified, the
coefficients of all the descendants of φ k are completely fixed by conformal invariance of the OPE. C k ij is known as the structure constant of the operator algebra. The goal of the present paper is to study this aspect of the OPE in case of Celestial Conformal Field Theory.
Celestial CFT is conjectured to be the holographic dual of quantum gravity in asymptotically flat space-time [4][5][6][7][8]. The observables of the celestial CFT are related to Mellin transformations of flat space scattering amplitudes [19][20][21][22][23][24][25][26]. Under Lorentz transformations, which act on the celestial sphere as the global conformal group, Mellin amplitudes transform like correlation functions of a CFT. Now the correspondence between soft theorems [27,28] and Ward identities for asymptotic symmetries [29][30][31][32][33][34][35][36][37][38][39][40] shows that the celestial CFT has, in fact, a much larger symmetry known as BMS [48][49][50]. The BMS group 1 is an extension of the usual Poincaré group and consists of superrotations [32][33][34][35], which are local conformal transformations of the celestial sphere, and supertranslations, which are local angle-dependent space-time translations at null-infinity. Due to the presence of the supertranslations, the properties of the celestial CFT are somewhat different from those of a usual CFT. For example, the BMS algebra is not a direct product of holomorphic and antiholomorphic transformations because supertranslation generators have both holomorphic and antiholomorphic weights. As a result, at least naively, we do not expect holomorphic factorisation at the level of BMS representations. This is a major difference from usual CFTs.
A useful way to study various aspects of celestial CFT and representation theory of BMS algebra is through the construction of celestial OPE. OPE of two primary operators can be obtained by Mellin transformation of the collinear limit of flat space scattering amplitudes [1][2][3]. In the collinear limit, at leading order an (n + 1) point function factorizes into an n point function times a universal splitting function [42,44]. By the Mellin transformation of the splitting function one obtains the leading term in the celestial OPE and the structure constant of the celestial operator algebra. It is conceivable that the subleading terms in the OPE can be generated by Mellin transforming the subleading terms in the collinear expansion [45]. Now for the celestial OPE what is remarkable is that one can obtain the structure constant by imposing a constraint coming from the subleading soft theorem in case of gluons and subsubleading soft theorem in case of gravitons [1]. This suggests that owing to an unusually large amount of global symmetry, algebraic techniques [1,[9][10][11][12] may play a crucial role in determining the structure of celestial correlation functions (or flat-space S-matrix elements). This, in particular, will require an understanding of BMS representation theory in the context of S-matrix theory or celestial amplitudes.
Motivated by this, in this paper we compute the first subleading correction to the (holomorphic) collinear limit directly in Mellin space. The subleading terms in the collinear limit give the subleading terms in the celestial OPE. We focus on the tree level four graviton scattering amplitude in Einstein theory and compute the subleading OPE of two positive helicity outgoing graviton primaries. Unlike in the case of 2-D CFT, the first correction to the leading order result contains the supertranslation descendant created by a singular supertranslation of the form u → u + ε/z. We also show that the subleading OPE coefficients can be derived from the BMS algebra once we define a suitable notion of BMS primary state. This suggests the possibility that, just like in the case of ordinary CFT, the celestial OPE also organizes itself into representations of BMS algebra with the OPE coefficients of BMS descendants determined by BMS algebra. It will be very interesting to prove or disprove this in complete generality.
BMS algebra
Let us now describe BMS transformations acting on a three dimensional space with coordinates (u, z,z) where u can be thought of as the retarded or Bondi time and (z,z) are the stereographic coordinates of the celestial sphere. At the end, when we derive the subleading OPE coefficients from the BMS algebra, we will restrict to the celestial sphere at u = 0.
The transformation of fields
Under an infinitesimal (holomorphic) conformal transformation (2.1), the primary field φ h,h̄ (u, z, z̄) of weight (h, h̄) transforms according to (2.7) [8]; the antiholomorphic transformation acts analogously, as in (2.8) [8]. For an infinitesimal supertranslation given by (2.4), the transformation of the primary field is given by (2.9) [8]. At this point we would like to mention one useful point. From the transformation laws (2.7), (2.8) and (2.9) it is easy to check that if φ h,h̄ (u, z, z̄) is a primary then so is (∂/∂u) n φ h,h̄ (u, z, z̄), with weight (h + n/2, h̄ + n/2).
Superrotation and supertranslation Ward identities
In celestial CFT, correlation functions of the two dimensional primary operators φ h,h̄ (z, z̄) are defined as Mellin transformations of flat space S-matrix elements [4,5], as in (3.1), where ε i = ±1 for outgoing and incoming particles, respectively. The null momentum p(ω, z, z̄) is parametrised accordingly, and σ denotes the helicity of the particle. Under a (Lorentz) global conformal transformation, the L.H.S of (3.1) transforms as the correlation function of primary operators of weight (h i , h̄ i ). The action of global space-time translations on (3.1) was studied in [26]. The two dimensional field φ h,h̄ (z, z̄) is the restriction of the three dimensional field φ h,h̄ (u, z, z̄) to the u = 0 celestial sphere. The correlation function of the three dimensional fields φ h,h̄ (u, z, z̄) is defined as in (3.4) [8].
Under a (Lorentz) global conformal transformation, the L.H.S of (3.4) likewise transforms as the correlation function of primary operators. Similarly, if we perform a global space-time translation under which u → u + a + bz + b̄z̄ + cz z̄, with (z, z̄) remaining fixed, the correlation function (3.4) is invariant. Let us now discuss the transformation law of the correlation functions under local BMS transformations, which are captured by BMS Ward identities.
It is well known that Cachazo-Strominger subleading soft graviton theorem [28] is equivalent to the (superrotation) conformal Ward identity [35], where the stress tensor T (z) can be constructed as the shadow of the subleading soft graviton. In [35] the Ward-identity was derived for the two dimensional fields φ h,h (z,z), but the same derivation can be easily repeated for the fields φ h,h (u, z,z) and one obtains (3.6). For details please see appendices A and B. It is important to note that the stress tensor T (z) does not depend on the time coordinate u because it is constructed from a soft graviton and in the soft limit the time coordinate decouples.
The singular terms in the OPE between the stress tensor T(z) and the primary φ_{h,h̄}(u, w, w̄) are given by [11], This is consistent with the transformation law (2.7), which one can check by using the standard 2-D CFT method. The OPE (3.7) gives the commutation relation, where the Virasoro generator L_n is defined in the usual manner as, with c_0 defined as a contour around z = 0. Similarly, Weinberg's soft graviton theorem [27] is equivalent to the supertranslation Ward identity given by [29, 30, 39], Here P(z) is the supertranslation current and can be written as P(z) = − lim_{iλ→0} iλ ∂ G^+_{Δ=1+iλ}, where G^+_Δ is the positive helicity graviton primary of weight Δ(= h + h̄). Again, P(z) has no u dependence.
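As a reminder of the objects just introduced, the mode expansion and the Ward identity can be written schematically as follows (standard forms, possibly differing from the original equations by factors of ε_i and overall normalization):

```latex
L_n = \oint_{c_0}\frac{dz}{2\pi i}\, z^{\,n+1}\,T(z),
\qquad
[L_n,\phi_{h,\bar h}(u,w,\bar w)]
 = \Big( h\,(n+1)\,w^{n} + w^{n+1}\partial_w
        + \tfrac{n+1}{2}\,w^{n}\,u\,\partial_u \Big)\phi_{h,\bar h}(u,w,\bar w),
% Supertranslation Ward identity following from Weinberg's soft theorem:
\Big\langle P(z)\prod_{i}\phi_{h_i,\bar h_i}(u_i,z_i,\bar z_i)\Big\rangle
 = \sum_{i}\frac{1}{z-z_i}\,\partial_{u_i}
   \Big\langle \prod_{i}\phi_{h_i,\bar h_i}(u_i,z_i,\bar z_i)\Big\rangle .
```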
The singular term in the OPE between P(z) and a matter primary φ_{h,h̄}(u, z, z̄) is given by, This OPE is equivalent to the commutation relation, where the supertranslation generator P_{a,−1}, defined as in [29], generates the holomorphic supertranslation u → u + z^{a+1}. The commutation relation (3.12) shows that, From the BMS commutation relation (2.6) one can check that the supertranslation generator P_{a,−1} has weight (−a − 1/2, 1/2). So for a > −1 the holomorphic weight of P_{a,−1} is negative and it annihilates the primary operator φ_{h,h̄}(0).
Supertranslation
Let us first consider the (holomorphic) supertranslation descendants.
Let us assume that the standard CFT form for the OPE between the supertranslation current P(z) and a matter primary φ_{h,h̄}(u, z, z̄) holds, i.e., where the leading term is given by, From the conformal transformation laws (2.7) and (2.8) of primary fields, we know that i ∂/∂u φ_{h,h̄}(u, z, z̄) also transforms like a primary field of weight (h + 1/2, h̄ + 1/2). We denote this field by The nonsingular terms of the OPE (4.1) define the (holomorphic) supertranslation descendants, which are new local fields created by singular supertranslations of the form,
The descendants can be defined by the usual contour integral formula, which follows from (4.1). Here c_z is a contour around w = z. Later in the paper we will explicitly verify the existence of these descendants by taking the leading conformal soft limit of the tree-level four graviton scattering amplitude in Einstein theory. We now need to find out the correlation functions with the insertion of the descendants. This can be computed in the standard way by using the Ward identity (3.10) and taking the limit w → z. In this limit we can use the OPE (4.1) and obtain, where the differential operator P_{−a,−1}(z) acting on correlation functions of primary operators is defined as, The rest of the (holomorphic) supertranslation descendants, which are of the form P_{−a,−b}φ with a, b > 1, will not appear in the OPE to the first subleading order, and so we leave their discussion to future work. The Mellin transform of graviton scattering amplitudes in string theory [25] is well defined without the time coordinate u. In this case the correlation function with the insertion of a holomorphic supertranslation descendant is given by a simple change of (4.7), where ε = ±1 for an outgoing and incoming particle, respectively, and
Superrotation or Virasoro
The discussion of superrotation or Virasoro descendants is identical to that in 2-D CFT. The correlation function with the insertion of (L −n φ h,h )(u, z,z) is given by, This can be obtained by assuming the following OPE between the stress tensor T (z) and the primary field φ h,h (u, z,z), In the absence of the time coordinate u, superrotation transformations act on the primaries φ h,h (z,z) exactly in the same way as local conformal transformations act on Virasoro primaries in 2-D CFT.
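For completeness, the standard Belavin-Polyakov-Zamolodchikov expression for a correlator with one Virasoro descendant insertion, which the statement above appeals to, reads as follows when restricted to the u = 0 celestial sphere (for the three dimensional fields the superrotation action on u would add further u_i ∂_{u_i}-type terms):

```latex
\Big\langle (L_{-n}\phi_{h,\bar h})(z,\bar z)\prod_{i}\phi_{h_i,\bar h_i}(z_i,\bar z_i)\Big\rangle
 = \mathcal{L}_{-n}(z)\,
   \Big\langle \phi_{h,\bar h}(z,\bar z)\prod_{i}\phi_{h_i,\bar h_i}(z_i,\bar z_i)\Big\rangle,
\qquad n\ge 1,
\\[4pt]
\mathcal{L}_{-n}(z) = \sum_{i}\left[\frac{(n-1)\,h_i}{(z_i-z)^{\,n}}
   - \frac{1}{(z_i-z)^{\,n-1}}\,\partial_{z_i}\right].
```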
OPE from four graviton scattering amplitude in Einstein theory
In this section and the following we denote a graviton primary operator of scaling dimension Δ(= 1 + iλ) by G^±_Δ, where ± is the helicity. The simplified notation G^±_{Δ_i}(i) means that the primary operator is inserted at the point (u_i, z_i, z̄_i).
For simplicity we focus on the four graviton tree-level scattering amplitude in Einstein theory, given by,² where (1, 2) are incoming and (3, 4) are outgoing. We have also used the relations.³ [Footnote 2: An n-graviton amplitude is multiplied by a factor of (κ/2)^{n−2}, where κ = √(32πG_N). To simplify the formulas we work in units where κ = 2.] [Footnote 3: We work in split signature and parametrize a null momentum p as p = ωq(z, z̄) = ω(1 + zz̄, z + z̄, z − z̄, 1 − zz̄).]
where ε_i = ±1 for an outgoing and an incoming particle, respectively. As in [1], we work in split signature so that we can treat z and z̄ as independent real variables. It is also important that in split signature there is a non-zero three point function, which is crucial for our purpose.
In order to facilitate the (holomorphic) OPE expansion as z_3 → z_4, with z̄_34 held fixed, we write the momentum-conserving delta function as, where Now we make a change of variables. In terms of these new variables the Mellin amplitude can be computed easily and is given by, In writing down this integral we have assumed the OPE limit, i.e., |z_34| ≪ |z_13|, |z_23|, |z_14|, |z_24|. The details of the derivation are given in appendices A and B.
Leading term of the OPE
We can see from (5.7) that the expression inside the bracket multiplying the term z̄_34/z_34 is finite in the limit z_34, z̄_34, u_34 → 0, and so we can Taylor expand around z_34 = z̄_34 = u_34 = 0.
The leading term in the expansion is given by, Now the three graviton amplitude in Mellin space, when there are two negative helicity incoming gravitons at 1 and 2 with weights Δ_1(= 1 + iλ_1) and Δ_2(= 1 + iλ_2) and one positive helicity outgoing graviton at 4 with weight Δ_3 + Δ_4 − 1(= 1 + iλ_3 + iλ_4), is given by, Using (5.9) we can write the leading term (5.8) in the four point function as, where B(p, q) is the Euler beta function. This leading term corresponds to the leading term in the OPE given by, where Δ_i = 1 + iλ_i. In writing the last line of (5.11) we have used the fact that i ∂/∂u_4 G^+_{Δ_3+Δ_4−1}(4) is a primary with weight Δ_3 + Δ_4. The leading answer (5.11) matches with [1].
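As a point of reference, the well known leading holomorphic OPE of two positive helicity graviton primaries on the u = 0 celestial sphere, with which the text above says (5.11) agrees, has the form below (quoted schematically; the overall sign and normalization are convention dependent):

```latex
G^{+}_{\Delta_1}(z_1,\bar z_1)\,G^{+}_{\Delta_2}(z_2,\bar z_2)
 \;\sim\; -\,\frac{\bar z_{12}}{z_{12}}\,
   B(\Delta_1-1,\,\Delta_2-1)\;
   G^{+}_{\Delta_1+\Delta_2}(z_2,\bar z_2)\;+\;\cdots,
\qquad
B(p,q)=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)} .
```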
The leading term (5.11) can be written in a more suggestive form as, where we have used (4.7) and the definition of the supertranslation current.⁴ [Footnote 4: We would like to emphasize that this equation is true only in the OPE limit z_3 → z_4; (6.9) is essentially the OPE (4.1) between the supertranslation current P(z) and the graviton primary G^+_{Δ_4}(4).] The quantity appearing in it is defined in (5.9). In the above expression we have obtained the O(z_34) term by expanding the Dirac delta function appearing in the integrand of (5.7). We will now identify each term in the expansion as a contribution coming from some descendant of the operator G^+_{Δ_3+Δ_4−1}(4):
(1) The first term can be written as, (2) The second term can be written as, Now we would like to stress one important point. The expression for the three point function M_3(1^− 2^− 4^+) contains the kinematic theta function Θ(z_42/z_12)Θ(z_14/z_12), and so, when ∂/∂z_4 acts on it, it produces terms proportional to the delta functions δ(z_42/z_12) and δ(z_14/z_12). Now, as long as z_12 is finite, the delta function terms are nonzero only if z_4 coincides with either z_1 or z_2. But this is ruled out in the OPE limit, and so we should set the (contact) terms obtained by differentiating the theta functions to zero. We have taken this fact into account in going from the first line to the second line in (7.3).
(3) The third term can be written as, (4) The fourth term can be written as, Therefore at the level of the 4-point function we get the following OPE, We have boxed the descendant P_{−2,−1}G^+_{Δ_3+Δ_4−1}(4) just to emphasize the fact that it is not a Poincaré descendant. In the above equation we have not written an equality sign because there may be extra terms which are not visible at the level of the 4-point function. In the following section we will see that this is not the case, according to BMS representation theory.
OPE coefficients from BMS algebra
Let us start with the commutation relation between the supertranslation and (superrotation) Virasoro generators, In particular for n = 0 this reduces to, We can see that the supertranslation generator P_{a,b} has negative holomorphic or antiholomorphic scaling dimension for a > −1 or b > −1. The advantage of imposing the primary condition (8.3) also for generators of this mixed type is that it gives the correct OPE coefficients, as we will see. The definition (8.3) of the BMS primary can be motivated by the following heuristic argument. Suppose the primary operator φ_{h,h̄}(0) and (a subset of) its descendants [P_{a,b}, φ_{h,h̄}(0)] appear in the OPE of two primaries φ_{h_1,h̄_1}(u, z, z̄) and φ_{h_2,h̄_2}(0). The operator [P_{a,b}, φ_{h,h̄}(0)] appears in the OPE with a prefactor proportional to Now, symmetry requires all descendants to be present in the OPE, and this implies that the order of the pole in z or z̄ cannot be bounded from above, because a, b can be arbitrarily large positive integers. But this is impossible, because in the present situation the leading pole can be obtained by Mellin transforming the splitting function in the collinear limit [1-3]. So it seems reasonable to assume that the order of the pole in z or z̄ is bounded from above, and therefore we should impose the condition (8.3) on the primary φ_{h,h̄}.
For the Virasoro generators we have the standard conditions that,
and Therefore we can now consider the descendants generated by the raising operators only.
A Hilbert space picture
In the following discussion it will be convenient to assume a Hilbert space picture and define a vacuum |0⟩ by the condition, We also define the state |h, h̄⟩ as, Using the above definitions, the BMS-primary state condition can be written as, For the restricted class of supertranslation generators P_{a,b} with both a, b > −1, condition (8.7) was also proposed in [41]. With this definition of the primary state the BMS descendants are given by, where n_1 ≥ n_2 ≥ . . . ≥ n_p > 0 and a_i > 0, b_i > 0. We do not put any order on the {a_i} and {b_i} because the supertranslation generators commute among themselves.
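Putting the statements of this subsection together, the BMS-primary state conditions being used can be summarized schematically as follows; this is a paraphrase assembled from the surrounding text (conditions of the type (8.6)-(8.10)), not a verbatim quotation:

```latex
L_0\,|h,\bar h\rangle = h\,|h,\bar h\rangle,\qquad
\bar L_0\,|h,\bar h\rangle = \bar h\,|h,\bar h\rangle,\qquad
L_n\,|h,\bar h\rangle = \bar L_n\,|h,\bar h\rangle = 0 \quad (n\ge 1),
\\[4pt]
P_{a,b}\,|h,\bar h\rangle = 0 \quad\text{whenever } a>-1 \ \text{or}\ b>-1 ,
```

so that non-trivial descendants are generated only by the raising operators L_{−n}, L̄_{−n} (n ≥ 1) and P_{−a,−b} (a, b > 0), as stated above.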
Primary descendants
Suppose |h, h̄⟩ is a BMS-primary state. Then it is easy to check using the commutation relations that the state (P_{−1,−1})^n |h, h̄⟩, with n ≥ 1, is also a BMS-primary with weight (h + n/2, h̄ + n/2). We denote this state by
(P_{−1,−1})^n |h, h̄⟩ = |h + n/2, h̄ + n/2⟩.   (8.14)
Graviton-graviton OPE
From now on our discussion will be confined to the OPE of two positive helicity gravitons, denoted by G^+_Δ(u, z, z̄). In our case, Δ = 1 + iλ and so,
At this point let us note that although the Mellin amplitudes in Einstein gravity are UV divergent, the OPE (7.6) has no singularity as u_34 → 0, and we can safely restrict the OPE to the celestial sphere at constant u if we wish. From this point of view, the time coordinate u acts as a covariant regulator in the celestial CFT, which can be set to zero at the end of the calculation. For simplicity, we will do so. Once we set u = 0, modulo the singular prefactor, the OPE expansion becomes a Taylor series in z and z̄ around the origin (z = 0, z̄ = 0), and we can write (7.6) in the following form, where G^+_{Δ_1}(z, z̄) = G^+_{Δ_1}(u = 0, z, z̄). Let us now enumerate the possible subleading terms in the OPE at O(z) and O(z̄). For this we remind ourselves that P_{−1,−1} is a (1/2, 1/2) operator. Taking this into account we get, If we keep powers of u then there will be an additional operator at O(u) given by (P_{−1,−1})^2, which follows from the fact that u has scaling dimension (−1/2, −1/2). Now generically, at every order, there can be descendants and also new primaries. But here we focus on the contribution arising from a specific primary G^+_{Δ_1+Δ_2−1}(0). So let us write the first subleading order in the OPE as, Here c̄_i is not the complex conjugate of c_i. Also, for simplicity of notation, we have kept the dependence of the OPE coefficients (c_i, c̄_i) on the scaling dimensions of the primary operators implicit.
We will now compute the coefficients (c_i, c̄_i) from the fact that both sides of the OPE must transform in the same way under BMS transformations. In order to do this it is more convenient to write the OPE as,
where, The BMS generators act on the operators as, The first two relations are obtained by setting u = 0 in the commutation relations (3.8). The last commutator is obtained from, by setting u = 0 and recognizing that i∂_u φ_{h,h̄}(u, z, z̄) is the BMS-primary (descendant) (P_{−1,−1}φ_{h,h̄})(u, z, z̄) with dimension (h + 1/2, h̄ + 1/2). With this information one can readily compute the OPE coefficients, in the standard way, by applying the lowering operators to both sides of (8.20). Let us now state the equations for the OPE coefficients obtained in this way: (1) Applying P_{0,−1} we get, (2) Applying P_{−1,0} we get, (3) Applying L_1 we get, This matches exactly with the OPE coefficients (7.6), obtained from the graviton scattering amplitude, with the replacement λ_3 → λ_1 and λ_4 → λ_2. We would like to point out that since the descendants P_{−2,−1}|h_3, h̄_3⟩ and P_{−1,−2}|h_3, h̄_3⟩ appear in the OPE (8.20), the above equations for the OPE coefficients probe the BMS algebra beyond the Poincaré subalgebra.
Before we conclude, we would like to point out that to arrive at the equations for the OPE coefficients we need commutators of the form [L̄_1, P_{−2,−1}]|h_3, h̄_3⟩ = P_{−2,0}|h_3, h̄_3⟩,
which by the definition (8.10) of a primary state is equal to zero. The generator P_{−2,0} generates a supertranslation of the mixed type, u → u + z̄/z. So we can see that the condition that generators of the mixed type should also annihilate a primary state is necessary to obtain the correct OPE coefficients starting from the BMS algebra, at least to the order at which we are working.
Virasoro representations
Since Virasoro algebra is a subalgebra of the BMS algebra, one should be able to decompose a BMS representation into irreducible Virasoro representations. This is useful because in this way one can use the known results from the (highest-weight) Virasoro representation theory to compute some of the OPE coefficients. At the first subleading order this can be done in the following way.
Let us write the OPE at the first subleading order as, where we have set c̄_2 = 0 following (8.29). Among these states, the state L̄_{−1}P_{−1,−1}|h_3, h̄_3⟩ is the antiholomorphic Virasoro descendant of the BMS primary P_{−1,−1}|h_3, h̄_3⟩. We have two more states, given by L_{−1}P_{−1,−1}|h_3, h̄_3⟩ and P_{−2,−1}|h_3, h̄_3⟩, one of which is a Virasoro descendant and the other one is a supertranslation descendant. So let us consider the state, The corresponding field can be written as, where the correlation function with the insertion of (P_{−2,−1}G_{h_3,h̄_3})(z, z̄) is given by (4.9). Now one can check, using the BMS algebra and the definition of the BMS primary state |h_3, h̄_3⟩, that |ψ⟩ is a Virasoro primary but not a BMS primary. For example, one can check that, and so on. Now using the Virasoro primary |ψ⟩ we can rewrite the OPE (8.30) as,
where, The OPE (8.34) now has the familiar structure of an OPE in 2-D CFT. The first line consists of the Virasoro primary |h_3 + 1/2, h̄_3 + 1/2⟩ and its level 1 descendants. In the second line we have a new Virasoro primary |ψ⟩ with the (Virasoro) structure constant B(iλ_1, iλ_2)c_2. The Virasoro structure constant B(iλ_1, iλ_2)c_2 cannot be determined by the Virasoro algebra alone, but the coefficients c_1 and c̄_1 are determined by the Virasoro algebra. They are given by the known formulas, Using the values (8.21) for the scaling dimensions we get, from (8.36) and (8.37), that We can see that the value of c̄_1 obtained in this way using the Virasoro algebra matches with the value (8.27) obtained using translational invariance of the OPE. Similarly, the value of c_1 also matches with (8.35) once we substitute the values (8.26) and (8.28) of c_1 and c_2, obtained using the BMS algebra. This matching is a check of the overall consistency of the procedure.
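The "known formulas" mentioned here are the standard 2-D CFT expressions for the level one descendant coefficients in the OPE of two Virasoro primaries. As a reminder (and as an assumption about the conventions in use), for primaries of weights (h_1, h̄_1) and (h_2, h̄_2) fusing into a primary of weight (h, h̄):

```latex
\phi_{h_1,\bar h_1}(z,\bar z)\,\phi_{h_2,\bar h_2}(0)
 \;\supset\; C_{12}\; z^{\,h-h_1-h_2}\,\bar z^{\,\bar h-\bar h_1-\bar h_2}
   \Big( \phi_{h,\bar h}(0)
     + \frac{h+h_1-h_2}{2h}\, z\,(L_{-1}\phi_{h,\bar h})(0)
     + \frac{\bar h+\bar h_1-\bar h_2}{2\bar h}\, \bar z\,(\bar L_{-1}\phi_{h,\bar h})(0)
     + \cdots \Big).
```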
Future directions
The problem of extracting the celestial OPE from flat space scattering amplitudes consists of two parts. The first part is the (holomorphic or antiholomorphic) collinear expansion including subleading terms, and the second part is the determination of the celestial correlation functions with the insertion of BMS descendants. The correlation functions involving superrotation or Virasoro descendants are well known from the work of Belavin-Polyakov-Zamolodchikov on two dimensional CFTs. The new objects are the supertranslation descendants. Among these, the simplest ones are the descendants {P_{−a,−1}φ, a > 1} created by singular (anti)holomorphic supertranslations. These are captured in a straightforward manner by the supertranslation Ward identity (3.10) following from Weinberg's soft graviton theorem. The correlation functions with the insertion of (anti)holomorphic supertranslation descendants follow from this Ward identity and are given by (4.7) or (4.9). But there are also descendants of the form {P_{−a,−b}φ, a > 1, b > 1} created by supertranslations which are neither purely holomorphic nor antiholomorphic. For example, if we want to go to higher order in the OPE expansion (8.19), then at O(zz̄) we encounter the descendant P_{−2,−2}G^+_{Δ_1+Δ_2−1}(0). Unless we know the Mellin amplitude with the insertion of this descendant, it will not be possible to extract the OPE at higher order. Although the Ward identity (3.10) captures all the supertranslation descendants [29], not just the holomorphic ones, it may require a more involved procedure to determine the correlation function of a general supertranslation descendant. Another important point is that the BMS algebra may have a (field-dependent) central extension [46, 47]. In this paper the central extension does not play any role, because at the first subleading order the Virasoro descendants {L_{−n}φ, n > 1} do not appear, although they do appear at higher orders of the OPE. The values of the OPE coefficients should depend on the central charge, and perhaps one can determine the central charge by demanding that the OPE coefficients determined from the BMS algebra match those obtained from the subleading terms in the collinear expansion of the Mellin amplitude. We hope to return to these problems in the future.
A Mellin transform of four graviton amplitude and holomorphic collinear expansion
In this appendix we discuss the Mellin transform of the tree-level four graviton scattering amplitude in Einstein theory in detail. The four graviton amplitude in momentum space is given by, where we have taken (1, 2) to be incoming and (3, 4) to be outgoing. We work in split signature so that we can treat z and z̄ as independent real variables. The OPE limit we are interested in is z_3 → z_4 with z̄_34 held fixed.
Therefore the residue at the pole is the leading soft operator S_0(z, z̄, σ), up to a sign. So we can write When inserted in an S-matrix element this leads to the leading conformal soft theorem. We would like to stress that this construction is valid only when all the operators are inserted in an S-matrix element; then we can meaningfully talk about analytic continuation in λ. In the same fashion we can talk about the subleading (n = 0) conformal soft limit, which is obtained in the limit λ → i: we have The insertion of such an operator will lead to the subleading conformal soft theorem without contamination from the leading soft theorem. It is worth noticing that the appearance of a single power of u is determined by dimensional analysis, because u transforms like a primary of dimension −1 and spin 0. Similarly, for p ≥ 1 we can write S_{p+1}(z, z̄, σ) = lim We can see that in the absence of n > 1 terms in the soft expansion (B.3) the poles in the upper-half λ plane correspond to the IR behaviour of the scattering amplitude.
We are now ready to show how the Virasoro Ward identity arises from the subleading soft theorem. Let A_{n+1}({p_i}_{i∈{1,...,n}}, q) be the (n + 1)-point scattering amplitude involving n massless particles and one graviton of momentum q^μ and polarization ε^{(±)}_{μν}(q). The soft limit q → 0 for the graviton gives [27, 28] A_{n+1}({p_i}_{i∈{1,...,n}}, q) → (S^{(0)} + S^{(1)}) A_n + O(q), where J^{λν}_j is the sum of the spin and orbital angular momentum of the j-th particle. Let us focus, without loss of generality, on the minus helicity case. In the coordinates (ω, z, z̄) the subleading soft factor takes the form given in (B.12),⁷ [Footnote 7: We are using here the convention κ = √(32πG) = 2, as done in the main text of the paper.]
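For reference, the leading and subleading soft graviton factors entering this expansion are, in the standard Weinberg and Cachazo-Strominger form (restated schematically; the paper's own normalization with κ = 2 may introduce additional constant factors):

```latex
S^{(0)\pm} = \sum_{j=1}^{n}
  \frac{\epsilon^{(\pm)}_{\mu\nu}\, p_j^{\mu} p_j^{\nu}}{p_j\cdot q},
\qquad
S^{(1)\pm} = \sum_{j=1}^{n}
  \frac{\epsilon^{(\pm)}_{\mu\nu}\, p_j^{\mu}\, q_{\lambda}\, J_j^{\lambda\nu}}{p_j\cdot q}.
```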
Here ĥ_j = ½(σ_j − ω_j ∂_{ω_j}). As we have already explained, the insertion of the operator S_1 defined in (B.6) in the scattering amplitudes will extract the subleading soft behaviour. Now, supposing that we have n_1 incoming and n_2 outgoing hard particles (n_1 + n_2 = n), we can define the subleading soft graviton contribution to an (n + 1)-point scattering amplitude in the modified Mellin basis as follows, ⟨0| S_1(z, z̄, −) a_out(p_{j_2}(u_{j_2}, z_{j_2}, z̄_{j_2}), σ_{j_2}) . . . , as expected. At this point it is worth considering the following operator [35], T(z) = (1/2π) . . . If we identify the correlators of the dual theory with S-matrix elements transformed to the modified Mellin basis as follows, then the insertion of such an operator T(z) will give, after some algebra,
T(z) φ_{h_1,h̄_1}(u_1, z_1, z̄_1) . . . φ_{h_n,h̄_n}(u_n, z_n, z̄_n) = [ . . . ] φ_{h_1,h̄_1}(u_1, z_1, z̄_1) . . . φ_{h_n,h̄_n}(u_n, z_n, z̄_n),   (B.17)
where h_i = (Δ_i + σ_i)/2. A similar calculation can be done for the positive helicity graviton using the following operator, and after similar steps we end up with
T̄(z̄) φ_{h_1,h̄_1}(u_1, z_1, z̄_1) . . . φ_{h_n,h̄_n}(u_n, z_n, z̄_n) = [ . . . ] φ_{h_1,h̄_1}(u_1, z_1, z̄_1) . . . φ_{h_n,h̄_n}(u_n, z_n, z̄_n).   (B.19)
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
| 7,666.4 | 2020-04-01T00:00:00.000 | ["Physics", "Mathematics"] |
Canonical and log canonical thresholds of multiple projective spaces
In this paper we show that the global (log) canonical threshold of $d$-sheeted covers of the $M$-dimensional projective space of index 1, where $d\geqslant 4$, is equal to one for almost all families (except for a finite set). The varieties are assumed to have at most quadratic singularities, the rank of which is bounded from below, and to satisfy the regularity conditions. This implies birational rigidity of new large classes of Fano-Mori fibre spaces over a base, the dimension of which is bounded from above by a constant that depends (quadratically) on the dimension of the fibre only.
Introduction
0.1. Statement of the main results. In [1] general d-sheeted covers of the complex projective space P = P^M which are Fano varieties of index 1 with at most quadratic singularities, the rank of which is bounded from below, were shown to be birationally superrigid. In this paper we will prove that for almost all values of the discrete parameters defining these varieties a general multiple projective space of index 1 satisfies a much stronger property: its global canonical (and the more so, log canonical) threshold is equal to 1. Now [2] immediately implies birational rigidity type results for fibre spaces, the fibres of which are multiple projective spaces, and for new classes of Fano direct products [3]. Let us give precise statements.
Fix a pair of positive integers (d, l) ∈ Z^{×2}_+ in the set described by the following table: with homogeneous coordinates x_0, . . . , x_M, ξ, where the x_i are of weight 1 and ξ is of weight l, and a quasi-homogeneous polynomial of degree dl (that is, A_i(x_0, . . . , x_M) is a homogeneous polynomial of degree il, i = 1, . . . , d). The space H^0(P, O_P(il)) parameterizes all such polynomials. If the hypersurface V = {F = 0} ⊂ P has at most quadratic singularities of rank ≥ 7 (and we will consider hypersurfaces with stronger restrictions on the rank), then V is a factorial variety with terminal singularities, see [1], so that where H is the class of a "hyperplane section", that is, of the divisor V ∩ {λ = 0}, where λ(x_0, . . . , x_M) is an arbitrary linear form. Below, for all the values of d, l under consideration, we will define explicitly a positive integral-valued function ε(d, l), which behaves as (1/2)M² as the dimension M grows. As in [1], we identify the polynomial F ∈ F and the corresponding hypersurface {F = 0}, which makes it possible to write V ∈ F. The following theorem is the main result of the present paper.
Theorem 0.1. There is a Zariski open subset F_reg ⊂ F such that: (i) every hypersurface V ∈ F_reg has at most quadratic singularities of rank ≥ 8 and for that reason is a factorial Fano variety of index 1 with terminal singularities, (ii) the inequality codim((F \ F_reg) ⊂ F) ≥ ε(d, l) holds, (iii) for every variety V ∈ F_reg and every divisor D ∼ nH the pair (V, (1/n)D) is canonical. Now [2, Theorem 1.1] makes it possible to describe the birational geometry of Fano-Mori fibre spaces, the fibres of which are multiple projective spaces of index 1.
Let η: X → S be a locally trivial fibre space, the base of which is a non-singular projective rationally connected variety S of dimension dim S < ε(d, l), and the fibre is the weighted projective space P. Consider an irreducible hypersurface W ⊂ X, such that for every point s ∈ S the intersection η^{−1}(s) ∩ W ∈ F is a multiple projective space of the type described above. The claim (ii) of Theorem 0.1 implies that we may assume that W_s = η^{−1}(s) ∩ W ∈ F_reg for every point of the base s ∈ S, if the linear system |W| is sufficiently mobile on X and the hypersurface W is sufficiently general in that linear system. Set The variety W, by the claim (i) of Theorem 0.1, has at most quadratic singularities of rank ≥ 8, and for that reason is a factorial variety with terminal singularities. Therefore, η: W → S is a Fano-Mori fibre space, the fibres of which are multiple projective spaces of index 1. Let η′: W′ → S′ be an arbitrary rationally connected fibre space, that is, a morphism of projective algebraic varieties, where the base S′ and the fibre of general position (η′)^{−1}(s′), s′ ∈ S′, are rationally connected, and moreover dim W′ = dim W. Now [2, Theorem 1.1], combined with Theorem 0.1, immediately gives the following result.
Theorem 0.2. Assume that the Fano-Mori fibre space η: W → S satisfies the following condition: for every mobile family C of curves on the base S, sweeping out S, and a general curve C ∈ C, the class of an algebraic cycle is not effective, that is, it is not rationally equivalent to an effective cycle of dimension M. Then every birational map χ: W ⇢ W′ onto the total space of a rationally connected fibre space W′/S′ (if such maps exist) is fibre-wise, that is, there is a rational dominant map ζ: S ⇢ S′ such that the following diagram commutes: Corollary 0.1. In the assumptions of Theorem 0.2, on the variety W there are no structures of a rationally connected fibre space (and, the more so, of a Fano-Mori fibre space), the fibre of which is of dimension less than M. In particular, the variety W is non-rational, and every birational self-map of the variety W commutes with the projection η and for that reason induces a birational self-map of the base S.
The condition on cycles of dimension M described in Theorem 0.2 is satisfied if the linear system |W| is sufficiently mobile on X. Let us demonstrate this by an especially transparent example, when X = P × S is the trivial fibre space over S. Let o* = (0 : . . . : 0 : 1) = (0^{M+1} : 1) ∈ P be the only singular point of the weighted projective space P. Consider the projection "from the point o*", where π_P((x_0 : . . . : x_M : ξ)) = (x_0 : . . . : x_M). Let H be the π_P-pullback on P of the class of a hyperplane in P^M. The pull back of the class H to X = P × S with respect to the projection onto the first factor we denote for simplicity by the same symbol H. Now Pic X = ZH ⊕ η* Pic S, so that for some class R ∈ Pic S the relation W ∼ dlH + η*R holds, and for that reason This implies that the condition of Theorem 0.2 holds if for any mobile family of curves C, sweeping out S, and a general curve C ∈ C, the inequality holds. Therefore, the following claim is true. Theorem 0.3. Assume that the class R + K_S ∈ Pic S is pseudo-effective and for every point s ∈ S we have η^{−1}(s) = W_s ∈ F_reg. Then, in the notations of Theorem 0.2, every birational map χ: W ⇢ W′ is fibre-wise. In particular, every birational self-map χ ∈ Bir W induces a birational self-map of the base S.
Another standard application of Theorem 0.1 is given by the theorem on the birational geometry of Fano direct products [3, Theorem 1]. Recall that the following statement is true.
Theorem 0.4. Assume that the primitive Fano varieties V_1, . . . , V_N satisfy the following properties: (i) for every effective divisor there are no other structures of a rationally connected fibre space, apart from the projections onto direct factors The property (ii) was shown in [1] for a wider class of multiple projective spaces than the one that is considered in this paper. Of course, Theorem 0.1 implies that the conditions (i) and (ii) are satisfied for every variety V ∈ F_reg. Therefore, every variety considered in the present paper can be taken as a factor of the direct product in Theorem 0.4. 0.2. The regularity conditions. The open subset F_reg is given by explicit local regularity conditions, which we will now describe. To begin with, let us introduce an auxiliary integral-valued parameter ρ ∈ {1, 2, 3, 4}, depending on (d, l). Its meaning (the number of reductions to a hyperplane section used in the proof of Theorem 0.1) will become clear later. Set ρ = 4 if d = 4 and 21 ≤ l ≤ 25, and ρ = 1 if d ≥ 18 and l ≥ 2. For the remaining possible pairs (d, l) the value ρ ≥ 2 is given by the following If the pair (d, l) is not in the table, then ρ = 1 (for instance, for d = 14, l ≥ 4). One more table gives the function ε(d, l), bounding from below the codimension of the complement to the set F_reg. We write this function as a function of the dimension M = (d − 1)l, for each of the possible values of the parameter ρ defined above.
Now let us state the regularity conditions. Let o ∈ V be some point. The coordinate system (x_0 : x_1 : · · · : x_M : ξ) can be chosen in such a way that o = (1 : 0 : · · · : 0 : 0) (see [1, §1]). The corresponding affine coordinates are where the (non-homogeneous) polynomial a_i(z_*) is of degree il. Furthermore, the following fact is true ([1, ]): for any homogeneous polynomial γ(x_0, . . . , x_M) of degree l the equation ξ = γ(x_*) defines a hypersurface R_γ ⊂ P that does not contain the point o* = (0^{M+1} : 1), and moreover the projection π_P|_{R_γ}: R_γ → P^M is an isomorphism. In this way, the hypersurface V ∩ R_γ in R_γ identifies naturally with a hypersurface in P^M, and its intersection with the affine chart {x_0 ≠ 0} identifies with a hypersurface in the affine space A^M_{z_1,...,z_M}. The regularity conditions given below are assumed to be satisfied for the hypersurface V_γ = V ∩ R_γ for a general polynomial γ(x_*).
Assume that the point o ∈ V is non-singular, so that o ∈ V_γ is non-singular, too. Let P ⊂ A^M be an arbitrary linear subspace of codimension (ρ − 1) such that o ∈ P and P is not contained in the tangent hyperplane T_o V_γ. Let f_P = q_1 + q_2 + · · · + q_{dl} be the affine equation of the hypersurface P ∩ V_γ, which is non-singular at the point o, decomposed into homogeneous components (with respect to an arbitrary system of linear coordinates on P).
(R1.1) For any linear form λ ∈ q_1 the sequence of homogeneous polynomials is at least 8 + 2(ρ − 2). We say that a non-singular point o ∈ V is regular if for a general polynomial γ(x_*) and any subspace P ⊄ T_o V_γ the conditions (R1.1-3) are satisfied.
Assume now that the point o ∈ V is singular, so that the hypersurface V γ is also singular at that point.
(R2.1) The point o ∈ V_γ is a quadratic singularity of rank ≥ 2ρ + 6. Let P ⊂ A^M be an arbitrary linear subspace of codimension ρ + 2 such that o ∈ P, and let f_P = q_2 + q_3 + · · · + q_{dl} be the affine equation of the hypersurface P ∩ V_γ, decomposed into homogeneous components (in particular, q_2 is a quadratic form of rank ≥ 2). (R2.2) The sequence of homogeneous polynomials is regular in the local ring O_{o,P}. We say that a singular point o ∈ V is regular if for a general polynomial γ(x_*) and any subspace P ⊂ A^M of codimension ρ + 2 the conditions (R2.1), (R2.2) hold.
Finally, we say that the variety V is regular if it is regular at every point o ∈ V, singular or non-singular. Set F_reg ⊂ F to be the Zariski open subset of regular hypersurfaces (that it is non-empty follows from the estimate for the codimension of the complement). Obviously, every hypersurface V ∈ F_reg has at worst quadratic singularities of rank ≥ 8, so that the claim (i) of Theorem 0.1 is true.
0.3. The structure of the paper, historical remarks and acknowledgements. A proof of the claim (ii) of Theorem 0.1 is given in Subsections 1.2 and 1.3. A proof of the claim (iii) of Theorem 0.1 in Subsection 1.1 is reduced to two facts about hypersurfaces in the projective space P N , which are applied to the hypersurface V γ ⊂ P, both in the singular and non-singular cases. Proofs of those two facts are given, respectively, in §2 and §3.
The equality of the global (log) canonical threshold to one has been shown for many families of primitive Fano varieties, starting from the pioneering paper [3] (for a general variety in the family). For Fano complete intersections in the projective space the best progress in that direction (in the sense of covering the largest class of families) was made in [4]. The double covers were considered in [5]. Fano three-folds, singular and non-singular, were studied in the papers [6, 7, 8, 9] and many others. However, non-cyclic covers of index 1 in arbitrary dimension have not been studied until now: the reason, as explained in [1], is that the technique of hypertangent divisors does not apply to these varieties in a straightforward way. As it turned out (see [1]), the technique of hypertangent divisors should be applied to a certain subvariety, which identifies naturally with a hypersurface (of general type) in the projective space. This approach is used in the present paper, too.
The author thanks The Leverhulme Trust for the support of the present work (Research Project Grant RPG-2016-279).
The author is also grateful to the colleagues in the Divisions of Algebraic Geometry and Algebra at Steklov Institute of Mathematics for the interest to his work, and to the colleagues-algebraic geometers at the University of Liverpool for the general support.
Proof of the main result
In Subsection 1.1 the proof of part (iii) of Theorem 0.1 is reduced to two intermediate claims, the proofs of which are given in §2 and §3. In Subsections 1.2 and 1.3 we show part (ii) of Theorem 0.1. First (Subsection 1.2) we give the estimates for the codimension of the sets of polynomials violating each of the regularity conditions; after that (Subsection 1.3) we explain how to obtain these estimates.
1.1. Exclusion of maximal singularities. Fix the parameters d, l. Recall that the integer ρ ∈ {1, 2, 3, 4} depends on d, l (see the table in Subsection 0.2). Fix a variety V ∈ F_reg. Assume that D ∼ nH is an effective divisor on V such that the pair (V, (1/n)D) is not canonical. Our aim is to get a contradiction. This would prove the claim (iii).
If there is a non-canonical singularity of the pair (V, (1/n)D), the centre of which is of positive dimension, then the pair (Γ, (1/n)D_Γ), where Γ = V_γ for some polynomial γ(x_*) of general position (see Subsection 0.2) and D_Γ = D|_Γ, is again non-canonical. If the centres of all non-canonical singularities of the pair (V, (1/n)D) are points, let us take a polynomial γ(x_*) of general position such that the hypersurface Γ = V_γ contains one of them. In that case the pair (Γ, (1/n)D_Γ) is even non log canonical.
In any case we obtain a factorial hypersurface Γ ⊂ P = P^M of degree dl with at worst quadratic singularities of rank ≥ 2ρ + 6 ≥ 8, and an effective divisor D_Γ ∼ nH_Γ on it (where H_Γ is the class of a hyperplane section, so that Pic Γ = ZH_Γ), such that the pair (Γ, (1/n)D_Γ) is non-canonical. Now we work only with that pair, forgetting about the original variety V (within the limits of the proof of the claim (iii) of Theorem 0.1). Let CS(Γ, (1/n)D_Γ) be the union of the centres of all non-canonical singularities of that pair. Proposition 1.1. The closed set CS(Γ, (1/n)D_Γ) is contained in the singular locus Sing Γ of the hypersurface Γ.
The proof constitutes the contents of §2. Therefore, Let us define a sequence of rational numbers α_k, k ∈ Z_+, in the following way: (We can simply write α_k = 2 − 1/2^k, but for us it is important how α_{k+1} and α_k are related.) In order to exclude the maximal (non-canonical) singularities, we will need only the four values: Let o ∈ Γ be a point of general position on the irreducible component of maximal dimension of the closed set CS(Γ, (1/n)D_Γ). Consider a general 5-dimensional subspace in P, containing the point o. Let P be the section of the hypersurface Γ by that subspace. Obviously, P ⊂ P^5 is a hypersurface of degree dl with a unique singular point, a non-degenerate quadratic point o. Denoting D_Γ|_P by the symbol D_P, we get D_P ∼ nH_P, where H_P is the class of a hyperplane section. By the inversion of adjunction, the point o is the centre of a non log canonical singularity of the pair (P, (1/n)D_P), and moreover, LCS(P, This implies that mult_o D_P > 2n and therefore mult_o D_Γ > 2n = 2α_0 n.
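Since the displayed recursion for α_k is not reproduced above, it may help to note that the closed form α_k = 2 − 1/2^k is generated by α_0 = 1 together with the recursion α_{k+1} = 1 + α_k/2; this recursion is a reconstruction consistent with the closed form, not a quotation, and the four values presumably meant are α_0 = 1, α_1 = 3/2, α_2 = 7/4, α_3 = 15/8. A quick symbolic check:

```python
from fractions import Fraction

# Assumed recursion consistent with the closed form alpha_k = 2 - 1/2**k:
#   alpha_0 = 1,  alpha_{k+1} = 1 + alpha_k / 2.
alpha = [Fraction(1)]
for _ in range(3):
    alpha.append(1 + alpha[-1] / 2)

closed_form = [2 - Fraction(1, 2 ** k) for k in range(4)]
print(alpha)   # [Fraction(1, 1), Fraction(3, 2), Fraction(7, 4), Fraction(15, 8)]
assert alpha == closed_form
```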
Proposition 1.2. There is a sequence of irreducible varieties Γ_i, i = 0, 1, . . . , ρ, such that: (i) Γ_0 = Γ and Γ_{i+1} is a hyperplane section of the hypersurface Γ_i ⊂ P^{M−i}, containing the point o, (ii) on the variety Γ_ρ there is a prime divisor D_* ∼ n_*H_*, where H_* is the class of a hyperplane section of the hypersurface Γ_ρ, satisfying the inequality The proof constitutes the contents of §3. Note that by the condition (R2.1) all hypersurfaces Γ_1, . . . , Γ_ρ are factorial, so that Pic Γ_ρ = ZH_*. Furthermore, ρ ≥ 1, so that with this construction, is ensured by the condition (R2.2). Note that the first step of this construction is possible because the hypertangent divisor D_2 is irreducible, D_2 ∼ 2H_*, and the equality mult_o D_2 = 6 = 3 · 2 holds, so that Y_1 = D_2. The hypertangent divisor D_3 does not take part in the construction.
For the irreducible surface which is impossible (the last inequality can be checked directly for each of the possible values of ρ and the corresponding values of d, l). Thus we have obtained a contradiction, which completes the proof of the claim (iii) of Theorem 0.1. For these values of i.j we set, respectively, We omit the symbols d, l in order to simplify the formulas; however, ε_{i.j} = ε_{i.j}(d, l) are functions of these parameters. The following claim is true. Proposition 1.3. The following inequalities hold: Proof. The regularity conditions must be satisfied for any point o, any linear subspace P of the required codimension and any linear form λ (the polynomial γ(x_*) is assumed to be general and does not influence the estimation of the codimension of the sets F_{i.j}). Therefore, the problem of getting a lower bound for the numbers ε_{i.j} reduces obviously to a similar problem for varieties V ∈ F violating the condition. Let X_{2,r} ⊂ P_2 be the closed subset of quadratic forms of rank ≤ r. Let X_{2,3} ⊂ P_{[2,3]} be the closed subset of pairs (w_2, w_3), such that the closed set {w_2 = w_3 = 0} ⊂ P^{N−1} has at least one degenerate component (that is, a component the linear span of which is of dimension N − 2). Let Q ⊂ P^{N−1} be a factorial quadric. For m ≥ 4 let X_{m,Q} ⊂ P_m be the closed subset of polynomials w_m such that the divisor {w_m|_Q = 0} on Q is reducible or non-reduced. The following claim is true. Proposition 1.4. (i) The following equality holds: codim(X_{2,r} ⊂ P_2) = (N − r + 1 choose 2).
(iii) The following inequality holds: Proof. The claim (i) is well known. Let us show the inequality (ii). Taking into account part (i), we may assume that the quadratic form w_2 is of rank ≥ 5, so that the quadric {w_2 = 0} is factorial. If the closed set {w_2 = w_3 = 0} has a degenerate component, then the divisor {w_3|_{{w_2=0}} = 0} on the quadric {w_2 = 0} is either reducible or non-reduced, so that in any case it is a sum of a hyperplane section and a section of the quadric {w_2 = 0} by some quadratic hypersurface. Calculating the dimensions of the corresponding linear systems, we get that for a fixed quadratic form w_2 of rank ≥ 5 the closed set of polynomials w_3 ∈ P_3, such that the divisor {w_3|_{{w_2=0}} = 0} is reducible or non-reduced, is of codimension It is easy to see that this expression is higher than the right hand side of the inequality (ii). This proves the claim (ii). Let us show the inequality (iii). Recall that the quadric Q ⊂ P^{N−1} is assumed to be factorial (that is, the rank of the corresponding quadratic form is at least 5).
is a polynomial in m with positive coefficients. This implies that for 0 < s < t ≤ (1/2)m the inequality holds, which can be re-written as If the divisor {w_m|_Q = 0} is not irreducible and reduced, then it is a sum of two effective divisors on Q, which are cut out on Q by hypersurfaces of degree 1 ≤ a By what was said above, the right hand side of that inequality is h Elementary computations show that the right hand side of the last equality is (m + (N − 4)) . . . It is slightly less obvious how to obtain the estimate for ε_{1.3}, starting from the claim (iii) of Proposition 1.4, and for that reason we will explain briefly how to do it. Fixing the linear subspace P and the linear forms q_1 and λ, consider the quadric The codimension of the set of quadratic forms for which this quadric is of rank ≤ 4, and so not factorial, is given by the claim (i) of Proposition 1.4. It is from here that we get the estimate for ε_{1.3} in Proposition 1.3. It remains to show that the violation of the condition (R1.3), under the assumption that the quadric (1) is factorial, gives at least the same (in fact, much higher) codimension. It is to the factorial quadric (1) that we apply the estimate (iii) of Proposition 1.4. There is, however, a delicate point here. The hypersurface P ∩ V_γ is given by a polynomial that has at the point o the linear part q_1 and the quadratic part q_2, which both vanish when restricted onto the quadric (1). The other homogeneous components q_3, . . . , q_{dl} are arbitrary. In the inequality (iii) of Proposition 1.4 the codimension of the "bad" set X_{m,Q} is considered with respect to the whole space P_m, whereas in order to prove the inequality (iii) of Proposition 1.3 we need the codimension with respect to the space of homogeneous polynomials of degree dl, the non-homogeneous presentation of which at the fixed point o has zero linear and quadratic components. However, this does not affect the final result, because the codimension of the set X_{m,Q} in P_m is very high. Now, for 2 ≤ k ≤ N − 2, let X_{[2,k]} ⊂ P_{[2,k]} be the set of non-regular tuples (h_2, . . . , h_k) of length k − 1 ≤ N − 3, where h_i ∈ P_i = P_{i,N}, that is, the system of equations
Exclusion of maximal singularities at smooth points
In this section we consider factorial hypersurfaces X ⊂ P^N satisfying certain additional conditions. We show that the centre of every non-canonical singularity of the pair (X, (1/n)D_X), where D_X ∼ nH_X is cut out on X by a hypersurface of degree n ≥ 1, is contained in the singular locus Sing X. In Subsection 2.1 we list the conditions that are satisfied by the hypersurface X, state the main result and exclude non-canonical singularities with the centre of small (≤ 3) codimension on X. In Subsections 2.2 and 2.3, following (with minor modifications) the arguments of Subsection 2.1 in [3], we exclude non-canonical singularities of the pair (X, (1/n)D_X), the centre of which is not contained in Sing X. In Subsection 2.3 we use, for this purpose, the standard technique of hypertangent divisors. As a first application, we obtain a proof of Proposition 1.1.
Regular hypersurfaces.
Let X ⊂ P^N, where N ≥ 8, be a hypersurface satisfying the condition codim(Sing X ⊂ X) ≥ 5.
In particular, X is factorial and Pic X = ZH_X, where H_X is the class of a hyperplane section. Let o ∈ X be a non-singular point and is irreducible and reduced. Proposition 2.1. Assume that the hypersurface X satisfies the conditions (N1-3) at every non-singular point o ∈ X. Then for every pair (X, (1/n)D_X), where D_X ∼ nH_X is an effective divisor, the union of the centres of all non-canonical singularities CS(X, (1/n)D_X) of that pair is contained in the closed set Sing X. Proof. Assume the converse: for some effective divisor D_X ∼ nH_X, CS(X, (1/n)D_X) ⊄ Sing X.
Let Y be an irreducible component of the set CS(X, (1/n)D_X), which is not contained in Sing X, the dimension of which is maximal among all such components.
Lemma 2.1. The following inequality holds: Proof. Assume the converse: codim(Y ⊂ X) ≤ 3. Since Y is the centre of some non-canonical singularity of the pair (X, (1/n)D_X) and Y ⊄ Sing X, we get the inequality mult_Y D_X > n. Since the codimension of the set Sing X is at least 5, we can take a curve C ⊂ Y such that C ⊂ X \ Sing X.
Obviously, mult_C D_X > n. Now, repeating the arguments in the proof of Lemma 2.1 in [10, Chapter 2] word for word, we get a contradiction, which completes the proof of Lemma 2.1.
Restriction onto a hyperplane section.
Let o ∈ Y be a point of general position, o ∉ Sing X. Consider the section P ⊂ X by a general linear subspace of dimension 4 containing the point o. The hypersurface P ⊂ P^4 is non-singular, so that Pic P = ZH_P by the Lefschetz theorem, where H_P is the class of a hyperplane section of the variety P. Set D_P = D_X|_P, so that D_P ∼ nH_P. By inversion of adjunction, the pair (P, (1/n)D_P) is not log canonical; moreover, by construction, Let ϕ_P: P^+ → P be the blow up of the point o, E_P = ϕ_P^{−1}(o) ≅ P^2 the exceptional divisor, and D_P^+ the strict transform of the divisor D_P on P^+. Lemma 2.2. There is a line L ⊂ E_P satisfying the inequality Proof. This follows from [3, Proposition 9]. Q.E.D. The blow up ϕ_P can be viewed as the restriction onto the subvariety P of the blow up ϕ_X: X^+ → X of the point o with the exceptional divisor E_X ≅ P^{N−2}. Lemma 2.2 implies that there is a hyperplane Θ ⊂ E_X satisfying the inequality The rest of the proof of Proposition 2.1 repeats the proof of part (i) of Theorem 2 in [3, §2.1] almost word for word. For the convenience of the reader we briefly reproduce those arguments. By the symbol |H_X − Θ| we denote the pencil of hyperplane sections R of the hypersurface X such that R ∋ o and R^+ ∩ E_X = Θ (where R^+ ⊂ X^+ is the strict transform). Let R ∈ |H_X − Θ| be a general element of the pencil. Set D_R = D_X|_R. Lemma 2.3. The following inequality holds: Proof. This is Lemma 3 in [3] (our claim follows directly from the inequality (3) and the choice of the section R). Q.E.D. for the lemma.
Consider the tangent hyperplane T_oR ⊂ P^{N−1} to the hypersurface R at the point o. The intersection T_R = R ∩ T_oR is a hyperplane section of R. Therefore, T_R ∼ H_R is a prime divisor on R. By the condition (N1) the equality mult_o T_R = 2 holds. Therefore, if D_R = aT_R + D^♯_R, where a ∈ Z_+ and the effective divisor D^♯_R ∼ (n − a)H_R does not contain T_R as a component, then the inequality holds. In order not to make the notations too complicated, we assume that a = 0, that is, D_R ∼ nH_R does not contain T_R as a component. Moreover, by the linearity of the inequality (4) in D_R, we may assume that D_R is a prime divisor.
Hypertangent divisors.
Getting back to the coordinates z_1, . . . , z_N, write down . . , deg X and consider the second hypertangent system where s_0 ∈ C and s_1 runs through the space of linear forms in z_*. By the condition (N3) the base set Bs Λ^R_2 is irreducible and reduced, and by the condition (N1) it is of codimension 2 on R. Therefore, a general divisor D_2 ∈ Λ^R_2 does not contain the prime divisor D_R as a component, so that we get a well defined effective cycle By the linearity of the equivalent inequality in Y_2 we may replace the cycle Y_2 by a suitable irreducible component and assume Y_2 to be an irreducible subvariety of codimension 2. Lemma 2.4. The subvariety Y_2 is not contained in the tangent divisor T_R.
Proof. The base set of the hypertangent system Λ^R_2 is It is irreducible, reduced and therefore deg S_R = 2 deg X.
By the condition (N1) the equality holds. Therefore, Y_2 ≠ S_R. However, a certain polynomial vanishes on Y_2, where s_0 ≠ 0, since the divisor D_2 ∈ Λ^R_2 is chosen to be general. If we had h_1|_{Y_2} ≡ 0, then we would have got h̄_2|_{Y_2} ≡ 0. Since h̄_2 = h_1 + h_2, this would have implied that h_2|_{Y_2} ≡ 0 and Y_2 ⊂ Bs Λ^R_2 = S_R, which is not true. Q.E.D. for the lemma. By the lemma that we have just shown, the effective cycle The cycle Y_3 can be assumed to be an irreducible subvariety of codimension 3 on R for the same reason as Y_2. Now, applying the technique of hypertangent divisors in the usual way [10, Chapter 3], we intersect Y_3 with general hypertangent divisors, using the condition (N1), and obtain an irreducible curve C ⊂ R satisfying by (2) the inequality which is impossible. This proves Proposition 2.1. Q.E.D.
Proof of Proposition 1.1. It is sufficient to check that the hypersurface Γ satisfies all the assumptions that were made about the hypersurface X. Indeed, Γ has at most quadratic singularities of rank ≥ 8, so that codim(Sing Γ ⊂ Γ) ≥ 7.
Reduction to a hyperplane section
In this section we consider hypersurfaces X ⊂ P^N with at most quadratic singularities, the rank of which is bounded from below, which also satisfy some additional conditions. For a non-canonical pair (X, (1/n)D_X), where D_X ∼ nH_X does not contain hyperplane sections of the hypersurface X, we construct a special hyperplane section ∆ such that the pair (∆, (1/n)D_∆), where D_∆ = D_X|_∆, is again non-canonical and, into the bargain, somewhat "better" than the original pair: the multiplicity of the divisor D_∆ at some point o ∈ ∆ is higher than the multiplicity of the original divisor D_X at this point.
3.1. Hypersurfaces with singularities. Take N ≥ 8 and let X ⊂ P^N be a hypersurface satisfying the following conditions: (S1) every point o ∈ X is either non-singular or a quadratic singularity of rank ≥ 7, (S2) for every effective divisor D ∼ nH_X, where H_X ∈ Pic X is the class of a hyperplane section and n ≥ 1, the union CS(X, (1/n)D_X) of the centres of all non log canonical singularities of the pair (X, (1/n)D_X) is contained in Sing X, (S3) for every effective divisor Y on the section of X by a linear subspace of codimension 1 or 2 in P^N and every point o ∈ Y, singular on X, the following inequality holds: The condition (S1), Grothendieck's theorem on parafactoriality [11] and the Lefschetz theorem imply that X is a factorial variety and Cl X = Pic X = ZH_X, since codim(Sing X ⊂ X) ≥ 6. As every hyperplane section of the hypersurface X is a hypersurface in P^{N−1}, the singular locus of which has codimension at least 4, it is also factorial.
Assume, furthermore, that D_X ∼ nH_X is an effective divisor such that CS(X, (1/n)D_X) ≠ ∅, and moreover, there is a point o ∈ CS(X, (1/n)D_X) ⊂ Sing X (see the condition (S2)) which is a quadratic singularity of rank ≥ 8. Let ϕ: X^+ → X be its blow up with the exceptional divisor E = ϕ^{−1}(o), which by our assumption is a quadric of rank ≥ 8. For the strict transform D^+_X ⊂ X^+ we can write where by the condition (S3) we have α < 2, since mult_o D_X < 4n. Remark 3.1. As we will see below, under our assumptions the inequality α > 1 holds. Since for every hyperplane section ∆ ∋ o of the hypersurface X and its strict transform ∆^+ ⊂ X^+ we have the pair (X, ∆) is canonical, so that we may assume that the effective divisor D_X does not contain hyperplane sections of the hypersurface X as components (if there are such components, they can be removed with all assumptions being kept). For that reason, for any hyperplane section ∆ ∋ o the effective cycle (∆ • D_X) of codimension 2 on X is well defined. We will understand this cycle as an effective divisor on the hypersurface ∆ ⊂ P^{N−1} and denote it by the symbol D_∆. and D^+_∆ ∼ n(H_∆ − α_∆ E_∆), and moreover the following inequality holds: (Here H_∆ is the class of a hyperplane section of the hypersurface ∆ ⊂ P^{N−1}, E_∆ = ∆^+ ∩ E is the exceptional divisor of the blow up ϕ_∆: ∆^+ → ∆, where ∆^+ is the strict transform of ∆ on X^+, and D^+_∆ is the strict transform of the divisor D_∆ on ∆^+.) Proof. Obviously, D_∆ ∼ nH_∆. We have for every hyperplane section ∆, so that we only need to show the existence of the hyperplane section ∆ for which the inequality (6)
Preliminary constructions.
Consider the section P of the hypersurface X by a general 5-dimensional linear subspace containing the point o. Obviously, P ⊂ P^5 is a factorial hypersurface, and o ∈ P is an isolated quadratic singularity of the maximal rank. Let P^+ ⊂ X^+ be the strict transform of the hypersurface P, so that E_P = P^+ ∩ E is a non-singular three-dimensional quadric. Set D_P = (D • P) = D|_P. Obviously, by the inversion of adjunction the pair (P, (1/n)D_P) has the point o as an isolated centre of a non log canonical singularity. Since a(E_P) = 2 and D^+_P ∼ nH_P − αnE_P (where H_P is the class of a hyperplane section of the hypersurface P ⊂ P^5), and moreover α < 2, we conclude that the pair (P^+, (1/n)D^+_P) is not log canonical and the union LCS_E(P^+, (1/n)D^+_P) of the centres of all non log canonical singularities of that pair, intersecting the exceptional divisor E_P, is a connected closed subset of the quadric E_P. Let S_P be an irreducible component of maximal dimension of that set. Since S_P is the centre of a certain non log canonical singularity of the pair (P^+, (1/n)D^+_P), the inequality mult_{S_P} D^+_P > n holds. Furthermore, codim(S_P ⊂ E_P) ∈ {1, 2, 3} (and if S_P is a point, then we have LCS_E(P^+, (1/n)D^+_P) = S_P by the connectedness of that set). Coming back to the original pair (X, (1/n)D_X), we see that the pair (X^+, (1/n)D^+_X) has a non log canonical singularity, the centre of which is an irreducible subvariety S ⊂ E such that S ∩ E_P = S_P; in particular, codim(S ⊂ E) = codim(S_P ⊂ E_P) ∈ {1, 2, 3}, and if the last codimension is equal to 3, then S ∩ E_P is a point and for that reason S ⊂ E is a linear subspace of codimension 3. However, on a quadric of rank ≥ 8 there can be no linear subspaces of codimension 3, so that codim(S ⊂ E) ∈ {1, 2}.
Proof. Assume that this case takes place. Then S ⊂ E is a prime divisor, which is cut out on E by a hypersurface of degree d_S ≥ 1, that is, S ∼ d_S H_E, where H_E is the class of a hyperplane section of the quadric E. We have (D^+_X • E) ∼ αnH_E, so that 2 > α ≥ d_S, and for that reason S ∼ H_E is a hyperplane section of the quadric E. Let ∆ ∈ |H| be the uniquely determined hyperplane section of the hypersurface X such that ∆ ∋ o and (∆^+ • E) = ∆^+ ∩ E = S. For the effective divisor D_∆ the inequality holds. Taking into account that deg(∆ • D_X) = n deg X, we get a contradiction with the condition (S3), which by assumption is satisfied for the hypersurface X. Q.E.D. for the proposition.
3.3. The case of codimension 2. We proved above that S ⊂ E is a subvariety of codimension 2. Following [12, Section 3], for distinct points p ≠ q on the quadric E we denote by the symbol [p, q] the line joining these two points, provided that it is contained in E, and the empty set otherwise, and set (where the line above means the closure).
Lemma 3.1. One of the following two options takes place: (1) Sec(S ⊂ E) is a hyperplane section of the quadric E, on which S is cut out by a hypersurface of degree d_S ≥ 2, (2) S = Sec(S ⊂ E) is the section of the quadric E by a linear subspace of codimension 2.
The proof repeats the proof of Lemma 4.1 in [2], and we do not give it here. (The key point in the argument is that, due to the inequality α < 2, every line L = [p, q] ⊂ E, joining some points p, q ∈ S and lying on E, is contained in D^+_X, because mult_S D^+_X > n.) Proposition 3.3. The option (2) does not take place. Proof. Assume the converse: the case (2) takes place. Let P ⊂ X be the section of the hypersurface X by the linear subspace of codimension 2 in P^N that is uniquely determined by the conditions P ∋ o and P^+ ∩ E = S.
The symbol |H − P| stands for the pencil of hyperplane sections of the hypersurface X containing P. For a general divisor ∆ ∈ |H − P| we have the equality Write down (∆ • D_X) = G + aP, where a ∈ Z_+ and G is an effective divisor on ∆ not containing P as a component. Obviously, G ∈ |mH_∆|, where m = n − a and H_∆ is the class of a hyperplane section of ∆ ⊂ P^{N−1}. The symbols G^+ and ∆^+ stand for the strict transforms of G and ∆ on X^+, respectively. Now where E_∆ = ∆^+ ∩ E is a hyperplane section of the quadric E and, besides, By construction, the effective cycle (G • P) of codimension 2 on ∆ is well defined. One can consider it as an effective divisor on the hypersurface P ⊂ P^{N−2}. Proposition 3.4. For some irreducible divisor S_1 ⊂ E_S, such that the projection σ_S|_{S_1} is birational, the inequality holds, where D_∆ and Λ are the strict transforms, respectively, of D^+_∆ and Λ on ∆. Proof. This is a well known fact, see [3, Proposition 9]. (Note that the subvariety S is, generally speaking, singular; however, ∆^+ is non-singular at the general point of S and ∆ is non-singular at the general point of S_1.) 3.5. End of the proof. Set µ_S = mult_S D^+_∆ and β = mult_{S_1} D_∆. One of the two cases takes place: – the case of general position S_1 ≠ E_S ∩ Λ, so that S_1 ⊄ Λ, – the special case S_1 = E_S ∩ Λ. Let us consider them separately. In the case of general position the inequality (8) takes the form µ_S + β + a > 2n, since mult_{S_1} Λ = 0. Furthermore, µ_S ≥ β, so that the more so 2µ_S + a > 2n.
The inequality (6) in the case of general position is now proven.
Proof of Proposition 1.2. Let us check that the operation of reduction, described in Subsection 3.1, can be applied ρ times to the hypersurface Γ ⊂ P^M. Consider the hypersurface Γ_i ⊂ P^{M−i}, where i ∈ {0, . . . , ρ − 1}. Let us show, in the first place, that Γ_i satisfies the condition (S1). Let p ∈ Γ_i be an arbitrary singularity. If i = 0, then by the condition (R2.1) the point p is a quadratic singularity of rank ≥ 8. If i ≥ 1, then there are two options: either p ∈ Γ is a non-singular point, or p ∈ Γ is a singularity (recall that Γ_i is a section of the hypersurface Γ by a linear subspace of codimension i in P^M). In the second case, by the condition (R2.1) the point p is a quadratic singularity of Γ of rank ≥ 2ρ + 6 ≥ 2i + 8, since ρ ≥ i + 1. Since a hyperplane section of a quadric of rank r ≥ 3 is a quadric of rank ≥ r − 2, we conclude that p ∈ Γ_i is a quadratic singularity of rank ≥ 8, so that the condition (S1) is satisfied at that point (for the hypersurface Γ_i).
In the first case the point p is non-singular on Γ, so that Γ_i is a section of Γ by a linear subspace of codimension i which is contained in the tangent hyperplane T_pΓ. By the condition (R1.4) the point p ∈ Γ_i is a quadratic singularity of rank at least 8 + 2(i + 1) − 4 − 2(i − 1) = 8 (one should take into account that the cutting subspace is of codimension i − 1 in T_pΓ). Therefore, the condition (S1) is satisfied in any case.
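For the reader's convenience, the rank count in the previous sentence is just the elementary arithmetic
\[
8 + 2(i+1) - 4 - 2(i-1) \;=\; 8 + 2i + 2 - 4 - 2i + 2 \;=\; 8 .
\]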
Let us show that the hypersurface Γ_i satisfies the condition (S2) as well. In order to do it, we must check that for Γ_i all assumptions of Subsection 2.1 are satisfied. By what was said above, the codimension of the set Sing Γ_i with respect to Γ_i is at least 7, which is higher than we need. The inequality (2) takes the form of an estimate which is easy to check. Finally, the conditions (N1), (N2) and (N3) follow from the conditions (R1.1), (R1.2) and (R1.3), respectively. By Proposition 2.1 we conclude that the hypersurface Γ_i satisfies the condition (S2). Finally, let us consider the condition (S3). Obviously, it is sufficient to check that the inequality (5) holds for any prime divisor Y on the section of the hypersurface Γ_i by a linear subspace P_* of codimension 2 in P^{M−i}. Assume the converse. In some affine coordinates with the origin at the point o on the subspace P_* = P^{M−i−2} the equation of the hypersurface P_* ∩ Γ_i has the form 0 = q*_2 + q*_3 + ··· + q*_{dl}, where by the condition (R2.2) the sequence of homogeneous polynomials q*_2, q*_3, . . . , q* | 11,316.2 | 2019-06-27T00:00:00.000 | [ "Mathematics" ] |
Modeling quantitative traits for COVID-19 case reports
Medical practitioners record the condition status of a patient through qualitative and quantitative observations. The measurement of vital signs and molecular parameters in the clinic gives a complementary description of the abnormal phenotypes associated with the progression of a disease. The Clinical Measurement Ontology (CMO) is used to standardize annotations of these measurable traits. However, researchers have no way to describe how these quantitative traits relate to phenotype concepts in a machine-readable manner. Using the WHO clinical case report form standard for the COVID-19 pandemic, we modeled quantitative traits and developed OWL axioms to formally relate clinical measurement terms with anatomical entities, biomolecular entities, and phenotypes annotated with the Uber-anatomy ontology (Uberon), Chemical Entities of Biological Interest (ChEBI) and the Phenotype and Trait Ontology (PATO) biomedical ontologies. The formal description of these relations allows interoperability between clinical and biological descriptions, and facilitates automated reasoning for the analysis of patterns over quantitative and qualitative biomedical observations.
Introduction
The worldwide COVID-19 pandemic has brought into focus the need in research to have patient data more available and accessible to get new insights more efficiently and rapidly. Clinicians monitor biomolecular concentrations, other physiological signs, and symptoms manifested in different organ systems of the patient at different points in time. These clinical measures are very valuable data because they give intrinsic information about the underlying biological mechanism and patient disease trajectory that could be used to make informed and tailored therapeutic decisions. One problem that clinicians face is how to interpret patient laboratory results and quantitative traits, such as serum electrolyte concentrations or blood pressure, and link them to their significance at a phenotypic conceptual level. To bridge this gap, computational biologists use ontologies to encode knowledge about how altered gene products and molecular processes affect cellular and tissue functions that ultimately are expressed as abnormal phenotypes and diseases. The life science community has been developing different ontologies to represent molecular biology, clinical measures, and disease phenotypes. For instance, the CMO [1] is designed to encode morphological and physiological measurement data generated from clinical and model organism research and health records. However, there is still the need to link this quantitative measurement standard to phenotypic ontologies. It is necessary to establish if a measurement falls into an abnormal range, if so what kind of phenotype might result, and what the consequences of the phenotype might be, either diagnostically or prognostically. Assignment of some phenotypes requires several measurements to be abnormal; these then need to be integrated. A clinician can then understand that given this phenotype, possibly along with others, a diagnosis may be made of a specific disease which then carries with it diagnostic and prognostic knowledge. The ability to collect, understand and integrate such measurements at the conceptual level is fundamental to personalised or "precision" medicine. To aid researchers to connect these different pieces of information and reason over it, here we present our work on modeling of case report forms for COVID-19, then on an ontology for representing quantitative traits, and finally on integration within PhenoPackets, an exchange standard for the transmission of patient phenotype data, to showcase its applicability.
Mapping to ontologies
We used the WHO COVID-19 case report form (CRF) RAPID version of 23 March 2020 to manually extract quantitative traits relevant for COVID-19. This CRF is divided into three modules: 1) admission to healthcare; 2) admission to ICU and follow-up patients; and 3) outcome. We extracted quantitative traits and units terms from modules 1 and 2 (located in the vital signs and laboratory results sections). Following recommendations by domain-expert immunologists with experience of COVID-19 patients, the initial list was augmented with further traits, reaching a final total of 57 trait terms. The CMO, an OBO Foundry ontology designed to define phenotype measurement data, has the precise concepts to annotate 67% of the terms. We annotated the rest of the terms using the Experimental Factor Ontology (EFO) [2] and the NCI Thesaurus (NCIT) [3]. Units were annotated using the Units Ontology (UO) [4], where 63% of terms are represented. We then extracted and annotated patient metadata to capture mainly comorbidities, pre-admission medication and treatment terms, which were prioritized as clinical metadata for future application on sequence submission. Comorbidities were annotated with the Disease Ontology (79%) [5], the Human Phenotype Ontology [6] and EFO, and medications mainly using ChEBI (65%) [7] and NCIT.
Lexical mining of CMO labels and axiom patterns
We lexically decomposed the labels of CMO classes using the labels of the Uberon [8], ChEBI, and PATO [9] ontologies, following a strategy similar to the one previously applied to extract mappings from Gene Ontology labels [10]. We built a dictionary from the labels and synonyms of classes in Uberon, ChEBI, and PATO, and identified their occurrence in the labels of CMO classes. For example, the CMO class "heart measurement" (CMO:0000670) matches the Uberon class "heart" (UBERON:0000948).
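The matching step can be illustrated with a minimal sketch. This is not the authors' pipeline; apart from the "heart measurement"/"heart" pair quoted above, the entries and identifiers are illustrative placeholders.

```python
# Dictionary-based lexical matching of ontology labels against CMO class labels (sketch).
anatomy_labels = {"heart": "UBERON:0000948"}                 # pair taken from the text
quality_labels = {"rate": "PATO:<illustrative placeholder>"}  # hypothetical entry
lexicon = {**anatomy_labels, **quality_labels}

cmo_classes = {
    "CMO:0000670": "heart measurement",                 # from the text
    "CMO:<hypothetical>": "heart rate measurement",     # illustrative extra class
}

def lexical_matches(label: str, lexicon: dict) -> list:
    """Return (token, matched term) pairs for dictionary labels occurring in a CMO label."""
    return [(tok, lexicon[tok]) for tok in label.lower().split() if tok in lexicon]

for cmo_id, label in cmo_classes.items():
    print(cmo_id, label, "->", lexical_matches(label, lexicon))
```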
We then added OWL axioms to the CMO based on the patterns used to define phenotype ontologies [11] and integrated the CMO with the Mammalian Phenotype Ontology (MP) [12]. Using the ELK reasoner [13] we were then able to infer relations between measurements and the phenotypes that may be detected using these measurements. For example, the CMO class "heart measurement" may be used to detect phenotypes falling under the MP classes "abnormal heart morphology" (MP:0000266) and "abnormal cardiovascular system physiology" (MP:0001544). Source code is freely available at https://github.com/leechuck/qto .
Quantitative trait data model
From the list of semantically annotated quantitative traits and units, we developed a data model which enables quantitative trait data to be machine-readable, interoperable for sharing, and amenable to bioinformatics analysis. The data model is based on the Information Artifact Ontology (IAO) [14], the BioAssay Ontology (BAO) [15], the Ontology for Biomedical Investigations (OBI) [16] and UO. We use CMO classes to define the types, and we implemented a Shape Expressions (ShEx) shape to communicate the graph structure and to generate and validate the instance data. Finally, we would like to highlight the knowledge gap detected in the biomedical ontological space for COVID-19 related clinical concepts, thus offering an opportunity to expand and enrich viral-related content for CMO and UO. For example, for the cytokine measurement "IL-8 pg/mL", we could not find a term in either CMO or UO. The data model and ShEx shape are freely available at https://github.com/NuriaQueralt/BioHackathon/tree/master/bh20-ontology-qt .
Integration into GA4GH PhenoPackets standard
PhenoPackets is an exchange standard for packaging observable indicators and phenotype data together into an interlinked model of disease, patient data and corresponding characteristic values of traits. Phenotypes, indicators and disease expression are not symptomatically linked, but mediated by patients (in a biomedical context) or organisms acting as key links. PhenoPackets model this interlinking and enable quantitative analysis of these interactions. Based on the GA4GH inlined PhenoPackets schema, an extension to integrate the quantitative trait data model with additional trait ontologies and data was defined. This schema provides a "QuantitativeTraitFeature" base element to characterize and attach trait features and observable data. Conforming with the PhenoPackets reference implementation, Protobuf-based serialization was implemented to facilitate interoperable exchange with biomedical databases and repositories as well as computational pipelines. It also enables integration of phenotypic and quantitative data with sequence data. Future work will aim at applying this quantitative trait model to many other use-cases and applications. Source code for the PhenoPackets extension is freely available at https://github.com/cp-weiland/QuantitativeTraitFeature . | 1,813 | 2020-06-20T00:00:00.000 | [ "Computer Science", "Medicine" ] |
Kinetic Theory for Finance Brownian Motion from Microscopic Dynamics
Recent technological development has enabled researchers to study social phenomena scientifically in detail, and financial markets have particularly attracted physicists since the Brownian motion has played a key role there, as in physics. In our previous report (arXiv:1703.06739; to appear in Phys. Rev. Lett.), we presented a microscopic model of trend-following high-frequency traders (HFTs) and its theoretical relation to the dynamics of financial Brownian motion, directly supported by a data analysis tracking trajectories of individual HFTs in a financial market. Here we show the mathematical foundation for the HFT model paralleling the traditional kinetic theory in statistical physics. We first derive the time-evolution equation for the phase-space distribution for the HFT model exactly, which corresponds to the Liouville equation in conventional analytical mechanics. By a systematic reduction of the Liouville equation for the HFT model, the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchal equations are derived for financial Brownian motion. We then derive the Boltzmann-like and Langevin-like equations for the order-book and the price dynamics by making the assumption of molecular chaos. The qualitative behavior of the model is asymptotically studied by solving the Boltzmann-like and Langevin-like equations for a large number of HFTs, which is numerically validated through Monte Carlo simulation. Our kinetic description highlights the parallel mathematical structure between the financial Brownian motion and the physical Brownian motion.
Inspired by these successes, physicists have attempted to apply statistical physics approaches even to social science, beyond material science. In particular, financial markets have attracted physicists as an interdisciplinary area [18,19] since they exhibit phenomena quite similar to those in physics, represented by the Brownian motion. It is noteworthy that the concept of the Brownian motion was historically first invented by Bachelier in finance [20] before the famous work by Einstein in physics [21]. After the work by Bachelier, various characters of Brownian motions in finance and their differences from physical Brownian motions have been found by both theoretical and data analyses. On the level of price time series, the power-law behavior of price movements has been reported empirically [22][23][24][25][26]. Such universal characters have been summarized as the stylized facts [19] and have been theoretically studied by time-series models [19,[27][28][29] and agent-based models [30][31][32][33][34][35][36][37]. In addition, characters of order books (i.e., current distributions of quoted prices) are studied by both empirical analysis and order-book models [19,[38][39][40][41][42][43][44]. For example, the zero-intelligence order-book models [38][39][40][41][42][43][44] have been investigated from various viewpoints, such as power-law price movement statistics [38], order-book profile [41], and market impact by large meta orders [43,44]. The collective motion of the full order book was further found by analyzing the layered structure of the order book [45,46], which was a key to generalizing the fluctuation-dissipation relation to financial Brownian motion. To date, however, the modeling of individual traders' dynamics based on direct microscopic evidence has not been fully studied, which has been a crucial obstacle to applying statistical mechanics from microscopic dynamics. To fully apply statistical mechanics to financial systems, it is necessary to establish a microscopic dynamical model of traders based on microscopic evidence and to develop a non-equilibrium statistical mechanics for such non-Hamiltonian many-body systems.
Recently, an extension of the kinetic framework for financial Brownian motion has been proposed by studying high-frequency data including trader identifiers (IDs) [46]. The dynamics of high-frequency traders (HFTs) were directly analyzed by tracking trajectories of the individuals, and a microscopic model of trend-following HFTs has been established, showing agreement with empirical analyses of microscopic trajectories. On the basis of the "equations of motion" for the HFTs, the Boltzmann-like and Langevin-like equations are finally derived for the mesoscopic and macroscopic dynamics, respectively. This framework is shown to be consistent with empirical findings, such as HFTs' trend-following, the average order book, price movement, and the layered order-book structure. However, the mathematical argument therein was rather heuristic, similarly to the original derivation of the conventional Boltzmann and Langevin equations. Considering the traditional stream of kinetic theory, a mathematical derivation beyond heuristics is necessary for financial Brownian motion, paralleling the works by BBGKY and van Kampen.
In this paper, we show the mathematical foundation for financial Brownian motion in parallel with the mathematics of kinetic theory. For the trend-following HFT model [46], we first define the phase space and the corresponding phase-space distribution (PSD) according to analytical mechanics [15,47]. We then exactly derive the time-evolution equation for the PSD, which corresponds to the Liouville equation in analytical mechanics. The many-body dynamics for the PSD are reduced into few-body dynamics for reduced PSDs according to the reduction method by BBGKY. By assuming molecular chaos, we obtain the non-linear Boltzmann equation for the order-book profile and the master-Boltzmann equation for the market price dynamics. We also present their perturbative solutions for a large number of HFTs to study the dynamical behavior of this system for all hierarchies. The validity of our framework is finally examined by Monte Carlo simulation.
This paper is organized as follows: In Sec. II, we briefly review the mathematical structure of the standard kinetic theory before proceeding to our work. In Sec. III, we describe the details of the trend-following HFT model as the microscopic setup. In Sec. IV, the microscopic dynamics of the model are exactly formulated in terms of the Liouville equation and the corresponding BBGKY hierarchal equation. In Sec. V, the financial Boltzmann equation is derived as the mesoscopic description of this financial system. In Sec. VI, the macroscopic behavior is analyzed by deriving the financial Langevin equation. In Sec. VII, implications of our theory are discussed for several related topics. We conclude this paper in Sec. VIII with some remarks.
II. BRIEF REVIEW OF CONVENTIONAL KINETIC THEORY FOR BROWNIAN MOTION
Before proceeding to the core part of our work, we here briefly review the scenario of conventional kinetic theory for Brownian motion to convey our essential idea for generalization toward financial systems. Let us consider the Hamiltonian dynamics of N gas particles of mass m and a tracer particle of mass M with hard-core interactions in a hard-core box of volume V (see Fig. 1a for a schematic). The momentum and position of the ith gas particle are denoted by p_i ≡ (p_{i;x}, p_{i;y}, p_{i;z}) and q_i ≡ (q_{i;x}, q_{i;y}, q_{i;z}) for 1 ≤ i ≤ N, and those of the tracer are denoted by P = p_0 and Q = q_0. The dynamics of this system are described by the equations of motion (1), with interaction force F_{ij} between particles i and j for 0 ≤ i, j ≤ N (m_i = M for i = 0 and m_i = m otherwise).
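The elided equations of motion (1) are presumably the standard Newtonian ones for the hard-core system; a sketch consistent with the notation above is
\[
m_i \frac{d^2 q_i}{dt^2} = \sum_{j \neq i} F_{ij}, \qquad 0 \le i \le N,
\]
with m_0 = M and q_0 = Q, supplemented by hard-core reflections at the box walls.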
A. Liouville equation
In analytical mechanics, the phase space is defined as S ≡ ∏_{i=0}^{N} (−∞, ∞)^6. The state of the system can be designated as the phase point defined by Γ ≡ (P, Q; p_1, q_1; . . . ; p_N, q_N) ∈ S, and the corresponding PSD is denoted by P_t(Γ). The time evolution of the PSD is described by the Liouville equation (2), with the Liouville operator L [61] (see Refs. [14,15,[47][48][49][50] for the details). This equation is mathematically exactly equivalent to the equations of motion (1), and is the fundamental equation for the microscopic description (Fig. 1a). This equation is, however, not analytically solvable, as it fully addresses the original many-body dynamics without any approximation.
[Figure 1 caption: (a-c) Microscopic setup for the Brownian motion. Gas particles and a massive tracer interact with each other, where the dynamics are described by the Liouville equation (2). As the mesoscopic description (Fig. 1b), the full dynamics are reduced to the one-body distribution φ^(1) for the gas particles, which is governed by the Boltzmann equation (6). The macroscopic dynamics of the tracer (Fig. 1c) are described by the master-Boltzmann equation (8), or the Langevin equation (9) asymptotically for large system size M → ∞. (d-f) Hierarchal structure of financial markets parallel to molecular kinetic theory. In the microscopic hierarchy (Fig. 1d), each trader makes decisions to submit or cancel orders. The dynamics of the traders correspond to those of molecules in kinetic theory. In the mesoscopic hierarchy (Fig. 1e), the information on trader identifiers is lost by coarse-graining. We thus obtain the dynamics of the order book (i.e., the quoted price distribution). The order-book profile corresponds to the velocity distribution in the conventional kinetic theory. In the macroscopic hierarchy (Fig. 1f), the dynamics of the market price movement are finally deduced by coarse-graining, which exhibit anomalous random walks. The market price dynamics correspond to those of the Brownian motion in kinetic theory.]
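For orientation, the Liouville equation (2) referred to above is presumably of the standard form
\[
\frac{\partial P_t(\Gamma)}{\partial t} = \mathcal{L}\, P_t(\Gamma),
\]
with the Liouville operator L generating the free streaming and the hard-core collisions (see Refs. [14,15] for the precise form).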
B. BBGKY hierarchy and Boltzmann equation
To focus on the one-body dynamics of a gas particle or the tracer, let us introduce the reduced PSDs. On the assumption of binary interactions, we can exactly derive hierarchies for the reduced PSDs, with one-body Liouville operators L^(1), L^(T) and two-body collision operators L^(2), L^(TG). These equations are exact but not closed in terms of φ^(1). To obtain analytical solutions, a further approximation is necessary. The standard approximation in kinetic theory is a mean-field approximation, called molecular chaos, which is mathematically shown to be asymptotically exact for a dilute gas in the thermodynamic limit N, V → ∞ (called the Boltzmann-Grad limit [51]). We then obtain the closed dynamical equation (6) for φ^(1), which is the fundamental equation for the mesoscopic description (Fig. 1b). The steady solution for φ^(1) of the non-linear Boltzmann equation (6) is then given by the celebrated Maxwell-Boltzmann distribution.
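Schematically, the molecular-chaos assumption invoked here is the factorization of the two-body distribution into one-body distributions,
\[
\phi^{(2)}(p_1, q_1; p_2, q_2) \approx \phi^{(1)}(p_1, q_1)\, \phi^{(1)}(p_2, q_2),
\]
which closes the hierarchy at the one-body level; the resulting steady state is the Maxwell-Boltzmann distribution, of the form φ^(1)_eq(p) ∝ exp(−p²/2mT) in units with k_B = 1.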
C. Langevin equation
The stochastic dynamics for the macroscopic variables (P, Q) can also be obtained within kinetic theory. By applying molecular chaos to P^(TG)(P, Q, p_1, q_1), we obtain the master-Boltzmann equation (8) (or the linear Boltzmann equation), which belongs to the linear master equations of Markov processes and describes the dynamics of the tracer particle. Equation (8) can be further approximated as the Fokker-Planck equation within the system size expansion [16]. One can thus deduce the Langevin equation (9) for the tracer as the macroscopic description of the Brownian motion (Fig. 1c), with viscous coefficient γ, temperature of the gas T, and the white Gaussian noise ξ_G with unit variance. The above formulation shows the systematic connection from the microscopic Newtonian dynamics to the mesoscopic and macroscopic dynamics. This methodology is shown to be valid even for non-equilibrium systems when the gas is sufficiently dilute (see Refs. [3][4][5][6][7][8][9][12] for its applications to various nonequilibrium systems), and is one of the most successful formulations in statistical physics.
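Equation (9) is presumably of the standard underdamped Langevin form (again with k_B = 1),
\[
M \frac{dV}{dt} = -\gamma V + \sqrt{2\gamma T}\, \xi_{\mathrm{G}}(t), \qquad V \equiv \frac{P}{M},
\]
where the fluctuation-dissipation relation fixes the noise amplitude in terms of γ and T.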
D. Idea to generalize kinetic theory toward finance
Here, let us remark on our idea to generalize the framework toward financial Brownian motion. Financial markets have a hierarchal structure quite similar to that of the conventional Brownian motion (see Fig. 1d-f for a schematic): In the microscopic hierarchy, individual traders make decisions to buy or sell currencies at a certain price (Fig. 1d). In the mesoscopic hierarchy, the dynamics are coarse-grained into the order-book dynamics with removal of traders' IDs (Fig. 1e). In the macroscopic hierarchy, the dynamics are reduced to the price dynamics (Fig. 1f). One can notice that these hierarchies directly correspond to those in kinetic theory; traders, the order book, and the price correspond to molecules, the velocity distribution, and the Brownian particle, respectively. In this sense, the financial markets have a hierarchal structure similar to that in kinetic theory. From the next section, we present a parallel mathematical framework for the description of financial markets from microscopic dynamics.
III. MICROSCOPIC SETUP
In this section, the dynamics of the trend-following HFT model in Ref. [46] are mathematically formulated within many-body stochastic processes with collisions on the basis of microscopic empirical evidence.
A. Notation
We here briefly explain the notation in this paper. Any stochastic variable is accompanied by the hat symbol, such as Â, to stress its difference from non-stochastic real numbers such as A. For example, the probability distribution function (PDF) of a stochastic variable Â(t) at real time t is denoted by P(A, t) ≡ P(Â(t) = A) with a non-stochastic real number A (i.e., the probability of Â(t) ∈ [A, A + dA) is given by P(A, t)dA). The complementary cumulative distribution function (CDF) is also defined as P(≥ A, t) ≡ ∫_A^∞ P(A′, t)dA′. To simplify the notation, arguments of functions are sometimes abbreviated without mention if they are obvious. The ensemble average of any stochastic variable is denoted by angular brackets, such as ⟨Â⟩. We next explain the terminology for the order book for the whole market (Fig. 2a). The highest bid (lowest ask) quoted price among all the traders is called the market best bid (ask) price b̂_M (â_M). The average of the market best bid and ask prices is called the market mid price ẑ_M ≡ (b̂_M + â_M)/2. The difference between the market best bid and ask prices is called the market spread. The market transacted price means the price at which a transaction occurs in the market. In this paper, the market price (mathematically denoted by p̂) means the market transacted price for short.
As for a single trader, the highest bid (lowest ask) quoted price by a single trader is called the best bid (ask) price of the trader (denoted by b̂_i (â_i) for the ith trader). The average of the best bid and ask prices of the trader is called the mid price of the trader (denoted by ẑ_i). Also, the difference between the best bid and ask prices of the trader is called the buy-sell spread of the trader (denoted by L̂_i ≡ â_i − b̂_i), which is different from the market spread.
There are two types of time in this paper. One is the real time t and the other is the tick time T (Fig. 2b). The tick time T is defined as a discrete time incremented by every market transaction and corresponds to the real time as a stochastic variable, such as t = t̂[T]. Here the square brackets for the function argument (e.g., Â[T]) mean that the stochastic variable Â(t) is measured according to the tick time T (i.e., Â[T] ≡ Â(t̂[T])), highlighting the difference from that measured according to the real time t (e.g., Â(t) with round brackets).
B. Characters of real HFTs
Here we describe the characters of real HFTs on the basis of a high-frequency data analysis of a foreign exchange (FX) market. We analyzed the order-book data, including anonymized trader IDs and anonymized bank codes, in Electronic Broking Services (EBS) from 18:00 GMT on 5 June to 22:00 GMT on 10 June 2016. EBS is an interbank FX market and is one of the biggest financial platforms in the world. The minimum volume unit for transactions was one million US dollars (USD) for the FX market between the USD and the Japanese Yen (JPY). We particularly focus on HFTs, who frequently submit or cancel their orders according to algorithms. As reported in our previous work [46], HFTs have several characters quite different from those of low-frequency traders (LFTs). For this paper, an HFT is defined as a trader who submitted more than 2500 times during the week, similarly to previous research [52]. With this definition, the number of HFTs was 135 during this week, while the total number of traders submitting limit orders was 922 [62], and 89.6% of all the orders in this market were submitted by the HFTs. Here we summarize the reported characters with several pieces of additional evidence: (α1). Small number of live orders and volume: HFTs typically maintain a few live orders, less than ten (see Fig. 3a and b). Furthermore, a single order submitted by HFTs typically implies one unit volume of the currency. These characters are in contrast to those of LFTs, who sometimes submit a large amount of volume in a single order (see Fig. 3a and c for the fat-tailed distributions of the number of orders or volumes for LFTs).
(α2). Liquidity providers: Typical HFTs play the role of key liquidity providers (or market makers) and have the obligation to maintain continuous two-way quotes during their liquidity hours according to the EBS rulebook [53] (see Fig. 3d for a typical trajectory of the top HFT). The balance between the ask and bid order books is kept statistically symmetric to some extent, seemingly thanks to the liquidity providers. [Fig. 3 panel titles: orders and volumes (HFT); orders and volumes (LFT); volumes filled in one transaction; typical trajectory of the top HFT.]
(α3). Frequent price modification: Typical HFTs frequently modify their quoted prices by successive submission and cancellation of orders (see Fig. 3d for a typical trajectory of the top HFT). The lifetime of orders was typically within seconds for the top HFT, while the typical transaction interval was 9.3 seconds in our dataset. In addition, 94.4% of the submissions by all the HFTs were ultimately canceled without transactions.
(α4). Trend-following property: HFTs tend to follow the market trends. We here denote the best bid and ask quoted prices of the ith trader and the market price at the T th tick time by b̂_i[T], â_i[T], and p̂[T], respectively (see Fig. 2b). We also denote the mid quoted price of the ith trader by ẑ_i[T], and the maximum and minimum of the buy-sell spread by L_max and L_min, respectively. According to Ref. [46], the buy-sell spread distribution ρ_L is directly measured to obey the γ-distribution with decay length L* and empirical exponent α ≈ 3.
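The elided γ-distribution is presumably of the normalized form
\[
\rho_L = \frac{L^{\alpha}\, e^{-L/L^{*}}}{\Gamma(\alpha+1)\,(L^{*})^{\alpha+1}}, \qquad \alpha \approx 3,
\]
a reconstruction that is consistent with the relation L*²_ρ = 6L*² quoted in Sec. VI, since ∫ dL ρ_L/L² = 1/[α(α−1)L*²] = 1/(6L*²) for α = 3.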
Trend-following random walks
HFTs have a tendency to maintain continuous two-sided quotes by frequently modifying their prices (i.e., successive cancellation and submission of limit orders), as required by the market rule [53]. This implies that the mid-price trajectory of an HFT can be modeled as a continuous random trajectory (i.e., the characters (α2) and (α3)). Remarkably, there is a mathematical theorem guaranteeing that the Itô processes (i.e., SDEs driven by the white Gaussian noise) are the only Markov processes with continuous sample trajectories [13]. As a minimal model satisfying all the characters of real HFTs (α1)-(α4), the dynamics of the HFTs are modeled within the Itô processes, Eq. (14), in the absence of transactions (Fig. 4a), taking into account the empirical trend-following property (α4). Here c and ∆p* are constants characterizing the strength and threshold of the trend-following effect, and η̂^R_i is the white Gaussian noise with unit variance. The presence of the trend-following effect in Eq. (14) is the character of our HFT model, which induces the collective motion of limit orders [46]. The trend-following effect triggers translational motion of the full order book, which was crucial to reproduce the layered structure of the order book reported in Ref. [45].
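A plausible reconstruction of Eq. (14), consistent with the parameter list in Table I (strength c, threshold ∆p*, noise variance σ²) and with the saturation of the hyperbolic function invoked in Sec. VI, is
\[
\frac{d\hat z_i}{dt} = c \tanh\!\left(\frac{\Delta \hat p}{\Delta p^{*}}\right) + \sigma\, \hat\eta^{\mathrm{R}}_i(t)
\]
in the absence of transactions.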
Transaction rule
When the best bid and ask prices coincide, a transaction occurs (see Fig. 4b). The transaction condition (i.e., the condition of price matching (15)) is mathematically given by b̂_i = â_j for i ≠ j. In the following, we assume that the index i is an integer always different from another integer j. At the instance of the transaction b̂_i = â_j, let us assume that the traders requote their prices simultaneously (see Fig. 4c) according to the requotation rule (16), where b̂^pst_i and â^pst_i are the post-transactional bid and ask prices after the transaction between traders i and j, respectively. By introducing the mid-price of the individual traders as ẑ_i ≡ (b̂_i + â_i)/2, the transaction rule can be rewritten in terms of the mid prices. We here define the market price p̂(t) and the previous price movement ∆p̂(t) at time t: p̂(t) is the market price at the previous transaction; ∆p̂(t) is the price movement by the previous transaction. They are updated after transactions under the post-transaction rule (18) (Fig. 4b and c), with the sign function sgn(x) defined by sgn(x) = x/|x| for x ≠ 0 and sgn(0) = 0.
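Using the explicit expressions quoted in Sec. III D below, the post-transaction update (18) of the market price and of the previous price movement can be written as
\[
\hat p \;\longrightarrow\; \hat p^{\mathrm{pst}}_{ij} \equiv \hat z_i - \frac{L_i}{2}\,\mathrm{sgn}(\hat z_i - \hat z_j),
\qquad
\Delta \hat p \;\longrightarrow\; \Delta\hat p^{\mathrm{pst}}_{ij} \equiv \hat p^{\mathrm{pst}}_{ij} - \hat p,
\]
evaluated at the instance of the transaction between traders i and j.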
D. Complete model dynamics
We here specify the complete dynamics of the quoted prices {ẑ_i(t)}_i within the framework of stochastic processes with collisions. When the previous price movement is ∆p, we assume that the traders' quoted prices are described by the trend-following random walks (19), where η̂^T_i is the requotation jump term and τ̂_{k;ij} is the kth transaction time between traders i and j. The requotation jump η̂^T_i corresponds to collisions in molecular kinetic theory. The price-matching condition (15) and the requotation rule (16) correspond to the contact condition and the momentum exchange rule in standard kinetic theory for hard-sphere gases, respectively. The summary of the model parameters is presented in Table I with their dimensions. A sample trajectory of this model is depicted in Fig. 5a. We note that this model is a generalization of the previous theoretical models in Refs. [31,[34][35][36][37] on the basis of the above empirical facts (α1)-(α4) on HFTs.
The dynamics of the price p̂ and the previous price movement ∆p̂ can be specified within the framework of stochastic processes. Since p̂ and ∆p̂ are updated at the instances of transactions, their dynamics synchronize with the collision times τ̂_{k;ij}. Considering the transaction rule for prices (18), their concrete dynamical equations are given by Eqs. (21) and (22), with the price after collision p̂^pst_{ij} ≡ ẑ_i − (L_i/2)sgn(ẑ_i − ẑ_j) and the price movement after collision ∆p̂^pst_{ij} ≡ p̂^pst_{ij} − p̂. In this paper, the Itô convention is used for multiplication by δ-functions.
Table I. Summary of the model parameters and their dimensions.
Parameter | Meaning | Dimension
L_i | Buy-sell spreads of traders | price
c | Strength of trend-following | price/time
∆p* | Saturation for trend-following | price
σ² | Variance of random noise | price²/time
Introduction of slow variables is the key to reduction of complex dynamics in general (e.g., the center of mass (CM) of the Brownian particle [16] and the slaving principles in synergetics [54]). Here we introduce the CM of the quoted prices as the slow variable of this system (Fig. 5a). The definition of the CM and its dynamics are given accordingly; a sketch of the definition is given below. The CM ẑ_CM characterizes the macroscopic dynamics of this system. As will be shown in Sec. VI C 1, the diffusion coefficient of the CM indeed turns out to be proportional to N^{−1} for the weak trend-following case, implying that the selection of ẑ_CM as a slow variable is reasonable.
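As sketched here, the center of mass of the quoted prices is presumably defined in the usual way,
\[
\hat z_{\mathrm{CM}}(t) \equiv \frac{1}{N}\sum_{i=1}^{N} \hat z_i(t),
\]
so that the relative prices r̂_i ≡ ẑ_i − ẑ_CM introduced below sum to zero.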
Another motivation to introduce the CM is to define the relative price from the CM as r̂_i ≡ ẑ_i − ẑ_CM, since the relative price r̂_i has better mathematical characters than ẑ_i. For example, the relative price r̂_i fluctuates around zero (see Fig. 5b for the dynamics in the comoving frame of the CM) and has a stationary distribution, while the original variable ẑ_i diffuses to infinity over long times and has no stationary distribution.
F. Difference from other order-book models
One of the unique characters of the HFT model is the collective motion of the order book due to trend-following. As shown in Ref. [45], the order book has the layered structure in the sense that the difference in volumes of the bid (ask) order book near the best price has a positive (negative) correlation with price movements. This implies that the order book exhibits translational motion, like inertia in physics (Fig. 5c), and thus the movements of HFTs are not independent of each other, resembling herding behavior. This collective motion has not been implemented in conventional order-book models, which are based on independent Poisson processes for order submission and cancellation, and it is minimally implemented in our HFT model as trend-following for consistency with the layered order-book structure [46].
IV. MAIN RESULT 1: MICROSCOPIC DESCRIPTION
As the main results of this paper, the analytical solutions to the trend-following HFT model are presented by developing the mathematical techniques of kinetic theory. We first introduce the phase space for the HFT model in the standard manner of analytical mechanics and derive the dynamical equation for the PSD, which we call the financial Liouville equation. We next derive the hierarchy for the reduced distributions similarly to the BBGKY hierarchy in molecular kinetic theory, which is the theoretical key to understanding the financial system systematically, as shown in Secs. V and VI.
A. Phase space and phase-space distribution
Here we first introduce the phase space for the HFT model according to the standard manner of analytical mechanics. Let us introduce a vector Γ̂ ≡ (ẑ_1, . . . , ẑ_N; ẑ_CM, p̂, ∆p̂), which corresponds to a phase point in the phase space S. Equations (19)-(22) are the complete set of dynamical equations for the phase point, corresponding to the Newtonian equations of motion in conventional mechanics. Also, let us define the PSD function P_t(Γ). Using the PSD, the probability that the phase point Γ̂ exists at time t in the volume element dΓ is given by P_t(Γ)dΓ.
B. Financial Liouville Equation
As the first main result in this paper, we present the Liouville equation for the trend-following trader model (19)-(22) as the dynamical equation for the PSD. The dynamical equation for the PSD is given by Eq. (24), where L_a is the advective and diffusive Liouville operator and L_c is the binary collision Liouville operator. Here we have introduced the symmetric absolute derivative |∂_{ij}|f ≡ |∂_i f| + |∂_j f| for an arbitrary function f(z_i, z_j) and the abbreviated derivatives ∂_i ≡ ∂/∂z_i and ∂_CM ≡ ∂/∂z_CM (see Appendix A for the detailed derivation). We have also introduced a difference vector with the movement of the CM ∆z_CM ≡ −(L_i − L_j)/2N. This is the first main result in this paper. The advective and diffusive Liouville operator L_a describes the continuous dynamics of the system in the absence of transactions, while the binary collision Liouville operator L_c describes the discontinuous dynamics in the presence of transactions. Equation (24) formally corresponds to the Liouville equation (2) in molecular kinetic theory, and is called the financial Liouville equation in this paper. The financial Liouville equation completely characterizes the microscopic dynamics of all traders (Fig. 1d).
C. Financial BBGKY Hierarchy
The financial Liouville equation (24) is exact but cannot be solved analytically. We therefore reduce Eq. (24) toward a simplified dynamical equation for a one-body distribution in a parallel method to molecular kinetic theory. According to the standard method in kinetic theory, the Boltzmann equation, a closed dynamical equation for the one-body distribution, can be derived by systematically reducing the Liouville equation in the parallel method to BBGKY (see Sec. II B). We here present the lowest-order equation for the reduced distributions of the trend-following HFT model in a parallel calculation to kinetic theory. We first introduce the relative price from the CM as r_i ≡ z_i − z_CM. We also define the one-body, two-body and three-body reduced distribution functions for the relative price. We then obtain the lowest-order hierarchal equation (28) for the one-body distribution, with one-body, two-body, and three-body Liouville operators L^(i), L^(ij), L^(ijk), the effective variance σ̃² ≡ σ²(1 − 1/N), and the jump size ∆r_{ij;s}. Here ∆r^(1)_{ij;s} indirectly originates from the movement of the CM during requotation. The detailed derivation of Eq. (28) is described in Appendix B. Equation (28) formally corresponds to the conventional BBGKY hierarchal equation (3) for the mesoscopic description. On the basis of Eq. (28), the Boltzmann-type closed equation for the one-body distribution is derived in the next section.
We also derive the hierarchal equation for the macroscopic dynamics. For the macroscopic variables Z ≡ (z_CM, p, ∆p), we here define the corresponding reduced distributions. We then obtain the hierarchal equation (32) for the macroscopic dynamics, with the advective and diffusive Liouville operator L^a_CM and the collision Liouville operator L^{a;ij}_CM between particles i and j. Equation (32) formally corresponds to the lowest-order conventional BBGKY hierarchal equation (8) for the macroscopic description. Using this hierarchal equation (32), a closed master-Boltzmann equation is derived for the macroscopic variables in the next section. The set of Eqs. (28) and (32) is the second main result in this paper. Equation (28) connects the microscopic description (Fig. 1d) to the mesoscopic description (Fig. 1e), and Eq. (32) connects the mesoscopic description (Fig. 1e) to the macroscopic description (Fig. 1f). Their detailed derivation is presented in Appendix B. These equations are derived in a parallel calculation to the conventional BBGKY hierarchal equations (3) and (8), and are called the financial BBGKY hierarchal equations in this paper. Similarly to the conventional BBGKY hierarchal equations (3) and (8), our hierarchal equations (28) and (32) are exact but are not closed: the dynamics of low-order distributions are driven by those of higher-order distributions. Appropriate approximations are necessary to derive closed equations, such as molecular chaos, which will be studied in the next section.
Remark on the three-body collision term.
We here remark on the emergence of the three-body collision term L^(ijk) in the BBGKY hierarchy (28), which is slightly different from the conventional BBGKY hierarchy (3). This term appears because our kinetic theory is formulated on the basis of the relative price r̂_i. To understand this point, let us consider the movement of the relative price r̂_i of the ith trader during a collision between traders j and k (see Fig. 7 for a schematic of the three-body collision). While the mid price ẑ_i of the ith trader does not move during the collision between traders j and k, the CM of this system ẑ_CM moves through a distance of ∆ẑ_CM ≡ ẑ^pst_CM − ẑ_CM. The relative price r̂_i thus moves indirectly through a distance of ∆r̂_i ≡ r̂^pst_i − r̂_i = −∆ẑ_CM = ∆r^(1)_{jk;s}, which appears in the three-body collision operator (29c). This effect is intuitively small in the large-N limit and is finally shown to be irrelevant to the leading-order (LO) and next-leading-order (NLO) approximations, as discussed later.
V. MAIN RESULT 2: MESOSCOPIC DESCRIPTION
From the microscopic dynamics, we have derived the BBGKY hierarchal equation (28) for the mesoscopic description of the HFT model in a parallel manner to the conventional BBGKY hierarchal equation (3). Here we proceed to derive a closed mean-field model for the mesoscopic description, which will finally be shown to be useful for understanding the order-book profile systematically.
A. Financial Boltzmann Equation
We here derive a closed equation for the one-body distribution function by assuming a mean-field approximation.
[Figure caption: the average order-book profile (37) as a superposition of the tent function (36). For the δ-distributed spread (Case 1), the profile is the tent function (38). For the γ-distributed spread (Case 2), the profile obeys Eq. (39).]
The closed equation for the one-body distribution φ^L_t(r) is thus obtained as Eq. (35), with the mean-field probability flux J^{LL}_{t;s}(r) for s = ±1. The systematic derivation of this equation is the third main result in this paper (see Appendix C for the details). Equation (35) is a closed equation for the one-body distribution function, and corresponds to the Boltzmann equation in molecular kinetic theory (see Fig. 1b). Equation (35) is therefore called the financial Boltzmann equation in this paper. Here the dummy variable s = +1 (s = −1) implies transactions as a bidder (an asker), and the integrals on the right-hand side (rhs) correspond to the collision integrals in the standard Boltzmann equation (6). Remarkably, Eq. (35) is derived by a systematic calculation from the Liouville equation (24), whereas it was originally introduced with a rather heuristic discussion in our previous paper [46].
B. Solution
Let us focus on the steady solution of Eq. (35). Equation (35) can be analytically solved for N → ∞ under an appropriate boundary condition (see Appendix D for the details) for the steady state. The LO steady solution is given by the tent function (36). The average order-book profile for the ask side, f_A(r), is given by the superposition (37) of the tent function. We note that the average order-book profile has a symmetry, f_B(r) = f_A(−r), for the average bid order book f_B(r). We also note that the NLO correction (E4) can be obtained as shown in Appendix E. Though the LO solution (36) is sufficient to understand the average order-book profile, the NLO solution (E4) is necessary to understand the dynamics of the financial Langevin equation, as shown in Sec. VI.
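A plausible reconstruction of the LO solution (36) and of the superposition (37), with φ_L normalized to unity, is
\[
\phi_L(r) = \frac{4}{L^{2}} \max\!\left\{\frac{L}{2} - |r|,\, 0\right\},
\qquad
f_{\mathrm{A}}(r) \propto \int dL\, \rho_L\, \phi_L\!\left(r - \frac{L}{2}\right),
\]
since the ask quote of a trader with spread L sits a distance L/2 above its mid price; for the δ-distributed spread this reduces to a single tent supported on [0, L*], consistent with Eq. (38).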
Numerical comparison 1: δ-distributed spread. We here study the theoretical order-book profiles for two concrete examples with numerical validation (see Appendix F for the detailed implementation). Let us first consider the case of a single spread L*. The corresponding average order-book profile is given by the tent function (38). We have numerically examined the validity of this formula in Fig. 9a, which shows the numerical agreement with our formula (38). The LO solution (38) works quite well for the description of the order-book profile, and the numerical convergence in Fig. 9a implies that Eq. (38) might be exactly valid for N → ∞.
Numerical comparison 2: γ-distributed spread.
The formula (37) works well even for L_min → 0 and L_max → ∞ when the integrals converge. As an example, let us consider the case where the spread obeys the γ-distribution, which was empirically validated through the single-trajectory analysis of individual traders in our previous work [46]. We have numerically examined the validity of the corresponding formula (39) in Fig. 9b, which shows the numerical agreement with our formula (39). The numerical convergence in Fig. 9b implies that the LO solution (39) might also be exact for N → ∞.
VI. MAIN RESULT 3: MACROSCOPIC DESCRIPTION
In this section, we derive the stochastic equations for the macroscopic dynamics of this system from the BBGKY hierarchal equation (32), in a parallel method to the master-Boltzmann equation (8) for physical Brownian motion.
A. Master-Boltzmann Equation for Financial Brownian Motion
On the basis of the financial BBGKY hierarchy (32) for the macroscopic dynamics, we derive a closed dynamical equation for the macroscopic variables Z ≡ (z_CM, p, ∆p). Here we first make the assumption of molecular chaos. Using the NLO solution (E4), we deduce a closed master-Boltzmann equation (42) for the macroscopic dynamics (see Appendix G for the detailed calculation), where L^{c;MF}_CM is the mean-field collision Liouville operator for the macroscopic variables, with 1/L*²_ρ ≡ ∫ dL ρ_L/L², the Gaussian distribution N(x; σ²), the jump-size distribution w_N(y), and the mean transaction interval τ*, defined under the assumption that ρ_L vanishes for L outside [L_min, L_max]. Note that Eq. (42) is a master equation (or the differential form of the Chapman-Kolmogorov equation [13]) and is equivalent to a set of stochastic differential equations (SDEs) (see Eq. (G4) in Appendix G).
B. Financial Langevin Equation
We have derived the stochastic dynamics for the three macroscopic variables Ẑ = (ẑ_CM, p̂, ∆p̂) as the master equation (42) (or equivalently the SDEs (G4)) in the continuous time t. We next simplify the dynamics (42) of the three macroscopic variables into those of a single macroscopic variable ∆p̂ in the tick time T. In the tick time T, the dynamical equation for the price movement ∆p̂ is given by Eq. (44), which formally corresponds to the conventional Langevin equation (9) and is thus called the financial Langevin equation in this paper.
Within the mean-field approximation, we can specify all the statistics of the random noise terms analytically. The time interval τ̂[T] is given by an exponential random number with mean interval τ*. The zigzag noise ∆ξ̂[T] is defined as the difference of two Gaussian random numbers, where ξ̂[T] is a discrete-time white Gaussian noise with unit variance. The random noise term ζ̂[T] is specified in terms of μ̂[T], a discrete-time white Gaussian noise with unit variance, and ν̂[T], a discrete-time white noise obeying P(ν) = w̃(ν) with the N-independent distribution w̃(ν) = w_N(ν/N)/N. We next discuss the interpretation of each term on the rhs of Eq. (44). The trend-following term induces the collective motion of the order book and thus keeps the price movement in the same direction for a certain time interval, similarly to inertia in physics. On the other hand, the zigzag noise term exhibits a one-tick negative autocorrelation and has the effect of changing the direction of the price movement alternately. In this sense, the trend-following term and the zigzag noise have opposite effects to each other; the balance between their strengths is crucial for the qualitative behavior of the market price dynamics. The random noise term ζ̂[T] originates from the diffusion of the CM, as discussed below.
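As a consistency check, if the elided definition is ∆ξ̂[T] ≡ ξ̂[T] − ξ̂[T−1] with ξ̂ the unit-variance white Gaussian noise mentioned above (an assumption about the omitted formula), then
\[
\langle \Delta\hat\xi[T]\, \Delta\hat\xi[T'] \rangle = 2\,\delta_{T,T'} - \delta_{|T-T'|,1},
\]
so the zigzag noise indeed carries a one-tick negative autocorrelation and no correlation beyond one tick.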
C. Solution
The macroscopic dynamics of the price strongly depend on the balance between the strength of the trend-following effect and that of the zigzag noise. Here we present the solutions of the financial Langevin equation depending on the strength of trend-following, using dimensional analysis. The price movement originating from trend-following behavior is estimated to be cτ* (of price dimension). On the other hand, the amplitude of the zigzag noise is estimated to be L*_ρ/√(2N) (of price dimension). Their balance is thus characterized by the dimensionless parameter c̃, defined as the ratio of these two estimates. Another dimensionless control parameter is the ratio ∆p̃* between the average movement by the trend-following, cτ* (of price dimension), and the saturation threshold against the market trend, ∆p* (of price dimension). The set of dimensionless parameters (c̃, ∆p̃*) governs the qualitative dynamics of the market price. For consistency with the empirical report [46], we focus in this section on the regime of ∆p̃* in which the saturation of the hyperbolic function is valid (see Sec. VII I for the discussion of the opposite regime). Here we introduce three classifications in terms of the strength of trend-following: (1) the weak trend-following case, c̃ ≪ 1; (2) the strong trend-following case, c̃ ≫ 1; and (3) the marginal trend-following case, c̃ ∼ 1. Sample trajectories are plotted in Fig. 10 to highlight the character of each case: For the weak trend-following case (Fig. 10a), the price tends to move upward and downward alternately every tick because of the zigzag noise ∆ξ̂. For the strong trend-following case (Fig. 10b), a unidirectional movement of the price is kept for a certain time period.
For the marginal trend-following case (Fig. 10c), both zigzag and unidirectional movements randomly appear because both effects are in balance. As will be shown later in detail, the marginal case may be the most realistic, at least in our dataset. We next study these qualitative characters through statistical analysis of the price time series within the mean-field approximation. In the weak trend-following case, the master-Boltzmann equation (42) can be analytically solved in continuous time t. By applying the system size expansion [16] (see Appendix I for the derivation), we obtain the diffusion equation for the CM with the renormalized diffusion coefficient D(N) up to the order of N^{−1} and the second-order Kramers-Moyal coefficient α_2 ≡ ∫_{−∞}^{∞} dy y² w̃(y). The diffusion constant D(N) decays for N → ∞, which implies that the dynamics of the CM become slower as the number of traders increases. Given that the dynamics of the price p̂ coincide with those of the CM ẑ_CM over long timescales, the diffusion of the price is also shown to be normal over long timescales with the same diffusion coefficient D(N) in the real time t. The mean square displacement (MSD) based on real time t is thus analytically obtained as Eq. (52), showing normal diffusion at long times.
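Equation (52) presumably takes the standard diffusive form
\[
\big\langle \big(\hat p(t) - \hat p(0)\big)^{2} \big\rangle \simeq 2 D(N)\, t \qquad (t \gg \tau^{*}),
\]
with the renormalized diffusion coefficient D(N) introduced above.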
We also study the price movement at one-tick precision. For the weak trend-following case, the only relevant term in Eq. (44) on short timescales is the zigzag noise ∆ξ̂[T]. The price movement ∆p̂ then obeys the Gaussian distribution (53). The autocorrelation function of the price movement ∆p̂ is also given by Eq. (54). Interestingly, this property is consistent with the empirical fact that price movements typically exhibit zigzag behavior on short timescales, which is reflected in the strong one-tick negative autocorrelation of the price movement.
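A plausible reading of Eqs. (53) and (54), consistent with the variance scaling ⟨∆p²⟩ ≃ L*²_ρ/(2N) used in the numerical comparison below and with the 2/3 sign-flip probability quoted there, is
\[
P(\Delta p) \simeq \mathcal{N}\!\left(\Delta p;\ \frac{L^{*2}_{\rho}}{2N}\right),
\qquad
\frac{\langle \Delta\hat p[T]\,\Delta\hat p[T+K] \rangle}{\langle \Delta\hat p^{2} \rangle} \simeq -\frac{1}{2}\,\delta_{K,1} \quad (K \ge 1),
\]
where N(x; σ²) is the zero-mean Gaussian of variance σ², as in Sec. VI A; a lag-one correlation of −1/2 between Gaussian variables gives a sign-flip probability of 1/2 + arcsin(1/2)/π = 2/3.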
Here we discuss the origin of the strong negative correlation in the price movement. Remarkably, only the random noise ζ̂[T] is dominant at long times, whereas only the zigzag noise ∆ξ̂[T] is dominant on short timescales. For K ≫ N, indeed, we obtain Eq. (55), which implies that the contribution of the zigzag noise ξ̂[T] is negligible compared with that of the random noise ζ̂[T]. Considering that the random noise ζ̂[T] originates from the diffusion of the CM, Eq. (55) means that the macroscopic behavior of the price is governed by the slow dynamics of the CM. Even though the price movement at one-tick precision is much larger than that of the CM, such movement is irrelevant to the macroscopic dynamics of the whole system. This is the origin of the strong negative correlation of the price movement in this model with weak trend-following. To relieve such negative correlation, stronger trend-following is necessary to induce the collective motion of the order book, as discussed in Ref. [46]. We note that similar slow diffusion is observed in the conventional zero-intelligence order-book models [38][39][40], in which the trend-following effect is likewise not incorporated.
We also note that the negative correlation (54) is related to the slow diffusion of the price on short timescales. Indeed, the MSD is given by Eq. (56) within the mean-field approximation. This formula implies that the MSD is almost constant (i.e., no diffusion) on short timescales K ≪ N, while it is asymptotically linear (i.e., normal diffusion) on long timescales K ≫ N.
Numerical comparison. Here we examine the validity of our formulas through comparison with numerical results for the γ-distributed spread (see Appendix F for the implementation).
Transaction interval. We first check the statistics of the time interval between transactions τ̂. The mean transaction interval τ* ≡ ⟨τ̂⟩ is numerically plotted in Fig. 11a, showing quantitative agreement with the theoretical prediction (45), including the coefficient. We also numerically plotted the probability distribution of τ̂ with scaling parameters for the horizontal and vertical axes, qualitatively showing the exponential tail for large τ̂. Here, we have introduced a scaled transaction interval τ̃ ≡ c_τ τ̂/τ* and plotted the scaled probability distribution in Fig. 11b, with scaling parameters c_τ and Z_τ for the horizontal and vertical axes. The coefficients c_τ and Z_τ were determined by the least-square method to fit the exponential tail for each N. The numerical results imply a modification of the decay length, c_τ ≈ 1.6, whereas the mean-field solution (45) predicts c_τ = 1. This means that the mean-field solution (45) is not exact but is rather qualitatively correct for the probability distribution P(τ). This factor modification c_τ ≈ 1.6 can be roughly understood from the viewpoint of order statistics, as discussed in Ref. [46]. The mean-field approximation predicts the exponential interval distribution (45), which means that the transactions obey an exact Poisson process. As the numerics show, however, the transactions obey a Poisson process not exactly but only asymptotically. One candidate for its origin is that a transaction occurs as a pair of arrivals of both bid and ask quotes. Let us assume that the arrival of a bid (ask) quote at the transaction price obeys Poisson statistics, P(τ_B) = e^{−τ_B/τ*_B}/τ*_B (P(τ_A) = e^{−τ_A/τ*_A}/τ*_A). Any transaction is assumed to occur when both bid and ask quotes arrive at the transaction price. We then make the approximation that τ̂ ≈ max{τ̂_B, τ̂_A} and τ*_B = τ*_A. On the basis of order statistics [55], we obtain an improved interval distribution, where the fitting parameter was determined by the consistency condition for the average interval, ⟨τ̂⟩ = τ*, i.e., τ*_B = 2τ*/3. We thus obtain the modification factor c_τ = 3/2 as an approximation. We note that the transaction interval is not under the influence of the trend-following effect. The above statistical characters of the transaction interval are therefore shared for any parameter set (c̃, ∆p̃*).
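The order-statistics estimate above is easy to verify numerically; the following standalone sketch (not the authors' code, with illustrative parameter values) checks that the mean of the later of two independent exponential arrivals is 3/2 times their common mean, which fixes τ*_B = 2τ*/3 and hence c_τ = 3/2.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_B_star = 2.0 / 3.0      # illustrative mean of each one-sided (bid/ask) arrival time
n_samples = 1_000_000

# Independent exponential arrival times for the bid and the ask quote
tau_bid = rng.exponential(tau_B_star, n_samples)
tau_ask = rng.exponential(tau_B_star, n_samples)

# A transaction needs both quotes, so the interval is approximated by the later arrival
tau = np.maximum(tau_bid, tau_ask)

print("mean of max(tau_bid, tau_ask):", tau.mean())       # ~ 1.5 * tau_B_star = 1.0
print("order-statistics prediction  :", 1.5 * tau_B_star)

# The tail P(tau >= t) ~ 2 exp(-t / tau_B_star) decays with length tau_B_star,
# so the scaled interval c_tau * tau / tau_star has c_tau = tau_star / tau_B_star = 3/2.
```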
MSD. Our theoretical prediction for the MSD is numerically examined here for analyses based on both the real time t and the tick time K. We first numerically check the MSD (52) based on real time t in Fig. 11c. This figure shows quantitative agreement with our theoretical formula (52) without fitting parameters. We also check the MSD based on tick time K in Fig. 11d, showing quantitative agreement with the theoretical prediction (56) for K ≫ 1. For small K ∼ 1, the agreement between the numerical data and the theoretical line is not perfect, but the slowness of the diffusion is qualitatively observed as predicted by the mean-field solution (56).
Price movement. The dependence of the variance of the price movement on the number of traders N is checked in Fig. 11e. We numerically obtained ⟨∆p²⟩ ≈ C_{∆p²}(L*_ρ²/2N) with modification factor C_{∆p²} ≈ 0.4 and L*_ρ² = 6L*². Though there is a discrepancy in terms of the factor C_{∆p²}, the mean-field solution (53) works qualitatively well for the variance of the price movement. We also checked the PDF P(|∆p̃|) of the scaled price movement ∆p̃ ≡ √N ∆p/L* (Fig. 11f and g for the peak and tail of the PDF, respectively). In Fig. 11g, we also show a Gaussian-type fitting curve h(∆p̃) = exp(−h*_0 − h*_1|∆p̃| − h*_2 ∆p̃²) for the tail, with parameters h*_0 = 0.75 ± 0.05, h*_1 = 0.54 ± 0.04, and h*_2 = 0.238 ± 0.006. These figures suggest that the PDF of the price movement has a Gaussian tail, which is qualitatively consistent with the theoretical prediction (53) (h*_1 = 0 and h*_2 = 1/6).
Autocorrelation. The autocorrelation function C_∆p[K] is checked in Fig. 11h, which supports the qualitative consistency between the theory (55) and the numerical results in terms of the negative correlation at K = 1 tick. This negative correlation implies that the price time series exhibits zigzag behavior in the absence of the trend-following effect. Indeed, the probability of ∆p[T+1]∆p[T] < 0 is theoretically 2/3 = 66.6...% for the mean-field model (see Appendix J), considerably higher than 50% (i.e., pure random walks). This result is also qualitatively consistent with the numerical result (around 61%), as shown in Table II.
Table II. We numerically obtained the probability that the next price movement ∆p[T+1] has the same (different) sign as (from) the previous price movement ∆p[T] for N = 100. (a) For the weak trend-following case c = 0, the probability of taking a different sign is higher than that of taking the same sign, implying zigzag motion of the price. (b) For the strong trend-following case (c, ∆p*) = (2.0, 0.1), the probability of taking the same successive sign is much higher than that of taking a different sign, implying ballistic motion of the price. (c) For the marginal trend-following case (c, ∆p*) = (0.5, 2.5), the probability of taking a different sign is slightly higher than that of taking the same sign. (d) We also obtained the probabilities from the real price time series in our dataset, showing that the probability of taking a different sign is slightly higher than that of taking the same sign. For simplicity, we omitted zeros, such as ∆p[T] = 0, during the data analysis of the real price movement time series {∆p[T]}_T. This table implies that the marginal trend-following case is consistent with the real price time series and is the most realistic, at least for stable markets.
Strong trend-following case
The price movement distribution then exhibits an exponential tail, P(|∆p|) ∼ e^{−|∆p|/κ}, with decay length κ for |∆p| → ∞. The decay length is given by the mean movement originating from the trend-following as κ = cτ* within the mean-field approximation (45). By applying the improved mean-field approximation (58), a more consistent coefficient κ = 2cτ*/3 is obtained, in line with the numerical result below. The trend-following effect plays a role similar to momentum inertia in physics, which is reflected in the autocorrelation function and the MSD plot, as shown numerically in the next paragraph. Numerical comparison. Numerical characteristics are studied here for the strong trend-following case under the parameter set (c, ∆p*) = (2.0, 0.1). We first study the price movement distribution P(|∆p|). In Fig. 12a, the price movement distribution is plotted by scaling the horizontal and vertical axes, qualitatively showing the exponential tail for the scaled price movement ∆p̃ ≡ ∆p/κ. Here the scaling parameters κ and Z_∆p were determined by the least-squares method for the tail. The mean-field solution (45) and the improved mean-field solution (58) predict κ = cτ* and κ = 2cτ*/3, respectively. These theoretical predictions are qualitatively consistent with the numerical estimate κ ≈ 0.64cτ*. We next study the autocorrelation function C_∆p[K] of the price difference ∆p based on tick time K in Fig. 12b by scaling the horizontal axis. For our parameter sets, the numerical result implies that the autocorrelation function can be written in a scaling form with fitting parameters τ_AC and Z_AC. This autocorrelation suggests that strong trend-following keeps the price movement unidirectional for a certain time interval. Indeed, the probability of ∆p[T]∆p[T+1] > 0 is much higher than 50% under this condition, as shown in Table II. In addition, the numerical MSD plot in Fig. 12c shows rapid diffusion (almost ballistic, MSD ∝ K²) for short times and normal diffusion for long times.
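The qualitative fingerprints described here — a positive, decaying autocorrelation of increments together with a ballistic-to-diffusive MSD crossover — can be illustrated with a toy price series whose increments follow an AR(1) process. This is only an illustration under that simplifying assumption; it is not the model's financial Langevin equation.

```python
import numpy as np

# Toy illustration only: price increments with exponentially decaying positive
# correlation, mimicking strong trend-following.
rng = np.random.default_rng(1)
T, phi = 200_000, 0.9                      # phi close to 1 <-> strong persistence
eps = rng.normal(size=T)
dp = np.empty(T)
dp[0] = eps[0]
for t in range(1, T):
    dp[t] = phi * dp[t - 1] + eps[t]       # AR(1) increments
p = np.cumsum(dp)

# Autocorrelation of increments decays roughly as phi**K.
def autocorr(x, K):
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, K + 1)])

print("C[1..5] =", np.round(autocorr(dp, 5), 3))

# MSD in tick time: ~K^2 for K << 1/(1-phi), ~K for large K.
Ks = np.unique(np.logspace(0, 3, 20).astype(int))
msd = np.array([np.mean((p[K:] - p[:-K]) ** 2) for K in Ks])
for K, m in zip(Ks[[0, 5, -1]], msd[[0, 5, -1]]):
    print(f"K={K:4d}  MSD={m:.1f}")
```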
Marginal case
The most complex case is the marginal case c ∼ 1, where both the trend-following effect and the zigzag noise contribute to the price movement. While both the trend-following term and the random noise term ∆ξ are relevant in this regime, the main contribution to the tail of the price movement originates from the trend-following term, because the former yields an exponential tail while the latter yields a Gaussian tail.
We thus obtain the exponential tail (59) for the price movement in the marginal case. This theoretical conjecture is validated numerically below. Numerical comparison. We studied the marginal case under the parameter set (c, ∆p*) = (0.5, 2.5). In Fig. 13a, we plot the price movement distribution by scaling both the horizontal and vertical axes as in Eq. (60). The data qualitatively exhibit the exponential-law tail (60) for the price movement.
In Fig. 13b, we also studied the autocorrelation function C_∆p[K] as a function of tick time K through both numerical simulation (points) and empirical data analysis (solid line) of the real time series. This figure shows a slight negative correlation around K = 1, which is qualitatively consistent with the empirical result in our dataset. This result also implies that the price time series exhibits slight zigzag behavior over a certain tick period. This theoretical implication was validated by analyzing the probability of ∆p[T]∆p[T+1] < 0, as summarized in Table II. Table II shows the quantitative consistency between the marginal trend-following case and the real price time series.
We also discuss the behavior of the MSD in Fig. 13c and d, which shows either slow or rapid diffusion depending on the parameters. For example, we set the parameters (c, ∆p*) = (0.5, 2.5) and (c, ∆p*) = (0.86, 1.43) for Fig. 13c and d, respectively. In Fig. 13c, the MSD plot exhibits slightly slow diffusion for short times and normal diffusion for long times. In Fig. 13d, on the other hand, the MSD plot exhibits slightly rapid diffusion with Hurst exponent H = 0.65 for short times and normal diffusion for long times. We thus conclude that our HFT model can reproduce a variety of diffusive behaviors by adjusting the trend-following parameters.
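A short-time Hurst exponent such as H = 0.65 can be read off an MSD plot by fitting the slope of log MSD versus log K, assuming MSD(K) ∝ K^{2H} over the fitted window; the sketch below uses a placeholder random-walk series, not the model's output.

```python
import numpy as np

# Estimate a Hurst exponent from the short-time MSD slope: MSD(K) ~ K^(2H).
def hurst_from_msd(p, k_min=1, k_max=50):
    Ks = np.arange(k_min, k_max + 1)
    msd = np.array([np.mean((p[K:] - p[:-K]) ** 2) for K in Ks])
    slope, _ = np.polyfit(np.log(Ks), np.log(msd), 1)
    return slope / 2.0                       # H = slope / 2

# Example: an ordinary random walk should give H close to 0.5.
rng = np.random.default_rng(2)
p = np.cumsum(rng.normal(size=100_000))
print("estimated H =", round(hurst_from_msd(p), 3))
```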
VII. DISCUSSION
Here we discuss the implications of our theory for various topics in detail.
A. Comparison with real dataset
Here we provide a detailed comparison between empirical facts and the above theoretical predictions. As for the order-book profile f_A(r), the validity of the formula (39) was examined by analyzing the daily average order book in Ref. [46]. The exponential tail of the time interval distribution, P(τ) ∼ e^{−τ/τ*}, was studied in Ref. [56] by removing the non-stationary property of the time series. The price movement was reported to obey the exponential law P(|∆p|) ∼ e^{−|∆p|/κ} in Ref. [46], also after removing the non-stationary property of the time series. The price time series tended to exhibit zigzag behavior, which was reflected in the negative autocorrelation function C_∆p[K] around K = 1 (see Fig. 13c) and in the probability of ∆p[T]∆p[T+1] < 0 (i.e., taking different signs) being slightly over 50% (see Table II). All these characteristics are consistent with our theoretical prediction for the marginal trend-following case (see Table III for a summary of the comparison). The HFT model presented here shows good agreement with these empirical facts. Considering that the market was stable in our dataset, we conclude that our HFT model describes the FX market well, at least during the stable period. Description of unstable markets is beyond the scope of this paper and is an interesting problem for future studies.
Table III. Summary of the comparison between the model cases and the empirical facts: price movement tail, transaction interval tail, autocorrelation C_∆p[K], and probability of different signs.
(a) Weak trend-following case: Gaussian | Exponential | Strongly negative at K = 1 | around 60%
(b) Strong trend-following case: Exponential | Exponential | Strongly positive | less than 10%
(c) Marginal trend-following case: Exponential | Exponential | Slightly negative around K = 1 | around 52%
(d) Empirical facts: Exponential | Exponential | Slightly negative around K = 1 | around 52%
B. Validity of Mean-Field Approximation
We have numerically validated the mean-field theory. The LO solution (36) quantitatively describes the order-book profile (37) with high precision, and the NLO solution (E4) qualitatively describes the price movement (44). Here we discuss possible reasons why the mean-field approximation works so well for the trend-following HFT model, in light of conventional wisdom in physics.
The mean-field approximation is expected to be invalid for low-dimensional physical systems, because two-body correlations between colliding pairs do not disappear for a long time. Colliding particles cannot separate far from each other because of the continuity of paths and the low-dimensional space geometry. For one-dimensional Hamiltonian systems with hard-core interactions, for example, any particle successively collides with its fixed neighboring particles, and two-body correlations then remain forever. The mean-field approximation has therefore been shown to be valid only for high-dimensional systems, at least for several concrete setups. From this viewpoint, the precise agreement between the mean-field solution (37) and the numerical result is nontrivial.
In contrast, although our model is a one-dimensional system, the continuity of paths is absent due to requotation jumps. The transaction rule (17) compulsorily separates the transaction pairs after their collision, because of which there is no restriction on the combination of possible transaction pairs. In the N → ∞ limit, in addition, transactions between the same pair of traders become rare (i.e., the probability of successive transactions between the same pair decays as the order of N^{−2}), which implies a quick disappearance of the two-body correlation between transaction pairs for N → ∞. This is our conjectured explanation for the validity of the mean-field approximation in this model. If this conjecture is correct, kinetic-like descriptions may be valid for various agent-based systems, provided agents are compulsorily separated to avoid successive interactions between the same pairs.
C. Non-stationary property for price movements: power-law behavior
Financial markets are known to exhibit statistically strong non-stationary properties, such as intraday activity patterns. Here we discuss the impact of such non-stationary properties on the price movements and their relation to the celebrated power-law behavior for long timescales.
Our theoretical model implies the exponential law (59) for the price movement as the basic statistical property. This property was shown to be consistent with the real price movement in Ref. [46] for short timescales, after removing the non-stationarity of the decay length κ. The decay length κ is related to the number of traders N and the strength of trend-following c, both of which are expected to be non-stationary. Indeed, the number of traders N exhibits a trivial but strong non-stationarity that correlates with the decay length.
To illustrate this characteristic, let us analyze the statistical relation between the mean absolute price movement ⟨|∆p|⟩ and the number of HFTs N in our dataset. We measured ⟨|∆p|⟩ as a representative of the short-time market volatility and studied its correlation with N every two hours in Fig. 14a. Spearman's rank correlation coefficient between ⟨|∆p|⟩ and 1/N was 0.63. This result implies that the market volatility is relatively small when N is large, which is qualitatively consistent with our theoretical prediction ⟨|∆p|⟩ ≈ κ ∝ N^{−β} (e.g., β = 1 if all parameters other than N are time-constant). The regression analysis between log⟨|∆p|⟩ and log N yields β = 0.86 ± 0.1, as shown in Fig. 14b. We also note that both ⟨|∆p|⟩ and 1/N tended to become large during inactive hours of the EBS market (Fig. 14c).
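The two statistics quoted above can be computed as sketched below; the arrays are hypothetical per-window summaries standing in for the EBS dataset, which is not reproduced here.

```python
import numpy as np
from scipy import stats

# Placeholder per-window data: mean |dp| and number of HFTs per two-hour window.
rng = np.random.default_rng(3)
n_traders = rng.integers(20, 120, size=500).astype(float)
mean_abs_dp = 5.0 / n_traders * np.exp(0.2 * rng.normal(size=500))   # synthetic

# Spearman rank correlation between |dp| and 1/N.
rho, _ = stats.spearmanr(mean_abs_dp, 1.0 / n_traders)
print("Spearman rho:", round(rho, 2))

# Log-log regression log|dp| = const - beta * log N  =>  beta from the slope.
slope, intercept, r, _, stderr = stats.linregress(np.log(n_traders), np.log(mean_abs_dp))
print("beta =", round(-slope, 2), "+/-", round(stderr, 2))
```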
The non-stationary property of the market volatility is related to the power-law behavior of the price movement for long timescales. In Ref. [46], the decay length κ is shown to have a power-law distribution P(κ) ∝ κ^{−α−1}, which implies a power-law price movement distribution for long timescales as the superposition of the short-time exponential distributions, P_long(≥|∆p|) = ∫ dκ P(κ) P_short(≥|∆p|) with P_short(≥|∆p|) ∼ e^{−|∆p|/κ}. This result is consistent with previous empirical studies [23][24][25][26]. We thus conclude that both the exponential law and the power law can consistently coexist, at least in our dataset. We note that the FX market in our dataset was rather stable, without any external shocks. While the exponential law was essential for short timescales in our dataset, we do not deny the possibility that the power law may be essential even for short timescales in unstable markets under external shocks. We believe that essentially different structures would exist in unstable markets, and it would be interesting to study the statistics of traders' behavior in unstable markets under financial crises in future work.
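The superposition argument can be checked numerically: sampling the decay length κ from a power-law (Pareto-type) distribution and |∆p| from an exponential with scale κ yields a power-law tail for the aggregated price movements. The parameter values below are illustrative only.

```python
import numpy as np

# Superposing exponentials with power-law distributed scales gives a power-law tail:
# P(kappa) ∝ kappa^(-alpha-1)  and  |dp| | kappa ~ Exp(mean kappa)
# implies P(> |dp|) ~ |dp|^(-alpha) for large |dp|.
rng = np.random.default_rng(4)
alpha, n = 3.0, 2 * 10**6
kappa = rng.pareto(alpha, size=n) + 1.0       # classical Pareto with kappa >= 1
dp_abs = rng.exponential(kappa)               # per-sample exponential with scale kappa

x = np.logspace(1, 1.7, 8)
ccdf = np.array([(dp_abs > v).mean() for v in x])
slope = np.polyfit(np.log(x), np.log(ccdf), 1)[0]
print("fitted tail slope (expect about -alpha = -3.0):", round(slope, 2))
```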
D. Non-stationary property for transaction interval: power-law behavior
As for the transaction interval, our theory predicts that the exponential law (45) is essential rather than the power law. This result is consistent with the previous report in Ref. [56], which showed that the exponential law is essential for short timescales but that its superposition leads to the power-law behavior of the transaction interval for long timescales.
E. Non-stationary property for order-book dynamics: stability of the order-book profile
We have discussed that both the price movement and the transaction interval are quite sensitive to non-stationary properties of the market. The average order-book profile f_A(r), by contrast, is relatively insensitive to such non-stationarity. Indeed, the average order-book profile f_A(r) is independent of the trend-following parameter c. In addition, the order-book profile converges for N → ∞, such that lim_{N→∞} f_A(r) is an L²-function, which implies that large variations of N do not have a strong impact on the order-book profile.
No similar insensitivity exists for the price movement and the transaction interval. Indeed, they exhibit strong divergence for N → ∞, as lim_{N→∞} P(|∆p|) = δ(|∆p|) and lim_{N→∞} P(τ) = δ(τ), which implies that large variations of N have a huge impact on their statistics.
In this sense, the average order-book profile is a stable quantity to measure under non-stationary processes, whereas the price movement and the transaction interval are unstable quantities. Our theory thus provides insight into the sensitivity of measured quantities to the non-stationary nature of the market. We believe that developing systematic methods to remove such non-stationarity is key to understanding not only the origin of power laws in finance but also the essence of market microstructure.
F. Essential difference between few-body and many-body systems
One of the most interesting features of statistical physics lies in the fact that many-body systems can exhibit essentially different characteristics from few-body systems, such as critical phenomena and collective motion. Though the current HFT model does not exhibit critical phenomena, an essential difference can be shown between the cases N = 2 and N ≫ 1. To illustrate this point, let us consider the case c = 0 without trend-following. Our theory is applicable to solving the case N = 2 exactly, which leads to the same solution as presented in Ref. [36]. The price movement is then predicted to obey the exponential law even without trend-following, which is qualitatively different from the Gaussian law for N → ∞. This difference appears because the dynamics of the CM are not sufficiently slow for N = 2. For N = 2, indeed, one can show the absence of the zigzag noise term ∆ξ[T] in the financial Langevin equation (44), which leads to the dominance of the random exponential noise ζ[T]. For N ≫ 1, on the other hand, the random noise ζ[T] is negligibly small due to the slow CM dynamics, and the trend-following effect becomes necessary to explain the exponential price movement statistics. The model presented here thus exhibits essentially different characteristics as the number of traders increases.
G. Does the trend-following effect break the random walk hypothesis?
At first glance, the trend-following effect seems strongly contradictory to the conventional assumption of the random walk hypothesis. Our analysis, however, implies that the situation is not so simple: in the absence of trend-following, the market price exhibits strong zigzag behavior, which is far from a pure random walk. By adjusting the strength of trend-following appropriately (i.e., the marginal trend-following case), on the other hand, the zigzag behavior is somewhat relieved, and the market price time series rather approaches a random walk. In this sense, the trend-following strategy might originate from the rational behavior of HFTs to equilibrate the strategies among traders. It would be interesting to pursue the origin of trend-following behavior from an economic viewpoint in future studies.
We also note that the real price time series exhibits slightly zigzag behavior (i.e., the negative autocorrelation and the tendency of the price movement to take different successive signs), which is consistent with our HFT model for the marginal trend-following case. These deviations from a pure random walk have been well known in finance and are obviously applicable to predicting the direction of the price movement one tick into the future. However, it is not easy to make profits over the market spread (i.e., the difference between the market best bid and ask prices) by utilizing only these properties. While the real price time series slightly deviates from a pure random walk, it is not obvious whether these characteristics provide easy opportunities to statistically make profits. Making profits requires predicting price movements beyond the market spread, which is beyond the scope of this paper but is an interesting topic for future study.
H. Possible generalization 1: multiple-tick trend-following random walks and the PUCK model
In this manuscript, we have addressed the trend-following HFT model with one-tick memory. It is straightforward to generalize the one-tick memory model toward a multiple-tick memory model in which the trend-following term depends on ∆p_EMA[T], the exponential moving average of the price movements {∆p[T]}_T with decay time τ_EMA and renormalization constant Z_EMA ≡ 1/(1 − e^{−1/τ_EMA}). In the authors' view, this model is more realistic, because such an exponential moving average is a popular strategy among HFTs according to a detailed regression analysis of trend-following [57]. We then obtain a generalization of the financial Langevin equation, Eq. (64). The generalized financial Langevin equation (64) is equivalent to the potentials of unbalanced complex kinetics (PUCK) model [29], which was previously introduced through time-series data analyses. Here we use an identity for the exponential moving averages ∆p_EMA[T] and p_EMA[T], which leads to the PUCK model under a random potential U(p) = −c e^{−1/τ_EMA} ∆p* Z_EMA τ[T−1] log cosh(e^{1/τ_EMA} p/(∆p* Z_EMA)). In this sense, our theory is straightforwardly applicable to a derivation of the PUCK model.
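A minimal sketch of the exponential moving average entering this multiple-tick generalization is given below; the recursion is our reading of the stated definition (decay time τ_EMA, normalization Z_EMA = 1/(1 − e^{−1/τ_EMA})), not code from Refs. [29] or [57].

```python
import numpy as np

# dp_EMA[T] = (1/Z_EMA) * sum_{k>=0} exp(-k/tau_EMA) * dp[T-k], truncated at T = 0.
def ema(dp, tau_ema):
    w = np.exp(-1.0 / tau_ema)          # per-tick decay factor
    z = 1.0 / (1.0 - w)                 # Z_EMA, so the weights sum to one
    out = np.empty_like(dp, dtype=float)
    out[0] = dp[0] / z
    for t in range(1, len(dp)):
        out[t] = w * out[t - 1] + dp[t] / z   # recursive form of the weighted sum
    return out

dp = np.random.default_rng(5).normal(size=10)
print(np.round(ema(dp, tau_ema=4.0), 3))
```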
I. Possible generalization 2: reduction to the random multiplicative processes
In Sec. VI C, we assumed ∆p* ≪ 1, both for analytical simplicity and for consistency with the empirical report [46]. Here we discuss the case ∆p* ≫ 1, whereby the hyperbolic trend-following reduces to the linear trend-following as c tanh(∆p/∆p*) ≈ c∆p/∆p*. The financial Langevin equation (44) then reduces to Eq. (68) [46]. Since Eq. (68) belongs to the class of random multiplicative processes [58], the price movement obeys power-law statistics, consistent with the previous exact solution [36] for the two-body case N = 2.
VIII. CONCLUSION
In this paper, we have presented a systematic solution for the trend-following trader model, which was empirically introduced in our previous work [46]. Starting from the microscopic dynamics of the individual traders, we have systematically reduced the multi-agent dynamics by generalizing the mathematical methods developed in molecular kinetic theory. We first introduced the phase space for our model and derived the dynamical equation for the phase-space distribution function, which corresponds to the Liouville equation in conventional analytical mechanics. On the basis of the Liouville equation for the trend-following trader model, we derived a hierarchy of reduced distributions in parallel with the BBGKY hierarchy. By introducing the mean-field approximation, corresponding to the assumption of molecular chaos, we derived the mean-field dynamical equation for the one-body distribution function, similar to the Boltzmann equation. We then derived the analytic solution of the mean-field model, whose validity is numerically confirmed when the number of traders is sufficiently large. We also derived the financial Langevin equation governing the macroscopic dynamics of the financial Brownian motion and studied the macroscopic properties of the market price movements.
Here we have clarified the power of kinetic frameworks in describing financial markets from microscopic dynamics. In our conjecture, this success lies in the fact that financial markets approximately satisfy the key assumptions of binary interaction and molecular chaos (see Secs. III B and VII B for related discussions); the one-to-one transaction (i.e., the binary interaction) is the most basic interaction, and traders are less likely to transact with the same counterparty for N ≫ 1. We believe that the financial market is one of the best subjects for applying kinetic theory, besides traffic flow and wealth distribution [7–9,12]. We also believe that the generalization of kinetic theories would be a key to clarifying various social systems from microscopic dynamics, since we now have access to various kinds of microscopic data.
Appendix A: Derivation of the Financial Liouville Equation
Here we derive the financial Liouville equation for the trend-following trader model. The dynamics of our model are given by the microscopic equations of motion, where we have introduced the colored Gaussian noise η^R_{i;ε} satisfying ⟨η^R_{i;ε}⟩ = 0 and ⟨η^R_{i;ε}(t)η^R_{i;ε}(s)⟩ = e^{−|t−s|/ε}/(2ε). For mathematical convenience below, we finally take the white-noise limit ε → +0: lim_{ε→0} η^R_{i;ε} = η^R_i. We next consider the dynamics of the center of mass ẑ_CM. Let us next consider the dynamics of an arbitrary function f(Γ̂) for Γ̂ ≡ (ẑ_1, ..., ẑ_N; ẑ_CM, p̂, ∆p̂) ∈ S. The time evolution of f(Γ̂) is governed by the continuous movement due to the continuous noise term η^R_{i;ε} and by the discontinuous jumps due to the deterministic transaction term η^T_i. We then obtain an evolution equation in which we have introduced the difference vector ∆Γ_{ij} induced by transactions, with p^pst_{ij} ≡ ẑ_i − (L_i/2) sgn(ẑ_i − ẑ_j) and ∆p^pst_{ij} ≡ p^pst_{ij} − p̂. Let us decompose the sum of δ-functions, where we have used η^R_{i;ε} − η^R_{j;ε} > 0 just before r̂_i − r̂_j − (L_i + L_j)/2 = 0 (or, equivalently, η^R_{i;ε} − η^R_{j;ε} < 0 just before r̂_i − r̂_j + (L_i + L_j)/2 = 0), taking the collision directions into account. We then take the ensemble average of both sides of Eq. (A3) with the aid of Novikov's theorem [59] for an arbitrary functional of the colored Gaussian noise η^R_{i;ε}. Here we remark on two important relations for the δ-function involving a dummy variable. By substituting f(Γ̂) = δ(Γ̂ − Γ), we take the ensemble average of both sides of Eq. (A3) in the ε → 0 limit. We then obtain Eq. (A10) with the abbreviation ∂_{ij} ≡ ∂_i − ∂_j. Here, let us pay attention to the signs of the derivatives. Considering that P(Γ) ≥ 0 for all Γ and P(Γ) = 0 for z_i − z_j > (L_i + L_j)/2, we obtain the signs of the derivatives, and Eq. (A10) can be simplified into Eq. (24) by introducing the symmetric absolute derivative. Note that Eq. (24) is a partial integro-differential equation because of the transaction jumps, whereas the conventional Liouville equation is a partial differential equation. This implies that our financial Liouville equation (24) technically corresponds to the pseudo-Liouville equation [14,48–50] rather than to the Liouville equation.
Appendix B: Detailed Derivation of Financial BBGKY Hierarchy
We here derive the lowest equation of the financial BBGKY hierarchy for the reduced distribution function (28), starting from the financial Liouville equation (24). We first introduce the relative price from the CM as r_i ≡ z_i − z_CM. By making the transformation Γ = (z_1, ..., z_N; z_CM, p, ∆p) → Γ_r ≡ (r_1, ..., r_N; z_CM, p, ∆p), the financial Liouville equation can be rewritten using the chain rule for the variable transformation. We have also introduced ∆Γ_{ij;r}, the jump vector ∆Γ_{ij} expressed in the transformed coordinates.
There is a small deviation from the LO solution around the boundary layer R2 because of the finite-number effect for N. The deviation is studied within the NLO approximation for the financial Boltzmann equation (35).
The financial Boltzmann equation (35) is then approximated for r > +L/2 by an expression consistent with the LO solution (36) for N → ∞: lim_{N→∞} φ_L(r) = ψ_L(r). We have obtained the NLO solution (E4) rather intuitively, but we can check by direct substitution that the solution satisfies the original Boltzmann equation (35) up to the order of N^{−1/2}. Around r ∼ L/2, indeed, we obtain this by ignoring the inflow J_{LL}(r + L/2) ∝ φ_L(r + L/2) = O(exp(−NL²/4L*_ρ²)) around r ∼ L/2. This implies that the solution (E4) satisfies the financial Boltzmann equation (35) directly. We also note that the NLO correction is of order N^{−1/2} and is consistent with the assumptions in Appendix C, where correction terms of O(N^{−1}) are ignored in the derivation of Eq. (C8).
Appendix F: Numerical simulation of the microscopic model
Here we explain the numerical implementation of the trend-following HFT model. We focused on two types of buy-sell spread distributions, given by the δ-distributed spread (38) and the γ-distributed spread (39). The length and time units of the system are taken as L* and L*²/(σ²N), respectively. We performed Monte Carlo simulations for various numbers of traders N and trend-following parameters (c, ∆p*) under a fixed discretization time ∆t = 0.01L*²/(σ²N). For initialization, we first ran the simulation for a time interval of 10L*²/σ² and then ran the simulation again to take samples. The simulation time was set to 10^5 ticks, except for the MSD plots in Figs. 11c,d, 12d, and 13c,d, for which the simulation time was set to 10^6 ticks.
"Physics",
"Mathematics"
] |
Optimum adaptive bandwidth selection method of local fitting in kernel regression analysis for non-uniform data
Selection of a global bandwidth is commonly used in kernel regression. On the other hand, the pointwise choice of a local bandwidth can lead to better results in kernel regression because it has a direct effect on smoothing the signal. These smoothing bandwidths determine the filtering capacity for all signals and systems. The approach demonstrates greater adaptability for a variety of analyses, ranging from one-dimensional to multidimensional problems, as well as for different branches of engineering involving human–machine interaction. In this paper, we propose a new method, called the optimum adaptive local bandwidth selection method (OALB), which depends on the bias-variance optimization ratio. It is based on Stanković's optimization of the bias-variance tradeoff of the signal (Stanković in IEEE Trans Signal Process 52:1228–1234, 2004). The bandwidth is calculated independently for every point based on the intersection of confidence intervals (ICI) rule.
... in the given system. Research over the last three to four decades has sought mathematical solutions that are independent of the platform. Hence, we attempt to relate this broad class of representations in a simple yet rigorous way across these emerging technologies, which motivates us to develop a simple solution applicable to various platforms.
Nonparametric local polynomial regression (LPR) [1][2][3][4][5][6][7][8][9][10][11] has been widely applied in many research areas, such as data smoothing, density estimation, and nonlinear modeling. Given a set of noisy samples of a signal with equally or non-uniformly distributed data values, scale parameters or specific bandwidths are used to fit the samples locally by a polynomial using the least-squares (LS) criterion with a kernel (window) function. The bandwidth in LPR is closely related to the concept of scale in the wavelet transform. The scale parameter in the wavelet transform is usually constrained to be dyadic so that it forms a basis for expansion in the l² space, whereas the bandwidth in LPR is flexibly chosen to optimize the bias-variance tradeoff. The ICI rule, which reduces bias in regression, has been shown to be excellent for such spatial adaptation in LPR in terms of asymptotic analysis [12,13].
Since various applications require high fidelity in signal acquisition, processing, and reconstruction, it is crucial to denoise signals so as to achieve the best bias-variance tradeoff when estimating the local polynomial coefficients for the reconstruction of unknown non-stationary signals. For slowly varying parts of a signal, it is desirable to use a large window length, so that the additive noise is reduced to negligible levels by averaging the signal values. In contrast, for fast-varying parts of a signal, a small kernel window is recommended, so that the bias errors are reduced given the low order of the fitting polynomials. In most cases, the analytical relation for the locally optimal bandwidth can be easily derived. However, these formulae are not directly applicable in practice, as they always involve quantities that are complicated to estimate. Accordingly, various empirical bandwidth selection methods have been proposed to determine the optimal bandwidth from a finite set of given bandwidth parameters.
A fully adaptive, data-driven local bandwidth for the signal was suggested by Fan in a series of publications [1,2]. The Fan–Gijbels bandwidth selection (FGBS) method is based on the fact that the mean squared error (MSE) attains its minimum at the optimal local bandwidth. The optimal bandwidth is found as the one yielding the smallest MSE from a bandwidth set, i.e., a group of possible bandwidth values.
The empirical-bias bandwidth selection (EBBS) algorithm for multivariate LPR has been suggested by Ruppert [6][7][8]. These innovative and useful methods suffer from high complexity in implementation.
Lepskii et al. [12][13][14] eliminated the need to estimate the many asymptotic quantities required by the FGBS method. Lepskii's approach compares estimates within the set of bandwidths and chooses as the optimal bandwidth the largest one whose estimate remains similar in performance to the estimates obtained with smaller bandwidths.
Goldenshluger and Nemirovski [15] and Katkovnik [16][17][18][19][20] proposed the intersection of confidence intervals (ICI) method. The optimal bandwidth is obtained by comparing the confidence intervals of the estimates for the different bandwidths in a set of values. The advantage of this method is that no estimation of the asymptotic bias and MSE is needed; hence, it has lower arithmetic complexity.
Vieu [9] investigated kernel estimators of a regression function. A data-driven method that minimizes a local cross-validation criterion is employed for choosing bandwidths locally. The technique was shown to be asymptotically optimal in terms of local quadratic errors.
Sheather and Jones [10] presented a data-based bandwidth selector for kernel density estimation. The method chooses the bandwidth to minimize a good-quality estimate of the mean integrated squared error (MISE). The approach showed considerable reliability in performance.
Hall et al. [11] proposed a bandwidth selection technique that involves injecting estimates into the usual asymptotic representation for the optimal bandwidth, with two main modifications. Firstly, their work mainly focused on kernel density estimation, and secondly, the convergence to the optimal solution was improved relative to previous works.
Kai et al. [21] proposed a nonparametric regression technique termed "local composite quantile regression smoothing" to enhance LPR. The authors derived the asymptotic normality, variance, and bias of the estimates. In this work, the asymptotic relative efficiency of the estimate with respect to LPR was examined.
Zhang et al. [22][23][24] studied the problem of adaptive selection of kernels for multivariate LPR. The authors studied applications of kernel selection in smoothing techniques and in the reconstruction of noisy images. The resulting multivariate LPR technique is called the steering kernel-based LPR with refined ICI technique (SK-LPR-RICI).
Papp et al. [25] considered the problem of optimizing the expected shortfall in the presence of an ℓ2 regularizer for uncorrelated Gaussian yields. The transition between the regularizer-dominated and data-dominated regimes is narrow. In this transition region, the tradeoff between the variance and the bias is balanced, such that neither dominates the estimation.
Chen et al. [26] presented a technique to select the optimal bandwidth for kernel density functional estimation (KDFE), where the optimal solution is found by minimizing the MSE of the KDFE. Two bandwidth selection methods are proposed, namely the direct plug-in method and the normal-scale method. The simulation results showed better performance for bimodal distributions.
Cheng et al. [27] introduced a nonparametric localized bandwidth estimator for which an asymptotic theory was established. The applicability of the localized bandwidth to kernel estimation in nonparametric regression remains to be investigated further.
Other researchers have used many different ideas or variations to propose various adaptive bandwidth selection methods for LPR. These innovative methods are useful, but the error and the complexity involved are usually high. Hence, an optimized bandwidth selection algorithm and the bias-variance tradeoff problem are discussed in detail here. The method does not require any explicit bias estimation. In the proposed work, the bias and variance expressions derived from the modified, optimized adaptive ICI rule are used for local bandwidth selection in LPR.
The bias-variance tradeoff and the ICI adaptive bandwidth selection algorithm were discussed by Stanković in [3], and they are used here to obtain the optimum adaptive local-fit ICI bandwidth selection rules. In our work, we use the bias and variance expressions derived from the modified adaptive ICI rule for local bandwidth selection in LPR. We found the method to compare favorably with previous analyses in terms of mean squared error (MSE), performance, memory requirement, and implementation complexity.
In this paper, LPR is used for efficient recursive implementations and online signal processing. In terms of memory requirements and delay time, the ICI-based method is shown to be simpler to implement than the other methods. The outcomes of our research are: (1) an ICI rule based on [3] that helps eliminate unwanted noise; (2) the optimal values of the smoothing bandwidth are calculated independently of the other signal values; (3) the pointwise smoothing obtained by the parameter 'h' effectively calibrates the signal values to the desired localized optimum; (4) the window size can vary in every iteration, as noisy values are eliminated in each iteration; (5) there is no need to tune different algorithm values across iterations, since the algorithm itself generates the optimized 'h'; (6) the regression attains optimized values under limiting conditions, because the variance of the signal directly helps modify the value based on the curvature at the point; this follows from the optimum selection of the variance and smoothing parameters at every data point.
The paper is structured as follows. In Section II, we review the bias-variance method. Section III describes the modification and the procedure we adopted to obtain optimum adaptive values of the smoothing bandwidth, as well as the steps involved in calculating the results. Simulations on some standard signals are carried out and presented in Section IV with tabulated results, and Section V concludes this paper. We begin with a brief introduction to the bias-variance method [3].
Review of ICI rules [3]
Consider an unknown signal f(k) superimposed with stationary noise ε(k), and a time-dependent quantity Q(k) that we want to obtain from this noisy signal. Let the bias and variance depend on the smoothing parameter 'h'. Since the bias and variance are mutually related, one increasing with h while the other decreases, their dependence on h is assumed to take the forms of Eqs. (2) and (3), where B(k) and V, which depend on f(k), are quantities that cannot be known in advance. Here, n and m are integer values.
The quantities in Eqs. (2) and (3) are unknown, and their forms depend on the input data values. A number of researchers have used forms of these equations based on kernel parameters, even though they are not known a priori [6-8, 12, 14]. Instead of such complicated assumed forms, we start with the simplest representation: the dependence of these equations on the unknown input signal values, and the reciprocal relation between the bias and the variance with respect to the smoothing bandwidth 'h'.
The mean square error (MSE) is the sum of the squared bias and the variance. The minimum of the MSE is obtained by differentiating with respect to h at h = h_opt(k) and equating the derivative to zero.
For the optimum value, multiplying by h and rearranging, we obtain the relationship between the variance and the bias: at h = h_opt, the bias-to-standard-deviation ratio is independent of h. Now a set H of discrete values of the parameter h is considered as a first approximation, h_s = a^s h_0, with a > 1 and h_0 > 0. The general idea is to find an accurate value of h_opt, but the exact optimal choice h_opt is generally not equal to any of the values contained in the set H. To relate h_opt to the values in the set H, suppose that h_opt is close to a parameter h_{s+} belonging to H, h_{s+} ∈ H, i.e., h_{s+} ≈ h_opt. Thus, we may write h_{s+} = a^p h_opt, where p is a constant close to 0. Fine variations of the signal values in H are thus taken care of by s, and their coarse variation around the signal value of h is governed by a^p.
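For concreteness, a worked sketch of this optimization is given below, assuming the simple power-law forms bias(k, h) = B(k)h^m and σ²(k, h) = V h^{−n} suggested by Eqs. (2)–(3); these explicit forms are our assumption for illustration.

```latex
% Sketch of the bias-variance optimization under the assumed power-law forms.
\[
  \mathrm{MSE}(k,h) = B^2(k)\,h^{2m} + V\,h^{-n},
  \qquad
  \frac{\partial\,\mathrm{MSE}}{\partial h}
    = 2m\,B^2(k)\,h^{2m-1} - n\,V\,h^{-n-1} = 0,
\]
\[
  h_{\mathrm{opt}}(k) = \left(\frac{n\,V}{2m\,B^2(k)}\right)^{\!1/(2m+n)},
  \qquad
  \frac{\mathrm{bias}^2(k,h_{\mathrm{opt}})}{\sigma^2(k,h_{\mathrm{opt}})} = \frac{n}{2m},
\]
% so at h = h_opt the bias-to-standard-deviation ratio is a constant
% independent of h, as stated in the text.
```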
To obtain the relationship between two consecutive values, confidence intervals of the random variable are introduced. The confidence intervals play a significant role in the algorithm. The estimate is a random variable distributed around the true value y with bias(k, h_s) and standard deviation σ(h_s). Suppose that κ determines the probability of obtaining an unbiased estimate close to the accurate value.
where κ is selected in such a way that the corresponding probability tends to one, i.e., P(κ) → 1, so that we capture the true values.
The confidence interval of the estimate for the parameter h_s ∈ H is defined by D_s = [L_s, U_s], and it allows the distribution function to vary through a simple relation.
Here, Q(k) has an estimate Q̂_s(k), which is obtained with the parameters h = h_s and σ(h_s).
To obtain the relationship between two consecutive values in H, confidence intervals of the random variable are used. Let us consider two consecutive indices s − 1 and s and their confidence intervals D_{s−1} and D_s. When s ≪ s_+, negligible bias is present [see Eq. (10)]; thus Q(k) ∈ D_s (with probability P(κ + ∆κ) → 1). The intersection of consecutive confidence intervals is then non-empty, D_{s−1} ∩ D_s ≠ ∅, since at least the true value Q(k) belongs to both intervals. For s ≫ s_+, the variance is negligible, and the bias becomes large. As a result, for larger s the intersection of consecutive confidence intervals becomes empty, D_{s−1} ∩ D_s = ∅, for finite (κ + ∆κ).
As the algorithm suggests, we need to consider only two consecutive values of s and their intervals.
When there is a positive bias, this condition means that the minimum of the upper bounds, denoted by min{U_{s_+−1}}, is always greater than the maximum of the lower bounds, max{L_{s_+}}, i.e., min{U_{s_+−1}} ≥ max{L_{s_+}}. The condition for the intervals not to intersect is given by max{U_{s_+}} < min{L_{s_++1}}. Equation (11) gives the maximal and minimal values of Q̂_s(k); substituting these values into (10), the above two inequalities yield the corresponding equations. Because these inequalities indicate the worst case of existence, the algorithm parameters are evaluated by applying equalities in these equations. Using Eqs. (14) and (15), we obtain the relation between κ and ∆κ. The parameter κ in D_s can be determined from two successive values of s: the largest s for which the pair of confidence intervals D_{s−1} and D_s intersect is s = s_+. Then, intervals D_{s−1} and D_s intersect when the condition in Eq. (18) holds. This is the key equation by which the algorithm distinguishes the points by which the signal values are separated.
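A compact sketch of the ICI selection loop implied by these conditions is given below (in the spirit of [15–20] and Eq. (18)); estimate(k, h) is a hypothetical placeholder for the local estimator returning Q̂_s(k) and σ(h_s), and the default parameter values are illustrative.

```python
import numpy as np

# ICI rule: over a geometric bandwidth grid h_s = h0 * a**s, keep the running
# intersection of the confidence intervals D_s = [Q_s - k*sigma_s, Q_s + k*sigma_s]
# and return the largest bandwidth for which the intersection is still non-empty.
def ici_bandwidth(estimate, k, h0=1.0, a=1.4, n_scales=12, kappa=2.58):
    lower_max, upper_min = -np.inf, np.inf
    h_opt = h0
    for s in range(n_scales):
        h = h0 * a**s
        q_hat, sigma = estimate(k, h)
        lower_max = max(lower_max, q_hat - kappa * sigma)   # running max of lower bounds
        upper_min = min(upper_min, q_hat + kappa * sigma)   # running min of upper bounds
        if lower_max > upper_min:        # intervals no longer intersect: bias too large
            break
        h_opt = h                        # largest bandwidth still consistent
    return h_opt
```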
The factor a^p is selected from the set of values proportional to h, and the next value of σ(h_s) is then related through (κ + ∆κ). Hence, Eq. (18) is updated for these new values. In this way, we obtain the smallest difference between the signal values based on h.
Optimum adaptive bandwidth selection method
The above optimization indicates that varying the values of m, n, and a does not affect the selection of the desired adaptive pointwise parameters. Hence, these parameters are kept constant, and the values of s and p are searched from the combination of the above equations. The parameter κ is obtained from Eq. (17); even though this places some burden on the processing part of the algorithm, it is beneficial as we approach the near-optimum of the limiting values of the desired confidence intervals. Equation (18) is used to check the proper values of h; this value is the key parameter for estimating the pointwise bandwidth of the signal. The equation for D, indicating the upper and lower limits of the confidence interval in the ICI rule, is derived from Eq. (12).
The parameter κ determines the separation of consecutive points. Points that belong to the same group, e.g., all the points on a flat portion of the curve, show little variation in this parameter; equivalently, the variance is constant over that portion of the curve. Points on a sharply varying slope of the curve show different values of κ. Grouping these points is therefore difficult, as that portion of the resulting curve departs more from the actual curve values; consequently, the anisotropy of the curve is difficult to adjust.
In the selection of the data points in a group of a particular dimension, the ICI rule is checked for every data point. Only the points that fall within the ICI limits contribute; otherwise, they are considered outside values. This type of classification is also observed in the kernel-based selection of the points in a group. Points placed close together in a group contribute higher values to the kernel weight, while points far from the center point contribute little, provided they satisfy the constraint of the ICI rule.
Since the procedure to obtain the value of the smoothing factor 'h' is independent of the other data points and is based on the selection rule of Eq. (18), which is related to the error function, we obtain the smoothing bandwidth with the minimum optimal bandwidth at each location. Hence, our logic for selecting the parameters of the optimum adaptive ICI rule is as follows.
The procedure to implement the OALB-ICI-based method is as follows: ... 8. If the condition is satisfied, use Eq. (12) for the upper and lower limits.
As observed in the procedure, the method helps reduce the error in the signal very rapidly. It provides a higher gain in denoising the signal at every point. The denoising process is based on grouping the points according to the variance of the noise. The method gives optimized values in only a few iterations; note also that the value of the smoothing bandwidth is either kept constant throughout the regression or is a multiple of a constant value. These multiplying values in the various iterations are selected in such a way that the noise is reduced.
Simulation results and comparisons
The local bandwidth and its use in reconstructing the original signal from noisy values with different kinds of methods have been evaluated in simulation. Four test signals are used, namely Blocks, Heavisine, Bumps, and Doppler; their definitions are given in [1,15] and [28].
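For reference, a sketch of how such test signals and their noisy observations can be generated is shown below; Heavisine and Doppler are written out in their standard closed forms, while Blocks and Bumps require the knot/height tables of [28], which are not reproduced here.

```python
import numpy as np

# Standard Donoho-Johnstone test signals on (0, 1].
def heavisine(t):
    return 4.0 * np.sin(4.0 * np.pi * t) - np.sign(t - 0.3) - np.sign(0.72 - t)

def doppler(t, eps=0.05):
    return np.sqrt(t * (1.0 - t)) * np.sin(2.0 * np.pi * (1.0 + eps) / (t + eps))

n = 1024
t = (np.arange(n) + 1) / n                  # uniform samples on (0, 1]
clean = heavisine(t)
noisy = clean + np.random.default_rng(6).normal(0.0, 1.0, n)   # unit-variance noise
```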
Variation in different algorithms
We consider the experiment in [23]. Signals with white Gaussian noise of unit variance (σ² = 1) and zero mean on the interval (0, 1] are considered here. The observation data are uniformly sampled with 1024 data points; hence, the sample length is 1024. The test signals and their corresponding noisy signals are plotted in Figs. 1a, b, 2a, b, 3a, b, 4a, and b, respectively. In our analysis, we do not need the threshold values, but we compare our results with those obtained with the threshold parameter of the other methods as in [23], with the parameter κ selected as 2.58 so that Q lies in the confidence interval with 99% probability (Pr(2.58) = 0.99) [23]. To compare the results, we evaluated the following adaptive bandwidth selection methods (with abbreviations and reference numbers): FGBS [1,2]: the method by Fan (RSC-based bandwidth selector with L = 8).
OALB-ICI: the proposed optimum adaptive local-fit ICI-based bandwidth selector of this paper. Figures 1, 2, 3, and 4 show the results of the above adaptive bandwidth selectors for the four test signals. Even though these methods use different techniques and criteria, their results are satisfactory. Most of the signal details are preserved by all these methods, while the additive noise is successfully suppressed, even though the adaptive pointwise bandwidths of the algorithms, which reflect the local characteristics of the observation data, have different forms for the different methods, and the same data width is used in the results. Flat areas of the signal mostly have increased bandwidths in some of the results, so more data samples are included to eliminate the additive noise. Sharp or abrupt changes necessitate a small bandwidth to reduce the estimation bias. In the experiment, the proposed method's smoothing bandwidth ranged between 0 and 10^{−8}. However, the adaptive bandwidths obtained by these methods show considerable differences. First, the optimal bandwidths obtained by RICI are smaller than those of FGBS due to the refining operation.
Compared with ICI, the pointwise bandwidths of FGBS are smoother because smoothing operations are carried out over a series of subintervals while calculating the optimal bandwidth. Hence, sudden changes such as jump or step discontinuities show a lower response in its characteristics compared with the ICI-based methods, which independently compute the optimal bandwidth for each data point. For the 'Blocks' signal, which has many jump discontinuities between flat areas, this phenomenon is observed more clearly in the result. The bandwidth selection in all ICI methods is automatically set to select small bandwidths around the jump discontinuities, while using large bandwidths for flat areas, as shown in the figures. In the OALB-ICI method, the algorithm tests the data points independently of successive values, so 'h' is optimized under the condition set by the variance.
As the procedure indicates, the value of κ is selected as a function of the noise variance and is kept at a low value; the term (κ + ∆κ) adjusts itself, and the optimum adaptive pointwise selection of the smoothing bandwidth is calculated as stated above. In this method, the noise variance is automatically taken into account at each point through the explicitly used lower and upper bounds of the confidence interval. The MSE is used for a quick measure of performance, and the features for the four test signals are listed in Table 1.
Observation of parameter
A small value of κ results in small adaptive bandwidths, and little improvement can be achieved by the traditional or the refined conventional ICI-based adaptive bandwidth selectors. When κ is selected as a large value, considerable improvements in the MSE values can be achieved. In our proposed method, κ is made a function of the noise variance itself, and thus the bandwidth shows a significant effect depending on the noise values.
In the method adopted to calculate the parameter, there is a strong relationship among the calculations of the various parameters. The parameter κ helps reduce the noise between consecutive values. Also, its dependence on the variance enables the algorithm to reach optimum values at every data point, and to reach them as early as possible. In terms of performance in particular, it provides large gains with a proper selection of the variance at every point. A change in the κ value essentially captures the relation between consecutive values and the variance; for example, an abrupt change in the κ value also indicates a large change in the variance. Thus, it relates simply and strongly to the results. As observed, it provides better logarithmic or exponential characteristics for both phase and gain at the points. It is thus possible to obtain better upper and lower values of the confidence interval range around the mean values.
As the observations indicate, keeping κ constant is not possible, because there is always a strong relation between subsequent data values; there are only a few places where one has to eliminate the correlation. As in the wavelet transform, which adapts to abrupt changes, D_s = [L_s, U_s] provides a way of successive adaptation to the optimum values in the range, owing to its ability to realize a nonlinear variable gain.
When comparing the performances of the different methods, i.e., the FGBS method, the traditional ICI, the refined ICI-based method [23], and our optimum adaptive local-fit ICI method, the situation is not as clear as one might expect. In Table 1, the optimal value of the MSE varies from signal to signal. For the Doppler signal, with its continuous point-to-point variations, a smaller value of κ is preferred so that the constantly varying local bandwidth can be represented; in this case, smaller bandwidths are required, which reduces the estimation bias. By contrast, for 'Heavisine', which has slow variations, a large value of κ should be preferred, leading to larger bandwidths that reduce the additive noise [23,24]. In our optimum adaptive bandwidth selection, we make κ a function of the noise values; hence, κ varies at every point on the curve. When comparing with the refined ICI methods, in which the implementation uses a fully data-driven method for choosing κ adaptively, the refined ICI-based bandwidth selector showed varying results for the supported values of κ. The optimum adaptive method shows uniform results for the test signals, with uniform denoising for various values of the noise parameters. We can see that the optimum adaptive local-fit ICI method achieves relatively good results and shows uniformity across all the processed signals, with the MSE as expected.
Recursive implementation
Using the test signal 'Blocks' with white Gaussian noise of zero mean and unit variance, the recursive implementation of LPR is demonstrated. In the work by Z. G. Zhang [23], the bandwidth functions h_RSC and h_MSE are recursively estimated over the interval as in FGBS, while in the TICI method the values are selected from a threshold parameter list and vary accordingly. Similar to TICI, RICI uses h_opt. A subinterval length L = 8 in the FGBS method and the threshold parameter κ = 2.58 were used. The forgetting factors for the ICI methods and the FGBS method are λ_ICI = 0.95 and λ_RSC = λ_MSE = 0.95/L ≈ 0.12, respectively. The estimation window size is N_w = 32, and the forgetting factor in the recursive estimation of the noise variance is λ_σ = 1 − (1/N_w) ≈ 0.97. Fig. 5 shows the estimation results of the FGBS, TICI, and RICI methods and the corresponding local bandwidth functions for the Blocks signal. All these methods work satisfactorily, and the local bandwidth can be adjusted adaptively. The MSE values are 2.2265, 2.1165, and 1.0417 for the FGBS, TICI, and RICI methods, respectively, while the MSE of OALB-ICI is 0.0140, with the window length kept constant throughout the recursion. Table 2 summarizes the observations in the experiments for these classes of bandwidth selectors.
In our method, we rely on the noisy data itself, and the selection of the signal values at a point is independent of the MSE values, including the MSEs of the other data values. We characterize κ as a function of the noise variance, as given in the summary of the procedure used to obtain the local bandwidth.
Fig. 6: Denoising result of LPR with input SNR 10 dB for the signal Bumps by the proposed OALB-ICI method.
The filter used is then Nadaraya–Watson kernel regression, which is based on the known observation values and is categorized as a zero-order approximation to the general nonparametric regression function. The experiments show that it is a relevant and practical method in a number of situations, including approximation, search, and array formation in engineering work. It is represented by the estimator ẑ(x) = Σ_i K(x − x_i) y_i / Σ_i K(x − x_i), where ẑ are the expected (estimated) values of the denoised signal and K(·) is the kernel function based on the difference between the data points and the centered location where the estimation is carried out. We used the Gaussian form of the kernel.
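A minimal sketch of this zero-order (Nadaraya–Watson) estimator with a Gaussian kernel and a pointwise bandwidth is given below; it is a generic implementation of the stated formula, not the authors' code.

```python
import numpy as np

# Nadaraya-Watson kernel regression with a Gaussian kernel and a pointwise
# bandwidth h[j] (such as one produced by an ICI-type selector); a scalar h works too.
def nadaraya_watson(x_obs, y_obs, x_eval, h):
    h = np.broadcast_to(np.asarray(h, dtype=float), np.shape(x_eval))
    z = np.empty(len(x_eval))
    for j, (x0, hj) in enumerate(zip(x_eval, h)):
        w = np.exp(-0.5 * ((x_obs - x0) / hj) ** 2)   # Gaussian kernel weights
        z[j] = np.dot(w, y_obs) / np.sum(w)           # weighted local average
    return z

# Example: denoise the Heavisine samples generated in the previous snippet.
# z_hat = nadaraya_watson(t, noisy, t, h=0.01)
```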
Denoising example based on bumps and audio signal
The signal shown in Fig. 6 is corrupted by white Gaussian noise with an SNR of 10 dB. The corresponding denoising result is shown in the same Fig. 6. The denoised SNR and MSE are 23.04 dB and 0.01609, respectively. The window length is varied over the iterations in this case. This demonstrates that the method is appropriate for denoising unknown signals. An audio signal is used as an example of a time-varying signal, shown in Fig. 7. This signal is corrupted by white Gaussian noise with an SNR of 10 dB. The corresponding denoising result is also shown in Fig. 7. Here, the window length is five successive samples, and the signal is denoised without any iteration. This shows that the method is suitable for denoising time-varying signals within the time constraints of real-time signal processing.
Algorithm reliability and complexity analysis
The algorithm reliability, as studied by Stanković [3], suggests that under limiting conditions the probability of a 'false result' is zero for an error with a bounded distribution, i.e., whenever |x| > (κ + ∆κ)σ(h_s) cannot occur; e.g., it is impossible to obtain a false result for a uniformly distributed error with (κ + ∆κ) > √3. A Gaussian-distributed error gives a false-result probability of less than 0.0001, while a heavy-tailed Laplacian error gives a probability of no more than 0.05 at a point.
The computation of Q̂_s at a given point involves a local sum of successive points in a window. A window around the centered value can be constructed to store the values; we store a sufficient number of consecutive values and the sums of the corresponding window values. This computation can be achieved by known methods [29] in O(n(log n)^{N−1}) time, and the values of the windows containing y can be retrieved in O(n(log n)^N) time. Thus, the estimated values in a window can be computed in O(n(log n)^N) time after a preprocessing step of O(n(log n)^{N−1}) time. Here, the algorithm complexity scales with the number of subsequent values taken into consideration, together with the complexity involved in the Nadaraya–Watson method. The Nadaraya–Watson computation has preprocessing complexity O(n(log n)^{N−1}), storage complexity O(2n), and functional computational complexity O((log n)^N).
Conclusions
Features of different adaptive bandwidth selection methods for LPR have been studied, and these methods have been compared in terms of performance and implementation complexity on test data sets. A new optimum adaptive local-fit ICI-based bandwidth selector and its recursive implementation have been presented. In the simulations, the performance of the proposed optimum adaptive local-fit ICI-based bandwidth selection method is considerably better than that of the other ICI methods. These studies have various applications, such as interpolation of missing data values, testing of patch antennas and PCBs over various bandwidths, image analysis and smoothing, and MRA of non-uniform data. The proposed method does not require any complicated or higher-order approximation; it is in fact the simplest one and is easily embedded in combined hardware and software implementations.
"Computer Science"
] |
First large-scale genomic prediction in the honey bee
Genomic selection has increased genetic gain in several livestock species, but due to the complicated genetics and reproduction biology not yet in honey bees. Recently, 2970 queens were genotyped to gather a reference population. For the application of genomic selection in honey bees, this study analyzes the accuracy and bias of pedigree-based and genomic breeding values for honey yield, three workability traits, and two traits for resistance against the parasite Varroa destructor. For breeding value estimation, we use a honey bee-specific model with maternal and direct effects, to account for the contributions of the workers and the queen of a colony to the phenotypes. We conducted a validation for the last generation and a five-fold cross-validation. In the validation for the last generation, the accuracy of pedigree-based estimated breeding values was 0.12 for honey yield, and ranged from 0.42 to 0.61 for the workability traits. The inclusion of genomic marker data improved these accuracies to 0.23 for honey yield, and a range from 0.44 to 0.65 for the workability traits. The inclusion of genomic data did not improve the accuracy of the disease-related traits. Traits with high heritability for maternal effects compared to the heritability for direct effects showed the most promising results. For all traits except the Varroa resistance traits, the bias with genomic methods was on a similar level compared to the bias with pedigree-based BLUP. The results show that genomic selection can successfully be applied to honey bees.
INTRODUCTION
Genomic selection (Meuwissen et al. 2001) incorporates genomewide marker data into breeding value estimation. Compared to pedigree-based breeding values, the use of genomic data can increase the accuracy of estimated breeding values (EBV), or enable the selection of animals before they are phenotyped. Both strategies have been realized to increase the genetic gain in several livestock species (Doublet et al. 2019;Fulton 2012;Samorè and Fontanesi 2016). Honey bee breeders, by contrast, employ phenotypic selection (De la Mora et al. 2020;Maucourt et al. 2020) or pedigree-based breeding value estimation (Bienefeld et al. 2007;Brascamp et al. 2016;Hoppe et al. 2020). Recently, a high-density SNP chip was developed and genotypes of phenotyped queens are now available to validate genomic prediction (Jones et al. 2020).
Pedigree-based best linear unbiased prediction (PBLUP) of breeding values began in 1994 for the population registered on BeeBreed. The EBV enabled hundreds of mostly Central European bee breeders to improve the quality of their stock . To ensure the quality of the EBV, the program relies on a specialized infrastructure for mating control and an adapted genetic model to account for the peculiarities of the honey bee (Bienefeld et al. 2007;Brascamp and Bijma 2014).
The phenotypes of honey bee colonies for economically relevant traits result from the collaboration of worker groups and queens. In honey yield, for example, the workers of a colony perform foraging and storing, but the queen affects the number of workers via her egg-laying rate, and influences the behavior of the workers via pheromones. Therefore, the genetic model for the traits includes direct and maternal effects for the contribution of workers and queens, respectively.
In commercial honey bee breeding programs, the demands of beekeepers lead to selection traits that differ significantly in terms of methodology and effort for recording and mathematical modelling. Typical aims include increased honey yield, better workability for the beekeeper, and more disease resistance (Petersen et al. 2020;Uzunov et al. 2017). Especially resistance against Varroa destructor is targeted, since this parasitic mite contributes to severe colony losses in numerous countries (Genersch et al. 2010;Guichard et al. 2020;Traynor et al. 2016).
Genomic breeding value estimation in honey bees has been tried in simulation studies, and single-step genomic BLUP (ssGBLUP) appeared as an efficient solution (Gupta et al. 2013) to combine pedigree information with genomic information. The simulations showed that ssGBLUP can increase the accuracy of genomic breeding values considerably and enables high genetic gains, if the infrastructure is appropriately adapted. Augmenting ssGBLUP with trait-specific weights leads to weighted ssGBLUP (WssGBLUP) (Wang et al. 2012), which can increase the prediction accuracy further, as results from other species have shown (Lourenco et al. 2014; Teissier et al. 2019; Vallejo et al. 2019).
To our knowledge, only simulated results on genomic EBV in honey bees have been published until now. In this study, we first report the accuracies and the bias of PBLUP, ssGBLUP, and WssGBLUP for a number of key traits of economic importance in a large breeding population of honey bees.
MATERIALS AND METHODS
Data
Pedigree and performance data from the Apis mellifera carnica population were used, since the genotyped queens belonged to this subspecies, which is native and widespread in Central Europe (Lodesani and Costa 2003;Ruttner 1988;Wallberg et al. 2014). The data were downloaded from BeeBreed in February 14, 2021, totaling 201,304 valid performance tests and pedigree data of 234,519 queens. The oldest queen on the pedigree was born in 1949. Since a large part of the BeeBreed data set was of negligible relevance to the breeding values of the genotyped queens, the data were reduced and refined for the comparison of classical and genomic prediction. Queens with a valid phenotype whose genotypes passed the quality control (see below) were the starting set. In an iterative process, phenotypes of performancetested queens on apiaries from the test year 2010 onwards were included by adding (1) queens tested at the apiaries of the previously added queens, (2) sister queens of the previously added queens, and (3) queens when an ancestor as well as offspring had already been added. Steps (1)-(3) were repeated until no further phenotypes could be added. The pedigree was restricted to the resulting queens and their ancestors. The final enriched data set contained 36,509 phenotypes in a pedigree of 44,183 queens and 4512 sires, which were usually groups of sister queens dedicated to drone production in an isolated geographic area. Table 1 lists the countries of origin for all colonies.
The phenotypes covered honey yield, gentleness, calmness, swarming drive, hygienic behavior, and Varroa infestation development (VID). Honey yield was measured in kg, and the values were corrected for outliers as described in . Gentleness, calmness, and swarming tendency were recorded as marks from 1 to 4 with 4 being the best mark. Records for these traits were discarded if all colonies on an apiary received the same mark. For hygienic behavior, larvae were artificially killed with a pin and the percentage of cleared cells was recorded (Büchler et al. 2013). VID indicates the resistance of a colony against Varroa, based on the change in the level of Varroa infestation from early spring to late summer (see Hoppe et al. 2020 for the calculation of VID). For a measurement of Varroa infestation, a bee sample is taken from the hive, and the number of mites per 10 g bees is determined (Büchler et al. 2013). Table 2 shows the descriptive statistics of the phenotypes available for each trait.
The 100-K-SNP chip (Jones et al. 2020) was used to genotype 2970 queens which were registered on BeeBreed and born between 2009 and 2017. Markers that were called in less than 90% of the samples, had minor allele frequency below 1%, or showed significant deviations from Hardy-Weinberg equilibrium after Bonferroni-correction (χ 2 p value < 0.05 × 10 -5 ) were removed. This left 63,240 markers for further analysis. A total of 312 queens were removed because less than 90% of all the valid markers were called in their samples, indicating low DNA quality. After comparisons of daughter and parent based on the number of opposing homozygotes, 207 queens were removed (Bernstein et al. 2022). Subsequently, 62 samples were removed based on the comparison of genomic and classic relationship matrix (Calus et al. 2011). This left 2389 genotyped queens for further analysis.
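As a rough illustration of the marker quality control just described (call rate at least 90%, minor allele frequency at least 1%, and a Hardy-Weinberg test with Bonferroni correction), a NumPy/SciPy sketch could look like the following. The 0/1/2 genotype coding with NaN for no-calls, the function names, and the exact threshold handling are assumptions of this sketch, not the pipeline used in the study.

```python
import numpy as np
from scipy.stats import chi2

def marker_qc(G, call_rate=0.90, maf_min=0.01, alpha=0.05):
    """Return a boolean mask of markers passing QC.

    G : (n_samples, n_markers) array of 0/1/2 genotypes with np.nan for no-calls.
    The HWE chi-square p values are Bonferroni-corrected over the markers.
    """
    n_samples, n_markers = G.shape
    called = ~np.isnan(G)
    keep = called.mean(axis=0) >= call_rate            # call-rate filter

    p = np.nanmean(G, axis=0) / 2.0                    # allele frequency per marker
    keep &= np.minimum(p, 1.0 - p) >= maf_min          # minor-allele-frequency filter

    hwe_p = np.ones(n_markers)
    for j in np.where(keep)[0]:
        g = G[called[:, j], j]
        obs = np.array([(g == 0).sum(), (g == 1).sum(), (g == 2).sum()], float)
        n = obs.sum()
        q = (obs[1] + 2 * obs[2]) / (2 * n)            # frequency of the counted allele
        exp = n * np.array([(1 - q) ** 2, 2 * q * (1 - q), q ** 2])
        stat = np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-12))
        hwe_p[j] = chi2.sf(stat, df=1)                 # 1 d.f. HWE chi-square test
    keep &= hwe_p >= alpha / n_markers                 # Bonferroni-corrected threshold
    return keep
```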
Model and genetic parameters
The complex collaboration between the workers and the queen of a colony must be reflected in the model, and carefully analyzed in the calculation of genetic parameters (Brascamp and Bijma 2019). The phenotype, y, of a colony is modelled as y = a_W + m_Q + e, where a_W is the direct effect of the worker group in the colony, m_Q is the maternal effect of the queen in the colony, and e is a non-heritable residual. The genetic component of the phenotype will be denoted g = a_W + m_Q. The phenotypic variance, σ²_ph, was calculated according to formula (2) in Brascamp and Bijma (2019) from σ²_a and σ²_m, the additive genetic variances of direct and maternal effects, σ_am, the covariance between direct and maternal effects, σ²_e, the residual variance, and A_base, the average relationship between two workers of the same colony in the base population. The variance components were estimated via AIREML with the complete phenotypic information, using the model for PBLUP (see below). We used A_base = 0.40 (Brascamp and Bijma 2019), because even the oldest queens in our pedigree came from populations with established mating control (Armbruster 1919). The heritabilities of direct and maternal effects, h²_a and h²_m, were calculated according to formulas (6b) and (6c) in Brascamp and Bijma (2019), respectively. We provide two concepts of the heritability of the sum of maternal and direct effects. Firstly, heritability is usually defined as the fraction of phenotypic variance due to additive genetic effects. In honey bees, the corresponding concept is the heritability of the genetic component of the phenotype, h²_g = Var(g)/σ²_ph, which we calculate according to formula (6a) in Brascamp and Bijma (2019). (Notes to Tables 1 and 2: for 6 queens in the data set, no country of origin was given, and they were not genotyped. Honey yield is given in kg. Marks from 1 to 4 were recorded for gentleness, calmness, and swarming drive. Hygiene is given as the percentage of cleared cells. VID is a Varroa resistance score, and higher values indicate more resistance.)
Secondly, in the classical theory of animal breeding, the heritability can be used to predict short-term genetic gain, but h 2 g is unsuitable for this purpose. The BeeBreed data set relies on colony-based selection (CBS), and short-term genetic gain with CBS can be estimated using formulas (18) and (6) from Bernstein et al. (2021) using the heritability of the selection criterion of CBS, h 2 CBS . We calculate h 2 CBS as follows: The numerators of h 2 g and h 2 CBS correspond to the notions of genetic variance in the performance and selection criterion, respectively, as introduced by Du et al. (2021).
Breeding value estimation
We analyzed single-trait models without repeated measurements for the same trait on the same colony. The following mixed linear model was used for PBLUP: y = Xb + Z_a a + Z_m m + e, where y is a vector of observations on colonies; b a vector of fixed effects (year and apiary); a a vector of direct effects of queens, worker groups or sires; m a vector of maternal effects of queens, worker groups or sires; e a vector of residuals; and X, Z_a, and Z_m are known incidence matrices for b, a, and m, respectively. For a, m, and e, the expected values were assumed to equal 0, while their covariance structure was Var(a) = Aσ²_a, Var(m) = Aσ²_m, Cov(a, m) = Aσ_am, and Var(e) = Iσ²_e, where A is the honey bee-specific numerator relationship matrix derived from pedigree (Brascamp and Bijma 2014), I is an identity matrix, and σ²_a, σ²_m, σ_am and σ²_e are the additive genetic variances of worker and queen effects, their covariance, and the residual variance, respectively.
The model equation and variances for ssGBLUP were the same as for PBLUP, except for the fact that matrix H replaced matrix A. Matrix H was constructed from the numerator relationship matrix A, which is calculated from pedigree information, and from the marker information in the following steps (Aguilar et al. 2010; Christensen and Lund 2010). The genomic relationship matrix G (VanRaden 2008, method 1) was constructed as G = ZZ'/(2 Σ_i p_i(1 − p_i)), where p_i is the allele frequency of the SNP at locus i and Z = M − P, with M containing the marker information of all genotyped queens coded as 0, 1, 2, and matrix P defined column-wise by P_ji = 2p_i for all j. Matrix G was adjusted to A by adjusting the means of its diagonal and off-diagonal elements as described by Christensen et al. (2012). To obtain an invertible genomic relationship matrix, we used the weighted genomic relationship matrix G_w, a weighted combination of G and A_g, where A_g is the submatrix of A relating to the genotyped animals. Finally, the inverse of H was computed as H⁻¹ = A⁻¹ + [0 0; 0 G_w⁻¹ − A_g⁻¹], i.e., A⁻¹ augmented in the block of the genotyped animals by the difference between the inverses of G_w and A_g. Method WssGBLUP is an expansion of ssGBLUP which employs weights for all marker loci in the construction of the genomic relationship matrix. In order to assign a large weight to loci with a high impact on the trait, the weight of a single marker locus corresponds to the amount of additive genetic variance explained by this locus. To calculate the additive genetic variance explained by each marker, a BLUP equation for the SNP effects was used.
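A compact NumPy sketch of this matrix construction (VanRaden's first method, blending of G with A_g, and the standard single-step H⁻¹) is given below. The blending weight, the assumption that genotyped animals occupy the last block of A, and all variable names are illustrative choices of this sketch, not the exact settings used in the study.

```python
import numpy as np

def vanraden_G(M, p):
    """VanRaden (2008, method 1) genomic relationship matrix.

    M : (n_genotyped, n_markers) 0/1/2 genotype matrix; p : allele frequencies.
    """
    Z = M - 2.0 * p                      # column-wise centring by 2 * p_i
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def h_inverse(A_inv, A_g_inv, G, A_g, w=0.05):
    """Inverse of H for ssGBLUP (Aguilar et al. 2010; Christensen & Lund 2010).

    G is first blended with the pedigree relationships of the genotyped animals
    to make it invertible; w = 0.05 is a common choice, used here as a placeholder.
    Genotyped animals are assumed to occupy the last block of A.
    """
    Gw = (1.0 - w) * G + w * A_g
    H_inv = A_inv.copy()
    n_g = G.shape[0]
    H_inv[-n_g:, -n_g:] += np.linalg.inv(Gw) - A_g_inv
    return H_inv
```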
The model equation and variances for WssGBLUP were the same as for ssGBLUP, except for the fact that matrix G* replaced matrix G. Matrix G* was constructed from the vectors of direct and maternal additive genetic effects, a and m, and the genomic relationship matrix G_w, which were obtained from ssGBLUP. The vectors of the direct and maternal SNP effects, u and v, were estimated from a and m, where p_i and M have the same values as in ssGBLUP.
SNP weights d were calculated using the average of the direct and maternal SNP effects, deviating from the original algorithm which considered only single-trait models (Wang et al. 2012) as follows: The trait-specific matrix G * was calculated by the following formula: where Z is the same matrix as in ssGBLUP. Programs from the BLUPF90 software (Misztal et al. 2002) were used to estimate the genetic parameters, predict breeding values and calculate relationship matrices G and G * . To account for the specifics of honey bees, PInCo (Bernstein et al. 2018) was used to calculate the pedigreebased relationship matrices. Equations (9)-(12) were implemented in R (R Development Core Team 2020).
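For the weighted step, a minimal sketch of how marker weights in the spirit of Wang et al. (2012) and a trait-specific G* might be assembled is shown below. The weight formula, which here averages the direct and maternal SNP effects before squaring, is an assumption paraphrasing the description above, so treat the details as illustrative rather than the authors' exact computation.

```python
import numpy as np

def wssgblup_Gstar(M, p, u_hat, v_hat):
    """Trait-specific genomic relationship matrix with marker weights.

    u_hat, v_hat : estimated direct and maternal SNP effects.
    As an assumption, each marker's weight is taken proportional to the variance
    explained by the average of its two effects, then rescaled to a mean of 1.
    """
    Z = M - 2.0 * p
    d = ((u_hat + v_hat) / 2.0) ** 2 * 2.0 * p * (1.0 - p)   # explained variance per marker
    d = d / d.mean()                                          # keep the overall scale of G
    return Z @ np.diag(d) @ Z.T / (2.0 * np.sum(p * (1.0 - p)))
```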
Validation
We performed two types of cross-validation. The generation validation simulated the selection of candidates before they were phenotyped, which is a common scenario in genomic selection. However, the differences in management practices, climate, and vegetation between apiaries can influence the results of the generation validation. The five-fold crossvalidation was designed to evaluate predicted breeding values with a reduced impact of the differences between apiaries.
In the generation validation, EBV were predicted using PBLUP, ssGBLUP and WssGBLUP (1) without the phenotypes of all queens born in 2017 or later, and (2) without the phenotypes of queens born in 2016 or later. For the validation procedure, the EBV of the 265 genotyped queens born in 2017 from scenario 1 were merged with the EBV of the 994 genotyped queens born in 2016 from scenario 2, and likewise for the EBV of the corresponding worker groups. Thereby, the validation sets of the two scenarios, i.e., the genotyped queens born in 2017 and 2016, respectively, could be treated as a single validation set. In the five-fold cross-validation, only apiaries with at least five performance-tested queens were included to ensure reliable estimates of fixed effects. This left 1281 genotyped queens for validation. Each apiary was randomly split into five equally sized partitions, splitting the 1281 queens into five partitions. For each partition, EBV were estimated using PBLUP, ssGBLUP and WssGBLUP without the phenotypes of the animals on this partition. The results from all partitions were merged, so that the five partitions could be treated as a single validation set of 1281 queens and their worker groups. The procedure was repeated six times from the split of the apiaries on.
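A sketch of the apiary-wise five-fold split described above (each apiary with at least five tested queens is split into five partitions) could look like the following; the data-frame columns, the random seed, and the fold coding are assumptions of this sketch.

```python
import numpy as np
import pandas as pd

def apiary_five_fold(df, min_per_apiary=5, n_folds=5, seed=0):
    """Assign a fold (0..4) to every queen, splitting each apiary separately.

    df must contain a 'queen_id' and an 'apiary' column; apiaries with fewer
    than min_per_apiary tested queens are excluded (fold = -1).
    """
    rng = np.random.default_rng(seed)
    fold = pd.Series(-1, index=df.index)
    for _, idx in df.groupby("apiary").groups.items():
        idx = np.asarray(list(idx))
        if len(idx) < min_per_apiary:
            continue
        idx = rng.permutation(idx)
        fold.loc[idx] = np.arange(len(idx)) % n_folds   # near-equal partition sizes
    return fold
```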
To assess the accuracy of PBLUP, ssGBLUP, and WssGBLUP, we calculated the accuracy of the prediction of the genetic component of the phenotype, g, where the prediction ĝ was calculated for each colony, C, as ĝ_C = â_W + m̂_Q, with â_W the predicted direct effect of the worker group of C, m̂_Q the predicted maternal effect of the queen of C, and y − Xb̂ the vector of phenotypes corrected for fixed effects. We prove Eq. (14) in the Appendix (Text S1). For each method to predict EBV, the phenotypes corrected for fixed effects were calculated using fixed effects from the same method. In the generation validation, PBLUP, ssGBLUP and WssGBLUP were run on the complete data set to obtain appropriate fixed effects. In the five-fold cross-validation, the fixed effects for the correction of the phenotypes were taken from the same run of the same partition as the predicted phenotypes.
A bootstrap procedure was used to test whether the accuracies of WssGBLUP and ssGBLUP were significantly higher than the accuracy of PBLUP. In total, 10,000 bootstrap sample vectors were constructed by sampling validation queens with replacement, and the accuracy with PBLUP, ssGBLUP, and WssGBLUP was calculated for each vector. Two methods were considered significantly different if the same method had higher accuracy in 97.5% of all sample vectors (p value of 0.05 in a two-sided test). Similar bootstrapping methods were used in other studies (Iversen et al. 2019; Legarra et al. 2008).
The regression coefficient, b 1 , of y − Xb on b g was used as a measure of bias. Values of b 1 < 1 and b 1 > 1 indicate inflation and deflation of the genetic components of the phenotypes compared to the phenotypes corrected for fixed effects, respectively.
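The validation statistics described in the last paragraphs (an accuracy based on ĝ and the corrected phenotypes, the regression slope b1 used as a bias measure, and the bootstrap comparison of two methods) can be sketched as below. The plain correlation is a simplification of Eq. (14), whose exact scaling is not reproduced here, and all names are assumptions of this sketch.

```python
import numpy as np

def accuracy(y_corr, g_hat):
    """Correlation between phenotypes corrected for fixed effects and predicted g.
    (A simplification of Eq. (14); the paper's exact scaling is not reproduced.)"""
    return np.corrcoef(y_corr, g_hat)[0, 1]

def bias_slope(y_corr, g_hat):
    """Regression coefficient b1 of y - Xb on g_hat; b1 < 1 indicates inflation."""
    return np.polyfit(g_hat, y_corr, 1)[0]

def bootstrap_compare(y_corr, g_a, g_b, n_boot=10_000, seed=1):
    """Share of bootstrap samples (queens drawn with replacement) in which
    method A has higher accuracy than method B."""
    rng = np.random.default_rng(seed)
    n = len(y_corr)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        wins += accuracy(y_corr[idx], g_a[idx]) > accuracy(y_corr[idx], g_b[idx])
    return wins / n_boot   # >= 0.975 corresponds to the two-sided 5% level used above
```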
Genetic parameters
Estimates of the genetic parameters are shown in Table 3. The heritability of the genetic component of the phenotype, h 2 g , was very high for gentleness and calmness, medium for hygienic behavior, honey yield and swarming drive, low for VID. All traits showed considerable negative genetic correlations between maternal and direct effects. The heritability for direct effects was considerably larger than the heritability for maternal effects in gentleness, calmness, and hygienic behavior, but equal to or smaller than the heritability for maternal effects for all other traits.
Accuracy of breeding values
The accuracies of the methods under investigation in the generation validation are shown in Fig. 1. Compared to PBLUP, the accuracy was improved with WssGBLUP for honey yield (94%), swarming drive (7%), gentleness (6%), calmness (5%), and VID (20%), and with ssGBLUP, improvements were observed for honey yield (48%), VID (41%), and gentleness (6%). The improvement with WssGBLUP over PBLUP for honey yield was statistically significant. No improvement was observed for hygienic behavior, and ssGBLUP did not yield a higher accuracy than PBLUP for calmness and swarming drive.
The accuracies of the methods under investigation in the five-fold cross-validation are shown in Fig. 2. Improvements over PBLUP were achieved for swarming drive (20%), honey yield (15%), calmness (2%), and gentleness (3%) with WssGBLUP. Improvement over PBLUP with ssGBLUP was achieved for honey yield (10%) and swarming drive (3%). The improvements with WssGBLUP over PBLUP were statistically significant for calmness and swarming drive. No improvement was observed for hygienic behavior and VID.
Overall, both validations showed similar results, although the accuracy was higher in the five-fold cross-validation, and the increases in accuracy with ssGBLUP and WssGBLUP over PBLUP were higher in the generation validation.
Bias of breeding values
Bias was calculated as the regression coefficient b 1 of phenotypes corrected by fixed effects on the predicted genetic component of the phenotype. The results for EBV from PBLUP, ssGBLUP and WssGBLUP in the generation validation are shown in Fig. 3. The results for all three methods showed inflated EBV estimates. The regression coefficient b 1 deviated the most from 1 for VID with WssGBLUP and honey yield with PBLUP by −0.59, and −0.52, respectively. While WssGBLUP showed overall the most inflation, the difference between PBLUP and WssGBLUP ranged only up to 0.16, which was relatively small compared to the deviation from 1 with PBLUP. For ssGBLUP, the results were overall similar to PBLUP, although ssGBLUP was considerably less biased than PBLUP for honey yield and VID.
The results for EBV from PBLUP, ssGBLUP and WssGBLUP in the five-fold cross-validation are shown in Fig. 4. For honey yield, gentleness, and calmness, the bias of the EBV was negligible, although the EBV from WssGBLUP tended towards inflation. For swarming drive and VID, all methods showed similarly inflated EBVs with regression coefficient b 1 < 0.8. For hygienic behavior, EBVs from PBLUP were nearly unbiased, while the genomic methods produced inflated EBVs. Table 3. Estimated variance and covariance components, genetic parameters derived from these (co)variances.
Genetic parameters and quality of breeding values
The estimated heritabilities (Table 3) were in line with the results for the multiple-trait models of the complete BeeBreed data set. The results on the accuracies in the generation validation (Fig. 1) and in the five-fold cross-validation (Fig. 2) showed improvements with WssGBLUP over PBLUP for honey yield, gentleness, calmness, and swarming drive. These results were within the range reported for data sets of similar size in dairy goats, or for traits affected by maternal effects in beef cattle (Lourenco et al. 2015) or pigs (Putz et al. 2018). The results on the difference in accuracy between WssGBLUP and PBLUP can be explained with the results on the heritabilities (Table 3). Traits with a higher heritability for maternal effects than for direct effects can be expected to show higher increases in accuracy with WssGBLUP and ssGBLUP over PBLUP than other traits, because simulation studies in honey bees showed greater increases in accuracy with ssGBLUP over PBLUP for maternal effects than for direct effects. This result stands out from other species where maternal effects are modelled: in beef cattle (Lourenco et al. 2018) and in simulation studies for beef cattle and pigs (Lourenco et al. 2013; Putz et al. 2018), the accuracy for direct effects showed higher increases with ssGBLUP over PBLUP than the accuracy for maternal effects.
The results of the current study are in line with the results from the simulations on honey bees ). On the one hand, honey yield and swarming drive showed the highest improvements in accuracy with WssGBLUP over PBLUP, and the heritability for maternal effects is equal to or greater than the heritability for direct effects in both traits. On the other hand, gentleness, calmness, and hygienic behavior showed less or even no improvements in accuracy with WssGBLUP over PBLUP, and the heritability for direct effects is twice as great as the heritability for maternal effects in these traits.
The results for the Varroa resistance-related traits were also affected by problems in gathering data. The number of genotyped queens with phenotype for both traits was about 200 queens lower than for honey yield, gentleness, and calmness. Furthermore, the number of phenotyped queens on apiaries with a genotyped queen (Table 2) was low for the Varroa-related traits, which might have led to less accurate fixed effects. The results for VID are also due to the low heritability of the genetic component of the phenotype for this trait, because simulation studies in honey bees and other species show that traits with low heritability also have low accuracy of pedigree-based and genomic EBV (Gowane et al. 2019;Gupta et al. 2013). However, Varroa-specific hygienic behavior is the subject of ongoing research (Conlon et al. 2019;Farajzadeh et al. in prep;Mondet et al. 2020). The discovery of new quantitative trait loci (QTL) which are then covered by causative SNPs on a new chip can increase accuracy for the Varroa-related traits.
The accuracy of ssGBLUP was slightly lower than the accuracy of WssGBLUP for most traits. This result is common in studies for several other agricultural species using WssGBLUP (e.g., Lu et al. 2020;Teissier et al. 2019;Wang et al. 2014). In simulation studies Wang et al. 2012), WssGBLUP had higher accuracy than ssGBLUP when the trait was controlled by few QTL, and both methods showed equal accuracy when the trait was polygenic. As the accuracy for VID was higher with ssGBLUP than with WssGBLUP in both validations, the genetic architecture of the trait appears to be highly polygenic. However, this is a preliminary conclusion, as VID has the lowest heritability of the traits we considered, due to the many factors that affect it (see Guichard et al. 2020 for a review).
The accuracies in the five-fold cross-validation were for the majority of the traits higher than in the generation validation. This is due to the fact that in the five-fold cross-validation, sibling groups are evenly distributed across the partitions, while the phenotypes of whole sibling groups might be removed for the calculation of EBVs in the generation validation. Therefore, the five-fold cross-validation is a validation within sibling groups, while the generation validation is similar to a validation across sibling groups. Studies in other species found that validations within sibling groups show higher accuracies than validations across sibling groups Kjetså et al. 2020;Legarra et al. 2008). The standard errors of the accuracies in the five-fold cross-validation were extremely small in our study, but the accuracies for individual partitions showed large differences. This suggests that the predicted breeding values were stable across the repetitions, although the results on single partitions were very different.
According to a simulation study in honey bees , the size of the reference population in our study is close to the minimal size which should be available to initiate a breeding program. We expect the reference population to grow in the future, when breeders start to apply genomic selection.
The larger reference population is likely to obviate the need to run WssGBLUP instead of ssGBLUP, since a simulation study showed that WssGBLUP and ssGBLUP yield the same results for large reference sets . The larger reference population will also result in an increase of the accuracy of genomic methods, as results from other species demonstrate (Daetwyler et al. 2012;Lourenco et al. 2015;Mehrban et al. 2017;Moser et al. 2009).
In the generation validation, inflation was observed with all methods (Fig. 3). However, considerable bias was observed neither in simulations for honey bees for PBLUP and ssGBLUP, nor in the Austrian data set (Brascamp et al. 2016) with PBLUP. Since only limited bias was observed in the five-fold cross-validation (Fig. 4; the mean regression coefficients b1 of PBLUP, ssGBLUP and WssGBLUP were calculated across the six repetitions, with standard errors over the repetitions smaller than 0.04 for all traits except Varroa infestation development, where they ranged up to 0.08), the inflation in the generation validation is possibly due to genotype by environment interactions (GxE). GxE were found, e.g., in Italian honey bees (Costa et al. 2012), an Austrian
honey bee breeding program (Brascamp et al. 2022), and by a wider study across Europe (see Meixner et al. 2014 for an overview). The five-fold cross-validation was less susceptible to GxE, since this validation only masked the phenotypes of one-fifth of the colonies on an apiary. Further analysis is required to confirm that the bias in the present study is due to GxE, and localize regions of similar GxE. The bias with genomic methods compared to PBLUP can be reduced by, e.g., increasing the share of the classic relationship matrix A g in Eq. (9) (McMillan and Swan 2017; Misztal et al. 2017).
Practical application of genomic selection in the honey bee The availability of genomic breeding values offers new possibilities in breeding schemes for honey bees. In classical breeding schemes, queens spend the first months of their life building a colony. When the queens are 1 year old, they are used as droneproducing queens to inseminate other virgin queens, or phenotyped to be selected as dams of new queens when they are 2 years old. A simulation study of innovative genomic breeding schemes suggested to genotype drone-producing queens before they are employed, and to employ only the candidates with the highest genomic breeding values. This requires additionally that phenotyped queens are genotyped to achieve a high accuracy of selection. According to the simulations, a budget to genotype at least 1000 queens per year should be available to increase genetic gain considerably. Another simulation (Brascamp et al. 2018) study argued for a different genomic breeding scheme, where several generations of queens are bred within a single summer by genomic selection, and phenotyped in the following year. Since this scheme implies a shorter generation interval, extremely high genetic gain would be possible, if the scheme was practically feasible. Gathering genomic data from honey bees requires special considerations, due to their small body size, and their genetic diversity within a hive. Non-lethal ways to genotype queens are available (Jones et al. 2020), but require further development for commercial applications. The exuviae which queens leave behind after hatching offer a non-lethal option to genotype virgin queens, but just one exuvia is available for each queen, and exuviae showed low DNA quality in several cases. Relying purely on this technique in the present state could require breeders to forgo queens simply because the genotyping failed. Alternatively, drones can be gathered from a hive to genotype the queen, since drones are haploid offspring. However, collecting a sufficient number of drones in the first months after the queen's hatching is impossible in routine breeding, since a young queen will only lay worker eggs to grow her colony.
CONCLUSIONS
WssGBLUP offers significantly greater accuracy than PBLUP for honey yield, calmness, and swarming drive. For gentleness, the accuracy of WssGBLUP was greater than the accuracy of PBLUP to a similar degree as for calmness, but the difference remained below the threshold for significance. For all traits, except the Varroa resistance traits, the bias with WssGBLUP and ssGBLUP was on a similar level compared to the bias with PBLUP. For the Varroa resistance traits, the genomic methods offer too little improvement over PBLUP to be recommended based on the current data set, which is likely due to the size of the reference population. A larger reference population or the discovery of new causative SNPs for Varroa resistance are required to increase the accuracy of genomic methods for hygienic behavior and VID. The results suggest that genomic selection can be successfully applied to honey bees.
DATA AVAILABILITY
The genotypes used for this study are available in Jones et al. (2020) (https://doi.org/10.5061/dryad.gxd2547gp). The phenotype data of this study belong to several breeding associations and are unavailable due to legal reasons. Requests to access further raw material should be directed at the authors of this study.
"Biology",
"Environmental Science"
] |
Study on the Structure and Dielectric Properties of Zeolite/LDPE Nanocomposite under Thermal Aging
Nanodoping is an effective way to improve the dielectric properties and the aging resistance of polyethylene. Nano-zeolite has a nano-level porous structure and larger specific surface area than ordinary nano-inorganic oxide, which can be used to improve dielectric properties of low-density polyethylene (LDPE) nanocomposite. The zeolite/LDPE nanocomposites were prepared and subjected to thermal aging treatment to obtain samples with different aging time. Using scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR) and the differential scanning calorimetry (DSC) test to study the microscopic and structure characteristics, it was found that nano-zeolite doping can effectively reduce the thermal aging damage to the internal structure of the nanocomposite; carbonyl and hydroxyl decreased significantly during the thermal aging time, and the crystallinity effectively improved. Nano-zeolite doping significantly improved the morphology and strengthened the aging resistance of the nanocomposite. In the dielectric strength test, it was found that nanodoping can effectively improve the direct current (DC) and alternating current (AC) breakdown field strength and the stability after the thermal aging. The dielectric constant of nanocomposite can be reduced, and the dielectric loss had no obvious change during the aging process. Moreover, the zeolite/LDPE nanocomposite with the doping concentration of 1 wt % had the best performance, for the nano-zeolite was better dispersed.
Introduction
In the field of electrical insulation, especially for cable manufacturing industries, polyethylene is the most important power cable insulation material with the excellent insulating performance and mechanical properties [1,2]. In order to obtain better performances of polyethylene materials, many studies have tried doping polyethylene with nano-inorganic oxides. The dielectric properties of doped polyethylene are improved by using special effects under nanometer size [3][4][5] and interfacial effect between nanoparticles and polyethylene matrix particularly [6]. Various studies have shown that a small amount of nanodoping can significantly improve the dielectric properties of polyethylene, such as increasing its breakdown field strength, reducing its conductance under high electric field, reducing space charge accumulation and so on [7][8][9]. Common nanodopants are nano-inorganic oxides such as nano SiO 2 , Al 2 O 3 , MgO, TiO 2 , ZnO and so on [10][11][12][13][14]. Most of the studies have achieved some results and progress, but many problems remain to be solved. The microscopic mechanism of the modification of oxide nanoparticles is still at the level of qualitative description. It is generally believed that the interfacial region between nanoparticles and polymer matrix plays an important role, the special structure and dielectric behavior in the interfacial region are the key to the performance improvement [15,16]. Therefore, the mechanism of nanodoping modification is more likely to be the result of nanostructure in interfacial region rather than the nanoparticle itself, and the research on the polyethylene modification of nanodoping with special nanostructures from the perspective of microstructure has also been developed, such as the doping modification of nanoparticles with porous structure [17,18].
While studying the nanodoping modification of polyethylene materials, the aging of the insulating material research is also very important [19,20]. For high voltage and ultra-high voltage cables under the environment of high temperature and high electric field for long-term working, the insulation material is easy to accelerated aging and cause insulation failure [21,22]. Therefore, it is necessary to study the structure and dielectric properties of nanocomposite polyethylene after thermal aging. Some studies suggest that nanodoping can improve the thermal aging properties of polymer materials by improving the damage of molecular chains caused by thermal stress during thermal aging [23]. The reason is also related to the interaction between nanoparticles and polymer matrix [24]. In this paper, nano-zeolite polyethylene composites were prepared by using nano-zeolite with nano-porous structures as dopants and the aging characteristics of the composite materials were also studied. Nano-zeolites have a larger specific surface area than ordinary nanometer inorganic oxides and a special porous structure to enhance the interface effect in the nano-dielectric, so as to increase breakdown strength, reduce conductivity and suppress the space charge injection under high electric field, and also to enhance the thermal aging resistance of the nanocomposite. Under the condition of thermal aging, samples with different doping ratio and aging time were obtained, their structure and dielectric properties were studied, and the reason and mechanism of improving the aging performance were analyzed.
Materials
The base material, low-density polyethylene (LDPE) model NPC-LDPE, was produced by the Petrochemical Commercial Company of Iran (Tehran, Iran); its density was 0.92 g/cm³. Type NaY zeolite was selected as the nanoscale dopant. It was produced by Tianjin Nanhua Catalyst Co. LTD and prepared for laboratory use; the purity of the nano-zeolite was higher than 99.5%. The molecular formula of the NaY zeolite nanoparticles is Na56[(AlO2)56(SiO2)126]·250H2O. The porous NaY zeolite has a pore size of about 0.74 nm and a particle size of about 50 nm. All materials were dried in an oven at 60 °C for more than 24 h prior to blending. Two kinds of samples with different mass fractions of NaY zeolite (1 wt % and 3 wt %) were prepared by melt blending with LDPE in a torque rheometer (Harbin Hapro Electrical Technology Co., LTD, Harbin, China) at 393 K for 10 min. The prepared nanocomposites were then placed in a flat vulcanizing machine (Huzhou Shuangli Automation Technology Equipment Co., LTD, Huzhou, China) at 393 K for about 15 min under 10 MPa pressure to form films. The film thickness was 200 µm for the breakdown tests and 100 µm for the dielectric spectroscopy and FTIR tests, respectively.
Aging Process
A constant-temperature oven was used for the thermal aging experiment, and 95 °C was selected as the aging temperature. During thermal aging, the air blower was turned on to make the temperature in the oven uniform and keep the air composition consistent, so as to facilitate the aging oxidation reaction. In this paper, three kinds of materials were studied: pure LDPE and type NaY zeolite/LDPE nanocomposites with nano-zeolite mass percentages of 1 wt % and 3 wt %.
Morphology and Structure Tests
The morphology of zeolite/LDPE nanocomposite was characterized by a Hitachi SU8020 scanning electron microscopy (SEM) (Hitachi High-Tech Co., LTD, Tokyo, Japan). Before the SEM test, the sample with a thickness of 200 µm was brittle fracture in liquid nitrogen, and the fracture surface of samples was gold-plated by E-1045 Ion Sputter instrument (Hitachi High-Tech Co. LTD, Tokyo, Japan) to improve the electrical conductivity of the sample and observed by SEM equipment subsequently.
Fourier transform infrared (FTIR) absorption spectra of the pure LDPE and zeolite/LDPE nanocomposite films with 100 µm thickness were recorded on a JASCO FT/IR-6100 spectrometer (JASCO Corporation, Tokyo, Japan). The tested wavenumber range covered the mid-infrared (MIR) region from 4000 to 400 cm−1 with a spectral resolution of 2 cm−1. The infrared absorption spectrum of each sample was obtained in transmission mode; each sample was scanned 5 times and the scans were averaged. A DSC-1 differential scanning calorimetry (DSC) analyzer manufactured by Mettler Toledo (Zurich, Switzerland) was used to record the heat absorption and release curves of the samples as a function of temperature. All samples were cut into small pieces of about 7 mg each, and the thermal analysis was carried out in a nitrogen atmosphere. The heating program ran from 20 to 150 °C at 10 °C/min. The heat absorption and release curves were recorded, and the heat absorbed during melting was calculated by integration. In order to retain the influence of the thermal aging process on the crystallinity of the sample, the heating curve and the value obtained during the first heating cycle were used for the crystallinity calculation. Subsequently, the 100% melt enthalpy of PE was used to calculate the crystallinity of the sample. To ensure the accuracy of the experimental data, all test samples were scanned twice and the results averaged.
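The crystallinity computation from the first heating run can be sketched as below. The 100% melting enthalpy of polyethylene is taken here as roughly 293 J/g, a commonly quoted literature value, and the integration window is an assumption of this sketch; neither number is given in the text above.

```python
import numpy as np

def crystallinity(temperature_C, heat_flow_W_per_g, heating_rate_C_per_min,
                  t_lo=60.0, t_hi=130.0, dH_100=293.0):
    """Crystallinity from a DSC heating curve.

    The melting enthalpy is the integral of heat flow over time across the
    melting window [t_lo, t_hi]; dH_100 (J/g) is the enthalpy of 100% crystalline PE.
    """
    mask = (temperature_C >= t_lo) & (temperature_C <= t_hi)
    time_s = temperature_C[mask] / heating_rate_C_per_min * 60.0   # constant heating rate assumed
    dH_m = np.trapz(heat_flow_W_per_g[mask], time_s)               # J/g absorbed during melting
    return dH_m / dH_100
```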
Breakdown Test
A JNC801-type insulation dielectric strength tester (Shanghai Wang Xu Electric Co., LTD, Shanghai, China) was used for the DC and AC breakdown tests. Its output voltage range was 0~80 kV AC, and its ramp rate was 2 kV/s. For the DC breakdown test, a half-wave rectifier filter circuit was used for rectification. The test electrodes were a ball-ball pair with a diameter of 25 mm. In order to prevent surface discharge, the tested samples, with an average thickness of 200 µm, and the electrodes were immersed in transformer oil. DC and AC breakdown tests were carried out 10 times for each sample and the data were recorded. The Weibull distribution was used to calculate the breakdown characteristic function and breakdown field strength [25].
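A minimal sketch of estimating the two Weibull parameters (shape β and the scale E0, i.e., the 63.2% breakdown field) from a set of measured breakdown strengths is given below, using median-rank linear regression on the linearized CDF. The plotting-position formula and the example values are conventional assumptions, not necessarily the procedure of Ref. [25].

```python
import numpy as np

def weibull_fit(breakdown_fields):
    """Estimate Weibull shape (beta) and scale (E0) by median-rank regression.

    Uses ln(-ln(1 - F)) = beta * ln(E) - beta * ln(E0), with F from Bernard's
    median-rank approximation F_i = (i - 0.3) / (n + 0.4).
    """
    e = np.sort(np.asarray(breakdown_fields, float))
    n = len(e)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(e)
    y = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x, y, 1)
    E0 = np.exp(-intercept / beta)
    return beta, E0

# usage with ten illustrative breakdown values (kV/mm)
beta, E0 = weibull_fit([310, 325, 298, 340, 355, 330, 315, 302, 345, 360])
```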
Dielectric Spectroscopy Test
A Broadband Dielectric Spectrometer, model Alpha-A, manufactured by Novocontrol (Frankfurt, Germany), was used for the dielectric spectrum measurements of pure LDPE and the zeolite/LDPE nanocomposites. It can measure over the frequency range from 3 × 10⁻⁶ Hz to 4 × 10⁷ Hz. The samples used for the dielectric spectroscopy tests were 100 µm thick. Aluminum electrodes with a diameter of 25 mm were deposited on the sample surfaces using a vacuum coating machine (KYKY TECHNOLOGY CO., LTD., Beijing, China). The samples were then placed in a constant-temperature oven, dried at 60 °C for 24 h, and the dielectric spectrum was measured over a frequency range of 10⁻¹ to 10⁷ Hz. The relative dielectric constant and dielectric loss (tanδ) were calculated from the measured results.
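As a note on how the relative permittivity and loss tangent follow from the measured quantities for a parallel-plate sample, a small sketch is given below. The electrode diameter and film thickness are the values stated above, while the measured parallel capacitance and conductance inputs and the function name are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity_and_tand(C_parallel, G_parallel, freq_hz,
                                   diameter_m=25e-3, thickness_m=100e-6):
    """Relative permittivity and tan(delta) of a parallel-plate sample.

    C_parallel (F) and G_parallel (S) are the measured parallel capacitance and
    conductance at frequency freq_hz.
    """
    area = np.pi * (diameter_m / 2.0) ** 2
    eps_r = C_parallel * thickness_m / (EPS0 * area)       # plate-capacitor relation
    tan_d = G_parallel / (2.0 * np.pi * freq_hz * C_parallel)
    return eps_r, tan_d
```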
Results and Discussion
Figure 1 shows the SEM images after the first and the fourth thermal aging cycles for the different samples: pure polyethylene and the zeolite/LDPE nanocomposites doped with 1 wt % and 3 wt % nano-zeolite. The SEM images clearly show the changes in the microstructure of the samples during thermal aging. As can be seen from Figure 1a, for pure LDPE after the first thermal aging cycle the material interior began to develop tiny pore defects, and as the aging progressed to the fourth cycle, shown in Figure 1b, the pore defects increased and more obvious gaps appeared in the cross-section structure. This indicates that thermal aging under aerobic conditions caused progressive oxidation inside the polyethylene, which expanded the size and number of defects and degraded the mechanical properties, making the broken gaps on the cross section deeper and more obvious. However, the microstructure of the zeolite/LDPE nanocomposite was clearly different. From the image of the zeolite/LDPE nanocomposite with a zeolite doping concentration of 1 wt % shown in Figure 1c, the internal structure showed no obvious change after the first thermal aging cycle, and the nanoparticles were relatively evenly dispersed in the nanocomposite. The size of all the nano-zeolite particles was below 100 nm; some particles were aggregated, but not closely. Hardly any pore defects were found, and there was no clear boundary between the nanoparticles and the polyethylene matrix. When the aging time reached 4 cycles, still no obvious pore defects were found inside the nanocomposite (Figure 1d), but the roughness of the fracture surface was more pronounced. Figure 1e shows the microscopic image of the zeolite/LDPE nanocomposite with a doping concentration of 3 wt % after the first thermal aging cycle. The microstructure was similar to that of the 1 wt % sample: there were no apparent porosity defects, only more zeolite nanoparticles, and the particle size was larger. This suggests that, at high doping concentration, part of the zeolite nanoparticles agglomerated. After 4 aging cycles, Figure 1f shows that pore defects similar to those in pure LDPE began to appear in the interior of the sample, although fewer than in pure LDPE. Moreover, the bonding between the large nano-zeolite particles and the matrix was not tight enough, and obvious boundaries were visible. It can be judged from the SEM images that nano-zeolite doping effectively inhibits the oxidation and porosity generated in polyethylene during thermal aging, and that the doping concentration of 1 wt % gives the best aging resistance. Owing to the porous structure of the nano-zeolite, parts of the polyethylene molecular chains may enter the pores, which makes the polyethylene structure more stable, inhibits damage and defect formation during thermal aging, and improves the anti-aging performance.
When the zeolite doping concentration was 3 wt %, the size of the nanoparticles increased due to agglomeration, and the defects between the particles and the matrix increased, resulting in a deterioration of the aging resistance of the nanocomposite.
The above phenomena were also verified by FTIR spectroscopy. Figure 2 shows the infrared absorption spectra of pure polyethylene and of the zeolite/LDPE nanocomposite with 1 wt % doping concentration at different aging cycles. As can be seen from Figure 2a, the FTIR absorption curve shows all the characteristic absorption peaks of medium and strong intensity of LDPE [26]. The absorption peak at 721 cm−1 corresponds to the rocking motion of the larger atomic group -(CH2)n- (n ≥ 4); the absorption peak located at 1460 cm−1 can be attributed to the scissor bending vibration of -CH2- or the antisymmetric stretching vibration of -CH3. In addition, the absorption peak at 1367 cm−1 is assigned to the symmetrical stretching vibration of -CH3. The broad peak near 2900 cm−1 is composed of two absorption peaks at 2850 and 2926 cm−1 [27], corresponding to the antisymmetric and symmetric stretching vibrations of -CH2-, respectively. This broad peak appears flattened, possibly because the intensity of the two -CH2- absorption peaks is strongest here, so the light was almost completely absorbed by the sample and recorded at too high an absorbance. It is worth noting that the infrared spectra show an obvious absorption peak at 1712 cm−1 which is enhanced as the aging period increases from thermal aging cycle 1 to cycle 4; this peak corresponds to the stretching vibration of the carbonyl (C=O) double bond [28,29]. In addition, in the spectra for aging cycles 1, 3 and 4, a broad absorption peak also appears at 3381 cm−1, which likewise increases with aging time. This peak can be attributed to the stretching vibration of hydroxyl (-OH) groups, or to the overtone of the C=O stretching vibration. The increase of carbonyl and hydroxyl content with thermal aging time indicates that oxidation proceeds continuously in polyethylene under aerobic conditions as time increases. This oxidation causes structural defects in the polyethylene molecules and increases the polarity of the material, which leads to the destruction of the microstructure and an increase of pores, and finally seriously affects the dielectric properties of the insulating material. Figure 2b shows the infrared spectra of the zeolite/LDPE nanocomposite with a doping concentration of 1 wt % under different thermal aging cycles. The absorption peaks of -CH2- and -CH3- are basically the same as those of pure LDPE. However, a broad absorption peak appears at 1037 cm−1, which can be regarded as the characteristic peak of the TO4 (oxide tetrahedron) units, i.e., the silicon-oxygen or aluminum-oxygen tetrahedra in the nano-zeolite. At the same time, in clear contrast to pure LDPE, the absorption peak of carbonyl (C=O) at 1712 cm−1 is significantly reduced; it is not visible at aging cycles 1 and 3 and can be seen only after the fourth aging cycle, where it is much weaker than in pure LDPE. In addition, the characteristic peak of hydroxyl (-OH) located at 3381 cm−1 is also significantly weakened: it can hardly be seen at aging cycles 1 and 3 and is observed only after 4 aging cycles, consistent with the reduction of the carbonyl peak.
These phenomena indicate that the addition of nano-zeolite significantly reduces the oxidation reaction in polyethylene under thermal aging in an aerobic environment and inhibits the oxidative damage as aging progresses. At the same time, the Si and Al provided by the nano-zeolite can form C-O-Si or C-O-Al bond structures in the matrix instead of carbonyl C=O, and this change of chemical bonding can reduce defects and improve the structural stability of the polyethylene.
In order to further demonstrate that the zeolite/LDPE nanocomposites have better thermal aging resistance, thermal analysis was used as supporting evidence. All samples were tested by differential scanning calorimetry (DSC) and the heat absorption curves were analyzed. The heat absorbed during melting was compared with the 100% melting enthalpy so as to obtain the crystallinity and the melting peak temperature (Tmax) of all samples. The specific values are shown in Table 1. It can be seen from the table that the crystallinity (Xc) of pure LDPE before aging is 37.72%, while the crystallinity of the zeolite/LDPE nanocomposite is improved to 39.23% and 39.08% after doping with nano-zeolite. This indicates that nano-zeolite doping can effectively improve the crystallinity of the nanocomposite, increase the proportion of ordered structure inside the material, and reduce the defects and pores caused by disordered structure, thus improving the performance and aging stability of LDPE. After thermal aging from 1 to 4 cycles, the crystallinity of pure LDPE decreased slightly, indicating that with increasing aging time the thermo-oxidative aging of the polyethylene intensified; thermal cracking and thermo-oxidative degradation led to the fracture of LDPE macromolecular chains, part of the crystal structure was destroyed, and the crystallinity decreased. In contrast, the crystallinity of the zeolite/LDPE nanocomposite first increased and then decreased with increasing aging time. The increase of crystallinity after the first and second thermal aging cycles may be due to the fact that, at the aging temperature of 95 °C, recrystallization occurred to a small extent in the nanocomposite, with the nano-zeolite particles possibly acting as heterogeneous nucleation centers, resulting in a higher degree of crystallization and a more regular and perfect structure. When the aging time was further increased, thermal aging also led to oxidative damage inside the nanocomposite, thus reducing the crystallinity. However, the decrease of crystallinity in the nanocomposite was much smaller than that of pure LDPE, indicating a lower degree of aging damage. Because of its porous structure, the nano-zeolite may allow part of the polyethylene molecular chains to enter the pores and bond more tightly, which stabilizes the structure of the composite more effectively and increases the proportion of ordered structure. In addition, it is worth noting that in the DSC data the melting peak temperature (Tmax) of pure LDPE first rises and then decreases, whereas the Tmax of the zeolite/LDPE nanocomposite remains very stable, almost unchanged over the different thermal aging cycles; this suggests that the internal structure of the zeolite/LDPE nanocomposite remained stable after thermal aging, without apparent thermal degradation or thermal transitions [26,30]. The DC and AC breakdown Weibull distributions of all samples at different aging cycles are shown in Figures 3 and 4, respectively. Since electrical breakdown is a random weak-point process and the breakdown data show a certain dispersion, the Weibull distribution was used for the statistical analysis of the breakdown data. In Figures 3 and 4, β is the shape parameter of the Weibull distribution, which reflects the dispersion of the breakdown strength, while E0 is the breakdown field strength.
As can be seen from Figure 3a, the DC breakdown field strength of all the zeolite/LDPE nanocomposites was higher than that of pure LDPE. Moreover, Figure 3b-d show that nano-zeolite doping effectively improved the DC breakdown field strength of the nanocomposites during the thermal aging process. With increasing aging cycles, the breakdown field strength of all samples showed a downward trend, but the breakdown strength of the zeolite/LDPE nanocomposites decreased less than that of pure LDPE. After four rounds of thermal aging treatment, the breakdown field strength of pure LDPE decreased by 27.35%, while that of the zeolite/LDPE nanocomposite with a nano-zeolite doping concentration of 1 wt % decreased by 21.01%, which indicates that the zeolite/LDPE nanocomposite has better resistance to thermal aging. The DC breakdown field strength of the nanocomposite with a NaY nano-zeolite doping concentration of 1 wt % was higher than that of the nanocomposite with 3 wt % in all aging cycles, indicating that the 1 wt % sample had the best DC breakdown performance. When the nano-zeolite doping concentration was high (3 wt %), the agglomeration of nanoparticles in the nanocomposite increased, the associated defects and pores became larger, and large-size defects and damage were more likely to develop during thermal aging, which may degrade the breakdown performance; this is consistent with the SEM and FTIR observations described above. The nano-zeolite, with its large number of porous structures, can effectively enhance the interface effect between the nanoparticles and the polyethylene matrix, so as to generate more and deeper traps in the interface region. The charge trapped under a high electric field blocks further charge injection, thus improving the electrical strength of the material. As shown in Figure 4, the AC breakdown results were basically consistent with the DC results. The breakdown field strength of the zeolite/LDPE nanocomposite was higher than that of pure LDPE under all aging cycles; the nanocomposite showed better aging resistance and its breakdown field strength decreased less. In addition, the shape parameters of the AC breakdown Weibull distributions of the zeolite/LDPE nanocomposites are larger than those of pure LDPE, indicating better AC breakdown stability. The large number of porous structures and the enhanced interfacial effect brought by nano-zeolite doping can effectively reduce the electrical conductivity by trapping electrons, reduce the electron mobility and heat generation under high AC voltage, and thus enhance the AC breakdown strength. Comparing the nanocomposites with different doping contents, the AC breakdown field strength of the 1 wt % nanocomposite was still higher than that of the 3 wt % nanocomposite in all aging cycles, similar to the DC breakdown results. A higher doping concentration leads to more agglomeration of nanoparticles within the nanocomposite, thus increasing internal defects and pores.
During thermal aging, such defects are more likely to develop into weak points that lead to breakdown under a high electric field [31]. Figure 5 shows the dielectric constant and dielectric loss curves of pure LDPE under different aging cycles. It can be found that, with increasing aging cycles, the dielectric constant of the material shows an upward trend, increasing from around 2.2 to 2.5 in Figure 5a. This is because, with increasing aging time, some molecular chains inside the polyethylene were broken, the number of polar groups increased, and some polar groups located on the long molecular chains detached, increasing the polarity of the material and thus raising the dielectric constant [20]. It can be seen from the dielectric loss curves in Figure 5b that the total dielectric loss also increases with the aging period. There is no obvious loss peak over the whole frequency range for Pure-1, indicating that pure LDPE showed no obvious dielectric relaxation even after one cycle of thermal aging. When the aging cycle increased to 2, 3, and 4, small dielectric loss peaks appeared in the curves of samples Pure-2, Pure-3, and Pure-4 at about 10-100 Hz. This dielectric loss peak may be related to the polar groups and defects produced in pure LDPE by thermal aging damage, mainly because the polarization time of the dipole moments generated by these defects and polar groups lies in the range 10⁻⁶-10⁻² s, which corresponds to the frequency position of the observed dielectric loss peak.
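As a quick arithmetic check of the statement above, a simple Debye-type relaxation with time constant τ produces a loss peak near f ≈ 1/(2πτ); this correspondence is an illustrative assumption made here, not part of the original analysis. Under this estimate, the slower end of the quoted 10⁻⁶-10⁻² s range (around 10⁻²-10⁻³ s) falls in the 10-100 Hz window where the loss peaks were observed.

```python
import math

# Loss-peak frequency for a simple Debye-type relaxation: f ~ 1/(2*pi*tau).
for tau in (1e-2, 1e-3, 1e-6):   # seconds
    f = 1.0 / (2.0 * math.pi * tau)
    print(f"tau = {tau:.0e} s  ->  f ~ {f:.3g} Hz")
# tau of 1e-2 to 1e-3 s gives roughly 16-160 Hz, i.e. the 10-100 Hz region;
# the faster end of the quoted range (1e-6 s) would relax near 1.6e5 Hz.
```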
Figure 6 shows the dielectric constant and dielectric loss curves of the zeolite/LDPE nanocomposite with a doping concentration of 1 wt % at different aging cycles. It can be found that, unlike pure LDPE, the dielectric constant decreases from around 2.4 to 2.3 with increasing aging cycles. This was mainly because the doped nanoparticles inhibited the damage caused by thermal aging while a small amount of recrystallization occurred inside the material, increasing the ordered, nonpolar component, as supported by the FTIR and DSC results. Moreover, the nano-zeolite surface has a large number of porous structures and a large specific surface area, so at elevated temperature part of the polyethylene molecular chains may enter the pores of the zeolite or bind more closely to it, reducing the polarity of the nanocomposite. It can be seen from Figure 6b that the dielectric loss did not change significantly under the different aging cycles, and there was a relatively obvious dielectric relaxation peak around 10³-10⁴ Hz, which was mainly caused by interfacial polarization between the nano-zeolite and the polyethylene. According to the Maxwell-Wagner-Sillars theory, interfacial polarization has a relatively long relaxation time and mainly occurs in the low-frequency region of about 10⁻³-10³ Hz [32]. However, owing to the small size and uniform dispersion of the zeolite nanoparticles, the bonding between the nanoparticles and the LDPE matrix in the zeolite/LDPE nanocomposite is relatively close. Therefore, during polarization and depolarization, the charges do not travel as far as in ordinary, larger-scale interfacial polarization, the polarization time is shorter, and the corresponding interfacial polarization frequency is higher. This frequency can even exceed the dipole polarization frequency associated with the defects and polar groups produced in pure LDPE by thermal aging.
Conclusions
In this paper, the morphology, infrared spectra, crystallinity, DC and AC breakdown strength, and dielectric spectra of pure LDPE and zeolite/LDPE nanocomposites after thermal aging were investigated. The aging characteristics were studied comprehensively, from the microstructure and structural characteristics to the macroscopic dielectric strength and dielectric spectra, and the correlations among them were analyzed.
The SEM microstructure characterization after aging showed that nano-zeolite doping could effectively reduce the damage of thermal aging to the internal structure of the nanocomposite, reduce and hinder the generation of internal holes, and improve the heat-resistant aging performance. In the subsequent infrared spectrum test, the experimental results also verified that the carbonyl and hydroxyl groups in the zeolite/LDPE nanocomposites were significantly reduced, and their characteristic peaks were far weaker than those of pure LDPE at the same aging cycle. In the DSC test, it was found that nano-zeolite doping could effectively improve the crystallinity of the nanocomposites, and the crystallinity changed little over the whole aging process, only slightly increasing first and then decreasing, which was significantly different from the monotonic decline of pure LDPE. These results showed that the nano-zeolite improved the internal structure of the material, reduced large-size defects, and was more closely bound to the polyethylene matrix.
The DC and AC breakdown tests showed that nano-zeolite doping effectively improves the electrical strength of the nanocomposite, with both the DC and AC strength improved in all aging cycles. Compared with pure LDPE, the breakdown field strength of the nanocomposite decreases less with increasing aging time, its stability is better, and its aging resistance is improved.
The dielectric constant of the zeolite/LDPE nanocomposite decreased slightly with increasing aging time, in contrast to the small increase observed for pure LDPE. At the same time, the nanocomposite showed a loss peak arising from interfacial polarization, but the loss was small and did not change significantly over the whole aging period.
The sample with a doping concentration of 1 wt % of nano-zeolite had better performance. When the doping concentration was increased to 3 wt %, the internal defects and pores of the material were larger due to the agglomeration of nanoparticles, which was more likely to cause damage during thermal aging.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,172.8 | 2020-09-01T00:00:00.000 | [
"Materials Science"
] |
Simultaneous Identification of EGFR, KRAS, ERBB2, and TP53 Mutations in Patients with Non-Small Cell Lung Cancer by Machine Learning-Derived Three-Dimensional Radiomics
Simple Summary Multiple genetic mutations are associated with the outcomes of patients with non-small cell lung cancer (NSCLC) after using tyrosine kinase inhibitors, but the cost for detecting multiple genetic mutations is high. Few studies have investigated whether multiple genetic mutations can be simultaneously detected based on image features in patients with NSCLC. We developed a machine learning-derived radiomics approach that can simultaneously discriminate the presence of EGFR, KRAS, ERBB2, and TP53 mutations on CT images in patients with NSCLC. These findings suggest that machine learning-derived radiomics may become a noninvasive and low-cost method to screen for multiple genetic mutations in patients with NSCLC before using next-generation sequencing tests, which can help to improve individualized targeted therapies. Abstract Purpose: To develop a machine learning-derived radiomics approach to simultaneously discriminate epidermal growth factor receptor (EGFR), Kirsten rat sarcoma viral oncogene (KRAS), Erb-B2 receptor tyrosine kinase 2 (ERBB2), and tumor protein 53 (TP53) genetic mutations in patients with non-small cell lung cancer (NSCLC). Methods: This study included consecutive patients from April 2018 to June 2020 who had histologically confirmed NSCLC, and underwent pre-surgical contrast-enhanced CT and post-surgical next-generation sequencing (NGS) tests to determine the presence of EGFR, KRAS, ERBB2, and TP53 mutations. A dedicated radiomics analysis package extracted 1672 radiomic features in three dimensions. Discriminative models were established using the least absolute shrinkage and selection operator to determine the presence of EGFR, KRAS, ERBB2, and TP53 mutations, based on radiomic features and relevant clinical factors. Results: In 134 patients (63.6 ± 8.9 years), the 20 most relevant radiomic features (13 for KRAS) to mutations were selected to construct models. The areas under the curve (AUCs) of the combined model (radiomic features and relevant clinical factors) for discriminating EGFR, KRAS, ERBB2, and TP53 mutations were 0.78 (95% CI: 0.70–0.86), 0.81 (0.69–0.93), 0.87 (0.78–0.95), and 0.84 (0.78–0.91), respectively. In particular, the specificity to exclude EGFR mutations was 0.96 (0.87–0.99). The sensitivity to determine KRAS, ERBB2, and TP53 mutations ranged from 0.82 (0.69–0.90) to 0.92 (0.62–0.99). Conclusions: Machine learning-derived 3D radiomics can simultaneously discriminate the presence of EGFR, KRAS, ERBB2, and TP53 mutations in patients with NSCLC. This noninvasive and low-cost approach may be helpful in screening patients before invasive sampling and NGS testing.
Introduction
Lung cancer was responsible for 11.6% of de novo malignancies and 18.4% of cancer-related deaths in 2018 [1]. Over the past 15 years, the treatment of non-small cell lung cancer (NSCLC) has changed dramatically with the introduction of tumor genomic profiling and targeted therapy [2]. Several genetic mutations were identified in patients with NSCLC. The prevalence of mutations varies with ethnicity [3]. Epidermal growth factor receptor (EGFR) mutations exist in 40-60% of pulmonary adenocarcinoma in the Asian population, while only 7-10% in the European population [4]. Kirsten rat sarcoma viral oncogene (KRAS) mutation accounts for approximately 25% of patients with NSCLC [5]. A total of 27% of the Caucasians had the KRAS mutation, significantly higher than 17% of African Americans [6]. The over-expression of Erb-B2 receptor tyrosine kinase 2 (ERBB2) was observed in 2.4-38% of NSCLC cases [7,8]. Tumor suppressor protein 53 (TP53) gene mutation can be found in 35-60% of patients with NSCLC [9]. These gene mutations are associated with the prognosis of patients with NSCLC after receiving tyrosine kinase inhibitor (TKI) therapy and may confer resistance to TKI [10]. For example, EGFR is the main actionable target in patients with NSCLC [11]; a recent trial showed that sotorasib can be used against NSCLC harboring KRAS mutation [12].
Detection of multiple genetic alterations in patients with lung cancer is crucial to decide the applicability of targeted therapy. Next-generation sequencing (NGS), a high-throughput genetic sequencing method, allows for simultaneous and rapid detection of multiple tumor mutations [13,14]. NGS achieved an accuracy of 99.1% for detecting EGFR mutation in patients with advanced lung adenocarcinoma, compared with the traditional Sanger sequencing method [15]. Thus, many medical centers used NGS in clinical practice [16]. However, the current clinical practice for NGS involves invasive biopsy or surgical resection, which is associated with high cost and patient discomfort. Intra-tumor heterogeneity, which leads to heterogeneous molecular sampling results, reduces the accuracy of identifying potential genetic mutations [17]. Furthermore, in some areas, the clinical implementation of NGS is still poor. A comprehensive and noninvasive approach will help to screen candidate patients for invasive sampling and NGS testing.
Radiomics, a subfield of machine learning, is a promising noninvasive approach to assess genetic mutations in lung cancer. Radiomics extracts and analyzes a large number of advanced quantitative image features with high throughput. This approach can be used to determine the molecular type of lung tumors based on the phenotypic appearance in computed tomography (CT) [18]. Several studies have reported encouraging results in discriminating EGFR mutation using radiomics [19,20]. For example, Jia et al. built a random forest classifier to identify EGFR mutation and reached an area under the receiver operating characteristics curve (AUC) of 0.802 [21]. Pinheiro et al. found that radiomic features can discriminate EGFR mutation with an AUC of 0.75, but did not find a radiomic feature correlated to KRAS mutation [22]. To our knowledge, few studies have investigated whether multiple genetic mutations can be simultaneously detected based on image features in patients with NSCLC. Therefore, this study aimed to develop a machine learning-derived radiomics approach to discriminate the presence of EGFR, KRAS, ERBB2, and TP53 mutations on CT images in patients with NSCLC.
Study Population
This study retrospectively included consecutive patients with NSCLC who visited our institute from April 2018 to June 2020. The inclusion criteria were as follows: (1) surgically resected tumor sample tissues; (2) patients with NSCLC confirmed by hematoxylin-eosin and immunohistochemistry staining; (3) post-surgical NGS test proved the mutation status of EGFR, KRAS, ERBB2, and TP53; (4) thin-slice contrast-enhanced chest CT (slice thickness ≤ 1 mm) performed prior to tumor resection; (5) interval between CT scanning and tumor resection < 1 month. The exclusion criteria were: (1) non-contrast CT examination; (2) low-quality CT images affected by image artifacts; (3) indistinguishable tumor edge, caused by adjacent obstructive pneumonia, atelectasis, and mediastinal adhesions, etc. The collected clinical factors were age at diagnosis, sex, cTNM stage, smoking status, and tumor location. The cTNM stage categorizes the extent of the tumor during imaging examination before any treatment. The cTNM stage was determined by whole-body CT except the lower extremities or whole-body PET-CT.
The local Institutional Review Board approved this retrospective study (No. SGH-2018-56) and waived the requirement for patient informed consent. The patient selection flowchart is shown in Figure 1.
NGS
In this study, a Clinical Laboratory Improvement Amendments (CLIA)-certified testing center (Burning Rock Biotech, Guangzhou, China) performed deoxyribonucleic acid (DNA) processing and subsequent NGS procedures for adequate formalin-fixed and paraffin-embedded tumor sections to detect somatic genetic mutations. In brief, a minimum of 50 ng of DNA isolated from the tumor tissue was processed for NGS library construction and profiled using the capture-based targeted sequencing panels targeting multiple genes. NGS was performed by using an ultra-deep (20,000×) 168-gene panel named Lung-Plasma (Burning Rock Biotech, Guangzhou) [23]. Sequencing panels were selected based on the patients' clinical characteristics and financial situation. The panels interrogated the whole exons and critical intronic regions of the actionable genes including EGFR, KRAS, ERBB2, and TP53 in this study.
CT Image Acquisition
All included patients underwent contrast-enhanced chest CT scans using two CT scanners (Somatom Force, Siemens Healthineers, Erlangen, Germany; Revolution CT, GE Healthcare, Milwaukee, WI, USA). A total of 60-80 mL of contrast medium (Iopamiro 300, Bracco, Milan, Italy) was injected at 4 mL/s. The reconstructed slice thickness was 0.6 mm and 0.625 mm, respectively. Table S1 presents the detailed acquisition protocol and reconstruction parameters. Figure 2 shows the radiomics analysis pipeline steps. One radiologist with 18 years of experience in diagnostic imaging, who was blinded to the results of the NGS test, performed semi-automated three-dimensional (3D) tumor segmentation, using a radiomics analysis software package (Radiomics 1.0.9a, Siemens Healthineers) on a research platform (SyngoVia VB10b, Research Frontier, Siemens Healthineers). This radiomics analysis package extracts radiomics features based on the Pyradiomics library [24], in conformance with the Image Biomarker Standardization Initiative [25]. After finding the lesion and clicking on it, the software automatically segments the tumor edge and extracts 1672 radiomic features. These features comprise first-order (HU stats), shape, and texture features. The first-order feature describes the intensity distribution of CT values in the volume of interest by common basic measures, such as mean, range, and standard deviation [24]. The texture features comprise the following five categories: (1) gray-level co-occurrence matrix; (2) gray-level difference matrix; (3) gray-level run-length matrix; (4) gray-level size-zone matrix; (5) neighborhood gray-tone difference matrix. These features are analyzed by nine filters and eight wavelet transformations in high dimensions. Details on the principle of feature algorithms are found in the supplementary materials.
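Since the vendor package is stated to build on the PyRadiomics library, a rough open-source equivalent of the extraction step can be sketched as follows. The file names, the bin width, and the blanket enabling of all feature classes and image filters are placeholders assumed for illustration, not the study's actual configuration.

```python
from radiomics import featureextractor

# Illustrative settings; the study's exact configuration is not reproduced here.
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
extractor.enableAllFeatures()        # first-order, shape, GLCM, GLRLM, GLSZM, GLDM, NGTDM
extractor.enableAllImageTypes()      # original image plus wavelet, LoG, exponential, ... filters

# Placeholder file names for the CT volume and the segmented tumor mask.
features = extractor.execute("chest_ct.nii.gz", "tumor_mask.nii.gz")
radiomic_values = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(f"{len(radiomic_values)} radiomic features extracted")
```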
Selection of Radiomic Features
To assess the stability of feature extraction, two observers with 5 and 18 years of experience in radiology independently evaluated 50 randomly selected patients. Spearman's rank correlation coefficient between the two feature extracting procedures was calculated to indicate the feature stability [26]. The features with a Spearman's r > 0.8 were considered stable for the subsequent analysis. Then, the features most correlated with the presence of genetic mutations were selected according to the F-statistic test in one-way analysis of variance (ANOVA). In radiomics studies, this method is commonly used for univariate feature selection by estimating the degree of linear dependency between features and labels (mutations in our study) [27,28]. The top 20 features most significantly associated with the presence of mutations were eventually selected to establish the discriminative models.
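A compact sketch of this two-step screening (observer-stability filtering by Spearman correlation, then univariate ANOVA F-test ranking) might look like the following; the synthetic arrays merely stand in for the real 1672-feature matrices and mutation labels.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n_feat = 200                                            # stand-in for the 1672 features
obs1 = rng.normal(size=(50, n_feat))                    # observer 1, 50 re-segmented cases
obs2 = obs1 + rng.normal(scale=0.1, size=(50, n_feat))  # observer 2 (mostly consistent)
X = rng.normal(size=(134, n_feat))                      # full-cohort feature matrix
y = rng.integers(0, 2, size=134)                        # mutation present / absent

# Step 1: keep only features that are stable between observers (Spearman r > 0.8).
stable = np.array([spearmanr(obs1[:, j], obs2[:, j])[0] > 0.8 for j in range(n_feat)])
X_stable = X[:, stable]

# Step 2: rank the stable features with a one-way ANOVA F-test and keep the top 20.
selector = SelectKBest(score_func=f_classif, k=20)      # k = 13 was used for KRAS
X_top = selector.fit_transform(X_stable, y)
print(X_top.shape)
```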
Model Development
In this study, the one-vs-all strategy, which offers good interpretability and fits one classifier per class, was implemented to address the multi-label task of identifying the four different mutation types [29]. First, we established four discriminative models based only on radiomic features (radiomics models) to determine the presence of EGFR, KRAS, ERBB2, and TP53 mutations, using penalized multivariate logistic regression with 5-fold cross-validation. The least absolute shrinkage and selection operator (LASSO) was used to impose a penalty on the logistic model with its large number of features, so that the coefficients of noncontributing features shrank to zero. LASSO logistic regression, as a machine learning algorithm, is commonly used to select contributing features in radiomics research [30].
Second, we built four other discriminative models (combined models), each of which was based on the combination of radiomic features and clinical factors, to predict the existence of EGFR, KRAS, ERBB2, and TP53 mutations. The Wilcoxon rank-sum test was used to select significantly relevant clinical factors associated with the presence of a genetic mutation. Then, multivariate logistic models combining radiomic features and significant clinical factors were established to discriminate the presence of mutations.
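The model-building step described above could be sketched along the following lines, with one L1-penalized (LASSO) logistic classifier fitted per mutation under 5-fold cross-validation; the synthetic data, the standardization step, and the penalty grid are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(134, 22))          # e.g. 20 selected radiomic features + 2 clinical factors
labels = {gene: rng.integers(0, 2, size=134) for gene in ("EGFR", "KRAS", "ERBB2", "TP53")}

models = {}
for gene, y in labels.items():
    # One L1-penalised (LASSO) logistic classifier per mutation, 5-fold CV over the penalty grid.
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5,
                             scoring="roc_auc", max_iter=5000),
    )
    models[gene] = clf.fit(X, y)
    coefs = models[gene].named_steps["logisticregressioncv"].coef_.ravel()
    print(gene, "features retained:", int(np.count_nonzero(coefs)))
```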
Statistical Analysis
A one-sample Kolmogorov-Smirnov test was applied for the normality test of continuous variables. The Fisher exact test or Chi-square test was used to compare categorical variables, and the independent Student t-test or Mann-Whitney U test for continuous variables. The discrimination performance of models was evaluated by the receiver operating characteristics (ROC) curve. The cutoff value was obtained by using the maximum likelihood ratio on the ROC curve. Sensitivity, specificity, and accuracy were calculated based on these cutoff values. DeLong's test was used to compare the diagnostic performance between the radiomics model and combined model for each of the four genetic mutations.
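For the evaluation step, a hedged sketch is shown below: it computes the AUC and reads sensitivity and specificity off an operating point on the ROC curve. The Youden index is used here only as a simple stand-in for the maximum-likelihood-ratio cutoff criterion used in the paper, and the scores and labels are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=134)                              # mutation present / absent
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.2, 134), 0, 1)  # toy model scores

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                 # Youden's J, a stand-in for the LR-based cutoff
sensitivity, specificity = tpr[best], 1.0 - fpr[best]
print(f"AUC = {auc:.2f}, cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```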
Extraction and Selection of Radiomic Features
The consistency analysis between the two feature extraction procedures showed that 1098 out of 1672 features were stable (Spearman's r > 0.8) and usable for feature selection, including 199 first-order, 14 shape, and 885 texture features.
Among the 1098 usable features, 40, 13, 166, and 398 features were highly relevant (F-statistic test's p > 0.1) to EGFR, KRAS, ERBB2, and TP53 mutations, respectively (Figure 3). The 40 highly relevant features for EGFR mutation included five first-order and 35 texture features but did not include any size and shape-related features. The highest correlated first-order and texture features were exponential_firstorder_MeanAbsoluteDeviation and logarithm_gldm_LargeDependenceHighGrayLevelEmphasis, respectively. The 13 highly relevant features for KRAS mutation were texture features, in which the highest correlated one was square_ngtdm_Complexity. The 166 highly relevant features for ERBB2 included 23 first-order, 4 shape, and 139 texture features, in which the highest correlated features were log.sigma.0.5.mm.3D_firstorder_Minimum, original_shape_SphericalDisproportion, and log.sigma.0.5.mm.3D_glrlm_ShortRunHighGrayLevelEmphasis, respectively. The 398 highly relevant features for TP53 mutation included 43 first-order, six shape, and 349 texture features, in which wavelet.LHH_firstorder_Uniformity, original_shape_SurfaceArea, and log.sigma.4.5.mm.3D_ngtdm_Complexity had the highest correlation, respectively.
Subsequently, the top 20 relevant features (13 for KRAS) in the F-statistic test to EGFR, KRAS, ERBB2, and TP53 mutations were used to construct discriminative models. Detailed visualized results and distributions of these features are shown in Figures S1-S4. The finally selected features with a non-zero coefficient after LASSO selection for all mutations are presented in Table S3.
Model Performance
Gender was a significant factor associated with EGFR mutation (Wilcoxon rank-sum p = 0.001) and with KRAS mutation (p = 0.036). Tumor stage (cT) was a significant factor for ERBB2 mutation (p = 0.044). Age, sex, and tumor metastasis (cM) were significant factors for TP53 mutation (all p < 0.01). Then, these relevant clinical factors (Table S4) were combined with the above-mentioned radiomic features to establish combined models.
The radiomics model and combined model showed similar performance in discriminating EGFR and ERBB2 mutations. The AUC (Figure 4) of these two models for discriminating EGFR was 0.77 (95% CI: 0.70 to 0.85) and 0.78 (0.70 to 0.86), respectively (DeLong's p = 0.590). The AUC of these two models for discriminating ERBB2 was 0.88 (0.80-0.96) and 0.87 (0.78-0.95), respectively (p = 0.585). The combined model showed a sensitivity and specificity of 0.52 (0.40-0.65) and 0.96 (0.87-0.99) for discriminating EGFR, respectively (Table 1).
[Figure caption fragment: example cases, including a female non-smoker with ERBB2-mutant lung adenocarcinoma (lobulated solid mass in the right middle lobe, maximum diameter 18 mm) and (D) a 66-year-old male smoker with TP53-mutant lung adenocarcinoma (lobulated solid mass with rough margin in the right lower lobe, maximum diameter 12 mm).]
Discussion
In this study, we established machine learning-derived radiomics models to determine the presence of EGFR, KRAS, ERBB2, and TP53 mutations in patients with NSCLC, based on radiomic features and combined with clinical factors. The AUC of the combined models ranged from 0.78 to 0.87 for discriminating these four mutations. In particular, the specificity to determine EGFR mutation was 0.96, indicating a very low false-positive rate that is potentially useful to screen out patients with EGFR wildtype. The sensitivity to define KRAS, ERBB2 and TP53 mutations ranged from 0.82 to 0.92, suggesting a low false-negative rate, which is helpful in selecting patients with mutations for invasive sampling and NGS testing. Our study reveals the possibility of using a noninvasive method to screen for multiple genetic mutations before invasive sampling and expensive molecular testing.
The mutation status of EGFR, KRAS, ERBB2, and TP53 is closely associated with the response to targeted therapy for NSCLC. EGFR is the main actionable target of many targeted therapies in patients with NSCLC [31]. KRAS mutation is also a common oncogenic driver [5]. Recently, novel therapeutic strategies for KRAS G12C, the most common KRAS mutation in NSCLC, have emerged [32,33]. A recent early-phase clinical trial evaluated the efficacy of zenocutuzumab, a bispecific ERBB2/ERBB3 antibody [34]. TP53 mutation is a potential negative prognostic factor for NSCLC patients with TKI therapy due to increased cellular resistance to EGFR-TKIs [10,35]. The simultaneous and rapid detection of these four mutations is crucial for clinical decision-making in patients with NSCLC.
Because lung cancer is a heterogeneous disease at the molecular level, testing for genetic alteration biomarkers has been recommended for each specimen of advanced-stage NSCLC [36]. NGS allows comprehensive polygenic analysis and facilitates the identification of alterations for targeted therapy. Before NGS, genomic analysis was limited to specific loci known to be associated with each cancer subtype. Single-gene sequencing like Sanger technology is limited to DNA insertion, deletion, and substitution, while NGS can detect chromosomal rearrangement, oncogenic fusion event, translocation, and copy number alteration. Therefore, this study took NGS as the reference standard. Although NGS is more cost-effective than multiple single-gene tests in detecting multiple genetic alterations, the cost of NGS is still high, which limits its clinical implementation. Our study demonstrated a noninvasive and low-cost method to screen patients with NSCLC before NGS testing. In particular, the high specificity (0.96) to determine EGFR mutation is potentially useful to screen out patients with EGFR wildtype. The patients with negative radiomic results would have a high probability of wildtype, thus avoiding unnecessary NGS tests. The high sensitivity (0.82 to 0.92) to determine KRAS, ERBB2, and TP53 mutations increases the certainty of detecting these mutations. Patients with positive radiomic results may have a higher probability of harboring mutations in these genes that could be validated through NGS.
Analysis of CT-based image features has received extensive attention on detecting EGFR mutation, limited attention to KRAS and TP53 mutations, but no report on ERBB2 mutation. In 385 patients with lung adenocarcinoma, Liu et al. found that using human semantic annotation of a CT scan combined with clinical variables reached an AUC of 0.78 to discriminate EGFR+/EGFR-, superior to using clinical variables alone (AUC = 0.69) [37]. Zhang et al. conducted a multivariate analysis based on CT radiomic features to discriminate EGFR mutation in patients with NSCLC and reached AUCs of 0.86 and 0.87 in the training (n = 140) and test (n = 40) cohorts, respectively [38]. Recently, Wang et al. established a deep learning model to distinguish EGFR+/EGFR−, and reached AUCs of 0.85 and 0.81 in the training (n = 603) and test (n = 241) cohorts, respectively [39]. Our models achieved an AUC of 0.78 to identify EGFR mutation, which is comparable to the previous reports. However, there were few studies on discriminating KRAS and TP53 mutations. Velazquez et al. developed radiomic signatures to distinguish KRAS+/KRAS−, EGFR+/EGFR−, and EGFR+/KRAS+ with a training cohort (n = 353) and reached AUCs of 0.63, 0.69, and 0.80 in an independent test cohort (n = 352), respectively [19]. Pinheiro et al. included 116 and 114 patients with NSCLC to establish models to detect EGFR and KRAS mutations, respectively. They found that radiomic features were correlated with EGFR mutation (AUC = 0.58) but not KRAS (AUC = 0.51), and the semantic hybrid model improved the AUC to 0.74 for EGFR mutation status [22]. Wang et al. developed and validated a radiomics-based fusion-positive tumor prediction model in 61 patients with early-stage lung adenocarcinoma, which can discriminate TP53/EGFR mutations and tumor mutation burden, and yielded AUCs of 0.84 and 0.59 for identifying TP53 mutation in the training (n = 41) and test cohorts (n = 20), respectively [9]. Our models achieved AUCs of 0.81 and 0.84 to identify KRAS and TP53 mutations, respectively, which is higher than the previous reports. This study's major strength is to simultaneously analyze EGFR, KRAS, ERBB2, and TP53 mutations in a single CT examination. The AUCs for discriminating these four mutations ranged from 0.78 to 0.88. The AUCs of KRAS and TP53 were higher than the reported results (0.63 and 0.66, respectively) [9,19]. One reason for achieving high AUCs might be the 3D radiomics algorithm, which was used in this study to extract and analyze 1672 radiomic features. Most published radiomics studies extracted relatively fewer features. We extracted 1672 radiomic features, which laid the foundation for machine learning to select highly relevant features. A wide range of candidate features can maximize the potential information hidden in the images, thus improving the capacity of reflecting the genotype of NSCLC lesions.
This study has some limitations. First, this retrospective study was conducted in one center. Ideally, a prospective multicenter study would strengthen the conclusions of this study. The prevalence of these mutations differs among populations, so the results may differ in other populations; further research is necessary to test the generalizability of our models in other races. Second, we included 134 patients because the NGS test is expensive and not widely used in clinical practice. Increasing the sample size will strengthen the robustness of radiomic models. Third, the model described in this study has not been validated in an independent set. Fourth, the extracted radiomic features can be prone to inter- and intra-observer variability as a consequence of the manual part of the image segmentation procedure.
Conclusion
Machine learning-derived 3D radiomics based on CT images can simultaneously identify the presence of EGFR, KRAS, ERBB2, and TP53 mutations in patients with NSCLC; it can determine EGFR mutation with a very low false-positive rate and increase the certainty of determining the presence of KRAS, ERBB2, and TP53 mutations. These findings suggest that patients with a negative radiomics result of EGFR mutation can avoid expensive NGS testing, but patients with positive KRAS, ERBB2, and TP53 results should undergo NGS testing. Although these conclusions should be validated in a larger sample size population, machine learning-derived radiomics has the potential to become a noninvasive and low-cost method to screen multiple genetic mutations in patients with NSCLC before using an NGS test, which can help improve individualized targeted therapy.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/cancers13081814/s1, Figure S1: Logistic regression paths showing the coefficients of top 20 features (13 for KRAS) at different lambda values in the least absolute shrinkage and selection operator (LASSO) feature selection procedure, Figure S2, Correlation heatmaps with clustering of the most relevant radiomic features with the presence of genetic mutation, Figure S3, Heatmaps of the most relevant radiomic features with the presence of genetic mutation, Figure S4, Boxplots of the most relevant radiomic features with the presence of genetic mutation. Table S1: CT acquisition protocols and image reconstruction parameters, Table S2: Variation of EGFR, KRAS, ERBB2, and TP53 mutations, Table S3: Finally selected features with non-zero coefficient after the least absolute shrinkage and selection operator (LASSO) selection, Table S4: Association between clinical factors and the presence of EGFR, KRAS, ERBB2, and TP53 mutations. Radiomic feature interpretation.
Author Contributions: (I) Conception and design: T.Z., X.X.; (II) administrative support: X.X.; (III) provision of study materials or patients: T.Z., G.L., and B.J.; (IV) collection and assembly of data: T.Z., G.L., and B.J.; (V) data analysis and interpretation: T.Z., Z.X., and X.X.; (VI) manuscript writing: all authors; (VII) final approval of manuscript: all authors. All authors have read and agreed to the published version of the manuscript. | 6,679.4 | 2021-04-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Rectal dosimetry in intracavitary brachytherapy by HDR at rural center of Maharashtra: Comparison of two methods
The purpose of this study was to calculate the radiation dose at the anterior rectal wall as per the International Commission on Radiation Units and Measurements (ICRU 38) recommendations and compare it with the dose calculated by the commonly used intrarectal catheter. Dose delivery by brachytherapy to the cervix is limited by the critical structures of the bladder and rectum. In this study the ICRU-38 rectal point was derived by using a radio-opaque gauze piece on the posterior vaginal wall, and the intrarectal point was derived by inserting a rubber catheter with a wire inside the rectum. A total of 146 applications were performed in 81 patients. Rectal doses were compared for complementary rectal points R1 and R5, R2 and R6, R3 and R7, and R4 and R8, obtained by the two methods. The rectal doses at each complementary pair were compared with each other. The average dose at R1 was 5% higher than at R5 (60.57% vs. 55.57%). The average dose at R2 was 1% higher than at R6 (58% vs. 57%). The average dose at R3 was 1.29% higher than at R7 (52.71% vs. 51.42%), and the average dose at R4 was 1.15% higher than at R8 (43% vs. 41.85%). There were many instances in which the dose at the ICRU points R1 to R4 exceeded the dose at the corresponding intrarectal points by more than 15% (43, 22, 21, and 11 times for the R1-R5, R2-R6, R3-R7, and R4-R8 pairs, respectively). The difference in dose between R1 and R5 was significant on the statistical tests, i.e., the paired t-test, Wilcoxon signed-rank test, and sign test (p value 0.002). The rectal dose obtained by the intrarectal wire method underestimates the actual dose to the rectum when compared to the ICRU-38 method. Thus ICRU-38 recommendations should be strictly adhered to, to reduce late complications.
Introduction
The combination of external beam radiotherapy (EBRT) and intracavitary brachytherapy (ICRT) has been well established in the definitive management of cervical cancer. The bladder and rectum are two organs that act as dose-limiting organs, due to their low tolerance. The most important treatment-related factors that could lead to late rectal complications include the total dose to the rectum, the volume of rectum irradiated, and the dose rate of the brachytherapy modality used. [1][2][3][4] If a higher dose is delivered to these critical organs, it leads to late complications, with a decrease in the quality of the patient's life. [5] To overcome this problem brachytherapy is added, which delivers higher doses to localized regions of the cervix and a lower dose to the bladder and rectum, primarily as per the inverse square law and also due to absorption and scatter in the intervening media. The International Commission on Radiation Units and Measurements (ICRU 38) recommends [6] certain guidelines for the measurement and reporting of intracavitary insertions, as practice may vary from center to center. As per ICRU-38, the posterior vaginal wall is visualized by means of an intravaginal mould or radio-opaque gauze. The rectal reference point is determined on a lateral radiograph, on the anteroposterior (AP) line drawn through either the lower end of the intrauterine source or through the middle of the intravaginal sources, 5 mm behind the posterior vaginal wall. Many centers use the intrarectal catheter to visualize the rectum, and a point 5 mm anterior to this is taken as the rectal point for calculation. The calculation of the rectal dose is very important, as a higher rectal dose often leads to increased morbidity, even in successfully treated patients.
Materials and Methods
This study was undertaken in a prospective way from October 2006 to January 2008. Eighty-one patients with proven carcinoma of the cervix, stage IIB to IIIB, were taken for the study. All the patients received EBRT at the dose of 50 Gy/25#/5 weeks. All the patients were planned for three to four fractions of ICRT via the High Dose Rate Brachytherapy machine (HDR), as per the stage. During each application, an intrauterine tandem (4-6 cm) was placed into the uterine cavity, with ovoids (1.5-2.5 cm) in the vagina, at the level of the fornices. A radio-opaque gauze (barium-soaked) was placed on the posterior vagina, followed by proper packing with a povidone-iodine-soaked gauze piece to further displace the bladder anteriorly and rectum posteriorly. A rectum marker, using a radio-opaque metallic wire inside a hollow rubber catheter of 1 cm diameter, was also placed inside the rectum. Orthogonal films were taken and rectal points were marked on the lateral x-ray film as R1-R4, 0.5 cm behind the posteriormost visualized portion of the barium-soaked gauze, with R1 at the level of the cervical os, i.e., at the lower end of the intrauterine source. Similarly, points R5-R8 were marked 0.5 cm anterior to the rectal catheter, with R5 at the level of the cervical os [Figure 1]. The distance between each point in both sets was taken as 1 cm. The points were selected symmetrically in relation to the anteroposterior line passing through the middle of the intravaginal sources. In all insertions a dose of 7 Gy was given at point A. Planning and dose distribution were calculated using the Abacus treatment planning software for each complementary rectal point, i.e., R1 and R5, R2 and R6, R3 and R7, and R4 and R8.
Statistics
For all rectal points, mean, median, maximum, and minimum doses were calculated. For the R1 and R5 pair, the paired t-test, Wilcoxon signed-rank test, and sign test were performed using SPSS statistical software version 10.0 to assess significance.
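An equivalent analysis outside SPSS could be sketched as follows, with the sign test implemented as an exact binomial test on the signs of the paired differences. The dose arrays are synthetic values generated only to make the snippet runnable; they are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Illustrative paired doses (Gy) for 146 applications; not the study data.
r1 = rng.normal(4.24, 1.23, size=146)        # ICRU-38 rectal point
r5 = r1 - rng.normal(0.36, 0.80, size=146)   # intrarectal-catheter point

t_stat, p_t = stats.ttest_rel(r1, r5)        # paired t-test
w_stat, p_w = stats.wilcoxon(r1, r5)         # Wilcoxon signed-rank test

diff = r1 - r5                               # sign test via an exact binomial test
n_pos, n_neg = int(np.sum(diff > 0)), int(np.sum(diff < 0))
p_sign = stats.binomtest(n_pos, n_pos + n_neg, p=0.5).pvalue

print(f"paired t: p = {p_t:.3f}; Wilcoxon: p = {p_w:.3f}; sign test: p = {p_sign:.3f}")
```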
Results
There were a total of 146 insertions of brachytherapy. The mean, median, maximum, and minimum doses for rectal points R1-R8 are shown in Table 1.
The paired t-test was applied to the R1-R5, R2-R6, R3-R7, and R4-R8 pairs. For the R1 and R5 pair, the means were 4.24 and 3.88 and the standard deviations 1.23 and 1.06, respectively. The 95% confidence interval of the difference was between 0.1341 and 0.5763, with a t value of 3.175 (p value 0.002), which was significant. For the other three pairs there was no significant difference. On applying the Wilcoxon signed-rank test, on 87 occasions R1 was greater than R5 (sum of ranks = 6766), whereas on 57 occasions R5 exceeded R1 (sum of ranks = 3674). Thus the Z value for the test was -3.083 (p value 0.002), which was significant. Similarly, the sign test also gave a Z value of -2.417 (p value 0.016) for the R1 and R5 pair, which is again significant. For the other three pairs these tests were not significant. The doses at R1 and R5 were plotted against their mean [Figure 2], showing that the dose at R1 is significantly higher than that at R5.
Discussion
In the recent era, the dosimetry for ICRT has seen many changes, with the computed tomography scan (CT scan) being increasingly used for planning. [7] However, these facilities are limited to only a few urban centers; rural centers such as ours still rely on conventional orthogonal-film-based planning as per ICRU-38 recommendations. Doses can be calculated for Manchester points A, B, rectal, and bladder reference points. The spatial orientation of the applicator, with relation to fixed bony points, can be seen on these orthogonal films. Specification of the region of rectal mucosa that absorbs the highest doses, as per ICRU-38 recommendations, is valid, and serves as a means of comparison among radiotherapy centers. These rules should be obeyed by all centers to have uniformity in reporting the cases. To calculate the rectal dose many centers use flexible wire markers inside the rectum. [8][9][10] It was seen in these studies that the calculated rectal dose was different in the two methods (rectal wire and ICRU-38). On average the rectum receives a 15% higher dose as compared to the dose calculated by the intrarectal wire method. Our study also confirms these findings. The most significant difference was seen in the R1 and R5 pair, i.e., at the level of the cervical os or the lowest level of the intrauterine source. At the other points also (R2 and R6, R3 and R7, R4 and R8) a difference was present between the ICRU-38 calculation and the intrarectal wire method, but it was small. The average dose at R1 was 5% higher than that at R5 (60.57% vs. 55.57%), which was very significant when the statistical tests were applied (p value = 0.002). It clearly shows that at this point the dose estimation by the usual intrarectal method underestimates the actual dose. The lower dose in the intrarectal method arises due to the fact that these rectal wire markers are inserted randomly in the rectal lumen and usually do not fill the whole rectum due to their smaller diameter. Furthermore, variation in the diameter of the rectal wire can change the position of the rectal points in this method. Using a catheter with a diameter smaller than the rectal lumen may lead to wrong marking of the rectal points, if the catheter is not in close approximation with the anterior rectal wall. Similarly, if a larger diameter is used it may actually push the anterior wall of the rectum further anteriorly, again leading to wrong marking of the rectal point. Thus the positions of these rectal markers are variable, thereby leading to false calculation of the rectal dose. This will lead to a lot of variation in reporting from center to center. Therefore, specific rectal points determined in this manner cannot represent the true rectal wall dose.
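The sensitivity of the calculated dose to a few millimetres of uncertainty in the rectal point can be illustrated with a deliberately simplified inverse-square estimate. The single-point-source geometry and the distances used below are assumptions for illustration only, not the applicator dosimetry of this study, which sums contributions from multiple dwell positions.

```python
# Simplified inverse-square illustration of how shifting the rectal point changes
# the calculated dose. A single point source is assumed purely for illustration.
def relative_dose(r_cm, r_ref_cm=2.0):
    """Dose at r_cm relative to the dose at r_ref_cm for an ideal point source."""
    return (r_ref_cm / r_cm) ** 2

for r in (2.0, 2.5, 3.0):     # cm from the source
    print(f"r = {r:.1f} cm -> relative dose = {relative_dose(r):.2f}")
# Moving a point from 2.0 cm to 2.5 cm lowers the estimated dose by roughly 36%,
# illustrating why a loosely positioned intrarectal marker can misrepresent the dose.
```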
Precise localization of the rectum is possible on CT slices with CT-based dosimetry. [7,11] However, it is not possible to introduce image-based (CT-based) brachytherapy into daily clinical practice everywhere, as it is expensive, time consuming, and not available at most centers. Thus, for centers with limited resources, it is best to adhere to the ICRU guidelines, which require just orthogonal films.
Conclusion
By comparing the ICRU-38 recommended rectal point and the intrarectal catheter derived rectal point, it can be seen that the actual dose received is higher, as shown by the ICRU-38 method; the intrarectal catheter falsely shows a lower value at the rectal point. Thus, until modern image-based brachytherapy replaces the conventional system, one should adhere to the ICRU guidelines for dose estimation. | 2,403.8 | 2009-04-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Enhancing coherence via tuning coupling range in nonlocally coupled Stuart–Landau oscillators
Nonlocal coupling, as an important connection topology among nonlinear oscillators, has attracted increasing attention recently with the research boom of chimera states. So far, most previous investigations have focused on nonlocally coupled systems interacting via similar variables. In this work, we report the evolution of dynamical behaviors in nonlocally coupled Stuart–Landau oscillators with conjugate-variable feedback. Through rigorous analysis, we find that oscillation death (OD) can convert into amplitude death (AD) via a cluster state as the coupling range increases, so that the AD regions expand without bound along both the natural-frequency and the coupling-strength directions. Moreover, the limit cycle oscillation (OS) region and the mixed region of OD and OS turn into an anti-synchronization state through an amplitude-mediated chimera. Therefore, the transition from local to nonlocal coupling indeed implies a continuous enhancement of coherence among neighboring oscillators in coupled systems.
Practically, conjugate coupling (or dissimilar coupling) is a popular and natural coupling manner in many real circumstances 35,36 . Recent studies have suggested that conjugate coupling is a more practical method than classical diffusive coupling in certain situations. For example, it can effectively adjust the emission of the light signal in a coupled semiconductor laser system 37 . Moreover, conjugate coupling can realize oscillation suppression of coupled identical oscillators in the absence of time delay 14 . However, most existing studies have focused mainly on two conjugately coupled oscillators, where the coupled systems are low-dimensional. Only recently has the role of conjugate coupling in locally coupled chaotic systems been reported, showing that the stability region of synchronization is independent of the number of oscillators 38 , and oscillation quenching and multistability phenomena have been observed in locally conjugate-coupled Stuart-Landau oscillators 16 . Compared with local coupling, the relationship among oscillators becomes closer in the case of nonlocal coupling as the coupling range increases, but the collective dynamic behaviors of nonlocally coupled systems are still not clearly described. Therefore, the main work of this paper is to investigate the changes of oscillation patterns in the transition from local coupling to nonlocal coupling, so as to further clarify the crucial role of the coupling range.
Motivated by the above analysis, we introduce the nonlocal coupling topology to Stuart-Landau oscillators with conjugate variables throughout this paper. In the next section, based on the local coupling scenario in which AD, OD and limit cycle oscillation (OS) can be observed in the phase diagram 16 , we reveal successively the variations of dynamic behaviors as the coupling range is tuned from local to nonlocal coupling. Through theoretical analysis, we obtain the conditions for AD in different coupling ranges. Besides, the numerical results show that OD can transit to AD for strong coupling, where the stable fixed points gather gradually to form cluster states and finally merge into a single stable fixed point (i.e. AD) as the coupling range increases. Moreover, we also find numerically that, for weak coupling, OD and OS can convert to anti-synchronization via an amplitude-mediated chimera. Thus, this peculiar behavior in fact portends the enhancement of coherence. Finally, the conclusions are given.
Results
Now, let us consider a ring of N identical nonlocally coupled Stuart-Landau oscillators, coupled mainly through conjugate variables; the dynamic equation is given by Eq. (1), where i = 1, 2, …, N, the parameter ε denotes the coupling strength, and w is the inherent frequency of the oscillators. A single uncoupled oscillator exhibits a limit cycle oscillation with radius 1 and frequency w. Here p governs the number of nearest neighbors in each direction of the ring, and it is called the coupling range. Then p = 1 corresponds to local coupling, and p = N/2 (N even) or p = (N − 1)/2 (N odd) is global coupling. Thus, changing the value of p establishes the transition from local coupling to global coupling. The periodic boundary condition is implemented for the above system. In addition, the networks of N (N = 100) nonlocally coupled oscillators (Eq. (1)) are solved numerically by the fourth-order Runge-Kutta method with integration step size h = 0.01, and the initial condition is adopted as follows: the former 50 oscillators start from positive constants (x_i = 0.3, y_i = 0.5, i = 1, …, 50), and the other 50 oscillators take the opposite values (x_i = −0.3, y_i = −0.5, i = 51, …, 100). Obviously, the origin is a fixed point of the coupled system, and when it is stable, AD can appear. Hence, in the following we first consider the condition for the emergence of AD through a linear stability analysis; the Jacobian matrix J at the origin can be obtained in block-circulant form. Here, the diagonal blocks of the matrix J are A and, according to the periodic boundary condition, each block row contains p copies of the block B on either side of A. Both A and B are 2×2 matrix blocks, and the remaining elements are the 2×2 zero matrix O. Then, the characteristic equation used to determine AD of the coupled system can be derived by utilizing the properties of the circulant matrix 39 . AD emerges if and only if all eigenvalues have negative real parts, so the critical conditions inducing AD are given by Eq. (7). It should be noted that the lower bound of the coupling strength needed to induce AD is ε = 1, and the upper bound is governed by the second inequality of Eq. (7). Figure 1 depicts the range of the coefficient λ_k/(2p), where k is the horizontal coordinate, p is the vertical coordinate, and the color codes represent the values of λ_k/(2p). Obviously, |λ_k/(2p)| ≤ 1, and equality is achieved only at k = N/2 and p = 1. Thus, one obtains that the AD boundary for p = 1 is 1 < ε < (1 + w²)/2, which is consistent with ref. 16 . Furthermore, in the case of local coupling (p = 1), the phase diagram on the w − ε plane is obtained numerically in Fig. 2. Specifically, we define an amplitude index A_i as the difference between the global maximum and minimum values of the time series of the ith oscillator over a sufficiently long interval. If all A_i (i = 1, …, N) tend to zero, AD or OD occurs; otherwise, OS occurs. Moreover, to distinguish AD from OD one needs to determine whether the maximum (or minimum) values of all oscillators are the same: if they are identical, AD appears; otherwise, OD appears. Herein, the green region denotes oscillation death (OD), the purple region is amplitude death (AD), and the white region is limit cycle oscillation (OS). It is worth mentioning that the OD region in Fig. 2 can be divided into two parts by the line ε = 1.0: the upper part contains only OD and is denoted Region I, and the lower part is a coexistence region of OD and limit cycle oscillation (OS), called Region II. In addition, the corresponding theoretical boundaries (red, blue and green lines in Fig. 2) have been described for the locally coupled system in ref. 16 .
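The display equations referred to above (the coupled system (1), the Jacobian blocks, and the AD conditions (7)) are not reproduced in this extract. A minimal reconstruction, consistent with the quoted special cases (the local-coupling boundary 1 < ε < (1 + w²)/2, the bound |λ_k/(2p)| ≤ 1, and the w = 2 discriminant Δ = 4 − 20a discussed below, with a presumably standing for 1 − (λ_k/(2p))²), might read as follows; the exact form used in the original paper may differ.

```latex
% Presumed conjugate-coupled Stuart-Landau ring (Eq. (1)); indices taken mod N:
\dot{x}_i = \bigl(1 - x_i^2 - y_i^2\bigr)x_i - w\,y_i
          + \frac{\varepsilon}{2p}\sum_{j=i-p,\,j\neq i}^{i+p}\bigl(y_j - x_i\bigr),\qquad
\dot{y}_i = \bigl(1 - x_i^2 - y_i^2\bigr)y_i + w\,x_i
          + \frac{\varepsilon}{2p}\sum_{j=i-p,\,j\neq i}^{i+p}\bigl(x_j - y_i\bigr).

% Corresponding Jacobian blocks at the origin and ring eigenvalue factors:
A = \begin{pmatrix} 1-\varepsilon & -w\\ w & 1-\varepsilon \end{pmatrix},\qquad
B = \frac{\varepsilon}{2p}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\qquad
\lambda_k = \sum_{j=1}^{p} 2\cos\!\Bigl(\tfrac{2\pi k j}{N}\Bigr).

% Presumed AD conditions (Eq. (7)), required for every considered mode k:
\varepsilon > 1
\quad\text{and}\quad
\varepsilon^2\Bigl[1-\bigl(\tfrac{\lambda_k}{2p}\bigr)^2\Bigr] - 2\varepsilon + \bigl(1+w^2\bigr) > 0 .
```

With a := 1 − (λ_k/(2p))², the second inequality at w = 2 becomes aε² − 2ε + 5 > 0, whose discriminant 4 − 20a matches the value of Δ quoted below, and at p = 1, k = N/2 it reduces to ε < (1 + w²)/2 as stated above.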
Through the above analysis, the collective behaviors of the coupled system (Eq. (1)) under local coupling (p = 1) have been established entirely as the parameters ε and w change. However, up to now, the evolutions of the oscillation patterns remain unclear for p ≠ 1. For example, does increasing the coupling range break the stability of the AD and OD states? How does the coupling range affect the transition between AD and OD? And, apart from the above dynamic behaviors in the local coupling case, what new phenomena can appear for the nonlocal network (1 < p < N/2)? These questions constitute the main focus of this paper.
In this work, considering the transition from local coupling to nonlocal coupling, we can obtain theoretically the conditions supporting AD for different coupling ranges. However, the theoretical analysis of the OD state becomes more difficult owing to the appearance of different equilibrium points, so the study of OD relies mostly on numerical simulations. Similarly, the changes of the OS state are also studied numerically. Therefore, in the following, the evolutions of the dynamical regimes are traced with two numerical tools: spatiotemporal patterns of the coupled system, and the statistical measure known as the strength of incoherence (SI) 40 . In fact, SI is a suitable statistical method to measure the coherence of the coupled oscillators from their time series, and it provides a very efficient way to observe the transition processes.
Effects of coupling range on AD. In this subsection, we investigate the effects of the coupling range p on the AD state in detail. Firstly, according to the second inequality of Eq. (7), we can derive the corresponding restriction relating ε, w and λ_k/(2p), and we can further obtain numerically the critical values of p as w and ε change, as shown in Fig. 3. When p exceeds these critical values, Eq. (7) always holds, and AD can occur for any coupling strength (ε > 1) and natural frequency (w > 0).
In the following, we show more concretely the range of AD for different p. On the one hand, we fix the intrinsic frequency w = 2 without loss of generality, so that Eq. (7) simplifies to Eq. (8). According to the properties of a quadratic function, the second inequality of Eq. (8) always holds if Δ = 4 − 20a < 0, which is equivalent to p ≥ 13 through simple algebraic calculations and numerical analysis. That is, as long as p ≥ 13, AD occurs for any coupling strength ε > 1, which agrees with the results in Fig. 3. Besides, for Δ > 0, we can also derive that the ranges of coupling strength ensuring AD are ε < (1 − √(1 − 5a))/a and ε > (1 + √(1 − 5a))/a as the coupling range p changes. Interestingly, we find numerically that the OD state occurs in the interval (1 − √(1 − 5a))/a < ε < (1 + √(1 − 5a))/a, as shown in Fig. 4(a). Herein, the AD ranges are gradually expanded, while the OD regions correspondingly shrink as p increases, and finally only the AD state occurs for p ≥ 13. Moreover, Fig. 4(b-d) show the bifurcation diagram of the variable y_i (i = 1, …, 100) for different coupling ranges p. Therein, AD appears for 1 < ε < 2.9 and ε > 17.9, and OD for 2.9 < ε < 17.9, at p = 8. Then AD enlarges to 1 < ε < 3.2 and ε > 10.8, and OD reduces to 3.2 < ε < 10.8, at p = 10. Further, the AD range becomes 1 < ε < 4.3 and ε > 5.7, and OD narrows to 4.3 < ε < 5.7, at p = 12. As a result, one can clearly observe that, apart from the transition from AD to OD via a pitchfork bifurcation, the reverse transition from OD to AD can also be achieved through an inverse pitchfork bifurcation when the natural frequency is fixed. Thus, changes of the coupling range have a significant impact on the AD phenomenon of the coupled system for fixed natural frequency. Namely, the AD state (HSS) is promoted, while the OD state (IHSS) is restrained, when the coupling range increases.
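A minimal numerical sketch of this AD/OD window, using the condition reconstructed after the opening of the Results section (so both the condition and the exclusion of the uniform k = 0 mode are assumptions, not the paper's verbatim formulas):

```python
import numpy as np

N = 100  # ring size used in the text

def c_k(k, p):
    """lambda_k / (2p): averaged cosine factor of the ring coupling (reconstructed)."""
    j = np.arange(1, p + 1)
    return np.cos(2.0 * np.pi * k * j / N).sum() / p

def od_window(p, w=2.0):
    """eps-interval on which the reconstructed AD condition fails (expected OD window).

    The condition a*eps^2 - 2*eps + (1 + w^2) > 0 with a = 1 - max_k (lambda_k/(2p))^2
    is checked over k = 1..N-1 (the uniform mode is assumed excluded, cf. Fig. 1);
    intended for p >= 2, where 0 < a < 1."""
    c_max = max(abs(c_k(k, p)) for k in range(1, N))
    a = 1.0 - c_max ** 2
    disc = 1.0 - a * (1.0 + w ** 2)   # quarter-discriminant of the quadratic in eps
    if disc <= 0.0:                   # quadratic never negative: AD for every eps > 1
        return None
    lo = (1.0 - np.sqrt(disc)) / a
    hi = (1.0 + np.sqrt(disc)) / a
    return max(lo, 1.0), hi

for p in (8, 10, 12, 13):
    print(p, od_window(p))  # roughly (2.9, 17.9), (3.2, 10.8), (4.3, 5.7), None as quoted
```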
On the other hand, we can also fix the coupling strength ε = 5.0 to investigate the influence of the coupling range p on the natural frequency w needed to induce AD. Here Eq. (7) simplifies to Eq. (9), which can further be reduced to w² > 25(λ_k/(2p))² − 16. This indicates that the upper bound of w for producing AD is infinite, and changing the value of p only adjusts the lower bound. It can be noticed that if |λ_k/(2p)| < 4/5 is satisfied (the corresponding threshold of the coupling range being p = 18), Eq. (9) always holds; this result can also be read directly from Fig. 3. Namely, w can become very small for p ≥ 18. In addition, we also numerically calculate the minimum natural frequencies for p < 18 in Fig. 5(a), where the minimum value of w (red region) decreases monotonically with increasing p. Moreover, the transition from OD to AD through an inverse pitchfork bifurcation can also be observed as w increases, which is confirmed by the bifurcation diagrams of the variable y_i of the coupled system for p = 6, 12, and 17 in Fig. 5(b-d). Herein, the critical natural frequencies at which the transition occurs are w = 2.8, 2.0, and 0.5, respectively. Thus, when the natural frequency varies, increasing the coupling range accelerates the transition from OD to AD until OD is eliminated completely.
To end this subsection, we conclude that, by tuning the coupling range, the AD region can be enlarged remarkably in both the coupling-strength and the natural-frequency directions. In particular, only the AD state occurs for ε > 1 on the (ε, w) plane in Fig. 2 when the coupling range exceeds the threshold. In fact, in the above region the different stable fixed points (IHSS) of OD are reduced and ultimately converge to the same steady state (AD) as the coupling range increases, which will be discussed in detail in the next subsection. It is noteworthy that adjusting the coupling range within the stable regions (AD and OD) does not destroy the stability of the coupled system; only the number of stable equilibrium points changes.
Effect of coupling range on OD. From the above analysis of AD, it is concluded that the OD regime eventually becomes AD as the coupling range p increases in Region I. Here we explore the phenomena emerging during the transition from OD to AD. Firstly, we select a point (w = 1.0, ε = 2.0) in Region I and observe the changes of the dynamic behaviors of the coupled system with increasing p through the spatiotemporal patterns in Fig. 6. From Fig. 6(a), we can see that the system has numerous stable fixed points for the coupling range p = 1, namely different oscillators populate different stable branches (emergence of the OD phenomenon). As the coupling range increases to p = 5, the stable branches of OD are reduced, and neighboring oscillators gradually fall on the same branch, giving rise to cluster behavior, as shown in Fig. 6(b). Then, the amplitude of the oscillators decreases progressively (see Fig. 6(c)) and vanishes eventually to form the AD state when p climbs to p = 22 (see Fig. 6(d)). Hence, the transition from OD to AD in Region I can be regarded as two processes: clustering of the stable points and decrease of the amplitude. In addition, for the continuous increase of the coupling range p, the changes of the snapshots are exhibited in the Supplementary Materials (OD1.gif).
To better describe the variation from OD to AD in Region I, we can use the strength-of-incoherence measure (SI) 40 to observe the transition from IHSS (incoherent state) to HSS (coherent state). In fact, the smaller SI is, the more coherent the dynamics; the transition from OD to AD is actually a process in which the coherence of the coupled system is enhanced gradually. We first define new variables ξ_{1,i} = x_{i+1} − x_i and ξ_{2,i} = y_{i+1} − y_i, which implies that neighboring oscillators are coherent for ξ_{1,i}, ξ_{2,i} → 0 (ξ_{1,i} = ξ_{2,i} = 0 for the AD state). Then the total number of oscillators is split into M bins of equal size n = N/M, and the local standard deviation σ_l(m) can be derived, where Θ is the Heaviside step function and δ is relatively small, representing a certain percentage of the difference between the maximum and minimum of the variables in the coupled system. We also note that the smaller δ is, the stronger the demand of coherence among neighboring oscillators. Here, we take M = 10 and δ = 0.01, and SI = 0, SI = 1 and 0 < SI < 1 represent the AD state, incoherent OD and the cluster state, respectively. Namely, when the coupled system is in the OD state, all oscillators populate different stable branches, presenting an incoherent state (SI = 1). With the enhancement of coherence, the coupled oscillators can turn into a cluster state (0 < SI < 1). Finally, all oscillators are stabilized to the same fixed point to produce the AD phenomenon, where the coherence is strongest (SI = 0), namely, the completely coherent state. Therefore, SI can be used to track the transition from OD to AD. Now, we can draw the change of SI for different points of Region I with increasing p. For instance, Fig. 7(a) shows the transition from the incoherent OD state to coherent AD via the cluster state for different coupling strengths ε = 2.0, 4.0 and 6.0 at fixed natural frequency w = 1.0. It can be seen that the greater the coupling strength ε, the smaller the coupling range p needed to induce AD. Similarly, increasing the natural frequency can also facilitate this transition. For the OD state in Region II, when the coupling range p increases, the stability of the coupled system is destroyed, so AD does not appear. However, we observe that two emerging phenomena, the chimera state and anti-synchronization, can arise as the coupling range changes. Similarly, the strength-of-incoherence measure (SI) can be utilized to depict the variations of the OD state, with the parameters of SI chosen as M = 20 and δ = 0.05. Owing to the choice of the initial condition, the completely coherent state (SI = 0) is impossible, and the coherence reaches its strongest only for SI = 0.1 (see Fig. 8). So here SI = 0.1, SI = 1 and 0.1 < SI < 1 represent the strongest coherent state, incoherent OD and the chimera state, respectively. Figure 8 describes the change of SI for different points in Region II. Wherein, with increasing p, the oscillation behaviors turn rapidly from the incoherent OD state (SI = 1) to the chimera state (0.1 < SI < 1), and finally reach the coherent state (SI = 0.1). In fact, a smaller SI represents stronger coherence, which can also be confirmed through the spatiotemporal evolutions and snapshots of the variable y_i in Fig. 9 with ε = 0.9 and w = 0.5 (the red square line in Fig. 8(a)). Wherein, for SI = 1 the incoherent OD state occurs (see Fig.
6(a) with p = 1); then the coupled system shows a traveling wave state when SI = 0.2 (p = 2), while SI = 0.4 (p = 27), SI = 0.6 (p = 35) and SI = 0.8 (p = 40) correspond to chimera-state behaviors in Fig. 9(b-d). Thus, the coherence between neighboring oscillators gradually decreases as SI increases. Once SI = 0.1 (p ≥ 47), the oscillators evolve into two synchronized groups: oscillators 1-50 maintain one common synchronization rhythm, and oscillators 51-100 oscillate synchronously with another rhythm, as shown in Fig. 9(e), which represents the appearance of anti-synchronization. Furthermore, the continuous change of the snapshots with p is also provided in the Supplementary Materials (OD2.gif).
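A short sketch of how the strength of incoherence described above could be computed from a snapshot of the coupled system; the definition follows ref. 40 only up to the assumptions stated in the comments (in particular, how the two difference variables are combined into one local standard deviation).

```python
import numpy as np

def strength_of_incoherence(x, y, M=20, delta=0.05):
    """Strength of incoherence (SI) from a snapshot (x_i, y_i), i = 1..N.

    Assumed form: build xi_{1,i} = x_{i+1} - x_i and xi_{2,i} = y_{i+1} - y_i,
    split the ring into M bins of size n = N/M, and count the bins whose local
    standard deviation stays below the threshold delta (Heaviside Theta(delta - sigma))."""
    xi = np.stack([np.roll(x, -1) - x, np.roll(y, -1) - y])   # shape (2, N)
    n = x.size // M
    s = 0
    for m in range(M):
        block = xi[:, m * n:(m + 1) * n]
        sigma_l = np.sqrt(((block - block.mean(axis=1, keepdims=True)) ** 2).mean())
        s += 1 if sigma_l < delta else 0
    return 1.0 - s / M        # SI = 0: fully coherent, SI = 1: fully incoherent

# Hypothetical anti-synchronized snapshot: two clusters give SI close to 0.1,
# consistent with the strongest coherence reachable from this initial condition.
x = np.concatenate([np.full(50, 0.8), np.full(50, -0.8)])
y = np.concatenate([np.full(50, 0.4), np.full(50, -0.4)])
print(strength_of_incoherence(x, y))
```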
Considering that chimera states come in various categories, it is important to distinguish the type of the chimera states mentioned above. To illustrate this, we calculate the snapshots of the variable x_i(t) and the mean phase velocity, where Δt (= 1000) denotes the time window over which the average is computed. The mean phase velocity profile is arc-shaped in the incoherent region of a chimera state, while it is flat in the coherent region of the chimera. Taking Fig. 9(e) as an example, we can see from the snapshots and the mean phase velocity profile in Fig. 10 that the chimera behavior of the coupled system manifests not only in the amplitude but also in the phase. Besides, we also carried out the same analysis for the other chimera behaviors above and obtained identical results. Therefore, these chimera states should be called amplitude-mediated chimeras 26,27 .
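A sketch of how the mean phase velocity just described could be extracted from a long time series; counting upward zero crossings of x_i is an assumption made here for simplicity, since the original definition (e.g. counting oscillation maxima) is not reproduced in the text.

```python
import numpy as np

def mean_phase_velocity(x_series, dt, t_window=1000.0):
    """Mean phase velocity omega_i = 2*pi*M_i / Delta_t for each oscillator.

    x_series: array of shape (n_time_steps, N) holding x_i(t) samples;
    M_i counts complete oscillations within the last Delta_t time units,
    estimated here from upward zero crossings of x_i."""
    n_steps = int(t_window / dt)
    seg = x_series[-n_steps:]
    up = (seg[:-1] < 0.0) & (seg[1:] >= 0.0)   # upward zero crossings
    M_i = up.sum(axis=0)
    return 2.0 * np.pi * M_i / t_window
```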
To sum up this subsection, the OD in Region I converts into AD (reduction of the number of equilibrium points), while the OD in Region II gives way to anti-synchronization (destruction of the stability of the equilibrium points). Consequently, the emergence of the two different behaviors is mainly governed by the size of the coupling strength: a weak coupling strength is conducive to synchronization, whereas a strong one contributes to oscillation quenching (AD/OD) 7 .
Effect of coupling range on LC oscillation. For the limit cycle oscillation in Fig. 2, we can also use the SI index to describe the transition from the incoherent state to the coherent state as the coupling range changes. From Fig. 11, it can be clearly observed that the coupled system in incoherent limit cycle oscillation (SI = 1) becomes an anti-synchronization state (SI = 0.1) through a chimera state (0.1 < SI < 1). However, for the relatively weak coupling strength ε = 0.1 (green triangle line in Fig. 11(a)), the transition is achieved directly from the chimera state to the anti-synchronization state. Moreover, the coupling range needed to induce anti-synchronization (the strongest coherence) is relatively small for smaller coupling strength at fixed natural frequency, while at fixed coupling strength it requires a larger natural frequency. Similarly, we also describe the spatiotemporal evolutions and snapshots of the variable y_i in Fig. 12 with ε = 0.5 and w = 1.5 (the red square line in Fig. 11(a)). Wherein, with decreasing SI, the coherence of the coupled system increases gradually (Fig. 12(a-c)), until finally the anti-synchronization state is reached (SI = 0.1). Therefore, adjusting the coupling range can also enhance the coherence among oscillators in the OS region. It is worth mentioning that the observed chimera states here are also amplitude-mediated chimeras (Fig. 12(b,c)); the corresponding evidence is not shown again. Besides, the continuous change of the snapshots with increasing coupling range p is also shown in the Supplementary Materials (LCOS.gif).
Discussion
To summarize, we have systematically studied the collective dynamical behaviors in a ring of N nonlocally coupled Stuart-Landau oscillators with conjugate variables. By increasing the coupling range, the oscillation patterns of the nonlinear system undergo notable changes. For relatively strong coupling strength (ε > 1), the AD region gradually expands along both the natural-frequency and the coupling-strength directions as the coupling range increases. In fact, only the homogeneous steady state (AD) and the inhomogeneous steady state (OD) appear for the conjugately coupled system in the case of local coupling (p = 1). However, with increasing coupling range, the different stable branches of OD gradually converge to a single equilibrium, generating the AD state. Thus, increasing the coupling range not only maintains the system's stability but also enhances its homogeneity. In contrast, for weak coupling strength (ε < 1), as the coupling range increases, the stability of OD is destroyed by the weak interaction, and both OD and OS ultimately convert into anti-synchronization via an amplitude-mediated chimera, which indicates that the coherence of the dynamic behaviors of the coupled system is likewise improved. In view of the above facts, it is clearly illustrated that tuning the coupling range is an effective approach for driving a nonlinear system from incoherent or disordered states toward coherence and consistency.
Finally, through the research of this paper, it is clear that increasing the coupling range can significantly promote the transition of incoherent states (OD/OS) to coherent states (AD/anti-synchronization) in conjugately coupled systems; namely, the regularity and consistency of the coupled system can be improved. Therefore, we believe that our research can find application in experimental realizations and in real life. For example, cooperative tasks of robots, formation flying of UAVs (unmanned aerial vehicles) and the coordinated operation of human organs could all be steered toward optimal control by finding the proper coupling range. | 5,779.2 | 2018-06-07T00:00:00.000 | [
"Physics"
] |
Global Stabilization of a Single-Species Ecosystem with Markovian Jumping under Neumann Boundary Value via Laplacian Semigroup
By applying impulsive control, this work investigates the global stabilization of a single-species ecosystem with Markovian jumping, a time delay and a Neumann boundary condition. Variational methods, a fixed-point theorem, and Laplacian semigroup theory are employed to derive the unique existence of the globally stable equilibrium point, which is a positive number. Numerical examples illustrate the feasibility of the proposed methods.
Introduction
As pointed out in [1], the following logistic system has been widely discussed and studied due to its importance in the development of ecology. Here, Z(t) is the population's quantity or density at time t, K is the environmental carrying capacity, and R > 0 is the intrinsic growth rate of the population. Because the solutions of nonlinear ecosystems are difficult to obtain exactly, people pay more attention to the long-term dynamic trend of an ecosystem, i.e., the long-term trend of the population density (see, for example, [1][2][3][4][5]). In particular, one wants to know whether the population will tend to a positive constant after a long time, which is related to the long-term persistence of the population. For example, the long-term behavior of the following random single-species ecosystem was studied in [2]. Animal populations will inevitably spread because of climate, foraging and random walking. Hence, reaction-diffusion ecological models simulate a real ecosystem well. In particular, reaction-diffusion ecosystems were studied in [6,7]. For example, in [8], a single-species Markovian jumping ecosystem with diffusion and delayed feedback under a Dirichlet boundary value was investigated. Markov systems often occur in various engineering technologies (see, for example, [9][10][11]). In particular, a Markovian jumping delayed feedback model reflects well the influence of stochastic factors, such as weather, temperature, humidity and ventilation status, on time delays in the changes of populations. However, the case of a single-species ecosystem with a Neumann boundary value has seldom been researched. In fact, a Neumann zero boundary value models well a biosphere boundary without population migration; for example, freshwater fish do not enter the sea through rivers. Inspired by ideas and methods of the related literature, we study the global stabilization of a Markovian jumping delayed feedback diffusion ecosystem with a single species equipped with the Neumann zero boundary value.
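The display equation for the logistic system is not reproduced above; in its standard parameterization (an assumption about the exact form used in [1]) it reads

```latex
\frac{\mathrm{d}Z(t)}{\mathrm{d}t} \;=\; R\,Z(t)\!\left(1-\frac{Z(t)}{K}\right),
\qquad Z(0)>0 ,
```

so that every positive solution tends to the carrying capacity K as t → +∞, which is precisely the kind of long-term trend discussed here.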
The main contributions are as follows.
• The uniqueness proof of the positive equilibrium solution is presented in this paper, whereas previous work only established the existence of the positive equilibrium solution.
• In the case of a single-species ecosystem with impulses, this is the first study to use a Laplacian semigroup to globally stabilize the ecosystem.
• A numerical example is designed to illustrate the advantages of Theorem 2 over [22] (Theorem 4.2), as a result of reducing the algorithm's conservatism.
Preparatory Work
Consider the following reaction-diffusion delayed ecosystem, in which u(t, x) is the population density at time t and space point x; a > 0 and b > 0 are described similarly to those of [8]; τ(t) ∈ [0, τ]; and Γ(s, x) is the bounded initial-value function on [−τ, 0] × Ω. For convenience, c(r(t)) is denoted simply by c_r for r(t) = r ∈ S.
In addition, due to the limited resources of nature, the population density should have an upper limit. At the same time, it should also have a lower limit, because a low population density does not allow male and female animals to meet and reproduce in the vast biosphere. Hypothesis 1 (H1). There exist two positive constants N_1 and N_2 and a constant k_0 ∈ (0, 1) such that the initial data stay between N_1 and N_2 and keep a distance, controlled by k_0, from both bounds. Remark 1. The boundedness assumption in (H1) is one of the innovations of this paper: it ensures that the initial value keeps a certain distance from the upper and lower bounds, so that an impulse with an appropriate frequency and intensity can ensure that the dynamic behavior of the system with such an initial value does not exceed the bounds.
Moreover, u*(x) satisfies (H1). Now, it is easy to conclude from Definition 1 that u* ≡ a/b is a stationary solution of the system (5). Moreover, after a change of variables, the positive solution u* ≡ a/b of the ecosystem (5) corresponds to the zero solution of the system (8). Thus, the stabilization of the above-mentioned zero solution will be investigated below. Furthermore, employing an impulse control on the natural ecosystem (5) or (8) results in the system (9), whose zero solution corresponds to the stationary solution u* ≡ a/b of the system (10). Here, we assume that 0 < t_1 < t_2 < · · · , and each t_k (k ∈ N) represents a fixed impulsive instant; additionally, lim_{k→+∞} t_k = +∞.
In this paper, the following condition is also required. Hypothesis 2 (H2). There are two constants M > 0 and γ > 0 such that the Laplacian semigroup e^{tΔ} satisfies ‖e^{tΔ}w‖ ≤ M e^{−γt}‖w‖ (see [23]). Lemma 1 (see, for example, [14]). Ω ⊂ R^m is a bounded domain with smooth boundary ∂Ω, and λ_1 is the least positive eigenvalue of the following Neumann boundary problem, where ν is the external normal direction of ∂Ω.
Lemma 2 ([36]). If f is a contraction mapping on a complete metric space H, there must exist a unique point u ∈ H satisfying f(u) = u.
Main Result
Firstly, the unique existence of the stationary solution of the ecosystem (5) should be considered; moreover, this unique stationary solution should be positive. Based on these two points, we present the following unique existence theorem. Theorem 1. If, in addition, the condition (12) is satisfied, then the positive solution u* is the unique stationary solution of the system (5).
Thus, Definition 1 yields that u * > 0 defined in Theorem 1 is the unique stationary solution of the system (5).
Below, we claim that u* is the unique stationary solution of the ecosystem (5). Indeed, if u* and v*(x) are two different stationary solutions of the system (5), then the Poincaré inequality and the boundary condition yield the inequality (13). On the other hand, the condition (12), Definition 2 and the continuity of u* and v* lead to an estimate which contradicts the inequality (13). This completes the proof.
Remark 3.
To the best of our knowledge, Theorem 1 is the first theorem to provide the unique existence of the stationary solution of a single-species ecosystem under a Neumann boundary value.
Remark 4.
This paper provides the unique existence of a stationary solution of a reaction-diffusion system. However, many previous articles on reaction-diffusion systems only involve the existence of the equilibrium point. For example, in [14], only the unique existence of the constant equilibrium point of a reaction-diffusion system with a Neumann boundary value was given, but the stationary solutions of a reaction-diffusion system may also include non-constant stationary solutions. Because the solution u(t, x) of a reaction-diffusion system involves not only the time variable t but also the space variable x, its stationary solution should be a function u*(x), independent of the time variable t. Obviously, u*(x) need not be a constant equilibrium point, since it may depend on the space variable x. Thereby, it is not sufficient to prove, as in [14], that the equilibrium point is the unique constant equilibrium point; it must be proved that it is the unique stationary solution, just as in this paper. A similar example can also be found in [12]. Note that the system (10) has the same elliptic equation as that of the system (5); hence, each stationary solution of the system (5) is a stationary solution of the system (10), and vice versa. That is, Theorem 1 shows that u* ≡ a/b is also the unique stationary solution of the system (10). Next, the global stability of the stationary solution u* ≡ a/b should be investigated.
Theorem 2. Set p ≥ 1. Suppose all the conditions of Theorem 1 hold. Assume, in addition, that the condition (H2) holds and that 0 < r < 1 for r(t) = r ∈ S; then u* ≡ a/b of the system (10) is globally exponentially stable in the pth moment; equivalently, the null solution of the system (9) is globally exponentially stable in the pth moment. Proof. Let the normed space H be the function space consisting of functions g(t, x), where g satisfies: (A1) g is pth moment continuous at every t ≥ 0 with t ≠ t_k (k ∈ N); (A2) for any given x ∈ Ω, the one-sided limits of g(·, x) exist at each impulsive instant t_k; (A3) g(s, x) = ξ(s, x), ∀ s ∈ [−τ, 0], x ∈ Ω; (A4) e^{αt}‖g(t)‖^p → 0 as t → +∞, where α is a positive scalar with 0 < α < qγ. It is easy to verify that H is a complete metric space when equipped with a suitable distance. Construct an operator Θ such that, for any given U ∈ H, Θ(U) is given by the natural variation-of-constants expression associated with the Laplacian semigroup. Below, we want to prove that Θ : H → H, and it takes four steps to achieve this goal.
Step 1. It is claimed that, for U ∈ H, Θ(U) is pth moment continuous at every t ≥ 0 with t ≠ t_k (k ∈ N). Indeed, the boundedness of U on bounded time intervals, together with the choice of a sufficiently small scalar δ, yields the required continuity by a routine argument, which proves the claim. Then, (A1) is verified.
Step 4. Verifying (A4), i.e., for U ∈ H, verifying (19). Indeed, the term ∫ e^{q(t−s)Δ} c_r U(s, x) ds is estimated first, and the Hölder inequality yields a further bound. On the other hand, e^{αt}‖U_i(t)‖^p → 0 means that, for any ε > 0, there exists t* > 0 such that e^{αt}‖U_i(t)‖^p < ε for all t ≥ t*; together with the arbitrariness of ε, this gives the required limit. Combining (20)-(28) yields (19). It follows from the above four steps that Θ : H → H. Finally, we claim that Θ is a contraction mapping on H. Indeed, for any U, V ∈ H, the Hölder inequality and (H2) yield the corresponding estimates, and similarly for the impulsive terms. Assume t_{j−1} < t ≤ t_j; then the definition of the Riemann integral ∫_a^b e^s ds yields the remaining bound. It follows from (30) and the definition of r that Θ : H → H is contractive, so there must exist a fixed point U of Θ in H, which means that U is the solution of the system (9) satisfying e^{αt}‖U‖^p → 0 as t → +∞, so that e^{αt}‖u − u*‖^p → 0 as t → +∞. Therefore, the zero solution of the system (9) is globally exponentially stable in the pth moment; equivalently, u* ≡ a/b of the system (10) is globally exponentially stable in the pth moment.
Remark 5.
To the best of our knowledge, this is the first paper to employ impulsive control and the Laplacian semigroup to globally stabilize a single-species ecosystem. Remark 6. This paper reports the global stability of a single-species ecosystem, whereas the stability obtained in [3] was not global. This means that the stability in [3] depends heavily on the choice of the initial value, while global stability does not require such a choice. On the other hand, Equation (5) involves the spatial state, while the models in [3] do not involve the spatial location. In fact, population migration has a great impact on population stability, so the spatial state should be considered in an ecosystem model.
Example 2.
This example uses all the data provided in Example 1. Assume, in addition, that p = 1.005, M_k ≡ 1.02 and µ = 5; obviously, the condition (H2) holds in Example 1. Moreover, direct calculation shows that the condition (14) is satisfied.
Thereby, Theorem 2 yields that the null solution of the system (9) is globally exponentially stable in the pth moment; equivalently, u * ≡ 0.423 of the system (10) is globally exponentially stable in the pth moment (see Figures 1 and 2).
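As a rough illustration of the kind of simulation behind Figures 1 and 2, the following sketch integrates a one-dimensional reaction-diffusion equation with a zero-flux (Neumann) boundary and multiplicative impulses. Since the displays for the systems (5), (9) and (10) are not reproduced in this extract, the logistic-type reaction term u(a − bu) (chosen so that u* = a/b ≈ 0.423), the impulse rule u(t_k^+) = M_k u(t_k^-) with M_k ≡ 1.02, and the pulse spacing are assumptions made only for illustration; the Markovian switching and the delay are omitted.

```python
import numpy as np

# Assumed parameters: u* = a/b = 0.423 as in Example 2; a, b themselves are illustrative.
a, b = 0.423, 1.0
nx, dx = 51, 1.0 / 50
dt, T = 1e-4, 2.0
Mk, pulse_gap = 1.02, 0.2            # impulse intensity and (assumed) impulse spacing

x = np.linspace(0.0, 1.0, nx)
u = 0.3 + 0.2 * np.sin(np.pi * x)    # initial data kept inside [N1, N2] = [0.3, 0.523]
t, next_pulse = 0.0, pulse_gap

while t < T:
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2      # zero-flux (Neumann) boundary via ghost nodes
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (lap + u * (a - b * u))    # assumed logistic reaction term
    t += dt
    if t >= next_pulse:                     # impulsive control u(t_k^+) = M_k * u(t_k^-)
        u = Mk * u
        next_pulse += pulse_gap

print("max deviation from u* = a/b:", np.abs(u - a / b).max())
```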
Remark 7.
Obviously, Example 2 illustrates that Theorem 2 improves on [22] (Theorem 4.2) by reducing the conservatism of the algorithm: Theorem 2 does not require the pulse intensity to be less than 1 but allows it to be greater than 1, whereas the latter requires the pulse intensity to be less than 1 (see [22] (Theorem 4.2)).
Remark 8.
In Examples 1 and 2, it follows from 0.3 = N_1 ≤ u ≤ N_2 = 0.523 and u* = 0.423 that 0 ≤ |U| ≤ 0.123. A computer simulation of the dynamics of the state U(t, x) of the system (9) illustrates the feasibility of Theorems 1 and 2 (see Figures 1 and 2). In addition, (H1) is a common condition in much of the related literature (see, for example, [37] (Definition 1)).
Conclusions
In this paper, some improvements on mathematical methods are made: this is the first paper to employ fixed-point theory, variational methods and a Laplacian semigroup to obtain the unique existence of the globally stable positive equilibrium point of a single-species Markovian jumping delayed ecosystem. Numerical examples are provided to show the feasibility of the artificial management of nature by way of impulse control. | 3,101 | 2021-10-01T00:00:00.000 | [
"Mathematics"
] |
A Constrained Markovian Diffusion Model for Controlling the Pollution Accumulation
This work presents a study of a finite-time horizon stochastic control problem with restrictions on both the reward and the cost functions. To this end, it uses standard dynamic programming techniques and an extension of the classic Lagrange multipliers approach. The coefficients considered here are allowed to be unbounded, and the obtained strategies are of non-stationary closed-loop type. The driving thread of the paper is a sequence of examples on a pollution accumulation model, which is used to present three algorithms for replicating the results. There, the reader can also find a result on the interchangeability of limits in a Dirichlet problem.
Introduction
The aim of pollution accumulation models is to study the management of certain goods to be consumed by a society. It is generally accepted that such consumption generates two byproducts: a social utility and pollution. The difference between the utility and the disutility associated with the pollution is known as the social welfare. The theory developed in this work enables the decision maker to find a consumption policy that maximizes an expected social welfare for the society, subject to a constraint that may represent, for example, that the costs of cleaning the environment must not exceed a given quantity over time.
This paper deals with the problem of finding optimal controllers and values for a class of diffusions with unbounded coefficients on a finite-time horizon under the total payoff criterion subject to restrictions. It uses standard dynamic programming tools, the Lagrange multipliers approach, and a result on the interchangeability of limits in a Bellman equation. The driving thread of the paper is a sequence of examples on a pollution accumulation model, which is used to show how to replicate the theoretical results of the work.
The origin of the use of optimal control theory in the context of stochastic diffusions on a finite-time horizon can be traced back to the works of Howard (see [1]), Fleming (see, for instance, [2][3][4]), Kogan (see [5]), and Puterman (cf. [6]). However, the stochastic optimization problem with constraints was attacked only in the late 90s and early 2000s, when some financial applications demanded the consideration of these models, under the hypothesis that the coefficients of the diffusion itself, the reward function, and the restrictions are all bounded (see, for example, [7][8][9][10]). Constrained optimal control under the discounted and ergodic criteria was studied in the seminal paper of Borkar and Ghosh (see [11]), the works of Mendoza-Pérez, Jasso-Fuentes, Prieto-Rumeau and Hernández-Lerma (see [12,13]), and the paper by Jasso-Fuentes, Escobedo-Trujillo and Mendoza-Pérez [14]. In fact, these works serve as an inspiration to pursue an extension of their research to the realm of non-stationary strategies.
Although this is not the first time that the problem of pollution accumulation has been studied from the point of view of dynamic optimization (for example, [15] uses an LQ model to describe this phenomenon, [16] deals with the average payoff in a deterministic framework, [17,18] extend the approach of the former to a stochastic context, and [19] uses a stochastic differential game against nature to characterize the situation), this paper contributes to the state of the art by adding constraints to the reward function, and by considering a finite-time horizon. Moreover, this work profits from this fact by proposing a simulation scheme to test its analytic results. However, it would not be possible to find a suitable Lagrange multiplier for such simulations without the results presented in Example 3 and Theorem 2, below.
The relevance of this work lies in the applicability of its analytic results on a finite-time interval. Unlike models under infinite-time criteria (i.e., discounted and average payoffs, and the refinements of the latter), which focus on finding optimal controllers in the set of (Markovian) stationary strategies, the criterion at hand also considers the more general set of (Markovian) non-stationary strategies. This fact implies that the functional form of the Bellman equation includes a time-dependent term, and that the feedback controllers depend explicitly on the time argument. Since the coefficients of the diffusions involved in this study are assumed to be possibly unbounded, all points in R^n will be attainable, and a verification result will be needed to ensure the existence of a solution of the Bellman equation that remains valid for all (t, x) in [0; T] × R^n, where T is the horizon.
Significance and contributions.
• This paper presents an application of two classic tools: the Lagrange multipliers approach, and Bellman optimization on a finite horizon for diffusions with possibly unbounded coefficients. This fact represents a major technical contribution with respect to the existing literature.
• This study illustrates its results by means of the full development and implementation of an example on the control of pollution accumulation. It also gives actual algorithms which can be used to replicate the results presented in its pages.
• This work lies within the framework of dynamic optimization; however, it considers a broader class of coefficients than, for instance, [15]. As in [16], it presents a pollution accumulation model, but it focuses on a stochastic context (as in [17,18]), with the difference that the present project does so on a finite-time horizon and with restrictions on both the reward and the cost functions.
The rest of the paper is divided as follows. The next section gives the generalities of the model under consideration, i.e., the diffusion that drives the control problem, the total payoff criterion, the restrictions on the cost, and the control policies at hand. Example 1 introduces the pollution model. Section 3 deals with the actual (analytic and simulated) solution of the problem. Examples 2, 3, 4, Lemma 2, Theorem 2 and Example 5 illustrate the analytic technique and serve the purpose of comparing it with some numeric simulations. Finally, Section 4 is devoted to the presentation of the final remarks. This section concludes by introducing some notation for spaces of real-valued functions on R^n. The space W^{ℓ,p}(R^n) stands for the Sobolev space consisting of all real-valued measurable functions h on R^n such that D^α h exists in the weak sense for all |α| ≤ ℓ and belongs to L^p(R^n), where D^α h := ∂^{|α|}h / (∂x_1^{α_1} · · · ∂x_n^{α_n}) with α = (α_1, · · · , α_n) and |α| := α_1 + · · · + α_n. Moreover, C^κ(R^n) is the space of all real-valued continuous functions on R^n with continuous ℓ-th partial derivatives in x_i ∈ R, for i = 1, ..., n and ℓ = 0, 1, ..., κ. In particular, when κ = 0, C^0(R^n) stands for the space of real-valued continuous functions on R^n. Now, C^{κ,η}(R^n) is the subspace of C^κ(R^n) consisting of all those functions h such that D^α h satisfies a Hölder condition with exponent η ∈]0; 1] for all |α| ≤ κ; that is, there exists a constant K ≥ 0 such that |D^α h(x) − D^α h(y)| ≤ K|x − y|^η for all x, y ∈ R^n.
Preliminaries
This work studies a finite-horizon optimal control problem with restrictions. Concretely, let (Ω, F, {F_t : t ≥ 0}) be a measurable space, and let there also be an F_t-adapted stochastic differential system of the form (1), where b : R^n × U → R^n and σ : R^n → R^{n×d} are the drift and diffusion coefficients, respectively, and W(·) is a d-dimensional standard Brownian motion. Here, the set U ⊂ R^m is a Borel set called the action (or control) set. Moreover, let u(·) be a U-valued stochastic process representing the controller's action at each time t ≥ 0. Now, the profit that an agent can obtain from its activity in the system is measured with the performance index (2), where r and r_1 are the running and terminal rewards, respectively, and the symbol E^u_x[·] stands for the conditional expectation of · given that x(t) = x and that the agent uses the sequence of controllers u.
The goal is to maximize (2) subject to the finite-horizon cost index (3) of the operation, where c is a running cost rate, c_1 is a terminal cost rate function, θ is a running constraint-rate function, and θ_1 is a terminal constraint-rate function. Observe that, while the running reward-rate function r depends on the action of the controller, the running constraint rate θ is independent of that variable.
The following is an assumption on the coefficients of the differential system (1).
Hypothesis (H1a). The control set U is compact.
Hypothesis (H1b). The drift coefficient b(x, u) is continuous on R^n × U, and x → b(x, u) satisfies a local Lipschitz condition on R^n, uniformly on U; that is, for each R > 0, there exists a constant K_1(R) > 0 such that |b(x, u) − b(y, u)| ≤ K_1(R)|x − y| for all |x|, |y| ≤ R and u ∈ U. Hypothesis (H1c). The diffusion coefficient σ satisfies a local Lipschitz condition on R^n; that is, for each R > 0, there exists a constant K_2(R) > 0 such that ‖σ(x) − σ(y)‖ ≤ K_2(R)|x − y| for all |x|, |y| ≤ R; in fact, there exists a positive constant K_2 such that ‖σ(x) − σ(y)‖ ≤ K_2|x − y| for all x, y ∈ R^n.
Remark 1.
The local Lipschitz conditions on the drift and diffusion coefficients referred to in Hypotheses (H1b)-(H1c), along with the compactness of the control set U stated in Hypothesis (H1a), yield that for each R > 0 there exists a number K_4(R) bounding the coefficients on the set {x : |x| ≤ R} × U. For u ∈ U and h(t, ·) ∈ W^{2,p}(R^n) for all t ≥ 0, define the operator L^u h as in (4), with a(·) as in Hypothesis (H1d), and with ∇h and H representing the gradient and the Hessian matrix of h with respect to the state variable x, respectively. The main application of this work is the pollution accumulation model. Although it will be possible to solve this problem within the realm of pure feedback strategies, this is not always the case; as a consequence, the set of actions needs to be widened.
Control Policies.
Let M be the family of measurable functions f : [0; T] × R^n → U. A strategy u(t) := f(t, x(t)), for some f ∈ M, is called a Markov policy. Definition 1. Let (U, B(U)) be a measurable space, and let P(U) be the family of probability measures supported on U. A randomized policy is a family π := (π_t : t ≥ 0) of stochastic kernels on B(U) × R^n satisfying: (a) for each t ≥ 0 and x ∈ R^n, π_t(·|x) ∈ P(U) with π_t(U|x) = 1, and for each D ∈ B(U), π_t(D|·) is a Borel function on R^n; and (b) for each D ∈ B(U) and x ∈ R^n, the function π_t(D|x) is Borel-measurable in t ≥ 0.
The set of randomized policies is denoted by Π.
Observe that every f ∈ M can be identified with a strategy in Π by means of the P(U)-valued trajectory δ_f, where δ_f represents the Dirac measure at f. When the controller operates a policy π = (π_t : t ≥ 0) ∈ Π, both the drift coefficient b and the operator L^u, defined in (1) and (4) respectively, are written by integrating over U with respect to π_t. Under Hypotheses (H1a)-(H1d) and Remark 1, for each policy π ∈ Π there exists an almost surely unique strong solution x^π(·) of (1), which is a Markov-Feller process. Furthermore, for each policy π = (π_t : t ≥ 0) ∈ Π, the operator ∂_t ν(t, x) + L^{π_t} ν(t, x) becomes the infinitesimal generator of the dynamics (1) (for more details, see the arguments in [20] (Theorem 2.2.7)). Moreover, by the same reasoning as in Theorem 4.3 of [20], for each π ∈ Π, the associated probability measure P^π(t, x, ·) of x^π(·) is absolutely continuous with respect to Lebesgue's measure for every t ≥ 0 and x ∈ R^n. Hence, there exists a transition density function p^π(t, x, y) ≥ 0 such that P^π(t, x, B) = ∫_B p^π(t, x, y) dy for every Borel set B ⊂ R^n.
Topology of relaxed controls. The set Π is topologized as in [21]. Such a topology renders Π a compact metric space, and it is determined by the following convergence criterion (see [20][21][22]). Definition 2 (Convergence criterion). It is said that the sequence (π^m : m = 1, 2, ...) in Π converges to π ∈ Π, denoted π^m →_W π, if and only if the corresponding integrals converge for all g ∈ L^1([0; T] × R^n) and h ∈ C_b([0; T] × R^n × U), i.e., the set of continuous and bounded functions on [0; T] × R^n × U. Throughout this work, convergence in Π is understood in the sense of the convergence criterion introduced in Definition 2.
The following Definition is this work's version of the polynomial growth condition quoted in, for instance [18].
Definition 3.
Given a polynomial function of the form w(x) = 1 + |x|^k (with k ≥ 2) and x ∈ R^n, let the normed linear space B_w([0; T] × R^n) be the one consisting of all real-valued measurable functions ν on [0; T] × R^n with finite w-norm, ‖ν‖_w := sup_{(t,x) ∈ [0;T]×R^n} |ν(t, x)|/w(x).
Remark 2.
(a) Observe that, for any function ν ∈ B_w([0; T] × R^n), |ν(t, x)| ≤ ‖ν‖_w w(x); this inequality implies that any function ν ∈ B_w([0; T] × R^n) satisfies the polynomial growth condition. (b) Assuming that the initial data x(s) = x has finite absolute moments of every order, the solution satisfies E|x(t)|^k ≤ C_k(1 + |x|^k), where the constant C_k depends on k, T − s and the constant K_1 of Hypothesis (H1b). (c) In the application developed throughout this paper, the constant initial data x(s) = x is considered. Then x(t) also has finite moments of every order (see Proposition 10.2.2 in [18]); therefore, E|x(t)|^k ≤ C_k(1 + |x|^k). Now, the hypotheses on the reward, cost and constraint rates from (2) and (3) are stated. These are very standard, and represent an extension of the ones used in classic works, such as p. 157 in [23] (Chapter VI.3) and p. 130 in [24] (Chapter 3).
Hypothesis (H2a). The functions r, c : [0; T] × R^n × U → R are continuous and locally Lipschitz on R^n, uniformly on U; that is, for each R > 0, there exists a constant K_5(R) > 0 such that the corresponding Lipschitz inequality holds for all |x|, |y| ≤ R. Hypothesis (H2b). r(·, ·, u) and c(·, ·, u) belong to B_w([0; T] × R^n) for each u ∈ U. Hypothesis (H2c). The terminal reward and cost rates r_1(·, ·), c_1(·, ·) ∈ B_w([0; T] × R^n), and the running and terminal constraint rates θ(·, ·), θ_1(·, ·) belong to B_w([0; T] × R^n) as well. For π = (π_t : t ≥ 0) ∈ Π, the reward and cost rates are written as in (6). To complete this section, the main application of this work is introduced. It consists of a pollution accumulation model, inspired by the one presented in [17,18], which satisfies Hypotheses (H1a)-(H1d) and (H2a)-(H2c). Example 1. Fix the probability space (Ω, F, {F_t : t ≥ 0}, P), and let T > 0 be a given time horizon. Consider the pollution process defined by the controlled diffusion (7). Here u(s) represents the consumption flow at time s ≥ 0, and γ is a certain consumption restriction imposed by, for instance, worldwide protocols. Additionally, the number η ∈]0; 1] is the rate of pollution decay.
It is easy to see that the coefficients of (7) meet Hypotheses (H1a)-(H1c). A simple calculation yields that K_3 ≥ σ² − c for any c ∈]0; σ²[. Now, a simulation of the trajectories of the Itô diffusion (1) is presented. To this end, the extension of Euler's method for first-order differential equations known as the Euler-Maruyama method (see, for instance, [25] and Chapter 1 in [26]) is used. This technique is suitable for diffusions that meet Hypotheses (H1a)-(H1d). The focus is on the comparison between Vasicek's model for interest rates in finance (see, for instance, Chapter 5 in [27]), given by (8) with s ∈ [t; T], and the Kawaguchi-Morimoto model (7).
Let z^N : {0, 1, ..., N} × Ω → R^n, N ∈ N, be the Euler-Maruyama approximation of the stochastic differential Equation (1), recursively defined by z^N_0 := x and the usual one-step recursion. In Figures 1 and 2, observe that the Kawaguchi-Morimoto process allows one to choose a deterministic (implicit) function of t, whereas Vasicek's series features what is known in the literature as mean reversion. The latter fact is clear from the choice of a constant parameter µ.
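A small sketch of the Euler-Maruyama comparison just described; the drift of (7) is read off from the update line of Algorithm 1 later in the text, and the Vasicek-type dynamics (8) is taken to be (7) with the constant consumption u ≡ µ, as Example 3 states.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, drift, sigma, T=1.0, N=100):
    """Euler-Maruyama scheme: z_{k+1} = z_k + b(t_k, z_k) dt + sigma * dW_k."""
    dt = T / N
    z = np.empty(N + 1)
    z[0] = x0
    for k in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))
        z[k + 1] = z[k] + drift(k * dt, z[k]) * dt + sigma * dW
    return z

# Parameters as quoted in Example 1: x0 = 5, eta = 1, sigma = 0.5, mu = 5, T = 1, N = 100.
eta, sigma, mu = 1.0, 0.5, 5.0
# Kawaguchi-Morimoto pollution dynamics (7) under the feedback policy u(t) = x(t):
km_path = euler_maruyama(5.0, lambda t, x: x - eta * x, sigma)
# Vasicek-type dynamics (8), i.e. (7) with the constant consumption u(t) = mu:
vasicek_path = euler_maruyama(5.0, lambda t, x: mu - eta * x, sigma)
```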
Let h ∈ W^{1,2;p}([0; T] × R). After (4), the infinitesimal generator of (7) is obtained. The polynomial function w(x) = x² + x + 1 satisfies Definition 3; note that this function does not depend on the time argument t. The reward-rate function used in further developments represents the social welfare; it is given by r : [0; T] × R × U → R and is defined in (9), where F ∈ C²(R) stands for the social utility of the consumption u, and a · x stands for the social disutility (so to speak) of the pollution stock x, for a > 0 fixed. It is assumed that F satisfies the condition (10). The cost rate function will be given by (11), with c_1 > 0 and c_2 ∈ R satisfying c_1 + ηc_2 > 0 (12). Since the pollution stock x depends on the time variable t ≥ 0, the functions defined in (9) and (11) also depend on this variable.
The running constraint-rate function has the form (13), where q is a positive constant. (Here, as with the reward and cost functions, it is assumed that x implicitly depends on t.) The terminal constraint, cost and reward rates are fixed at a level of zero. It is not difficult to see that if F meets Hypotheses (H2a)-(H2c), then so do the social welfare, the cost rate, and the running constraint functions.
A Finite-Horizon Control Problem with Constraints
This section is devoted to the study of the finite-horizon problem with constraints.
Definition 4.
For each π ∈ Π and T ≥ t, the total expected reward, cost and constraint rates over the time interval [t; T], given that x(t) = x, are defined respectively with r(s, x(s), π_s) and c(s, x(s), π_s) as in (6).
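The display formulas of Definition 4 are not reproduced in this extract; given the running and terminal rates introduced in (2)-(3), they presumably take the form

```latex
J_T(t,x,\pi,r) := \mathbb{E}^{\pi}_{x}\!\left[\int_{t}^{T} r\bigl(s,x(s),\pi_s\bigr)\,\mathrm{d}s
                   + r_1\bigl(T,x(T)\bigr)\right],
\qquad
J_T(t,x,\pi,c) := \mathbb{E}^{\pi}_{x}\!\left[\int_{t}^{T} c\bigl(s,x(s),\pi_s\bigr)\,\mathrm{d}s
                   + c_1\bigl(T,x(T)\bigr)\right],
```

with the constraint rate built analogously from θ and θ_1.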
The proof of the next result is an extension of [28] (Proposition 3.6). Lemma 1. Hypotheses (H2a)-(H2c) imply that the total expected reward J_T(t, x, π, r), the total expected cost J_T(t, x, π, c), and the constraint rate θ_T(t, x, π) all belong to B_w([0; T] × R^n). Proof of Lemma 1. The proof is presented only for J_T(t, x, π, r), for the line of reasoning is the same for J_T(t, x, π, c) and θ_T(t, x, π). By Hypothesis (H2b), the corresponding w-bounds hold for every u ∈ U. For each T > 0 and x ∈ R^n, assume that the (running and terminal) constraint functions θ(·, ·) and θ_1(·, ·) are given and that they satisfy Hypothesis (H2c). In this way, the set F^{t,x}_{θ_T} of feasible policies is defined. To avoid trivial situations, it is assumed that this set is not empty (see Remark 3.8 in [14]).
Definition 5.
A policy π* ∈ Π is said to be optimal for the finite-horizon problem with constraints (FHPC) with initial state x ∈ R^n if π* ∈ F^{t,x}_{θ_T} and, in addition, J_T(t, x, π*, r) ≥ J_T(t, x, π, r) for every π ∈ F^{t,x}_{θ_T}. In this case, J*_T(t, x, r) := J_T(t, x, π*, r) is called the T-optimal reward for the FHPC.
Lagrange Multipliers
To solve the FHPC, the Lagrange multipliers approach and the dynamic programming technique are used to transform the original FHPC into an unconstrained finite-horizon problem parametrized by the so-called Lagrange multipliers. To do this, take λ ≤ 0 and consider the new (running and terminal) reward rates r^λ and r_1^λ.
It is natural to let, for all (t, x) ∈ [0; T] × R^n, the performance index associated with r^λ be defined accordingly. Example 3 (Examples 1 and 2 continued). The performance index for the FHUP is given by (20). Return now to Example 1, where a single trajectory of each of the processes (7) and (8) was simulated for certain parameters, under the policy u(t) = x(t) for (7) and u(t) = µ for (8). The aim is to compute (20) for a fixed value of λ < 0, when the utility function derived from the consumption is F(u) = √u, by means of Monte Carlo simulation. To this end, the following pseudocodes are presented.
Walkthrough of Algorithm 1. This pseudocode's goal is to compute the integral inside (20).
• Line 1 initializes the process. • Line 2 emphasizes the fact that λ < 0 is supposed to be given.
• In lines 3-11, the algorithm decides whether it will work with (7) or with (8). • Line 12 sets F = √u and D = a · x, and computes initial values for r, c and θ according to (9), (11) and (13). • In lines 16-30, when working with (7), the state is updated via x ← x + (u − ηx)dt + σdW and time advances with j ← j + dt; line 31 rescales the accumulator, I ← I · dt, and line 32 returns I.
Walkthrough of Algorithm 2. The purpose of this pseudocode is to compute a 95% confidence interval for the expectation of the result of Algorithm 1 according to the Monte Carlo method. Algorithm 1 receives the initial value x₀, the step size dt, the time horizon T, and the parameters of the diffusion (7) (resp. (8)), and calculates the (Itô) integral inside the expectation operator in (20) when the process (7) (resp. (8)) is used; Algorithm 2 then iterates this process and returns the average of these iterations, thus approximating the value of (20). These algorithms require a negative and constant value of the Lagrange multiplier. Later, in Example 5, a modification of Algorithm 1 that removes this limitation will be proposed. For the sake of illustration, take the parameter values from Example 1 (that is, x₀ = 5, η = 1, σ(x) ≡ 0.5, µ = 5, T = 1, and N = 100), and use Algorithms 1 and 2 to compute an approximation to the value of (20) when one considers the diffusion (8) (that is, the diffusion (7) with u(t) ≡ µ for all t ≥ 0). Additionally, take γ = 0.4, c₁ = 0.1, c₂ = 0.05, q = 0.0195, and a = 1.25.
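Because the full listing of Algorithms 1 and 2 is not reproduced above, the following Python sketch illustrates the same idea under stated assumptions: the diffusion (7) is advanced with the Euler-Maruyama update x ← x + (u − ηx)dt + σdW quoted in the walkthrough, the reward rate is taken as F(u) − a·x with F(u) = √u, the Lagrangian running reward is assumed to be r + λc, and the cost-rate form c₁u + c₂x is only a placeholder, since the exact display form of (11) is not shown here. The function names and the value of λ are illustrative, not the authors' code.

```python
import numpy as np

def simulate_path(x0, dt, T, eta, sigma, lam, a, c1, c2, mu, rng):
    """Sketch of Algorithm 1: one Euler-Maruyama path of the diffusion (7)
    under the constant policy u(t) = mu (i.e. the Vasicek case (8)),
    accumulating the integral inside the expectation in (20)."""
    x, t, integral = x0, 0.0, 0.0
    while t < T:
        u = mu                                   # constant consumption policy
        r = np.sqrt(u) - a * x                   # assumed reward rate: F(u) - a*x
        c = c1 * u + c2 * x                      # placeholder for the cost rate (11)
        integral += (r + lam * c) * dt           # assumed Lagrangian reward r_lambda
        x += (u - eta * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return integral

def monte_carlo(n_paths=10_000, seed=0, **kwargs):
    """Sketch of Algorithm 2: average Algorithm 1 over many paths and return
    a 95% confidence interval for the expectation in (20)."""
    rng = np.random.default_rng(seed)
    samples = np.array([simulate_path(rng=rng, **kwargs) for _ in range(n_paths)])
    mean = samples.mean()
    half_width = 1.96 * samples.std(ddof=1) / np.sqrt(n_paths)
    return mean, (mean - half_width, mean + half_width)

# Parameters quoted in the example; lam = -0.5 is an arbitrary illustrative multiplier.
estimate, ci = monte_carlo(x0=5.0, dt=1.0 / 100, T=1.0, eta=1.0, sigma=0.5,
                           lam=-0.5, a=1.25, c1=0.1, c2=0.05, mu=5.0)
```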
Notice that Proposition 1 does not assert the existence of a function that satisfies (21) (this is the purpose of Proposition 2 below); it rather motivates the definition of the finite-horizon unconstrained problem. Definition 6. A policy π* ∈ Π attaining the optimal value is called finite-horizon optimal for the finite-horizon unconstrained problem (FHUP), and J*_T(·, ·, r_λ) is referred to as the finite-horizon optimal reward for the FHUP.
The former result is now used to introduce the HJB equation for the FHUP in the examples presented throughout the paper.
Example 4 (Examples 1-3 continued). The HJB equation for the FHUP is given below,
where h ∈ C^{1,2}([0; T] × R). According to Proposition 2, a solution of the HJB equation (25) yields the finite-horizon optimal reward J*_T(t, x, r_λ) and the optimal policy π* for the FHUP over the interval [t; T]. Definition 6 and Propositions 1 and 2 are now used to set expressions for the optimal performance index, policies, and constraint rates in the examples presented throughout this work. Lemma 2 (Examples 1-4 continued). Let Λ and I be Lebesgue's measure and the indicator function, respectively. Consider the planning horizon [t; T] and assume that the conditions in (7), (9)-(13) hold. Then, (i) for every x > 0 and λ ≤ 0, the value function J*_T(t, x, r_λ) in (23) becomes
(ii) For every x > 0 and λ ≤ 0, the total expected reward, cost and constraint, respectively J_T(t, x, f_λ(t), r), J_T(t, x, f_λ(t), c), and θ_T(t, x, f_λ(t)), defined in Example 2, take the forms in (30)-(32). Proof of Lemma 2.
(i) Start by making an informed guess of the solution of (25), namely h(t, x) := p(t)x + m₂(t). (33) Observe that h_t(t, x) = p′(t)x + m₂′(t), h_x(t, x) = p(t), and h_xx(t, x) = 0. The substitution of these expressions in (25) yields an equation for p and m₂. Impose the terminal condition p(T) = 0 on (34) to obtain p, where k₁ is as in (27). Now, from (35), write the remaining relation. To find the supremum of the expression inside the braces, use a standard calculus argument at a critical point u. Next, since by (10) F′(u) ≥ 0, the critical point is admissible, and the maximizer follows from (37). With this in mind, (36) takes a form in which Λ(·) stands for Lebesgue's measure. Therefore, from (33), the value function is obtained, which proves (26)-(28). The optimality of (29) for the FHUP (20) follows from Proposition 2(ii). (ii) To see that (30) holds, use (17); here, the interchange of integrals is possible due to the finiteness of the interval [t; T] and Fubini's rule. Now, the solution of the controlled diffusion process (7) with x(t₀) = x has an explicit form, and its expected value can be computed; by (29), the former expression can then be evaluated. To prove (31), use the two leftmost members in (18) and proceed as above. Finally, by the two rightmost members of (18), (32) follows. The proof is now complete.
From an Unconstrained Problem to a Problem with Restrictions
This section starts with an important observation on the set of strategies which will be used.
[0; T] × R^n; and J*_T(T, x(T), r_λ) = r_{λ1}(x(T))}. Since M can be thought of as a subset of Π, Proposition 2(ii) ensures that the set Π_λ is nonempty. Lemma 3. Let (λ_m) be a sequence in ]−∞; 0] converging to some λ* ≤ 0, and assume that for each m ≥ 1 there exists π_{λ_m} ∈ Π_{λ_m} such that the sequence converges to a policy π ∈ Π. Then π ∈ Π_{λ*}; that is, π satisfies the defining conditions of Π_{λ*}. Proof of Lemma 3. Recall Definition 2. Take an arbitrary sequence (π_m) ⊂ Π_λ such that π_m →_W π. Observe that Proposition 2 ensures that, for each m ≥ 1, J_T(t, x, r_{λ_m}) satisfies the corresponding HJB relation. In terms of the operator L̂^{π_m}_{λ_m} defined in (A4), the former relation reduces to one in J_T(t, x, r_{λ_m}), with λ_m constant. A verification that the hypotheses of Theorem A1 in Appendix A hold now follows. Specifically, part (a) trivially follows from (39). The focus is then on checking that part (b) of Theorem A1 is met. To do that, for some R > 0, take the ball B_R := {x ∈ R^n : |x| < R}. By [30, Theorem 9.11], there exists a constant C_0 (depending on R) such that, for a fixed p > n, the corresponding interior estimate holds, where |B_{2R}| denotes the volume of the closed ball of radius 2R, and M and M_2(x, T, t) are the constants in Hypothesis (H2b) and in (14), respectively.
Notice that conditions (c) to (f) from Theorem A1 trivially hold, and that condition (g) is given as part of the hypotheses just presented. Then one can claim the existence of a function h_{λ*} ∈ W^{1,2;p}([0; T] × B_R), together with a subsequence (m_k), such that J*_T(·, ·, r_{λ_{m_k}}) = J*_T(·, ·, π^{m_k}, r_{λ_{m_k}}) → h_{λ*}(·, ·) uniformly in [0; T] × B_R and pointwise on [0; T] × R^n as k → ∞ and π_m →_W π. Furthermore, h_{λ*} satisfies the corresponding HJB relation. Since the radius R > 0 was arbitrary, the analysis extends to all x ∈ R^n. Thus, Proposition 1 asserts that h_{λ*}(t, x) coincides with J*_T(t, x, r_{λ*}). This proves the result.
Lemma 3 gives, in particular, the continuity of the mapping π_t → J_T(t, x, π_t, r_λ).
Remark 5.
If the opposite condition in (54) occurs, then the existence of a critical point of the mapping λ → J*_T(t, z, r_λ) necessarily implies a degenerate situation: every λ ≤ 0 is a critical point of λ → J*_T(t, z, r_λ) and f_λ(t) = γ is an optimal policy for the FHPC. To avoid this trivial situation, and using the fact that F′(∞) = 0, choose γ large enough. Theorem 2 is now used to propose a modification of Algorithm 1 to compute the integral inside (20). Observe that the computation of the Vasicek process (8) no longer needs to be included, because the optimal controllers f_{λ*} (given by (57)) and the Lagrange multipliers λ*_t (given by (60)) are non-stationary in time.
Example 5 (Examples 1-4, Lemma 2, and Theorem 2 continued). Algorithms 2 and 3 can be used to compare the Monte Carlo simulations of the integral inside the expectation operator in (20) with the results of formula (58) from Theorem 2. To this end, recall from Example 1 the choice made for the parameters of (7) (that is, x₀ = 5, σ(x) ≡ 0.5, η = 1 and T = 1). In addition, choose constants that meet (12): a = 1.25, γ = 1, c₁ = 0.1, c₂ = 0.05, and q = 0.0195. With this configuration, condition (54) holds for all t ∈ [0; 1] with an error of at most 0.004 (see Figure 3). With all this in mind, formula (58) in Theorem 2 yields an optimal value for the FHPC. In summary, this work obtained the optimal controllers, and the objective function, for a finite-time horizon under constraints; moreover, it used the tools presented in [25] and the Monte Carlo simulation technique to test its analytic findings. This represents a major implication of this work concerning the current methodology for resource management and consumption when pollution plays an active role. Indeed, the model presented in this paper can be used for decision-making when the social welfare and the cost and reward constraints are known and parametrized. A plausible extension of this paper would be to look for optimal controllers on a random horizon with a constrained performance index, in the fashion of [31].
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Technical Complements
In this appendix, an extension of Theorem 5.1 from [32] to the non-stationary case with a single controller over a finite horizon is introduced.
Theorem A1. Let a bounded domain of class C² in R^n be given, and suppose that Hypotheses 1 and 2 hold. Moreover, assume the existence of sequences (h_m) ⊂ W^{1,2;p}([0; T] × R^n) satisfying conditions (a)-(g); in particular, (e) requires that λ_m converge uniformly to some function λ.
Since t ∈ [0; T], the time argument could be removed from the latter expression by simply replacing the constants M and M₁ with other constants; to keep the notation as straightforward as possible, this is not done here. Now, Hypothesis (H1b) gives the existence of a constant K₁(R^n) such that |b(x, π)| ≤ K₁(R^n). Moreover, there also exists a positive constant k([0; T] × R^n) such that |v₁(t, x, π)| + |v₃(t, x, π)| ≤ k([0; T] × R^n).
Take all of these facts, and observe that:
| 7,687.4 | 2021-06-22T00:00:00.000 | ["Environmental Science", "Mathematics", "Economics"] |
Size-dependent lymphatic uptake of nanoscale-tailored particles as tumor mass increases
Aim: To investigate the size-dependent lymphatic uptake of nanoparticles in mice with rapidly growing syngeneic tumors. Materials & methods: Mice were inoculated subcutaneously with EL4 lymphoma cells and, on day 5 or day 6 of tumor growth, injected peritumorally with either 29 nm or 58 nm ultra-small superparamagnetic iron oxide nanoparticles. Twenty-four hours later the animals were imaged using MRI. Results & conclusion: The larger of the two particles could only be detected in the lymph node when injected into animals with 6-day-old tumors, while the 29 nm ultra-small superparamagnetic iron oxide nanoparticle was observed at both time points. Tumor mass greatly impacts the size of particles that are transported to the lymph nodes.
Over the years many different types of nanoconstructs have been used for in vivo imaging, from liposomes and various types of colloidal metal compounds, to quantum dots and solid metal oxide core particles [1]. Different nanostructures are widely used in preclinical investigations, although the biological compatibility and size vary substantially between the various types of construct. Quantum dots, for instance, have shown a high degree of toxicity in cell culture, while the toxic effect in higher biological systems remains a topic of discussion [2][3][4]. On the other hand, polyethylene glycol (PEG)-coated ultra-small superparamagnetic iron oxide nanoparticles (USPIOs) are also considered safe for human diagnostic applications [5]. Another advantage of the PEG-coated USPIOs is that the coating can easily be modified to give the particles multimodal capability, making them detectable in various imaging systems [6][7][8]. The iron oxide core acts as a contrast agent in magnetic resonance imaging, causing a hypo-enhancement in T2/T2*-weighted images. By attaching different functional groups to the coating, such as radionuclides and fluorescent dyes, the particles can be further modified to suit the imaging setup of preference.
At present the clinical practice for sentinel lymph node (SLN) localization is to use a 99mTc-labeled albumin nanocolloid along with a blue dye [9][10][11][12][13][14][15][16]. The SLN is defined as the first lymph node draining the site of a tumor, and by investigating the SLN for the presence of tumor cells it is possible to establish whether the tumor has started to metastasize. This is a strong prognostic factor, and the outcome of the SLN examination governs staging, treatment and prognosis. The contrast agents are administered as a number of injections around the tumor and the SLN is then located and removed, for examination, by surgery. Unfortunately, the small size of the 99mTc-labeled albumin nanocolloid particles allows them to pass beyond the sentinel node and into second-tier nodes, making it difficult to identify the true SLN [15]. However, it has been demonstrated that USPIOs with a size between 10 and 60 nm will accumulate in the SLN, and that the SLN can easily be detected up to 72 h after local USPIO injection [7,17]. Even though the SLN contained a vast amount of particles even at 72 h postinjection, it could be shown that the amount of particles observed in the SLN depends on particle size. An optimally sized nanoparticle would thus increase the sensitivity of SLN detection while keeping the dose of contrast agent low [7,10,12,16,[18][19][20]. Furthermore, it is important to determine the optimal nanoparticle size in the diseased state, since the size of the lymph node and the lymphatic flow will increase due to the inflammation caused by a rapidly growing tumor [16,21]. It will also be important to determine the optimal size of nanoparticles for use in humans, since the retention of different sizes of particles will vary between species [22]. Hence it might be necessary to use multiple distinct sizes of particles to optimize lymphatic retention at various stages of disease.
In the present investigation two particles, sharing identical solid iron oxide cores but with different thickness of the coating, were injected into mice carrying a syngeneic tumor. Peritumoral injections of USPIOs were used to mimic the scenario of SLN detection in the clinic and the tissue distribution was investigated with MRI on different time points to explore the dynamics of the USPIOs during tumor growth.
USPIOs
The USPIOs were produced as previously described, with the exception of the purification of the USPIOs from excess coating material [7]. Briefly, a mixture of iron(III) oxide-hydroxide, octadecene and oleic acid was heated to 323°C for 60 min. The resulting iron oxide cores were coated with poly(maleic anhydride-alt-1-octadecene) and O,O′-bis(2-aminopropyl) polypropylene glycol-block-polyethylene glycol-block-polypropylene glycol through evaporation in a two-phase system. Instead of using diafiltration as previously, the particles were captured and concentrated on a magnetic separation column (LS column, Miltenyi Biotec, Bergisch Gladbach, Germany) and washed extensively with 150 mM NaCl. The magnet was removed and the particles were eluted in approximately 1 ml of 150 mM NaCl. The size of the particles was determined by dynamic light scattering using a Malvern Zetasizer Nano Series instrument (Malvern Instruments Ltd, Worcestershire, UK).
Tumor model
EL-4 cells (mouse lymphoma) were cultured in RPMI 1640 medium, supplemented with 10% newborn calf serum and 1% penicillin/streptomycin, at 37°C and 5% CO2. The cells were harvested and resuspended in phosphate-buffered saline (PBS) at a concentration of 30 × 10^6 cells/ml. Female C57BL/6 mice (Taconic, Ry, Denmark), weighing approximately 20 g, were anesthetized with isoflurane and injected subcutaneously, in the right flank, with 3 × 10^6 cells in 0.1 ml of suspension. This tumor model grows very fast [23,24], sometimes resulting in bleeding, necrosis and collapse as little as 1 week postinoculation. A dramatic increase in the size of the lymph node in the area affected by the tumor is observed on day 6-7 postinoculation [Unpublished Data]. On day 5 or day 6 postinoculation, one time point before and one during this observed tumor growth, the animals (five per particle) were anesthetized with isoflurane and 0.1 ml of USPIOs (340 μg Fe/ml) was administered as four subcutaneous peritumoral injections. Approximately 24 h after injection of the USPIOs, the animals were imaged with MRI (2.4 T, Bruker Avance II, Bruker Biosciences Corporation, MA, USA). The respiratory rate and body temperature were monitored during imaging (S.A. Instruments Inc., NY, USA). After the magnetic resonance (MR) data collection was finished, the animals were sacrificed and the inguinal lymph nodes on the injection side and the contralateral side, along with the tumor, were harvested and snap frozen in isopentane. All studies were conducted in accordance with the Swedish guidelines for the use and care of laboratory animals.
MRI
Optimal image settings were empirically established at an echo time of 6 ms (3D-GE, repetition time: 27 ms, field of view: 60 × 30 × 30 mm 3 , pixel matrix size: 256 × 128 × 128, scan time 16 min 43 s, 4 averages). MR images were evaluated by two scientists independently on the presence of USPIOs in the lymph nodes and tumor of each animal.
Histology
The lymph nodes were cryo-sectioned in 5 μm and the tumors in 8 μm tissue sections and mounted on Superfrost Plus microscope slides (Thermo Scientific, MA, USA). For evaluation of USPIOs using fluorescence microscopy the tissue sections were fixated in 4% paraformaldehyde and rinsed in PBS. For immunofluorescence staining the slides with frozen tissue sections were immediately fixated with ice-cold acetone and then dried for 30 min, dark at room temperature. The tissue sections were washed and rehydrated in PBS + 0.05% Tween-20, after which the tissue was blocked with 5% rabbit serum in PBS + 0.025% Triton X-100 for 60 min at room temperature. The primary antibody (FITC anti-CD11b [BD Biosciences Pharmigen, CA, USA] and anti-Ly6B.2 [Serotec, Oxford, UK]) was diluted to 2 μg/ml and 20 μg/ml, respectively, in PBS + 0.025% Triton X-100 and incubated with the tissue sections for 60 min at room temperature. Slides were washed in PBS + 0.05% Tween-20 and incubated with the secondary antibody (Alexa Fluor 594 rabbit-anti-rat IgG [Invitrogen, CA, USA]) at 0.4 μg/ml for 60 min, after which the slides were washed in PBS + 0.05% Tween-20.
Cover slips were mounted using ProLong Gold antifade reagent with DAPI (Invitrogen) and the slides were cured over night at room temperature, shielded from light. Fluorescence microscopy was performed using a Zeiss Axiovert 200M microscope (Zeiss, Oberkochen, Germany) equipped with a XBO 75 Xenon lamp and a Hamamatsu ORCA-ER CCD camera (Hamamatsu Photonics K.K., Hamamatsu, Japan). Acquisition and co-registration of fluorescence images was performed using the Volocity software (Improvision, MA, USA).
USPIOs
Using the methods previously described [7] two different sized USPIOs were synthesized. A modified purification protocol was implemented; in other words, the particles were purified and concentrated using magnetic separation instead of diafiltration in order to increase the recovery of the particles. Using this new production protocol we established a faster and simpler method, with high reproducibility, for producing the USPIOs. However, the smallest USPIOs generated with this method were significantly bigger than the ones produced with the previous protocol. Despite this, the two USPIO-candidates used in this study were chosen on a stability and difference in size criterion rather than an absolute size criterion. The mean size (±SD), determined by dynamic light scattering, was 29 ± 1.7 nm and 58 ± 4.6 nm for the two USPIOs, respectively.
MRI
Mice subcutaneously inoculated (day 0) with syngeneic lymphoma cells were injected peritumorally on day 5 or day 6 with 29- or 58-nm USPIOs. Twenty-four hours after USPIO injection, in other words on day 6 or day 7, the tissue distribution of the particles was investigated using MRI. The mean size of the tumors was 0.13 ± 0.10 cm³ on day 6 and 0.44 ± 0.13 cm³ on day 7, a significant size increase between the two days. When injected on day 5, USPIOs could be detected in the SLN of four out of five animals given the 29-nm particles (Figure 1A) but only one out of five animals injected with the 58-nm particles (Figure 1B). However, when injected on day 6 post-tumor inoculation, USPIOs were detectable in the SLN of all animals, regardless of the size of the injected particles (Figure 2). Particles were not detectable by MRI in the contralateral lymph nodes of any of the tested animals. Furthermore, 24 h after peritumoral injection a substantial amount of USPIOs was still detectable at the injection site in all tested animals. However, using MRI, there was no detectable influx of USPIOs into the actual growing tumor mass.
Histology
The targeting of particles to the SLN was further evaluated using fluorescence microscopy. The distribution observed by microscopy agreed well with the data obtained with MRI. The USPIOs were predominantly located in the subcapsular sinus of the lymph nodes, with minor amounts observed in the deeper areas. Owing to the lower detection limit of fluorescence microscopy compared with MRI, USPIOs could be detected in the sentinel lymph nodes of all of the animals, although in varying amounts. High amounts of USPIOs could be detected on both day 6 and day 7 post-tumor inoculation in the animals injected with the 29-nm particles (Figure 3A & C). However, on day 6 in animals injected with the 58-nm particles, only minute amounts of USPIOs could be detected in the SLN (Figure 3B), whereas on day 7 the fluorescence signal is on par with that seen in the animals injected with the 29-nm particles (Figure 3D). Interestingly, on day 7 and regardless of particle size, there are several individuals in which low levels of particles can be detected in the lymph node on the contralateral side. Furthermore, particles can also be detected inside cells in the periphery of some of the tumors, especially at day 7 of tumor growth.
The immunohistochemical staining of the lymph nodes revealed USPIOs inside both CD11b-positive and Ly6B-positive cells. The signal from the 29-nm particles is closely associated with the CD11b-positive cells (Figure 4A). Particles could also be seen in Ly6B-positive cells, although the signal observed from the CD11b-positive cells is more pronounced (Figure 4D). For the 58-nm particles, however, the particle signal is not as closely associated with the CD11b-positive cells as for the 29-nm particles (Figure 4B).
Discussion
The model for studying the retention of USPIOs in the SLN used in this paper, mimics the method in which contrast agents for sentinel lymph node diagnosis are administered in the clinic, in other words, the USPIOs are injected around the tumor and then allowed to move to the SLN before imaging. However, contrary to the contrast agents used in the clinic, the USPIOs used in our study have a retention time in the SLN exceeding 24 h, allowing administration and imaging of the contrast agent at least 24 h prior to surgery. Two major processes influence the retention of USPIOs in the lymph nodes, transport of USPIOs by the lymph from the injection site and uptake by cells, either at the injection site with subsequent migration to the lymph nodes or by resident cells in the lymph node.
Two factors affect the transport of USPIOs in our study: the subcutaneous pressure on the fluid after injection and the inflammatory response to the tumor. It has been shown that lymphatic clearance of particles administered subcutaneously to the flank of rodents is poor [25]. A major contributing factor is the volume of the subcutaneous space. An increase in subcutaneous pressure will widen the channels leading to the lymphatic vessels and will force elevated levels of fluid into the lymphatic system. A comparison can be made with an injection administered in the paw of the animal, which is a method for studying lymphatic uptake in healthy animals. The subcutaneous space is much more limited in the paw than in the flank, and a smaller injection volume will be sufficient to elevate the subcutaneous pressure and hence the lymphatic uptake. For a subcutaneously growing tumor in the flank of the mouse, the subcutaneous space is larger than in the paw and the pressure increase caused by the injection will have only a marginal effect on lymphatic uptake. However, in this EL4 lymphoma tumor model, the connective tissue surrounding the tumor is compressed when the solid tumor grows. As can be seen in the MR images, the USPIOs are present in this connective tissue even 24 h after the injection. It can be anticipated that the increasing pressure on the connective tissue from the growing tumor enhances the pressure exerted by the particle injection and hence the lymphatic uptake.
Another factor contributing to the transport of USPIOs through the lymphatic system is the increase in the lymphatic flow from the site as well as the recruitment of leukocytes due to the inflammation caused by the growing tumor [16,21]. This increase in lymph flow enables the larger particle to move into the lymphatics and to the SLN.
The uptake and retention of USPIOs in lymph nodes is also affected by the uptake of USPIOs by cells, which was studied using immunohistochemistry. Ly6B antibodies stain immature and mature neutrophils and monocytes but do not stain macrophages, lymphocytes, eosinophils, mast cells or erythroid cells, while CD11b is found on macrophages and Kupffer cells as well as granulocytes and dendritic cells [26]. EL-4 cells are generally considered to be both CD11b- and Ly6B-negative [27]. The phagocytosis of the 58-nm particles differs from that of the 29-nm particles in the type of cells in which the particles can be detected. For the 29-nm particles a clear correlation between USPIO fluorescence and CD11b-positive cells (most likely macrophages) can be seen. The 58-nm particles are also to some extent found in these CD11b-positive cells, but the signal is not as clearly correlated with the CD11b staining and the 58-nm USPIOs are also found in other cell types. When the tissue sections are stained for Ly6B, a clear co-localized signal can be observed between the Ly6B-positive cells and the 58-nm USPIOs; the 29-nm particles do not display the same degree of co-localization. The USPIOs were predominantly located in the subcapsular sinus of the lymph nodes, with minor amounts observed in the deeper areas [17,28].
Others have shown that PEG increases the ability of particles to evade phagocytosis by macrophages [29,30]. The 58-nm particles have a thicker PEG layer than the 29-nm particles, which could explain the differences seen between the two USPIOs in uptake by CD11b-positive cells. The outermost layer of the 58-nm particles consists of a methylated PEG layer, while the 29-nm particles have a surface layer with residual amino-PEG groups. As lymphatic fluid has a slightly alkaline pH, these groups should be uncharged when the particles enter the lymphatics [31]. However, this potentially minor difference in surface chemistry is unlikely to influence the fate of the USPIOs in the lymphatics [32].
The increased perfusion of lymph fluid in and around the tumor might explain why USPIOs are detected in the contralateral lymph node of some of the animals imaged on day 7 of tumor growth. As the injections are placed in four peritumoral locations, the larger size of the tumor forces the injections to be placed closer to the back midline of the mice. The increased flow of lymph, as well as the close proximity to the back midline, might cause fluid, particles and cells to 'spill over' to the contralateral side, leading to the finding of USPIOs in this 'control node'. The finding of USPIO-containing cells in the periphery of the tumors might also be an effect of the 'age' of the tumor. The rapidly growing tumor causes a recruitment of leukocytes, which have the capacity to phagocytize particles in the subcutaneous space [26,27,33]. This recruitment was illustrated by the numerous CD11b- and Ly6B-positive cells with ingested USPIOs identified by histology. Even though it is difficult to quantify the amount of USPIOs using MRI, it is clear that, on day 6 of tumor growth, there is a difference between the two particle sizes in the amount of particles in the SLN. The same difference was observed using fluorescence microscopy. When the tumors were allowed to grow for an additional day, the difference in USPIO uptake between the two sizes could no longer be detected and both were readily observed in the SLN.
Conclusion
In conclusion this study has shown that the lymphatic uptake of USPIOs injected peritumorally increases as the burden on the lymphatics is increased by the growing tumor and that the size of the particle has a significant influence on the uptake in the SLN of tumor bearing animals. Future work will focus on the dynamics and multimodal nature of the two different particles in the same animal as well as studying the dynamics of uptake of nanostructures in tumor models with growing metastases in the SLN. Finally, the size-dependent pharmacokinetics, indicated by this study, should be explored as a possible criterion for cancer staging.
Future perspective
Multimodal nanoparticle-based contrast agents, like the ones used in the article, are showing great promise to advance the field of sentinel lymph node detection. Taking the results from this investigation into account it is possible that using a nanoparticle of a certain size, or even mixture of particles of different carefully selected sizes, can be used not only for detection of the sentinel lymph node but also staging the cancer. If a particle, with a size larger than a specific threshold, will only be transported to the sentinel lymph node after the tumor reaches a certain stage, the detection of these particles in the SLN would indicate the staging of the tumor. By labeling the nanoparticles with different markers it would be possible to distinguish the different sizes from the mixture in the patient. This would reduce the time needed for each procedure, as the USPIOs can be administered and imaged days ahead of surgery, allowing for more procedures being performed and thereby reducing costs. With a nanoparticle that also allows staging of the tumor, numerous patients could also be spared unnecessary lymph node surgery. It is quite possible that the treatment of several other types of cancer could benefit from this type of procedure and contrast agent, not only breast cancer and malignant melanoma, which are the most frequently used at present. No writing assistance was utilized in the production of this manuscript.
Ethical conduct of research
The authors state that they have obtained appropriate institutional review board approval or have followed the principles outlined in the Declaration of Helsinki for all human or animal experimental investigations. In addition, for investigations involving human subjects, informed consent has been obtained from the participants involved.
Open Access
This work is licensed under the Creative Commons Attribution 4.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Executive summary
• There is a need to develop a new contrast agent for sentinel lymph node diagnosis that only labels the sentinel lymph node (SLN), remains in the lymph node in excess of 24 h and can be visualized by multiple medical imaging modalities. Nanoparticles show great promise as this new contrast agent.
• Mice inoculated with rapidly growing EL4 tumors were injected peritumorally with either of two sizes of ultra-small superparamagnetic iron oxide nanoparticles (USPIOs). The particles were administered on either day 5 or day 6 of tumor growth.
• The retention of USPIOs in the SLN was visualized in vivo with MRI and ex vivo with fluorescence microscopy.
• The uptake and retention of USPIOs in the SLN of tumor-bearing mice is shown to be size-dependent, on account of both the size of the tumor and that of the nanoparticles.
• A smaller particle is more readily taken up into the lymphatics when injected peritumorally. However, as the tumor grows, the transport of larger particles to the SLN increases.
• It is shown that even a rather small difference in particle size, approximately 30 nm, has a significant effect on lymphatic uptake.
| 5,150.6 | 2015-08-11T00:00:00.000 | ["Medicine", "Biology", "Chemistry"] |
An analysis of the impact of oil price shocks on the growth of the Nigerian economy : 1970-2011
This paper examines the impact of oil price shocks on Nigerian economic growth while controlling for the effects of unrest in the international oil market, the exchange rate and agricultural output, using quarterly time series data from 1970:Q1 to 2011:Q4. The broad objective of the study is to evaluate the long-run relationship among the variables, namely oil price, exchange rate, agricultural output, unrest and economic growth. The research applies ADF unit root tests to ascertain the stationarity of the series and employs the Johansen and Juselius (1990) trace and maximal eigenvalue tests to establish the long-run relationship among the variables under study. In addition, a structural vector autoregression (SVAR) is applied to examine the link between the shocks emanating from oil price and unrest and their impacts on economic growth. The ADF results reveal that all the series are non-stationary in levels but stationary at first difference with a constant. Moreover, the findings from the SVAR, based on impulse response functions (IRFs) and variance decompositions (VDCs), indicate that the responses of economic growth (rGDP) to oil price shocks and unrest display both positive and negative effects, i.e. a long-run impact on economic growth exists. The study concludes that oil price, exchange rate, agricultural output and unrest contain useful information for predicting the future path of economic growth in Nigeria. It therefore recommends that the government diversify the economy away from oil toward non-oil sectors and improve the security situation in the Niger Delta with a view to boosting oil output, hence leading to increased revenue and, by implication, growth of the economy.
INTRODUCTION
Oil price shocks are predominantly defined with respect to price fluctuations resulting from changes in either the demand or supply side of the international oil market (Wakeford, 2006).These changes have been traditionally traced to supply side disruptions such as OPEC supply quotas, political upheavals in the oil rich Middle East and activities of militant groups in the Niger Delta Region of Nigeria.The shocks could be positive (a rise) or negative (a fall).
In Nigeria, oil plays a critical role in the conduct of fiscal and monetary policies because it accounts for an average of 80 percent of government revenue, 90-95 percent of foreign exchange earnings and 12 percent of the real gross domestic product (Anyanwu, 1997).
Historically, the price of oil had been fairly stable until 1973. Since then, the impact of oil price shocks on the world economy has been larger (Hamilton, 2003). In the past three decades the price of oil has been volatile and, given the role of oil in the Nigerian economy, the effects of oil price shocks have been very significant and destabilizing.
Nigeria has been a major oil producer on the African continent, together with Libya. Indeed, attacks on oil refineries and the kidnapping of foreign engineers by the Movement for the Emancipation of the Niger Delta in the Niger Delta region were reported to have been among the causes of the international oil price increase of 2006-2007. Notwithstanding this, Nigeria's production can in general be considered too small to affect the international oil price, so the small-economy assumption is appropriate (CBN, 2008).
Since Nigeria is both an oil exporter and an importer of refined petroleum products, any volatility or fluctuation in oil prices will affect the Nigerian economy, whether positively or negatively. Several empirical studies have investigated the effect of oil price volatility on macroeconomic variables in different economies. Although the literature is mixed on the causality between oil price volatility and macroeconomic variables, most empirical studies show that the oil price directly impacts macroeconomic variables (Joseph, 2013; Aliyu, 2009).
As a mono-product economy, Nigeria remains susceptible to movements in international crude oil prices. During periods of favorable oil price shocks, triggered by conflicts in oil-producing areas of the world, surges in the demand for the commodity by consuming nations, seasonality factors, trading positions, etc., the country experiences favorable terms of trade, reflected in a robust current account surplus and exchange rate appreciation. Conversely, when crude oil prices are low, occasioned by factors such as low demand, seasonality factors, excess supply and exchange rate appreciation, the Nigerian economy experiences a significant drop in foreign exchange inflows that often results in a budget deficit and/or slower growth. A recent example was the dramatic drop in the price of crude oil in the wake of the global financial and economic crises: the price of oil fell by about two thirds from its peak of $147.0 per barrel in July 2008 to $41.4 at end-December 2008.
However, various episodes of oil shock have been observed in Nigeria, and each of them had connections with movements in key macroeconomic variables. For instance, the 1973-74, 1979-80, 1990, 1999-2000, 2003-2006, 2007-2008 and 2011 periods were associated with price increases, while the oil market collapse of 1986, the Iranian revolution of 1991-1992, the East Asian crisis of 1997-1998, and the energy crisis and tension in the Middle East of 2000-2001 were episodes of price decreases.
Theoretically, oil price increases translate into higher production costs, which raise the prices at which firms sell their products in the market. Higher commodity prices then translate into lower demand for goods and services, thereby shrinking aggregate output and employment. Furthermore, higher oil prices affect aggregate demand and consumption in the economy.
The transfer of income and resources from oil-importing to oil-exporting economies is projected to reduce worldwide demand, as demand in the former is likely to decline by more than it rises in the latter (Hunt et al., 2001). The resulting lower purchasing power of the oil-importing economy translates into lower demand. Oil price shocks also create economic uncertainty about the future performance of the macroeconomy: people may postpone consumption and investment decisions until they see an improvement in the economic situation.
It is against this background that the study finds a gap to fill, namely by considering the effects of unrest as a variable that potentially affects oil output, which in turn leads to revenue leakages that are assumed to have implications for the economic growth of both oil-exporting and oil-importing countries (especially Nigeria). Given the above scenario, the paper seeks to address the following questions: do all the variables under study have a long-run relationship, and what are the impacts of these different shocks on the growth of the Nigerian economy? The broad objective of this paper is to examine the impact of oil price shocks on the growth of the Nigerian economy. It thereby adds to the scanty existing empirical literature on the impact of oil price shocks on macroeconomic variables in oil-exporting and oil-importing developing countries (more specifically Nigeria).
Following this introductory section, the paper reviews the related literature on the oil price-macroeconomic variables relationship in Section 2. Data descriptions and the econometric model specifications used are given in Section 3. Section 4 presents the data and interprets the estimation results, while conclusions, recommendations and policy implications of the findings are presented in Section 5.
LITERATURE REVIEW
This section reviews research work carried out by various researchers. Bjornland (2004), Berument and Ceylan (2005) and Huang and Guo (2007) studied the impact of oil prices on the economic growth of countries including Venezuela, China, Algeria, Iran, Iraq, Jordan, Kuwait, Oman, Qatar, Syria, Tunisia, the UAE, Norway, the Philippines and the G7 countries using a structural vector autoregressive (SVAR) framework. Their findings show that an oil price shock stimulates these economies, while for countries such as Bahrain, Egypt, Lebanon, Morocco and Yemen no significant impact of oil price shocks on the economy was found.
Furthermore, studies on the Nigerian economy such as Aliyu (2009), Olomola and Adejumo (2006), Ayadi (2005), Gunu (2010) and Agbede (2013) used a VAR framework to examine the effects of the real exchange rate, oil price shocks, oil production shocks, money supply, net foreign assets, the interest rate, inflation and output. Empirically, the response of the real exchange rate is generally positive after a positive oil production shock, indicating a real depreciation of the naira. The impulse response of the real exchange rate is negligible relative to that of oil production, but the response of the real exchange rate after a year is about two times larger than that of oil production. Rautava (2004) develops a small VAR model to examine these dynamics in the Russian economy and shows that oil has played a significant role in movements of Russian GDP: a higher oil price leads to higher GDP in both the short and long run. On the other hand, in the model a higher oil price does not lead to a stronger real exchange rate, although the author conjectures that this may be due to the estimation strategy. Anshasy et al. (2005) examine the effects of oil price shocks on Venezuela's economic performance over 1950-2001. They investigate the relationship between oil prices, government revenues, government consumption spending, GDP and investment by employing general-to-specific modeling (VAR and VECM). They found two long-run relations consistent with economic growth and fiscal balance, and that this relationship is important not only for long-run performance but also for short-term fluctuations.
Jimenez-Rodriguez and Sanchez (2012) studied the role of oil price shocks in Japanese macroeconomic developments using quarterly data over the period 1976-2008. They also use a VAR framework and find evidence of non-linear effects of the oil price on both industrial output and inflation. Theory predicts that, in an oil-importing economy like Japan, unexpected hikes in oil prices should lead to lower economic activity and higher inflation; the empirical findings concerning the effects of oil shocks on industrial output growth and inflation confirm the expected pattern. Englama et al. (2010) examined the effects of oil price volatility, demand for foreign exchange and external reserves on exchange rate volatility in Nigeria using monthly data for the period 1999:1 to 2009:12. The authors utilized the cointegration technique and a vector error correction model (VECM) for the long-run and short-run analysis, respectively. The results showed that a 1.0 per cent permanent increase in the oil price on the international market increases exchange rate volatility by 0.54 per cent in the long run and by 0.02 per cent in the short run, while a permanent 1.0 per cent increase in the demand for foreign exchange increases exchange rate volatility by 14.8 per cent in the long run. The study reaffirms the direct link of the demand for foreign exchange and oil price volatility with exchange rate movements and, therefore, recommends that the demand for foreign exchange be closely monitored and that the exchange rate move in tandem with the volatility in crude oil prices, bearing in mind that Nigeria remains an oil-dependent economy. Ayoola (2013) examines the effects of crude oil price changes on economic activity in an oil-dependent economy, Nigeria. A small open economy structural vector autoregressive (SVAR) technique is employed to study the macroeconomic dynamics of the domestic price level, economic output, money supply and the oil price in Nigeria. The study covers the period 1985:Q1 to 2010:Q4. The results of the impulse response functions (IRFs) and the forecast error variance decompositions (FEVDs) suggest that domestic policies, rather than the oil boom, should be blamed for inflation. Also, oil price variations are driven mostly by oil shocks; however, domestic shocks are responsible for a reasonable portion of oil price variations. The study concludes that oil still has a very important indirect impact on the Nigerian economy and that monetary policy is the channel through which this indirect impact is transmitted.
However, from the above strand of literature it can be observed that most of the studies used data at rather low frequencies; studying 'shocks' and 'relationships' at such frequencies loses the clustering effect, so some vital information is lost. This study therefore re-estimates these shocks within a structural VAR framework on Nigerian data from 1970:Q1 to 2011:Q4 in order to capture this information.
Data descriptions and econometric model specifications
This study relies on secondary data: real GDP, the exchange rate and agricultural output are sourced from the Central Bank of Nigeria (CBN) statistical bulletin, the average world oil price from the Energy Intelligence Agency (EIA), and the unrest dummy from both the International Crisis Group (ICG) and the Nigerian National Petroleum Corporation (NNPC) statistical bulletin. The trend of the data is analyzed using the augmented Dickey-Fuller (ADF) unit root test for stationarity; the Johansen cointegration test is employed to assess the long-run relationship among the variables; and, for examining the long-run impact of the shocks, a structural VAR with the Blanchard and Quah (1989) long-run restriction pattern is employed, on the basis of impulse response functions and forecast error variance decompositions. Quarterly data are used for the period 1970-2011 (i.e. 168 observations), which covers the occurrence of the oil shocks in the international oil market.
Econometric model specification
The general econometric specification of the model to be estimated relates GDP to OILP, EXR, AGR and UNRST, where: GDP = gross domestic product; OILP = crude oil price; EXR = nominal foreign exchange rate; UNRST = unrest (oil shocks); AGR = agricultural output.
Stationarity test and study variables
The variables of interest (i.e. the endogenous variables) are seasonally adjusted real GDP, the nominal foreign exchange rate, agricultural output, the oil price and unrest (a dummy variable). The choice of variables is mainly driven by similar studies, in particular Aliyu (2009), which was conducted on Nigeria, is used as a benchmark here and is in accord with economic theory. Since the data are time series, regressions involving unit root processes may give spurious results, and the naive application of regression analysis may yield nonsense results.
Therefore, whether the levels or the differences of a series are stationary leads to substantially different conclusions, and testing for non-stationarity, that is, for unit roots, is the usual practice today.
The study accordingly applies the commonly used augmented Dickey-Fuller (ADF) unit root test to determine the variables' stationarity properties, or order of integration. Before estimating the VAR model, the widely recommended Akaike information criterion (AIC) is used to determine the lag length of the VAR system to make sure the model is well specified.
The ADF test regression takes the form Δy_t = α + δ₁ y_{t-1} + Σ_{i=1}^{m} β_i Δy_{t-i} + ε_t, where Δy_t denotes the first difference of the variable under consideration, m is the number of lags and ε_t is the error term. The stationarity of the variables is tested using the hypotheses H₀: δ₁ = 0 (null hypothesis, where δ₁ = ρ − 1) against H₁: δ₁ < 0 (alternative hypothesis). Based on the critical values of the test statistic, if the null hypothesis cannot be rejected, then the time series is non-stationary in levels and needs to be differenced once or more to achieve stationarity and to find the order of integration. The test is applied to each variable used in the model.
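As an illustration only, the ADF procedure just described can be carried out with standard routines; the sketch below uses statsmodels, runs the test with a constant at level and at first difference, and assumes a hypothetical DataFrame named data holding the study's series (the column names are placeholders, not the authors' code).

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series: pd.Series, name: str) -> None:
    """Run the ADF test (constant included) at level and at first difference."""
    for label, y in [("level", series.dropna()),
                     ("first difference", series.diff().dropna())]:
        stat, pvalue, usedlag, nobs, crit, _ = adfuller(y, regression="c", autolag="AIC")
        print(f"{name} ({label}): ADF = {stat:.3f}, p-value = {pvalue:.3f}, "
              f"lags = {usedlag}, 1% critical value = {crit['1%']:.3f}")

# Hypothetical usage on the study's quarterly series:
# for col in ["rGDP", "OILP", "EXR", "AGR"]:
#     adf_report(data[col], col)
```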
Johansen and Juselius (1990) Test for Cointegration
The VAR model is specified as y_t = A_0 + Σ_{k=1}^{p} A_k y_{t-k} + e_t, where y_t is an (n × 1) vector of non-stationary I(1) variables, n is the number of variables in the system (in this study four in each case), A_0 is an (n × 1) vector of constant terms, A_k is an (n × n) matrix of coefficients, e_t is an (n × 1) vector of independently and identically distributed error terms, and p is the order of the autoregression, or number of lags. Quarterly data are used for all of the analysis.
Thus, y_t is expressed as a linear combination of current and past innovations. Based on (2), impulse response functions are simulated to assess the dynamic effects of oil price shocks on output (rGDP), the exchange rate, agricultural output and the oil price. To test for cointegration, the VAR-based approach of Johansen and Juselius (1990) is employed.
In particular, the Johansen and Juselius (JJ) test for cointegration is based on evaluating the rank of the coefficient matrix on the level variables in the regression of the changes in a vector of variables on its own lags and lagged levels. The rank of this matrix, which equals the number of its characteristic roots (eigenvalues) that differ from zero, indicates the number of cointegrating vectors governing the relationships among the variables. Johansen and Juselius (1990) develop two test statistics to determine the number of cointegrating vectors, the trace and the maximal eigenvalue (M.E.) statistics, where T is the number of effective observations and the estimated eigenvalues enter the statistics. Although the sample size here is 168, to handle sample sizes of less than 100 the trace and M.E. statistics are adjusted by a factor (T − np)/T, where T is the effective number of observations, n is the number of variables and p is the lag order. This corrects the bias towards finding evidence for cointegration in finite or small samples. The adjusted trace statistic tests the null hypothesis that the number of distinct cointegrating relationships is less than or equal to r against the alternative of more than r cointegrating relationships. Meanwhile, the adjusted M.E. test statistic tests the null hypothesis that the number of cointegrating relationships is less than or equal to r against the alternative of r + 1 cointegrating relationships.
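A minimal sketch of how the JJ statistics and the (T − np)/T small-sample adjustment described above could be computed is given below, using coint_johansen from statsmodels; the variable names, lag order and the way the effective sample size is measured are illustrative assumptions rather than the authors' exact procedure.

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def johansen_adjusted(df, lag_order):
    """Johansen trace and maximum-eigenvalue statistics, scaled by the
    (T - n*p)/T small-sample correction factor discussed in the text."""
    res = coint_johansen(df, det_order=0, k_ar_diff=lag_order)
    T = df.shape[0] - lag_order          # effective number of observations (assumed)
    n = df.shape[1]                      # number of variables in the system
    factor = (T - n * lag_order) / T
    trace = res.lr1 * factor             # adjusted trace statistics for r = 0, 1, ...
    max_eig = res.lr2 * factor           # adjusted maximum-eigenvalue statistics
    return trace, res.cvt, max_eig, res.cvm   # statistics and 90/95/99% critical values

# Hypothetical usage on the five quarterly series:
# trace, trace_cv, max_eig, max_eig_cv = johansen_adjusted(
#     data[["rGDP", "OILP", "EXR", "AGR", "UNRST"]], lag_order=3)
```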
Structural VAR model
The advantage of the SVAR approach is that the system dynamics can easily be investigated via impulse response analysis, and the statistical significance of the various shocks can be evaluated with confidence intervals. Moreover, the relative importance of the stochastic shocks can be examined by forecast error variance decomposition. The different structural shocks are identified by means of long-run restrictions, whereby certain shocks are allowed to have long-run impacts on all or some of the system variables.
After the relationships among the variables have been ascertained using the VAR, the SVAR approach follows. The starting point is a reduced-form K-dimensional VAR model,
y_t = A_1 y_{t-1} + … + A_p y_{t-p} + ε_t, (I)
where y_t is a (K × 1) vector of endogenous variables, among them real GDP (y_t), the oil price and unrest (un_t), the A_i are fixed (K × K) coefficient matrices, and ε_t is assumed to follow a K-dimensional white noise process. Bivariate (2 × 2) SVAR models are used to examine the impact of oil price shocks and of unrest on economic growth following Blanchard and Quah (1989). Following Blanchard and Quah (1989), the model is expressed as an infinite moving average of the structural shocks, where the changes in (ΔLrGDP, Δoilprice) and in (ΔLrGDP, Δunrest) are assumed to be stationary while the permanent and transitory errors ε are uncorrelated white noise disturbances; the two structural innovations are the demand and supply shocks, respectively, and it is assumed that demand shocks have only a temporary effect on the level of GDP. The identity covariance matrix is obtained by normalizing the variance of the structural shocks such that E(ε ε′) = I, that is, the shocks are orthogonal and serially uncorrelated.
The reduced form of the model in moving average representation can also be written, where e_t is the vector of estimated reduced-form residuals with variance E(e_t e_t′) = Ω, the matrices C_i represent the impulse responses of ΔLrGDP, ΔLoilprice and Δun (dummy) to the shocks, and C(L) is an infinite polynomial in the lag operator with A(L) = C(L)^{-1}.
From equations (IV) and (V), the mapping between the reduced-form and structural representations can then be derived.
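To make the identification scheme concrete, a minimal sketch of a bivariate Blanchard and Quah (1989) decomposition is shown below: a reduced-form VAR is fitted to the two differenced series, the long-run cumulative impact matrix C(1) = (I − A(1))⁻¹ is formed, and its implied long-run covariance is Cholesky-factorized so that the second (transitory, demand-type) shock has no long-run effect on output. Variable names, the lag length and the statsmodels-based implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def blanchard_quah(d_lrgdp, d_oilprice, lags=3):
    """Bivariate long-run (Blanchard-Quah) identification sketch on
    first-differenced, stationary series (output first, oil price second)."""
    data = np.column_stack([d_lrgdp, d_oilprice])
    res = VAR(data).fit(lags)
    A1 = res.coefs.sum(axis=0)                    # A(1) = A_1 + ... + A_p
    C1 = np.linalg.inv(np.eye(2) - A1)            # long-run impact of reduced-form errors
    # Lower-triangular long-run matrix: shock 2 has no permanent effect on output
    long_run = np.linalg.cholesky(C1 @ res.sigma_u @ C1.T)
    S = np.linalg.inv(C1) @ long_run              # contemporaneous structural impact matrix
    structural_shocks = np.linalg.solve(S, res.resid.T)
    return S, structural_shocks

# Structural impulse responses at horizon h follow from the VAR's MA coefficients times S.
```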
Introduction
This section presents and analyzes the estimation results, namely the unit root tests of the time series, the cointegration results, and the impulse response analysis together with the forecast error variance decompositions.
Unit root test
The study conducts unit root tests on the variables with the augmented Dickey-Fuller (ADF) test. The outcomes are presented in Table 1. According to the ADF statistics at level, the null hypothesis of a unit root cannot be rejected.
On the other hand, at first difference with a constant, the null hypothesis of a unit root is rejected in favor of the alternative. The ADF test thus indicates that all the series are stationary in first differences at the 1% level of significance. In conclusion, there is enough empirical evidence to infer that all the series are I(1) processes. This allows cointegration tests to be conducted among the variables.
VAR-based Johansen and Juselius (1990) Cointegration test
To achieve the first objective, the study assesses the long-run relationship among the variables using the VAR-based Johansen and Juselius (1990) tests. In Tables 2(a and b) and 3(a and b), the usual criterion for the trace test is to compare the trace statistic with the critical value; if the trace statistic is higher than the critical value, there is cointegration. This analysis suggests the existence of a long-run relationship between GDP, as the dependent variable, and OILP and UNRST as independent variables. Both the trace and maximum eigenvalue tests indicate two cointegrating equations at the 5% significance level among oil price volatility, unrest and GDP.
Table 4(a and b) likewise suggests the existence of a long-run relationship between GDP, as the dependent variable, and OILP, EXR, AGR and UNRST as independent variables, with two cointegrating equations at the 5% level of significance. Therefore, the test statistics in Table 4(a and b) indicate that the null hypothesis that the variables under study have no long-run relationship can be safely rejected, and it is concluded that these series share a common long-run relationship in Nigeria over the period under review.
Blanchard and Quah (1989) Long run Pattern (SVAR Model)
To achieve the second objective, the study uses a structural vector autoregressive framework in which the restrictions are based on, and supported by, economic theory. As already explained, for just-identification we need at least one restriction, i.e. n(n − 1)/2 restrictions, and the Blanchard and Quah (1989) framework is followed to test the null hypothesis that oil price shocks and unrest both have a long-run impact on economic growth. Unrest is regarded as having a temporary effect, this being the research gap; therefore all temporary effects are restricted to zero. After estimating the just-identified model, the impulse responses are reported in Figure 1(a and b). The estimation of the SVAR is carried out within a multivariate VAR model. The unit root tests indicate that all the series are I(1), and a lag length of 3 is used, which removes serial correlation. The impulse responses indicate a negative response of unrest to rGDP innovations from the 1st to the 7th quarter, after which it dwindles up to the last quarter. The results indicate that oil price shocks and unrest in the international oil market do have impacts on economic growth in Nigeria over the period under review. The study therefore rejects the null hypothesis that oil price shocks and unrest have no long-run impact on economic growth, concluding that there is enough evidence that these variables affect economic growth in the long run. The impulse response analyses for the oil price and for unrest depict, respectively, a positive and an inverse relationship with the level of real GDP. This is consistent with the prior expectation of the theory and also with the VAR results in Aliyu (2009), which found a positive relationship for a country like Nigeria that both imports and exports oil. In light of similar findings, policymakers need to consider unrest as a source of shocks alongside oil price shocks, which remain a major source of fluctuations for many variables in the Nigerian economy, echoing the prescriptions for New Zealand in the study by Grounder and Bartleet (2007). Looking critically at the impulse responses in Figure 1(a), an overwhelming feature of the Dutch disease (resource curse) hypothesis in Nigeria emerges, as also found in Olomola and Adejumo (2006).
Forecast error variance decomposition
Under this section, the forecast error variance decomposition is examined. Table 5a shows that the variance decomposition of agriculture output accounts for a relative proportion of the forecast error due to its own innovation throughout the periods. From the table, the contributions of OILP, UNRST and EXR to agriculture output fluctuations are less than that of rGDP over the given period: exchange rate, oil price and unrest explain about 0.308, 0.076 and 0.235%, while gross domestic product explains about 2.109% of the fluctuations in agriculture at the 10th period. Contemporaneously and over the time horizon, agriculture output drives essentially all of its own variance in the 1st period.
Table 5b shows that the variance decomposition of exchange rate accounts for the highest proportion of forecast error due to its own innovation in the first period. Exchange rate accounts for 96.42% in the 1st period; its proportion decreases continually from the 2nd period until it reaches 0.308% in the 10th period, while the innovations of rGDP, agriculture output, oil price and unrest each explain less than one percent in the 1st period. AGR increases from the 2nd to the 10th period, but the contributions of rGDP, OILP and UNRST to EXR are very small because they dwindle throughout the periods.
Table 5c shows that the variance decomposition of oil price accounts for the highest proportion of forecast error due to its own innovation, while the innovations of agriculture output, exchange rate, unrest and rGDP explain about 97.28, 0.306, 0.235 and 2.109% at the 10th period respectively. Contemporaneously and over the time horizon, oil price drives its own variance by over 97.47% in the 1st period. Table 5d shows that the variance decomposition of unrest accounts for the highest proportion of forecast error due to its own innovation, while the innovations of rGDP, exchange rate, agriculture output and oil price explain about 2.108, 0.307, 97.27 and 0.076% at the 10th period respectively. Contemporaneously and over the time horizon, unrest drives its own variance by over 99.13% in the 1st period. After the 1st and 2nd periods, UNRST decreases drastically to 0.236% in the 10th period, which is less than the proportions of rGDP, EXR and AGR in the 10th period (i.e. 2.108, 0.307 and 97.27%).
Table 5e shows that the variance decomposition of rGDP accounts for the highest proportion of forecast error due to its own innovation. This means that the fluctuations of GDP are explained mainly by GDP shocks and by other variables' shocks in the long run. Gross Domestic Product (GDP) accounts for 63.72% in the 1st period; its proportion decreases continually until it reaches 4.04% in the 10th period. EXR, OILP, AGR and UNRST shocks each account for less than 1% in the 1st period, but the AGR proportion increases over time and reaches 94.56% in the 10th period, while the proportions of EXR, OILP and UNRST dwindle over time from the 2nd to the 10th period. The result shows that, in the long run, agriculture output shocks account for the major variation in gross domestic product.
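The variance decompositions reported in Tables 5a to 5e can be produced directly from the estimated VAR; a minimal sketch with assumed file and column names is shown below.

# Minimal sketch of the forecast error variance decomposition (not the authors' code).
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("nigeria_quarterly.csv")                           # hypothetical data file
results = VAR(df[["rgdp", "agr", "exr", "oilp", "unrst"]]).fit(3)   # lag 3, as in the text
fevd = results.fevd(10)   # decomposition over a 10-period horizon, as in the tables
fevd.summary()            # prints the share of each shock in each variable's forecast error variance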
DISCUSSIONS/POLICY IMPLICATIONS
The policy implications of the results from the VAR and structural VAR raise striking issues for the forecasting performance of an estimate: estimation using the structural VAR has an error band, while the unrestricted VAR has none. The findings from this study indicate the usefulness of these variables through their contributions in predicting the future path of Nigerian economic growth. Jimenez-Rodreguez and Sanchez (2012) and Olomola and Adejumo (2006) reported similar results with respect to oil price shocks in the Nigerian and Japanese economies. In the analysis, agriculture output has the highest long-run contribution, followed by exchange rate and then oil price. It is widely accepted that agricultural output contributes to economic growth.
Nevertheless, the result from the estimated regression output is in line with a priori expectation. In other words, it mirrors the fact that unrest has a ripple effect on the economy. Within the research scope, unrest has an inverse ripple effect on the Nigerian macroeconomic variables, with its coefficient correctly signed. This suggests that, to achieve meaningful macroeconomic targets as far as the Nigerian economy is concerned, emphasis should be geared towards addressing unrest.
In addition to the above, the number of cointegrating relationships has also played a key role in this line of exercise. So, to impose restrictions to recover the shocks in oil price and unrest, the study refers back to the number of cointegrating vectors. As for the broad objective, the study normalizes the coefficients of the regression; with one cointegrating equation, the theory needs only one restriction, that is, the just-identified restriction.
To this end, this study examines the impact of oil price shocks, the exchange rate, agriculture output and unrest on Nigeria's economic growth (rGDP). Since Nigeria is an oil producing country, the naira real exchange rate appreciates with higher oil prices, leading to a higher inflow of foreign exchange into the economy. Although this may sound good for the economy, unrest has a ripple effect on real economic activity because it reduces the volume of oil output, and this translates into lower revenues.
CONCLUSION AND RECOMMENDATIONS
In conclusion, therefore, the unrestricted VAR has been used extensively in recent empirical research to assess the evidence in support of central propositions of macroeconomics, such as the impact of oil price shocks on aggregate variables. Estimated impulse responses and forecast error variance decompositions have also played a key role in these exercises. The approach has been vigorously pursued following the research of Blanchard and Quah (1989).
The study raises some important issues about what can be learned from this line of empirical research. The asymptotic analysis shows that, when studying shocks or volatility at annual data frequencies, the clustering effect disappears. Some previous research, e.g. Olomola and Adejumo (2006), has shown that estimated impulse responses can be very sensitive to changes in VAR model specification, such as the inclusion of trends and additional variables, and there has been debate about the robustness of the empirical findings in this line of research. This result corroborates earlier findings on unrestricted VAR impulse responses, which give clear analytical reasons why impulse responses from unrestricted VARs can be unreliable even in very large samples and show that different models in the VAR class produce impulse responses with very different behaviour. A model like the unrestricted VAR has no theory supporting it and can therefore produce inconsistent impulse responses. It is particularly important that the cointegrating relations in a system (and hence the number of unit roots) be estimated consistently.
In general, this is a pioneering study in controlling for the effect of unrest (systemic risk) when studying the impact of oil price shocks with a structural econometric approach. While there are certainly differences in forecasting performance across time series models, the most serious disagreements between them arise in policy analysis. The main conclusion is that the differing treatment of cointegration in the models plays a big role in shaping the outcomes of policy analysis. Although this issue was not investigated in the previous empirical assessment, it seems likely (by analogy with the result for the structural, just-identifying restriction approach) that similar effects come into play in structural econometric models when unit roots or near unit roots are estimated.
Therefore, the study recommends that government should diversify the economic base from oil to non-oil sectors as a necessary condition for sustainability and growth. Government should also improve security in the Niger Delta area with a view to boosting oil output, thereby increasing oil revenue and, by implication, the growth of the economy.
Finally, in analyzing economic shocks, care must be taken in the choice of variables; the study recommends carrying out misspecification tests for serial correlation, normality and heteroscedasticity, i.e. diagnostic tests on the model.
Table 1a. Unit root test at level with constant.
Table 1b. Unit root test at 1st difference with constant.
price, Agr output, Exchange rate and Unrest) have a common long-run relationship in the Nigerian economy. The results of the cointegration tests are shown in Tables 2(a and b) to 3(a and b).
Table 2a. Unrestricted cointegration test (trace statistics) between GDP and oil price.
Table 2b. Unrestricted cointegration test (maximum eigenvalue statistics) between GDP and oil price.
Table 3a. Unrestricted cointegration test (trace statistics) between GDP and unrest.
Table 3b. Unrestricted cointegration test (maximum eigenvalue statistics) between GDP and unrest.
Table 4b. Unrestricted cointegration test (maximum eigenvalue statistics).
Source: Researchers' computation, E-views 7.1, 2015. Figure 1a. Impulse responses. Figure 1b. The bivariate model for rgdp and unrest. In the bivariate model for rgdp and the oil price variable, the level of rgdp increases to about 0.2% from the 1st up to the 7th quarter, i.e. the resulting oil price shock elicits a positive response which leads to an appreciation in the rGDP innovation. Meanwhile, Figure 1(b) shows the bivariate model for rgdp and unrest.
Table 5e. Variance Decomposition of GDP.
Source: study 2015. Asterisks indicate presentations of a variable's shocks in relation to other innovations in the system. | 7,840 | 2015-02-14T00:00:00.000 | [
"Economics"
] |
Adsorption Kinetics for the Removal of Fluoride from Aqueous Solution by Activated Carbon Adsorbents Derived from the Peels of Selected Citrus Fruits
Activated carbons (ACs) were prepared from the peels of Citrus documana, Citrus medica and Citrus aurantifolia fruits. Adsorption of fluoride onto these activated carbons was investigated, and the effect of contact time on the removal of fluoride from aqueous solution at neutral pH was studied. Five kinetic models, the pseudo first- and second-order equations, intraparticle diffusion, pore diffusion and the Elovich equation, were selected to follow the adsorption process. Adsorption of fluoride onto the adsorbents could be described by the pseudo second-order equation. Kinetic parameters (rate constants, equilibrium adsorption capacities and correlation coefficients) for each kinetic equation were calculated and discussed. The good fit of the kinetic data to the pore diffusion and Elovich equations indicates that pore diffusion plays a vital role in controlling the rate of the reaction.
Introduction
Fluoride is found in all natural waters at some concentration. Long term exposure to higher levels of fluoride (> 1.5 ppm) in drinking water leads to serious health problems 1,2 like skeletal fluorosis, brain damage, osteoporosis, thyroid disorder and cancer 3,4 . Various defluoridation technologies based on the principles of precipitation 5 , ion exchange 6 and electrochemical 7 methods have been proposed to remove excess fluoride in drinking water and industrial effluents. Among the various methods, adsorption is a suitable technique. The application of activated carbons in the adsorptive removal of inorganics from water has been the subject of numerous investigations. Activated carbon prepared from rice straw 8 , bio-materials 9 , alumina impregnated carbons 10,11 and waste carbon slurry 12 are different adsorbents used for the defluoridation process. In the present study, activated carbons (ACs) prepared from the peels of some selected citrus fruits are used in removing fluoride from aqueous solution with the aim of understanding the kinetics of the adsorption process at neutral pH.
One of the most important factors in designing an adsorption system is predicting the rate at which adsorption takes place, referred to as the 'kinetics of sorption'. In adsorption processes, the selection of an adsorbent, its configuration and the attainment of equilibrium are related to the 'rate-limiting' process. An understanding of the rate-limiting step greatly aids in deciding the contact time to be allowed between the sorbent and sorbate. So, to properly interpret the experimental data, it is necessary to determine the rate-limiting step for the adsorption process, which governs the overall removal rate and mechanism of sorption. There are essentially three consecutive steps in the adsorption of materials from solution by porous adsorbents, namely bulk diffusion, film diffusion and pore diffusion. Any of these steps can be 'rate-limiting' in adsorption. Generally, both pore diffusion and film diffusion are considered the major factors controlling the rates of sorption from solution by porous adsorbents; as they act in series, the slower of the two is regarded as rate-limiting in an adsorption process 13,14 .
Experimental
The activated carbon adsorbents, viz. NCDC, NCMC and NCAC, were prepared from the peels of Citrus documana, Citrus medica and Citrus aurantifolia fruits respectively. The peels of the selected citrus fruits were obtained from a local fruit stall at Eluru, Andhra Pradesh. The peels were dried, crushed and washed thoroughly with de-ionized water to remove adhering dirt. They were air dried in an oven at 100-120 o C for 24 h. After drying, they were carbonized at 500 o C under a uniform nitrogen flow and subjected to liquid phase oxidation with 0.1 N HNO 3 (analytical grade). They were then washed with double-distilled water to remove the excess acid and dried at 150 o C for 12 h.
The stock solution of 100 mg L -1 fluoride was prepared by dissolving 221 mg of anhydrous NaF in 1 L of distilled water. A test solution of 5 mg L -1 F - was prepared from the fresh stock solution. All the experiments were carried out in 250 mL conical flasks with 100 mL of test solution at room temperature (25±2 o C). The flasks, containing the test solution and 1 g of the adsorbent at neutral pH, were shaken in a horizontal shaker at 120 rpm to study the equilibration time (5-50 min) for maximum adsorption of fluoride and to determine the kinetics of the adsorption process. At the end of the desired contact time, the samples were filtered using Whatman no. 42 filter paper and the filtrate was analyzed for residual fluoride concentration by the SPADNS method, described in the standard methods for the examination of water and wastewater 15 .
Effect of contact time
A plot between time t (min) and amount of fluoride adsorbed with time q t (mg g -1 ) is shown in Figure 1. As agitation time increases, fluoride removal also increases initially, but then gradually approaches a more or less constant value, denoting the attainment of equilibrium. Similar trend was observed by other workers during adsorption of fluoride onto protonated chitosan beads 16 . With respect to contact time, NCDC, NCMC and NCAC reached saturation after 30, 35 and 35 min respectively, which were fixed as their optimum contact times. Among the three sorbents, NCDC exhibits higher adsorption capacity followed by NCMC and NCAC.
Fitness of the kinetic models
The best fit among the kinetic models was assessed by the squared sum of errors (SSE) values. It is assumed that the model which gives the lowest SSE value is the best model for the particular system 17,18 . The SSE values were calculated from the following equation, where q e(expt.) and q e(cal.) are the experimental sorption capacity of fluoride (mg g -1 ) at equilibrium and the corresponding value obtained from the kinetic model. The SSE values and the various kinetic parameters for all the kinetic models were calculated and are summarized in Table 1.
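The SSE expression referred to above appears to have been dropped during extraction; a standard squared-sum-of-errors form consistent with the definitions of q e(expt.) and q e(cal.) would be:

\[ \mathrm{SSE} = \sum \left( q_{e,\mathrm{expt.}} - q_{e,\mathrm{cal.}} \right)^{2} \]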
Adsorption kinetics
Five simplified kinetic models namely pseudo first-order, pseudo second-order, Weber and Morris intraparticle diffusion model, Bangham's pore diffusion model and Elovich equations have been discussed to identify the rate and kinetics of sorption of fluoride onto NCDC, NCMC and NCAC.
Pseudo first-order model
The Lagergren rate equation is one of the most widely used rate equations to describe the adsorption of an adsorbate from the liquid phase 19,20 . In the linear form of the pseudo first-order rate expression of Lagergren, q e and q t are the amounts of fluoride adsorbed on the adsorbent (mg g -1 ) at equilibrium and at time t (min), respectively, and k 1 is the rate constant of pseudo first-order kinetics. Figure 2(a) shows the plots of the linearized form of the pseudo first-order kinetic model for the three sorbents. The plots were found to be linear with good correlation coefficients (>0.9), indicating the applicability of the pseudo first-order model in the present study. The pseudo first-order rate constant (k 1 ) and q e(cal.) values were determined for each adsorbent from the slope and the intercept of the corresponding plot (Figure 2(a)) and are listed in Table 1.
Pseudo second-order model
The adsorption kinetics was also described as a pseudo second-order process 21 , in which q e and q t have the same meaning as mentioned previously and k 2 is the rate constant for the pseudo second-order kinetics. The plots of t/q t versus t for the three adsorbents are shown in Figure 2(b). The values of q e(cal.) and k 2 were determined for each adsorbent from the slope and intercept of the corresponding plot and are compiled in Table 1.
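The display equations for the two models appear to have been lost during extraction; their standard linearized forms, consistent with the variable definitions above and with the t/q t versus t plots mentioned, are:

\[ \log(q_e - q_t) = \log q_e - \frac{k_1}{2.303}\, t \]
\[ \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \]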
The correlation coefficient (R 2 ) values for the pseudo second-order adsorption model are high: 0.9946, 0.9971 and 0.997 for NCDC, NCMC and NCAC respectively. In each case, the R 2 value is higher than that of the pseudo first-order model. The lower SSE values for the pseudo second-order model also indicate that the adsorption kinetics of fluoride onto NCDC, NCMC and NCAC can be better described by the pseudo second-order model. A similar phenomenon has been observed in the literature for the adsorption of fluoride on various adsorbents 8 .
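To illustrate how the pseudo second-order parameters in Table 1 can be extracted from contact-time data, the sketch below fits the linearized form by least squares; the numerical arrays are placeholders only, not the measured data.

# Minimal sketch of fitting the linearized pseudo second-order model
# t/q_t = 1/(k2*qe^2) + t/qe; the data arrays below are placeholders, not the paper's data.
import numpy as np

t = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)        # contact time, min (illustrative)
qt = np.array([0.20, 0.28, 0.33, 0.36, 0.38, 0.39, 0.40, 0.40])    # uptake, mg/g (illustrative)

slope, intercept = np.polyfit(t, t / qt, 1)   # linear fit of t/q_t versus t
qe_cal = 1.0 / slope                          # equilibrium capacity, mg/g
k2 = slope**2 / intercept                     # rate constant, since intercept = 1/(k2*qe^2)
r2 = np.corrcoef(t, t / qt)[0, 1] ** 2        # correlation coefficient of the fit
print(f"qe(cal.) = {qe_cal:.3f} mg/g, k2 = {k2:.4f} g/(mg min), R^2 = {r2:.4f}")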
Intraparticle diffusion
The rate of sorption is frequently used to analyze the nature of the 'rate-controlling step', and the intraparticle diffusion model has been widely explored in this regard; it is represented by the following Weber and Morris equation 20 .
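The Weber and Morris display equation appears to have been lost during extraction; its standard form, consistent with the parameters defined in the next sentence, would be:

\[ q_t = k_{ip}\, t^{1/2} + C \]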
Here, C is the intercept, related to the thickness of the boundary layer, and k ip is the intraparticle diffusion rate constant. According to this model, if adsorption of a solute is controlled by the intraparticle diffusion process, a plot of q t versus t 1/2 gives a straight line. Weber and Morris plots of q t versus t 1/2 are shown in Figures 3(a), 3(b) and 3(c) for NCDC, NCMC and NCAC respectively. It is evident from the plots that there are two separate stages: a first linear portion (Stage I) and a second curved path followed by a plateau (Stage II). In Stage I, nearly 50% of the fluoride was rapidly taken up by the carbon adsorbents within 5 min. This is attributed to the immediate utilization of the most readily available adsorbing sites on the adsorbent surfaces. In Stage II, very slow diffusion of the adsorbate from the surface sites into the inner pores is observed. Thus the initial portion of fluoride adsorption by the carbon adsorbents may be governed by the initial intraparticle transport of fluoride controlled by a surface diffusion process, and the later part is controlled by pore diffusion. A similar dual nature, with an initial linear portion followed by a plateau, has been found in the literature 8,25 .
Though intraparticle diffusion renders straight lines with correlation coefficients (>0.98) for all three sorbents, the intercept of the line fails to pass through the origin in each case, which may be due to a difference in the rate of mass transfer in the initial and final stages of adsorption 26 , and indicates some degree of boundary layer control, implying that intraparticle diffusion is not the only rate-controlling step 19 . The data were further used to identify the slow step occurring in the present adsorption system using the pore diffusion model.
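The Bangham pore-diffusion expression referred to here also appears to have been dropped; its commonly used double-logarithmic form, consistent with the plot of log log[C i /(C i − q t m)] against log(t) described below, would be:

\[ \log\log\left[\frac{C_i}{C_i - q_t m}\right] = \log\left(\frac{k_0\, m}{2.303\, V}\right) + \alpha \log t \]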
Here, C i is the initial concentration of the adsorbate in solution (mg L -1 ), V is the volume of the solution (mL), m is the weight of adsorbent (g L -1 ), q t (mg g -1 ) is the amount of adsorbate retained at time t, and α (less than 1) and k 0 are constants. Accordingly, log log[C i /(C i -q t m)] was plotted against log(t) in Figure 4 for all three sorbents. The plot was found to be linear for each adsorbent with a good correlation coefficient (>0.9), indicating that the kinetics conformed to Bangham's equation and therefore that the adsorption of fluoride onto NCDC, NCMC and NCAC was pore-diffusion controlled. A similar trend was observed in the literature for the adsorption of fluoride onto waste carbon slurry 12 .
Elovich equation
In the Elovich equation 28 , α (mg g -1 min -1 ) is the initial sorption rate and the parameter β (g mg -1 ) is related to the extent of surface coverage and the activation energy for chemisorption. The kinetic results will be linear on a q t versus ln(t) plot (Figure 5) if they follow the Elovich equation. It has been suggested that diffusion accounts for the Elovich kinetics pattern 29 ; conformity to this equation alone might be taken as evidence that the rate-determining step is diffusional in nature 30 , and this equation should apply under conditions where the desorption rate can be neglected 31 . The kinetic curves of sorption demonstrated a good fit to the model (R 2 > 0.9), which may indicate that diffusional rate-limiting is prominent in fluoride sorption by NCDC, NCMC and NCAC.
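The Elovich display equation referenced for this paragraph appears to have been lost; its standard linearized form, consistent with the q t versus ln(t) plot described, would be:

\[ q_t = \frac{1}{\beta}\ln(\alpha\beta) + \frac{1}{\beta}\ln t \]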
Conclusion
The fitting of the kinetic data demonstrates that the dynamics of sorption are better described by the pseudo second-order model, indicating a chemisorptive rate-limiting step for all three adsorbents, NCDC, NCMC and NCAC. Though the intraparticle diffusion plots render straight lines with good correlation coefficients, they fail to pass through the origin in each case. This suggests that the process is 'complex', with more than one mechanism limiting the rate of sorption. The good fit of the kinetic data to Bangham's and Elovich equations indicates that pore diffusion plays a vital role in controlling the rate of reaction. | 2,871.4 | 2010-01-01T00:00:00.000 | [
"Chemistry"
] |
Conical Statistical Optimal Near-Field Acoustic Holography with Combined Regularization
For the sound field reconstruction of large conical surfaces, current statistical optimal near-field acoustic holography (SONAH) methods have relatively poor applicability and low accuracy. To overcome this problem, conical SONAH based on cylindrical SONAH is proposed in this paper. Firstly, elementary cylindrical waves are transformed into those suitable for the radiated sound field of the conical surface through cylinder-cone coordinates transformation, which forms the matrix of characteristic elementary waves in the conical spatial domain. Secondly, the sound pressure is expressed as the superposition of those characteristic elementary waves, and the superposition coefficients are solved according to the principle of superposition of wave field. Finally, the reconstructed conical pressure is expressed as a linear superposition of the holographic conical pressure. Furthermore, to overcome ill-posed problems, a regularization method combining truncated singular value decomposition (TSVD) and Tikhonov regularization is proposed. Large singular values before the truncation point of TSVD are not processed and remaining small singular values representing high-frequency noise are modified by Tikhonov regularization. Numerical and experimental case studies are carried out to validate the effectiveness of the proposed conical SONAH and the combined regularization method, which can provide reliable evidence for noise monitoring and control of mechanical systems.
Introduction
Vibration and noise have a great influence on the accuracy of mechanical processing and reliability of electromechanical products, especially stealth and detection performance of underwater vehicles. The primary problem in the evaluation of stealth performance of underwater vehicles is the acquisition of radiated sound field. However, it is difficult to carry out high-precision tests in the far fields. Therefore, near-field acoustic holography (NAH) is applied to gain three-dimensional (3D) visualization of sound radiation based on measurements over a surface near the sound source, whose ability to reconstruct the evanescent wave components also ensures a very high spatial resolution.
The traditional NAH [1] is based on regular-grid measurements across a level surface in a separable coordinate system, allowing the calculation to be performed by spatial discrete Fourier transform (DFT). Consequently, the main limitation is the requirement for full coverage of the areas with significant sound pressure. A second limitation is the requirement of a regular measurement grid to support the use of spatial DFT. Both limitations are overcome by the patch NAH technique named Statistical Optimal Near-Field Acoustic Holography (SONAH), which provides a new way of achieving better sound field reconstructions directly by setting up a projection matrix that is optimized for a wave-number spectrum of wave functions with a specific amplitude distribution (scaling), avoiding the truncation effects and winding error caused by the DFT of traditional NAH. Nevertheless, most current methods focus on statistical optimal planar near-field acoustic holography, and only a few studies on cylindrical sound sources for military equipment have been carried out [2][3][4]. When a submarine travels at low speed, its radiated noise mainly comes from the mechanical noise of the conical stern. Currently, there is a lack of research on the reconstruction of the sound field radiated by conical sources, and the associated acoustic inverse problems remain insufficiently studied. Therefore, conical SONAH demands immediate attention so that it can be applied to conical-shell underwater weapons. On this basis, building on cylindrical SONAH, conical SONAH with a combined regularization method is proposed to address the ill-posedness of the inverse problem and strengthen the stability and accuracy of the reconstruction process.
Near-field acoustic holography (NAH), developed in the 1980s, is a very useful tool for three-dimensional (3D) visualization of sound radiation and for precise source localization. The traditional NAH [5,6] is based on regular-grid measurements across a surface in a separable coordinate system, allowing the calculations to be performed by spatial discrete Fourier transform (DFT), which causes spatial truncation effects and winding error. Therefore, a set of techniques has been developed to avoid the spatial truncation effects. One of them is Helmholtz Equation least-squares (HELS) [7,8], and another similar method is the Equivalent Source Method (ESM) [9][10][11][12]. A third closely related technique is the patch inverse boundary element method (BEM) [13][14][15]. The fourth and last technique that should be mentioned here is statistically optimized NAH (SONAH) [16][17][18], which can reconstruct the sound field from a partial holographic surface and overcome strict requirements on the size of the measurement surface. Jacobsen [19] confirmed that SONAH can advantageously be based on measurement of the sound pressure and the normal component of the particle velocity to overcome the limitation that all sources should be on one side of the measurement plane while the other side must be source free. Hald [20] provided an overview of the basic theory of SONAH and investigated the sensitivity of the inherent error distribution to changes in the parameters of the SONAH method. Kim [21] proposed an improved SONAH procedure and reconstructed the source locations and radiation patterns of two loudspeakers in a subsonic moving fluid medium. Zhu [22] analyzed a modulated sound source with SONAH technology and applied it to loudspeaker tests and air compressor tests. Hald [23] introduced scaling of the applied plane wave functions that took the evanescent wave amplification into account, to support the regularization methods in finding the best compromise between noise suppression and reconstruction accuracy. However, most of the existing research has focused on statistically optimized planar NAH (SOPNAH). Cho [24] visualized the source regions of a refrigeration compressor accurately by using fewer measurement positions than conventional NAH. Yang [25] identified the noise sources in an underwater cylinder based on measurements of particle velocity in simulations and experiments. Wall [26] used the inclusion of multiple wave functions to modify SONAH and obtained an accurate near-field reconstruction. Later, Wall [27] combined an equivalent wave model (EWM) and SONAH for the reconstruction of near-field pressures in multisource environments with lower errors and fewer measurements, which was used to reconstruct apparent source distributions between 20 and 1250 Hz at four engine powers and produced accurate field reconstructions for both inward and outward propagation [3]. Nevertheless, most of these existing studies focused on planar sound sources, and only some on cylindrical sound sources. Furthermore, there is currently a lack of research on the high-accuracy reconstruction of sound fields radiated by conical sources, suitable for the conical shells of underwater vehicles.
NAH is a linear, ill-posed inverse problem due to the existence of strongly decaying evanescent waves. Regularization provides a way of generating a solution to the linear problem in an automated manner. Therefore, regularization methods are often used to overcome ill-posed problems in NAH, and the selection of the regularization parameter is very important. Saijyou [28] proposed a method for estimating an appropriate regularization filter when applying the K-space data extrapolation method, to deal with measured pressure contaminated by noise. Pascal [29] determined the regularization parameter using the Morozov discrepancy principle in SONAH and made significant improvements to standard NAH. Gomes [30] compared the three regularization parameter choice methods (PCMs) used in NAH: GCV, the L-curve and the Normalized Cumulative Periodogram (NCP). He [31] first combined the two most commonly used regularization methods, Tikhonov regularization and truncated singular value decomposition (TSVD), into ESM-based NAH. However, all these regularizations were investigated only in NAH and SOPNAH, and it is difficult to obtain a stable and meaningful solution for conical SONAH with a single regularization method. Therefore, it is necessary to study a new regularization method for conical SONAH to strengthen the stability of the reconstruction.
In existing research, our group has carried out related work which provided important evidence for noise monitoring and control [32,33]. On this basis, conical SONAH is proposed in this paper to identify and localize noise sources and to obtain three-dimensional (3D) visualization of the sound radiation of a conical sound source. In this method, the elementary cylindrical wave is transformed into the characteristic elementary wave suitable for the radiated sound field of a conical surface through the cylinder-cone orthogonal coordinate transformation method, forming the matrix of characteristic elementary waves in the conical spatial domain. Then, the sound pressure is expressed as the superposition of these characteristic elementary waves. Next, the superposition coefficients are solved according to the principle of superposition of the wave field. Lastly, the reconstructed conical pressure is expressed as a linear superposition of the holographic conical pressure. Besides, aiming at the above ill-posed inverse problem, a combined regularization method that combines the advantages of TSVD and Tikhonov regularization is proposed to further improve reconstruction accuracy, in which the different singular values of the acoustic transfer matrix are treated differently. The low-frequency components without noise, corresponding to the larger singular values, are processed by the TSVD method, and the smaller singular values, corresponding to the high spatial frequency components containing measurement noise, are processed by Tikhonov regularization. The stability and accuracy of the proposed method were verified through a set of computer simulations. Subsequently, an experiment was designed in an anechoic chamber to validate the effectiveness of the proposed method and the combined regularization method.
The remainder of the paper is organized as follows: In Section 2, the basic implementation procedure of conical SONAH with combined regularization method is provided. In Section 3, numerical case studies on conical SONAH are provided to study the influence of cone angle on the accuracy of acoustic field reconstruction and the performances are comparatively studied with the conventional cylindrical SONAH. The combined regularization method is also comparatively studied with the traditional single regularization method. In Section 4, a test bed with shell structures and the test implementation steps are introduced, and the performances of the proposed conical SONAH and combined regularization method are also given. In Section 5, the conclusions are summarized. Generally, this study is intended to provide a reference for the engineering application of conical SONAH and can benefit vibration and noise monitoring, reduction, and control for mechanical systems.
Implementation Process of Conical SONAH
The conical SONAH is based on the cylindrical SONAH. First, the elementary cylindrical wave is transformed into the characteristic elementary wave suitable for the radiated sound field of a conical surface through the cylinder-cone orthogonal coordinate transformation method, and the matrix of characteristic elementary waves in the conical spatial domain is formed. Then, the sound pressure is expressed as the superposition of these characteristic elementary waves. Next, the superposition coefficients are solved according to the principle of superposition of the wave field with the combined regularization method. Lastly, the reconstructed conical pressure is expressed as a linear superposition of the holographic conical pressure. The implementation process of conical SONAH is given in Figure 1. The detailed implementation process is as follows. From the equation of acoustic waves of small amplitude in an ideal fluid medium, the Helmholtz equation of a time-independent, single-frequency acoustic field can be obtained [24]:

∇²p(x, y, z) + k²p(x, y, z) = 0  (1)

where p(x, y, z) is the spatial sound pressure as a function of position, k = ω/c = 2π/λ is the acoustic wavenumber, c is the sound speed and λ is the wavelength. For Equation (1), Cartesian coordinates can be converted into cylindrical coordinates by setting x = r cos θ, y = r sin θ, with the Laplace operator expressed in cylindrical form (Equation (2)). In these cylindrical coordinates, Equation (1) is solved by the separation-of-variables method, with the solution written as p(r, θ, z), the spatial sound pressure in the cylindrical coordinate system (Equation (3)). Substituting Equation (2), the factor p_z depends only on the coordinate z and can therefore be set equal to the constant −k_z² (Equation (5)); similarly, the term related only to the angle θ can be separated (Equation (6)). Substituting Equations (5) and (6) into Equation (4), and considering the case k_z² ≤ k², the traveling wave solution of the Helmholtz equation in the cylindrical coordinate system is obtained as

p(r, θ, z) = Σ_n e^{inθ} ∫ [D_n^(1)(k_z) H_n^(1)(k_r r) + D_n^(2)(k_z) H_n^(2)(k_r r)] e^{ik_z z} dk_z  (10)

where H_n^(1) and H_n^(2) are the two kinds of Hankel function and D_n^(1)(k_z) and D_n^(2)(k_z) are unknowns. In the case of acoustic radiation outside a cylinder, all radiating sources are contained in the cylinder r = a; there is no cylindrical wave converging inward and only outward-propagating waves are considered, so in Equation (10), D_n^(2)(k_z) = 0. This yields the solution of the exterior cylindrical radiation problem (Equation (11)), and the elementary wave functions in the spatial frequency domain, determined by the wavenumber vector K_m = (n, k_z), are given by Equation (12). In cylindrical SONAH, for Equation (12), the radius r of a given cylindrical measurement surface is fixed, and the radius of the cylindrical sound source is also fixed. Thus, for an actual cylindrical sound source, its axial coordinate z and circumferential coordinate θ vary in the cylindrical coordinate system, so the expression for the elementary cylindrical waves contains two variables. For a conical source, however, the radius varies with the axial distance while the cone angle of the measurement surface or source surface is fixed. For an actual conical source, given the size of the cone angle, a point on the cone is determined solely by its axial and circumferential coordinates.
Although the radial size of the conical sound source is also variable, it can be expressed through its geometric relationship with the axial coordinate and the cone angle. In view of the above analysis, and considering the similarity between the conical and cylindrical surfaces, cylindrical SONAH is extended to the conical surface. To further illustrate its implementation, the conical near-field acoustic holographic measurement scheme is shown in Figure 2, with the measurement surface expanded on the right. The outermost conical surface is the holographic measurement surface, and the length along the conical generator is L. The conical surface with upper-surface radius r s is the reconstruction surface, and the sound sources are all contained within the conical surface with upper-surface radius r a. The front view of the measurement surface is shown in Figure 3. The cone angle remains the same over all conical surfaces, and the source surface is located at the dotted line. Therefore, the cylindrical invariant in Equation (12) can be expressed in terms of the cone angle of the conical surface, and Equation (12) can be rewritten in the corresponding conical form (Equation (13)). If the integral operation is discretized, the sound pressure on the holographic and reconstructed conical surfaces can be expressed accordingly (Equation (14)). Similarly, according to the superposition principle of the wave field, characteristic surface waves with the same wavenumber vector can be superposed. Then, for the reconstruction conical surface, the characteristic surface wave at any point r S = (r S , θ, z) with wavenumber vector K m can be obtained by superposing the characteristic surface waves with the same wavenumber vector K m at all points r Hn = (r H , θ n , z n ) (here r H = z n × tan β + rh 1 ) on the holographic surface (Equation (15)).
where r Hn = (r H , θ n , z n )(n = 1, 2, · · · , N) is a measuring point of sound pressure on the holographic cone, M is the number of characteristic surface waves contained in the complex pressure on the reconstructed cone and holographic cone, C β n (r S ) is the superposition coefficient.
Substituting Equation (15) into Equation (14), a system of linear equations is obtained. To make the solution unique, M ≥ N is required, and the system can be expressed in matrix form. To obtain the coefficient matrix C β n (r S ), Equation (15) first gives an optimal estimate for the finite subset of elementary cylindrical waves Φ β Km (r S ); similarly, for Equation (16), the optimally estimated sound pressure on the reconstruction surface can also be given by appropriate weighting. Next, a regularization method is used to suppress the influence of the small-scale evanescent waves, and the regularized form of the above equation is obtained, where A H β is the conjugate transpose of A β , θ is the regularization parameter that acts as a filter, and I is the identity matrix.
Theories of the Combined Regularization Method
Statistically optimal near-field acoustic holography is an acoustic inverse problem. The ill-posed problem is very sensitive to measurement errors, but the actual measurement data contains errors inevitably, so it cannot be solved directly by conventional methods. However, it does not mean that a meaningful solution cannot be obtained. A combined regularization method based on TSVD and Tikhonov is proposed to obtain a stable and meaningful solution to the inverse problem. Before introducing the combined regularization method, it is necessary to have a detailed overview of the role of TSVD and Tikhonov regularization in sound field reconstruction:
Truncated Singular Value Decomposition (TSVD)
The main idea of TSVD is to find a matrix A k that best approximates the matrix A in the 2-norm sense, so that the contribution of the smallest singular values to the solution of the linear system Ax = b is filtered out (Equation (22)). In this method, all singular values smaller than the threshold are discarded, which reduces the condition number of the acoustic transfer matrix and alleviates its ill-posedness. However, the truncated small singular values contain the high spatial frequency information of the sound field; although truncation suppresses the amplification of noise and other errors hidden in this high-frequency information during reconstruction, the lost high-frequency detail reduces the accuracy of the reconstruction. In addition, for Equation (22), the key lies in the selection of the truncation point: a very small truncation point loses a large amount of the high-frequency components of the sound field, while a very large truncation point cannot suppress the influence of noise. The truncation point is determined in this paper by the contribution rate CR [34] of the singular values (Equation (23)), where σ j is the singular value. In engineering practice, the contribution rate of singular values is usually taken as 1~5%.
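A minimal numerical sketch of such a TSVD solution with a contribution-rate cut-off is shown below. It is illustrative only, and the contribution-rate definition used in the code (each singular value's share of the singular-value sum) is an assumption, since Equation (23) is not reproduced here.

# Minimal sketch of a TSVD solution of A x = b with a contribution-rate cut-off.
# The contribution-rate definition (share of the singular-value sum) is assumed.
import numpy as np

def tsvd_solve(A, b, cr_threshold=0.04):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cr = s / s.sum()                               # assumed contribution rate of each singular value
    k = max(int(np.sum(cr >= cr_threshold)), 1)    # truncation point: retain the >= 4% contributors
    coeffs = (U[:, :k].conj().T @ b) / s[:k]       # projections divided by the retained singular values
    return Vt[:k].conj().T @ coeffs, k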
Tikhonov Regularization
Another way of suppressing the effect of errors on the right-hand side is to add a constraint when solving the original discrete problem, which limits the 'size' of the solution (measured by an appropriate norm) to make the solution smoother. The problem can therefore be described as an optimization problem (Equation (24)), where λ is the regularization parameter and Ω(x) is a smoothing norm. This regularization method is known as Tikhonov regularization. The balance between the residual norm and the 'size' of the solution is controlled by λ; if λ = 0, Equation (24) degenerates into a least-squares problem and the solution is not regularized. When solving Equation (24) with a discrete smoothing norm Ω(x) = ‖Lx‖², Equation (24) becomes Equation (25), where L is a real regularization matrix of dimension p × n, usually with p ≤ n. Equation (25) is called the general form of Tikhonov regularization; when L = I n , it is called the standard form. Equation (24) can then be expressed as a Tikhonov problem with an explicit solution, and if L = I n the Tikhonov solution can be written through the SVD, so that the filtering factor of Tikhonov regularization is f i = σ i ²/(σ i ² + λ²). As can be seen from Equation (28), Tikhonov regularization modifies all singular values of the acoustic transfer matrix: not only the smaller singular values corresponding to the high spatial frequency evanescent waves, but also the larger singular values corresponding to the low spatial frequency propagating waves, which do not contain noise. In this way, the noise-free low-frequency components are distorted, which affects the accuracy of the sound field reconstruction. Moreover, improper selection of the regularization parameter causes over-filtering or under-regularization.
Combined Regularization Method
As can be seen from Equations (22) and (28), the difference between the two regularization methods lies in how many singular values they correct. Based on this, the combined regularization method is proposed, in which the singular values of the acoustic transfer matrix are treated by different regularization methods. Figure 4 shows the schematic diagram of the combined regularization method. Firstly, the truncation point in TSVD is selected by an appropriate contribution rate to truncate the singular values of the matrix. Then, because the singular values before the truncation point correspond to the low spatial frequency components of the sound field, which do not contain high-frequency noise, they are not processed. Finally, since the small singular values after the truncation point correspond to the high spatial frequency components of the sound field containing high-frequency noise, measurement errors such as noise would be amplified with the reconstruction of high-frequency evanescent waves; it is therefore necessary to use Tikhonov regularization to modify these small singular values. Based on the above analysis, the solution of the combined regularization method can be obtained (Equation (29)). From this equation, the combined regularization method also involves the selection of the regularization parameters, k in TSVD and λ in Tikhonov. The truncation point k is determined by the method of Equation (23). In the latter term of Equation (29), the regularization parameter λ in Tikhonov is taken as the singular value corresponding to the truncation point, which ensures that there is only one undetermined parameter in the combined regularization and that all singular values contribute to the solution. This not only avoids the loss of high-frequency components of the sound field caused by the TSVD method, but also restrains the influence of measurement errors such as noise on the reconstruction accuracy. Based on the above analysis, the final solution obtained by the combined regularization method is given by Equation (30), where x T&Tik is the solution of the combined regularization and k is the truncation point.
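To make the combined scheme concrete, a minimal sketch is given below: the k largest singular components are left unfiltered, and the remaining ones receive a Tikhonov filter factor with λ taken as the singular value at the truncation point, as described above. It is an illustration of the described procedure, not the authors' implementation.

# Minimal sketch of the combined TSVD + Tikhonov regularized solution of A x = b.
import numpy as np

def combined_regularized_solve(A, b, k):
    """Keep the k largest singular components unfiltered; Tikhonov-filter the rest with lambda = s[k-1]."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam = s[k - 1]                             # regularization parameter at the truncation point
    f = np.ones_like(s)                        # filter factors: 1 for the large singular values
    f[k:] = s[k:]**2 / (s[k:]**2 + lam**2)     # Tikhonov factors for the small singular values
    beta = U.conj().T @ b                      # data projected on the left singular vectors
    return Vt.conj().T @ (f * beta / s)

In practice, k would be chosen with the contribution-rate rule sketched in the TSVD section.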
Quantitative Study on the Influence of Cone Angle
In cylindrical SONAH, when the axial size of a cylindrical sound source is fixed, the radiated sound field is affected by the radius of the radiation source. However, in a conical source, when the axial size is constant, the radiated sound field of the source is affected by both cone angle and radius. Therefore, it is necessary to study the influence of cone angle on the accuracy of acoustic field reconstruction. Conical SONAH is used to explore the reconstruction effect at different cone angles.
The simulation settings are as follows: a line array of pulsating spheres with gradually increasing radius placed along (x, y) = (0, 0), spanning from z = 0 to 1.0 m, is adopted to simulate the conical sound source. The distance between them is kept much less than half the sound wavelength and the frequency is 300 Hz. The radius of the small bore of the measurement surface is 0.1 m, the axial dimension is 1.0 m, the circumferential interval is 18°, the axial interval is 0.05 m, and the reconstruction distance (the distance from the measurement surface to the reconstruction surface) is 0.05 m. During the simulation process, random noise with a signal-to-noise ratio of 20 dB is applied to the measurement surface.
The amplitude error of a reconstructed point is defined as in [34] (Equation (31)), where L e is the total relative error, P si is the sound pressure at a point on the reconstruction surface and P ri is the theoretical sound pressure at that point. Based on the sound pressure on the measurement/holographic surface, the proposed conical SONAH is used to reconstruct the sound field on the reconstruction surface, and Figure 5 compares the reconstructed and theoretical values on the reconstruction surface so that their agreement can be observed clearly. Analyzing the reason for the observed behaviour: for a conical shell sound source, the holographic measurement of sound pressure is limited in the axial direction in practical applications, so there is a discontinuity at the edge of the measurement aperture, causing leakage of spatial frequency. Hence, the reconstructed values at the edge of the reconstruction surface diverge and become meaningless. Furthermore, Figure 6 shows how the amplitude of the characteristic waves in the conical spatial domain varies with radius. As can be seen, when the radius of the measurement surface decreases, the amplitude of the characteristic surface wave also decreases, and the number of small singular values of the matrix increases. At the same time, the values of the superposition coefficient vector corresponding to the solved sound pressure at the small bore become unstable, which leads to a larger reconstruction error in the small section of the cone than in the large bore.
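The error expression of Equation (31) appears to have been lost during extraction; a commonly used total relative error consistent with the variables L e , P si and P ri (the exact normalization is an assumption here) is:

\[ L_e = \sqrt{\frac{\sum_i \left| P_{si} - P_{ri} \right|^{2}}{\sum_i \left| P_{ri} \right|^{2}}} \times 100\% \]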
To further illustrate the above conclusions, Figure 7 shows the total relative errors of reconstruction at different cone angles and frequencies. When the cone angle increases from 15° to 75°, the total relative error of reconstruction also increases. The reason is that the characteristic wave applicable to the conical sound source is obtained from the elementary cylindrical wave through the orthogonal coordinate transformation: in the cylindrical wave function of Equation (12), the radius is constant, and once θ is determined, the elementary cylindrical wave is related only to the axial coordinate. In Equation (13), the radius of a given cone varies; when θ is fixed, the characteristic wave of the conical SONAH is related not only to the axial coordinate but is also affected by the radius. Moreover, as the cone angle increases, the range of variation of the cone radius also increases, leading to a greater difference between the maximum and minimum singular values of the acoustic field transfer matrix, aggravating the ill-conditioning of the transfer matrix and enhancing the amplification of measurement errors such as noise in the reconstruction process, so that the reconstruction error increases. Through the above investigations, the influence of the cone angle on the accuracy of acoustic field reconstruction is obtained, which indicates that the proposed method is practical in engineering.
Comparative Study of Conical and Cylindrical SONAH
Conical SONAH was proposed in Section 2.1; considering that cylindrical SONAH can also be used to reconstruct the sound field of the conical sound source directly, Figure 8 shows a schematic diagram of the two methods. The direct reconstruction of cylindrical SONAH is based on cylindrical measurement, and the characteristic surface wave matrix of the cylindrical spatial domain is used to reconstruct the radiated sound field of the conical sound source. In comparison, the conical spatial domain characteristic surface wave matrix is used in the conical SONAH proposed in this paper to reconstruct the sound field from conical measurements. As can be seen from Figure 10, in comparison with cylindrical SONAH, the proposed conical SONAH method still has better reconstruction performance when the cone angle increases up to 40°. Furthermore, Figure 11 shows the total relative errors of the two methods over the whole reconstruction surface when the cone angle varies from 15° to 75°. For conical sound sources with different cone angles, the overall relative errors of the proposed conical SONAH are smaller than those of cylindrical SONAH, with a reduction of about 15%. When the cone angle increases, the advantages of the proposed method become more evident, and the maximum reconstruction error can be reduced by about 30%. Therefore, it can be seen from the above analysis that the proposed conical SONAH method can reconstruct the sound field radiated by the conical sound source more effectively than cylindrical SONAH, indicating the effectiveness of the proposed method. When the cone angle is 40°, Figure 12 shows that the overall relative error of the proposed method is about 25% lower than that of direct reconstruction at different SNRs and frequencies. Therefore, it is verified that the proposed method has higher accuracy and robustness under different reconstruction parameters.
Quantitative Study of Combined Regularization Method
Just like cylindrical SONAH, the conical SONAH method is an acoustic inverse problem, and its ill-posedness makes it difficult to obtain the true solution of the problem by standard numerical methods. Therefore, it is necessary to use a suitable regularization method to obtain a stable and meaningful solution of the inverse problem.
Ill-Condition Analysis of Acoustic Transfer Matrix
As described in the previous section, the direct inverse method cannot be used because of the ill-posedness arising from the large condition number of the acoustic field transfer matrix. Before introducing the regularization method, it is necessary to study several important factors affecting the condition number of the transfer matrix in conical SONAH. Keeping the other simulation parameters the same as those presented in Section 3.2, the cone angle, measuring distance and microphone spacing are varied, and the corresponding condition number of the acoustic field transfer matrix in conical SONAH is calculated. The obtained results are shown in Figures 13-15.
From the above figures, we can draw the following conclusions: when the size of cone angle and the microphone spacing changes, the condition number of the matrix does not change much and when the measuring distance increases, the condition number tends to increase. However, it is worth noting that no matter how the conditional number of the matrix changes, it is always above a large value. Therefore, the acoustic transfer matrix is seriously ill-conditioned and needs to be dealt with by the regularization method.
Comparative Study of Different Regularization Methods
Simulation settings are as given in Section 3.1, and the case of a cone angle of 30° is taken as an example to compare different regularization methods for the conical SONAH. Figure 16 shows the reconstruction obtained with Tikhonov regularization and the Hald criterion for parameter selection. When the SNR is 30 dB, the reconstructed values are in good agreement with theory; when the SNR increases, the reconstruction becomes worse. To analyze the reasons, note that the Hald criterion selects the regularization parameter according to Equation (32), where SNR is the signal-to-noise ratio and d is the distance between the measurement and reconstruction surfaces. As can be seen from Equation (32), when the signal-to-noise ratio increases, the regularization parameter decreases by an order of magnitude, which leads to under-regularization of the problem; the small singular values are then not dealt with effectively, and measurement errors such as noise are amplified. Further, Figure 17 shows the reconstruction obtained with Tikhonov regularization and the GCV parameter selection method. It is also difficult to achieve effective sound field reconstruction, and the reconstruction error obtained from Equation (31) is 97.79%. Repeated simulation experiments showed that this method selects too small a regularization parameter and cannot realize reconstruction at different SNRs. When reconstruction can be achieved by the Hald criterion (SNR = 30 dB), a comparison of the regularization parameters shows that the parameter obtained by GCV is about 10 times smaller. Therefore, Tikhonov regularization combined with the GCV method suffers from under-regularization and cannot effectively realize sound field reconstruction. Based on the above analysis, a single regularization method cannot realize effective reconstruction of the acoustic field; therefore, the combined regularization method is proposed in Section 2.2.3. In this method, the key is the determination of the truncation point k. Table 1 shows the influence of the singular value contribution rate on the reconstruction error. When the contribution rate of the selected singular values increases, the overall relative error of sound field reconstruction first decreases and then increases. The reason is that, when the contribution rate is less than 4%, the number of truncated smaller singular values increases with the contribution rate, which more effectively suppresses the amplification of reconstruction errors caused by noise.
At the same time, the retained large singular values contain most of the energy of the sound field, so the reconstruction error tends to decline. When the contribution rate exceeds 4%, the number of truncated small singular values keeps growing; the influence of noise on the reconstruction is still suppressed, but high-frequency evanescent wave energy needed for the reconstruction is lost, so the reconstruction error increases. Therefore, a singular value contribution rate of 4% is used in this paper. On this basis, to illustrate the advantages of the combined optimization method, Figure 18 shows the reconstructed sound pressure amplitudes of the TSVD and the combined regularization methods for a cone angle of 30°, with the Hald criterion used to select the regularization parameter. Compared with the TSVD method, the reconstruction values of the combined optimization method are in better agreement with the theoretical values at all points of the reconstruction surface.
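To make the combined strategy concrete, the following is a minimal numerical sketch of one way it could be implemented, assuming that the contribution rate of a singular value is its share of the sum of all singular values and that the truncated (small) singular values are damped with standard Tikhonov filter factors rather than discarded. The function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def combined_regularized_solve(G, p_meas, contribution_rate=0.04, lam=1e-3):
    """Estimate expansion coefficients c from hologram pressures p_meas ≈ G @ c.

    Large singular values (individual contribution above `contribution_rate`)
    are inverted exactly; the remaining small singular values are not discarded
    (as in plain TSVD) but damped with Tikhonov filter factors s^2 / (s^2 + lam^2).
    Illustrative sketch only, not the paper's exact implementation.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    share = s / s.sum()                      # assumed definition of the contribution rate
    keep = share >= contribution_rate        # "large" singular values: propagating waves

    filt = np.empty_like(s)
    filt[keep] = 1.0 / s[keep]                           # no regularization on large values
    filt[~keep] = s[~keep] / (s[~keep]**2 + lam**2)      # Tikhonov-filtered inverse on small values

    # Regularized pseudo-inverse applied to the measured hologram pressures
    return (Vt.conj().T * filt) @ (U.conj().T @ p_meas)
```

Setting `contribution_rate` above 1 (so that no singular value is exempted) reduces the scheme to ordinary Tikhonov regularization, while `lam=0` recovers the plain pseudo-inverse; both limits are convenient sanity checks.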
The reconstruction performance of the two methods is further studied under different cone angles, and the results are shown in Figure 19. At each cone angle, the overall relative error of the reconstruction by the combined regularization method is smaller than that of the TSVD method, and the maximum reduction is about 5%. The reason is as follows: in the TSVD method, some small singular values are discarded directly, which suppresses to some extent the amplification of measurement errors such as noise, but the detailed information of the high-frequency evanescent waves in the sound field is discarded as well, so part of the sound field energy used for reconstruction is lost. In the combined regularization method, the small singular values beyond the truncation point are not discarded but further filtered. The high-frequency details of the sound field are therefore not lost in the reconstruction, while the measurement errors such as noise that would be amplified together with the evanescent waves are suppressed, so the reconstruction error is smaller. All the above studies are conducted at a fixed signal-to-noise ratio. Figure 20 therefore shows the overall relative errors of the combined regularization method for reconstructions at different signal-to-noise ratios, in order to study the stability of the combined optimization method. It can be seen that, as the SNR increases, the overall relative error of the sound field reconstruction tends to decrease, but only slightly. For the different cone angles, when the signal-to-noise ratio decreases from 50 dB to 5 dB, the overall relative error of the reconstruction increases by only about 2%, which demonstrates that the proposed combined optimization method can effectively suppress the influence of noise and other measurement errors on the reconstruction results, reflecting the stability of its solution.
Introduction of Experimental System
To evaluate the actual performance of the proposed combined method for large conical surfaces, a test bed was constructed, as shown in Figure 21. The test bed is a combined structure of a hemisphere, a cylinder, and cones. The hemisphere and cones are connected to the cylinder through interference fits and can be disassembled and reassembled according to research requirements, as shown in Figure 22.
The test bench is mainly composed of a plate-and-shell combination structure, eccentric block vibration motors, and a bracket. The submarine hull is simulated by the plate-and-shell structure, and the submarine power plant, i.e. the vibration source, is simulated by two eccentric block vibration motors. The plate-and-shell structure mainly comprises a cylindrical shell, reinforcing ribs, and a horizontal plate, the latter being used to install the eccentric block vibration motors. A motor-supported vibration-damping structure is added to the horizontal plate to simulate the power equipment base of the submarine. This structure is composed of a supporting platform and damping rubber, and its four supporting legs are elastically connected to the transverse plate of the shell through the damping rubber and bolts. Four shell rubber springs and two brackets provide elastic isolation of the plate-and-shell structure from the ground, simulating the suspension of a submarine in water. When the motors are turned on, the vibration generated by the vibration source is transmitted to the cylindrical shell through the motor support structure, and then to the cone and hemisphere through the interference-fit contact coupling among the three parts, exciting vibration over the whole composite structure surface and thus radiating a sound field into the surrounding three-dimensional space.
Sensor Arrangement and Signal Acquisition
The microphone array consists of 16 sensors, whose parameters are shown in Table 2. A single-reference-source transfer function method is adopted for the measurement. To reduce reflections from the surrounding walls and to approximate a free sound field as closely as possible, the test was carried out in a semi-anechoic chamber. The measurement scheme is shown in Figure 23. A 64-channel HBM data acquisition system is used; it collects the analog signals output by the various sensors, converts them into digital signals, and transfers them to the computer for processing to obtain the final data. At the same time, the measured waveforms and values are displayed in real time to monitor the status of the physical quantities.
Test Implementation Steps
In the conical SONAH test, only the large motor shown in Figure 23 is turned on. The vibration generated by the motor radiates a sound field into space through the support and the surface of the conical shell.
The microphones are fixed on the microphone bracket, and both the number of microphones and the spacing between them can be adjusted. In this experiment, 16 microphones were used with a spacing of 10 cm. The bracket is kept parallel to the axis of the shell, 15 cm away from its surface, and is rotated around the axis of the shell to form a conical test surface conformal with the shell surface, so as to measure the sound pressure on a cone at a fixed distance from the shell. The distance of the microphones from the shell is then adjusted to measure the sound pressure on the surface of the conical shell itself. Phase information of the sound pressure at all measurement points is calculated from the cross-spectrum of the sound pressure signals with the fixed reference point #1 shown in Figure 23 (see the sketch after these steps). The specific steps of the experiment are as follows: (1) As shown in Figure 23, connect the experimental equipment, set the relevant parameters, pre-sample and roughly analyze the measurement data, and ensure that all equipment and channel signals are in good condition. (2) Start the large motor alone and slowly adjust the motor speed to 1800 r/min. The vibration generated by the motor is transmitted to the cylindrical shell through the motor support, and then to the cone through the connection between the cone and the cylindrical shell, causing the conical shell to vibrate and thus radiate a sound field into space. (3) Rotate the microphone bracket from one side of the shell to the other at circumferential intervals of 22.5° and collect data at each circumferential angular position.
Since the experimental conditions are limited, only the radiated sound pressure of the upper part of the conical shell is measured. (4) The measurement distance is then adjusted to 0.01 m from the shell surface, and the above steps are repeated to obtain the radiated sound pressure on the conical shell surface under the same conditions.
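The single-reference transfer function measurement used above can be summarized in a short sketch: the phase of each scan point relative to the fixed reference microphone is recovered from the cross-spectrum, so that scans acquired at different times can be combined into one coherent hologram. The snippet below is an illustrative assumption of how such processing is commonly done with scipy's spectral estimators; it is not the processing code used in the paper.

```python
import numpy as np
from scipy.signal import csd, welch

def referenced_pressure(p_point, p_ref, fs, f_target, nperseg=4096):
    """Complex sound pressure at one scan point, phase-referenced to microphone #1.

    The amplitude comes from the autospectrum of the point microphone and the
    phase from the cross-spectrum between the point and the reference channel,
    both evaluated at the analysis frequency f_target (illustrative sketch).
    """
    f, S_pr = csd(p_point, p_ref, fs=fs, nperseg=nperseg)   # cross-spectrum point/reference
    _, S_pp = welch(p_point, fs=fs, nperseg=nperseg)        # autospectrum of the point
    i = np.argmin(np.abs(f - f_target))                     # nearest frequency bin
    amplitude = np.sqrt(S_pp[i])                            # relative amplitude (scaling depends on estimator settings)
    phase = np.angle(S_pr[i])                               # phase relative to the reference channel
    return amplitude * np.exp(1j * phase)
```

Because every scan position is referenced to the same microphone, the resulting complex pressures share a common phase origin, which is what the SONAH transfer matrix requires.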
Analysis of Test Data
The experimental data are processed by the conical SONAH, and the amplitude error of the reconstruction is defined as in Section 3.1. Figure 24 shows the sound pressure amplitude distributions obtained by actual measurement on the measurement surface and on the reconstruction surface. The sound pressure amplitude decreases towards the smaller section of the cone. The reason is that the vibration is transmitted from the motor inside the cylindrical shell to the conical shell through the connection between the cylinder and the large section of the cone, and the vibration energy gradually attenuates during this transfer. Moreover, the sound pressure amplitude on both sides of the conical surface is larger than at the middle position. This is because the two parts are connected by an interference fit: owing to the slight deformation of the conical shell and its large stiffness, the contact at the connection is not uniform, and the contact area on both sides is larger and the coupling firmer, so larger vibrations can be transmitted there. Figure 25 shows the reconstructed sound pressure amplitudes obtained by the cylindrical SONAH direct reconstruction method (traditional method) and by the conical SONAH with the different regularization methods. Compared with Figure 24b, the sound pressure amplitude directly reconstructed by the cylindrical SONAH differs considerably from the actual measurement. Within the conical SONAH, among the three regularization methods, the sound pressure distribution reconstructed by the combined optimization method is consistent with the measured values, which preliminarily indicates that the proposed method achieves a more effective reconstruction. Table 3 shows the relative errors of the reconstructed maximum for the different methods, where e_max is the relative error of the reconstructed maximum, p_true is the measured sound pressure, and p_rec is the reconstructed sound pressure. The reconstruction errors of the conical SONAH method are smaller than those of the cylindrical SONAH. Moreover, within the conical SONAH, the relative errors of the reconstructed maximum obtained with the existing regularization methods are larger than that of the combined regularization method proposed in this paper. Compared with the cylindrical SONAH, the reconstruction error of the conical SONAH with the proposed combined regularization method is reduced by 23.56%.
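A plausible form of the maximum-amplitude error used above, written here as an assumption consistent with the surrounding definitions (the exact expression in the paper may differ), is

\[
e_{\max} = \frac{\left|\, \max |p_{\mathrm{rec}}| - \max |p_{\mathrm{true}}| \,\right|}{\max |p_{\mathrm{true}}|} \times 100\%,
\]

where the maxima are taken over all points of the reconstruction surface.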
In the above discussion, the reconstruction effect of the two methods is only roughly assessed as a whole, and the relative error is only compared at the maximum of the reconstruction. To further explore the reconstruction over the entire reconstruction surface, the surface is divided into different areas expanding from the centre to the edge, as illustrated in Figure 26, so that the overall relative errors of the reconstruction in the different areas can be compared for the two methods. Based on the above discussion, only the conical SONAH with the combined regularization method is compared with the traditional cylindrical SONAH. Figure 27 shows the reconstruction errors of the two methods in the different regions at different frequencies. First, for both methods and at all frequencies, the overall relative error of the reconstruction increases as the reconstruction region expands from the centre to the edge, which is consistent with the results in Section 3.3. On the one hand, the actual measurement can only be carried out on a finite surface in the axial direction of the conical shell, so the measurement signal is discontinuous and spatial frequency leakage occurs, which increases the reconstruction error at the edge nodes of the reconstruction surface. On the other hand, it can be seen from the variation of the characteristic surface waves with the actual cone source radius in Figure 28 that the smaller the measurement radius, the smaller the amplitude of the characteristic surface waves and the larger the number of small singular values in the transfer matrix. Therefore, when the small-section part of the conical surface is reconstructed, the small singular values magnify the measurement errors such as noise, resulting in a larger reconstruction error there. Furthermore, the proposed method has smaller reconstruction errors at all frequencies and in all regions of the reconstruction surface, with an error reduction of up to about 15% in each region. The reason is that, compared with the direct reconstruction method of the cylindrical SONAH, the proposed conical SONAH constructs characteristic surface waves in the spatial domain of the conical surface through the cylindrical-conical coordinate transformation, then expresses the spatial sound pressure as a superposition of these characteristic surface waves to build the acoustic transfer matrix, and finally realizes the reconstruction of the sound field radiated by the conical source. In the cylindrical SONAH, by contrast, the acoustic transfer matrix is built from elementary cylindrical waves; for a conical sound source, however, the radius varies with the axial position, and it is obvious from Figure 28 that cylindrical waves with a fixed radius cannot correctly represent this variation, which causes a greater reconstruction error. The proposed conical SONAH therefore has clear advantages over the direct reconstruction method of the cylindrical SONAH.
In the proposed conical SONAH technique, a combined regularization method is formulated according to the characteristics of conical sound sources. In this method, the large singular values, which correspond to propagating waves that are little affected by measurement noise, are left unprocessed, so that the main energy of the sound field is preserved and the distortion caused by regularization is avoided. The smaller singular values, which correspond to evanescent wave components of high spatial frequency, are treated with Tikhonov regularization, so that measurement errors such as noise are not amplified together with the evanescent waves during reconstruction, which further improves the reconstruction accuracy of the proposed method.
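In standard singular value decomposition notation, this selective damping can be written as filter factors applied to the singular values σ_i of the transfer matrix; the split index k and the parameter λ below follow the combined scheme described above, with the symbols chosen here for illustration rather than taken verbatim from the paper:

\[
f_i =
\begin{cases}
1, & i \le k \quad \text{(large singular values, propagating waves)}\\[4pt]
\dfrac{\sigma_i^{2}}{\sigma_i^{2} + \lambda^{2}}, & i > k \quad \text{(small singular values, evanescent waves)}
\end{cases}
\qquad
\mathbf{c} = \sum_i f_i \, \frac{\mathbf{u}_i^{H}\mathbf{p}}{\sigma_i}\, \mathbf{v}_i ,
\]

where p is the vector of measured hologram pressures, c the vector of expansion coefficients, and u_i, v_i the left and right singular vectors.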
Based on the above experimental verification and result analysis, the conical SONAH proposed in this paper can effectively reconstruct the sound field radiated by a conical sound source at different frequencies and has higher reconstruction accuracy than the direct reconstruction method of the cylindrical SONAH. It is worth noting that no parametric optimization of the hologram array or selection of characteristic wave function bases is performed in these simulations and experiments. The conical SONAH method could be implemented even more successfully by altering the measurement parameters, such as the distance between the hologram and the source or the sensor density of the hologram, or by including more wave functions.
Conclusions
In this paper, based on the cylindrical SONAH, a conical SONAH with a combined regularization method is proposed for the sound field reconstruction of conical sound sources, using a cylinder-cone coordinate transformation to stabilize the solution of the acoustic inverse problem. The main conclusions of this study can be summarized as follows: (1) In the theoretical framework, based on the cylindrical SONAH, the elementary cylindrical waves are transformed into characteristic elementary waves suited to the sound field radiated by a conical surface through the cylinder-cone orthogonal coordinate transformation, and the matrix of characteristic elementary waves in the conical spatial domain is formed. The sound pressure is then expressed as a superposition of these characteristic elementary waves. Next, by incorporating the advantages of TSVD and Tikhonov regularization, a combined regularization method is proposed to obtain a stable and meaningful solution of the inverse problem, so that the superposition coefficients are solved according to the principle of wave field superposition. Lastly, the reconstructed conical pressure is expressed as a linear superposition of the holographic conical pressures.
(2) Simulation investigations demonstrated that the proposed conical SONAH can efficiently reconstruct the sound field of a conical sound source. When the cone angle increases, the reconstruction error of the sound field increases accordingly at all frequencies. Compared with the cylindrical SONAH, the proposed conical SONAH has better reconstruction accuracy for all cone angles, and the relative reconstruction error can be reduced by about 15%. Compared with Tikhonov regularization combined with GCV and with the TSVD method, the reconstruction error of the proposed combined method is reduced by about 90% and about 5%, respectively. The proposed combined regularization method is therefore more accurate and stable. (3) The effectiveness of the modified method was validated through a set of experiments, and the results verified its better reconstruction performance in comparison with the direct reconstruction method of the cylindrical SONAH. It is difficult to achieve an effective reconstruction of the sound field with a single regularization method, whereas the proposed combined regularization method based on TSVD and Tikhonov can effectively solve the acoustic inverse problem with high reconstruction accuracy. The relative error of the reconstructed maximum is 11.14%, compared with 34.70% for the traditional method. Furthermore, the overall relative errors in the different regions of the reconstruction surface are reduced by 10-15% at the different frequencies.
Therefore, the proposed method can effectively reconstruct the sound field radiated by the conical source. This study can provide a reference for the engineering application of conical SONAH and can benefit vibration and noise monitoring, reduction, and control for mechanical systems.
Numerical simulation of the flow over a tubercled wing
The objective of the present study is to carry out a numerical study of the flow around a NACA 0021 wing modified by the incorporation of sinusoidal tubercles on its leading edge, at a Reynolds number of 225,000. The SST k-ω turbulence model is used as closure for the incompressible governing equations. Runs have been performed for several angles of attack. Results show that, for lower angles of attack, tubercles reduce the drag coefficient with a slight increase in lift.
Introduction
Fauna and flora have always inspired humankind in its inventions and problem solving. Several technologies in all fields of science have been developed through the observation of nature. This approach, also known as biomimetics, has been widely adopted in the design of submarines and marine propellers in order to reduce drag. In particular, the performance of airfoils can be improved by generating streamwise vortices in the boundary layer using a tubercled leading edge, inspired by earlier studies carried out by marine biologists on the morphology of humpback whale pectoral flippers [1][2]. They demonstrated that the high aspect ratio combined with large sinusoidal tubercles along the flipper leading edge can cause a considerable increase in the stall angle of attack and in the maximum lift coefficient. Several studies have been performed on tubercled leading edge wings in recent years, and experimental studies in particular have demonstrated their benefits. In most studies, the use of airfoils with a tubercled leading edge showed improved performance in terms of lower drag, higher lift and a shorter separation region; in some studies, however, an opposite trend is found. Experimental studies aim to measure lift, drag and pitching moments for various angles of attack in wind tunnels, and results are generally analyzed by varying the amplitude and wavelength of the tubercles. Table 1 summarizes previous experimental studies on tubercled wings. Recent applications of CFD to solve the Navier-Stokes equations for tubercled wings are summarized in Table 2. In most studies, in-house research codes that are rarely available to researchers have been used. It is more convenient to perform studies on this topic using available commercial CFD codes such as Fluent, which can solve laminar and turbulent, incompressible and compressible, 2D and 3D, steady and unsteady flows.
The conflicting findings reported in the literature regarding the effect of tubercles on wing performance make this a challenging subject. Further studies are necessary, and the present paper constitutes one such study.
Mathematical model
Two NACA 0021 wings designed, built and tested by Bolzon et al. [21] have been used in the present study. One wing had a smooth leading edge, whereas the other had tubercles with an amplitude of 10.5 mm and a wavelength of 60 mm along its entire leading edge, as depicted in Figure 1. The flow is governed by the three-dimensional, incompressible and steady Reynolds-averaged Navier-Stokes equations. The Reynolds stress tensor R_ij is approximated using one of the turbulence models provided in the commercial CFD software Fluent. A careful analysis of the literature reveals that there is no agreement on the selection of turbulence models for the flow around tubercled wings; however, the SST k-ω model appears to give the best performance in predicting the flow structure around a tubercled wing. The computational domain has been created according to the wind tunnel experiments of Bolzon et al. [21]. The domain has an inlet section of 0.5 × 0.5 m and a length of 2.2 m, and the wing is placed at a distance of 0.45 m from the inlet. The domain has been meshed using a multi-block technique: one block, represented by a cylinder containing the wing, allows its rotation in order to obtain the desired angle of attack, and a second block covers the remaining domain. Figure 2 depicts an overview of the computational domain.
Fig. 2. Computational domain meshing
The steady Reynolds-averaged Navier-Stokes equations, the turbulence model equations and the corresponding boundary conditions have been solved numerically using the pressure-based solver of the commercial CFD code Fluent. A second-order upwind scheme has been selected to discretize the momentum and turbulence terms, and the PRESTO algorithm is used for the pressure-velocity coupling. The runs were assumed to have reached convergence once the residuals fell below 10⁻⁶ for all variables.
Results
The results of the simulations are presented as follows.
First, a mesh independence test has been performed in order to ensure that the numerical solution is independent of the grid size. This is followed by a detailed analysis of the flow around the wings in terms of lift coefficient, drag coefficient and streamlines. The results have been plotted at a Reynolds number of 0.225 million for angles of attack ranging from 0 to 20°.
Mesh independency test
A mesh independence test has been performed using three grids of 1 million, 2 million and 4 million cells for angles of attack ranging from 0 to 20° at a Reynolds number of 0.225 million. Figure 3 shows a comparison of the lift and drag coefficients obtained with the three grids. Similar values are obtained with grids 2 and 3; the results presented hereafter are therefore obtained with the 2-million-cell mesh.
Lift and drag coefficients
The effect of the angle of attack on the lift and drag coefficients of the wings with smooth and tubercled leading edges is presented in Figure 4. Both the lift and drag coefficients increase as the angle of attack increases. For both the experimental and numerical data, at lower angles of attack the leading-edge tubercles produce a lift coefficient slightly greater than that of the smooth leading edge, and the difference grows as the angle of attack increases. As stated by many researchers, this trend can be attributed to the formation of a laminar separation bubble on the suction side of the wings [10-20,45]. However, there is no widespread agreement on this finding, and the laminar separation bubble on the suction side of the wings has not been observed by other researchers [4,6]. For lower angles of attack (below 8° for the experimental measurements and below 12° for the numerical data), tubercles reduce the drag coefficient, as illustrated in Figure 3.b. However, for angles of attack greater than these values, tubercles increase the drag coefficient.
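For readers reproducing these results, the non-dimensionalization is straightforward; the short sketch below shows how the lift and drag coefficients and the Reynolds number used above are related to the integrated forces and the flow conditions. The air properties, chord and reference area in the example are illustrative assumptions, not values taken from the paper.

```python
def aero_coefficients(lift_force, drag_force, rho, u_inf, area):
    """Lift and drag coefficients from the integrated aerodynamic forces."""
    q_inf = 0.5 * rho * u_inf**2                  # freestream dynamic pressure
    return lift_force / (q_inf * area), drag_force / (q_inf * area)

# Illustrative conditions chosen so that Re = rho * U * c / mu = 225,000
rho, mu = 1.225, 1.81e-5        # air density [kg/m^3] and dynamic viscosity [Pa s] (assumed)
chord = 0.07                    # assumed mean chord [m]
u_inf = 225_000 * mu / (rho * chord)   # freestream velocity giving the target Reynolds number
area = chord * 0.5              # assumed planform area [m^2] for a 0.5 m span

cl, cd = aero_coefficients(lift_force=2.0, drag_force=0.3, rho=rho, u_inf=u_inf, area=area)
print(f"U_inf = {u_inf:.1f} m/s, CL = {cl:.3f}, CD = {cd:.3f}")
```

Applying the same normalization to the solver's force output for both wings allows the smooth and tubercled cases to be compared directly, as in Figure 4.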
Conclusions
The flow around two swept wings, one smooth and one tubercled, has been numerically analyzed using the commercial code Fluent. Turbulence has been modeled using the SST k-ω model, the most widely used model for this kind of flow. It is found that, for lower angles of attack, tubercles reduce the drag coefficient with a slight increase in lift. The discrepancy between the numerical results and the experimental measurements at angles of attack greater than 5° cannot be attributed to experimental errors, as it is observed for both wings. Indeed, under these conditions the laminar separation bubble on the suction side of the wings, observed in many previous experimental studies, appears, and the SST k-ω model fails to predict the airfoil pressure distribution correctly.
Cephalopod assemblages and depositional sequences from the upper Cenomanian and lower Turonian of the Iberian Peninsula (Spain and Portugal)
The comparison and correlation of the biostratigraphic successions identified in the upper Cenomanian and lower Turonian of the Iberian Trough (IT, Spain) and the Western Portuguese Carbonate Platform (WPCP, Portugal) allow nine cephalopod assemblages (1 to 9), with notably different taxa, and two (3rd order) depositional sequences (A and B) to be differentiated. Some of these main intervals can be divided into minor ones, such as assemblage 4 (into 4₁ and 4₂) and sequence B (into B₁ and B₂). Assemblages 1 to 3 are related to sequence A, and assemblages 4 to 9 to sequence B (specifically, 4 to 6 to B₁ and 7 to 9 to B₂). The analysis and interpretation of these biostratigraphic data allow us to infer certain palaeoecologic turnovers that happened in the studied basins, either of external origin or due to local tectonic and palaeogeographical changes. Though partially altered by hypoxic phenomena (especially in sequence B₁, assemblage 4) and by local tectonics (mainly in the WPCP), each of these cycles records events of extinction of the cephalopods from shallow environments and survival of those from pelagic or deep environments, of settling of new environments, and of adaptation to them, caused successively by intervals of low, rising and high sea-level.
Introduction
The well-marked relative sea-level changes during the late Cenomanian and early Turonian have been widely recorded in the Iberian Peninsula, especially where carbonate or mixed sequences are concerned. These facies yielded a diverse ensemble of boreal and meridional cephalopod assemblages, which allow the establishment of detailed stratigraphic settings and interregional correlations with Western Europe, North Africa and even the Western Interior of the USA.
Both the Spanish and Portuguese domains have a long historical tradition, from the early 19th century, of research on the stratigraphy and palaeontology of the Upper Cretaceous, and are especially known for their meridional ammonite assemblages with vascoceratids and associated temperate and warm faunas, mainly acanthoceratids and pseudotissotiids. Unfortunately, with only few exceptions, such as the Iberian Field Conference on Mid Cretaceous Events of 1979, these investigations have been carried out separately, without significant shared field work, discussions or conclusions. With the purpose of joining common efforts on this matter, the present work presents a first concise synthesis and correlation of the Iberian cephalopod biostratigraphy for the upper Cenomanian and lower Turonian, considering the advances of the last decades in Upper Cretaceous palaeontology and biostratigraphy.
Field work was carried out in outcrops with upper Cenomanian and lower Turonian sequences situated in the localities of Puentedey and Soncillo, in the north of the province of Burgos, of Fuentetoba and Villaciervos, in the centre of Soria, and of Cantalojas, Galve de Sorbe, Condemios, Somolinos, Atienza and Tamajón, in the north of Guadalajara, Spain. Within Portugal, the exposures of Salmanha-Figueira da Foz, Costa d'Arnes, Tentúgal and Ançã-Trouxemil, in the west of the province of Beira Litoral, and of Olival, Leiria and Nazaré, in the north of Estremadura, were sampled (Text-fig. 1). The Spanish sections are distributed along the southeastern Cantabrian Ranges, the southwestern Iberian Ranges and the northeastern Central System, whereas the Portuguese ones are located between the Atlantic coast, the uplifted Jurassic massifs of Estremadura and the western border of the Hesperian Massif.
Historical background
Both the Iberian Trough and the Western Portuguese Carbonate Platform successions have been exhaustively studied since the late 19th century. During the 60's and 70's of the 20th century, when Wiedmann worked on the Spanish cephalopods, and Ferreira Soares and Berthou on the correlative palaeofaunas of Portugal, there was a substantial advance in the knowledge of the ammonite assemblages. In the last decades, the ammonite palaeofaunas from the upper Cenomanian and lower Turonian of Western Portugal have been methodically reviewed by Callapez (1998) and Callapez and Ferreira Soares (2001), with the recognition of new meridional ammonite assemblages with North African affinities. Identical work has been done by Barroso-Barcenilla (2006) in the Iberian Trough, but with the advantage of a larger field-work area with more expanded successions and deeper water facies. However, there has not been a tradition of comparing the Spanish and Portuguese faunal and stratigraphic settings in order to set up an integrated biostratigraphic model with obvious implications for further palaeogeographic interpretations.
Western Portuguese Carbonate Platform
In Portugal, early research on palaeontology and stratigraphy dates from the middle 19th century (Sharpe, 1849) and had continuity with subsequent contributions (among them those by Choffat, 1886, 1898, 1900, 1901-02) that described the Cretaceous of West Central Portugal, including the regions of Beira Litoral (Figueira da Foz to Coimbra and Aveiro) and Estremadura (Lisbon to Nazaré and Leiria), with emphasis on the Cenomanian and Turonian stages and main palaeontological groups. After these first decades of fruitful research, more than half a century elapsed until the stratigraphy and palaeontology of the Baixo Mondego was reviewed by Ferreira Soares (1966, 1972, 1980). These works were followed by biostratigraphic studies undertaken by Lauverjat (1982) and Berthou (1984), among others.
Since 1992, the Cenomanian and lower Turonian cephalopod faunas of West Central Portugal have been extensively reviewed, with emphasis on systematics and biostratigraphy (Callapez, 1992, 1998, 2003, 2004, 2008; Callapez and Ferreira Soares, 2001; Callapez in Hart et al., 2005). These studies have been based on a new reference collection assembled at the University of Coimbra (with species new to the Lusitanian area) and on the Choffat Collection of the Geological Survey, making it possible, for the first time, to establish an integrated biostratigraphy with other areas of the Tethyan Realm.
Geological setting
The studied Spanish outcrops form part of a large northwest-to-southeast orientated band constituted by a thick carbonate sedimentary sequence (limestones and marlstones), with some interbedded terrigenous and dolostone intervals. Their cephalopods have been mainly collected from the Margas de Puentedey (Floquet et al., 1982) and Margas de Picofrentes (Floquet et al., 1982) formations, deposited in the inner and marginal environments of the platform, respectively. The studied Portuguese exposures are part of a band of Upper Cretaceous carbonate platform that outcrops along the northern margin of the Baixo Mondego, from Coimbra to Figueira da Foz. To the south there is a related set of outcrops orientated along the southern block of the northeast-southwest tectonic and diapiric axis of Nazaré-Leiria-Pombal.
During the Late Cretaceous, the current Iberian Peninsula was a relatively independent tectonic unit, called the Iberian Subplate. The combined incidence of worldwide eustatic changes and local tectonic readjustments generated several faunal turnovers and depositional sequences in the epicontinental flooded regions of the Iberian Subplate, among them the Iberian Trough and the Western Portuguese Platform (Text-fig. 2).
Specifically, the Iberian Trough (IT) was a long, narrow and relatively stable intracratonic ramp, and comprised the northern, central and south-eastern regions of the Iberian Subplate that were temporarily or permanently flooded by the Protoatlantic Ocean, the Tethys Sea or both. It was bordered on the west by the Hesperian Massif and on the east by the Ebro Massif, and was broadly related to the North Cantabrian, Basque, Pyrenean and Levantine basins. The IT was divided into different domains. These were, from the north to the southeast, the Outer Navarro-Cantabrian Platform, the Inner Castilian Platform and the Levantine Platform. The Inner Castilian Platform was divided into the North-Castilian Sector and the Central Sector. The North-Castilian Sector included the North-Ebro Area and the South-Ebro Area, and the Central Sector comprised the La Demanda Area and the Guadarrama Area. Further reading about the main subjects related to the geological evolution of the IT can be found in Amiot (1982), Floquet et al. (1982), Rosales et al. (2002), Mas et al. (2002), Floquet (2004) and García et al. (2004).
The Western Portuguese Carbonate Platform (WPCP) included the central-western regions of the Iberian Subplate and was broadly related to the Atlantic Lusitanian Basin (Ferreira Soares and Rocha, 1985). It experienced a complex tectonic control with reactivated late Hercynian faults and halokinetic structures, and at least three main rifting phases intercalated with intervals of post-rift thermal subsidence (Wilson et al., 1989; Hiscott, 1990). The tectono-sedimentary evolution of this basin enabled the formation of a relatively homogeneous and stable basal upper Cenomanian carbonate platform with cephalopods. Subsequently, a clear differentiation of the palaeogeographic setting of this carbonate platform was established, with ammonites restricted to the Baixo Mondego and a large domain of shoals with coral and rudist fringes developed southwards in Leiria and Nazaré. Further reading about the main subjects related to the stratigraphy of the WPCP can be found in Ferreira Soares (1980), Lauverjat (1982), Berthou (1984), Callapez (1998, 2004, 2008), Hart et al. (2005) and Rey et al. (2006).
In the present paper, the palaeogeographical division and the ammonite zonation for the upper Cenomanian and lower Turonian of the IT proposed by Barroso-Barcenilla et al. (2009), and those of the WPCP developed by Callapez and Ferreira Soares (2001) and Callapez (2003, 2008), have been followed. These zonal schemes, based on the occurrence of the index species, have been correlated with each other, with that of the type section of Pueblo, USA, and with those zonations that can be considered representative of the Boreal (Western Europe) and Tethyan (North Africa) domains (Text-fig. 3). The interregional correlations presented herein have been made according to the conclusions of Graciansky et al. (1998).
Cephalopod assemblages
Most of the cephalopods collected in the upper Cenomanian and lower Turonian of the IT (Barroso-Barcenilla, 2006, 2007; Barroso-Barcenilla and Goy, 2007, 2009, 2010; Barroso-Barcenilla et al., 2009) and the WPCP (Callapez, 1998, 2003; Callapez and Ferreira Soares, 2001) do not present signs of taphonomic resedimentation or reelaboration (sensu Fernández-López, 2000), and the few that show any of these signs do not seem to have suffered notable alterations. Therefore, it can be considered that all of them maintain their respective original stratigraphic positions (Callapez, 1998; Barroso-Barcenilla, 2006). The comparison and integration of the successions and co-occurrences of these cephalopods has allowed us to differentiate nine cephalopod assemblages within the materials of this interval, which have been numbered in ascending stratigraphic order (Text-fig. 4).
These assemblages (1 to 9) contain notably different taxa and can be related to certain palaeoecologic turnovers that happened in the studied basins, either of external origin or due to local tectonic and palaeogeographical changes. Each of them has been interpreted considering the narrow relation established in the epicontinental platforms between palaeoecologic changes and the general palaeontological record, emphasized by Fernández-López (1999, 2000), or the specific succession of cephalopods, studied by Hirano et al. (2000) and Toshimitsu and Hirano (2000), among others. These authors maintain that ammonoid diversity was primarily controlled by changes in the marine environments. On this basis, several eustatic, tectonic or geochemical alterations that concerned the habitability of the region during the considered interval have been inferred, providing interesting information about the evolution of the Iberian Subplate.
Assemblage 1
This assemblage can be recognized overlying the stratigraphic discontinuity of the middle-upper Cenomanian boundary, in the Eucalycoceras rowei zone of the IT, and includes the co-occurrence of Eucalycoceras with Calycoceras. In it, the taxa Eucalycoceras rowei, Calycoceras (Proeucalycoceras) sp., Calycoceras (Calycoceras) sp. and, seemingly, Calycoceras (Calycoceras) naviculare have been identified, all of them belonging to the Acanthoceratidae. In the WPCP its lower part corresponds to a stratigraphic interval with the bivalve Gyrostrea ouremensis, but without known ammonites. The materials in which this assemblage has been recorded have a restricted geographic distribution (exclusively the IT) and only contain cephalopods moderately adapted to shallow marine environments (Eucalycoceras and Calycoceras: Batt, 1989; Westermann, 1996). Likewise, both the number and the diversity of the collected specimens are relatively low (3 or 4 species).
Assemblage 2
It is recorded in the upper Cenomanian Neolobites vibrayeanus subzone of the IT and the Calycoceras (Eucalycoceras) guerangeri zone of the WPCP, and includes the co-occurrence of Angulithes with Neolobites, of the Nautilidae and Engonoceratidae, in both basins, of Euomphaloceras with Calycoceras, of the Acanthoceratidae, in the former, and of Calycoceras, of the same family, in the latter. In it, the following taxa have been recognized: Angulithes mermeti, Neolobites vibrayeanus and Calycoceras (Calycoceras) naviculare, in both basins; Lotzeites sp. and Euomphaloceras euomphalum, in the IT; and Neolobites bussoni, Calycoceras (Proeucalycoceras) guerangeri, Eucalycoceras pentagonum, Thomelites hancocki and Puzosia (Parapuzosia) sp., in the WPCP. In this interval, a notable increase can be observed in the number of localities that yielded fossil cephalopods, as well as in the diversity of these invertebrates (10 species). Among them, the abundance of Angulithes and Neolobites, initially typical of relatively deep or open waters, stands out, together with the near absence of taxa characteristic of epicontinental environments. The co-occurrence of Angulithes and Neolobites in this interval has been observed by different authors in other basins (Peru: Benavides-Cáceres, 1956; Morocco: Meister and Rhalmi, 2002, Cavin et al., 2010).
Assemblage 3
It can be recognized in the upper Cenomanian Metoicoceras mosbyense and Metoicoceras geslinianum subzones of the IT, where it includes the co-occurrence of Metoicoceras and only contains the species Metoicoceras mosbyense and Metoicoceras geslinianum, of the Acanthoceratidae. Although it can be identified in a relatively high number of localities of the IT, its taxonomic diversity is very low (2 species). Nevertheless, it is within this assemblage that the first phylogenetic line between two species of cephalopods (M. mosbyense and M. geslinianum) can be established in the upper Cenomanian of the Iberian Subplate. It also seems especially significant that, as revealed by geochemical analyses carried out in collaboration with Prof. Dr. W. J. Kennedy in the Puentedey section, the materials corresponding to the M. geslinianum subzone registered the first and more punctual of the two positive excursions of the δ13C signal (the second and main one occurring in the S. (J.) subconciliatus zone), seemingly related to the Oceanic Anoxic Event of the Cenomanian-Turonian transition (OAE2) of Schlanger and Jenkyns (1976) (Barroso-Barcenilla et al., 2011). Similar isotopic variations were observed by other authors, such as Kennedy et al. (2000) and Caron et al. (2006), in correlative levels of other regions and, thus, they were possibly produced by worldwide oceanographic and climatic changes.
Assemblage 4
This important set has been registered in the upper Cenomanian Vascoceras gamai subzone and the Spathites (Jeanrogericeras) subconciliatus zone of the IT, and the Euomphaloceras septemseriatus and the Pseudaspidoceras pseudonodosoides zones of the WPCP. It includes, in the IT, the co-occurrence of Vascoceras 1 (sensu Barroso-Barcenilla, 2006), in the Vascoceratidae, and of Spathites (Jeanrogericeras) 1 (sensu Barroso-Barcenilla, 2006), in the Acanthoceratidae. This is a notably complex main assemblage with numerous taxa of diverse affinities, influenced by the significant and global hypoxic phenomena of the OAE2, and it can be divided into two consecutive minor intervals. The first one (4₁) can be recognized in the Vascoceras gamai subzone of the IT and the Euomphaloceras septemseriatus zone of the WPCP, and is notably better represented in the WPCP. In it, the cephalopod diversity is low (4 species), as only V. gamai (one of the Vascoceras with the widest geographical distribution) has been identified in restricted areas of the IT, while the same species, Puzosia (P.) sp., Pseudocalycoceras sp. and E. septemseriatus have been collected in the WPCP. The second interval (4₂) has been registered in the Spathites (Jeanrogericeras) subconciliatus zone of the IT and the Pseudaspidoceras pseudonodosoides zone of the WPCP, and is partly contemporaneous with the second and main of the two positive excursions of the δ13C signal related to the OAE2 (Barroso-Barcenilla et al., 2011). In it, initially (first) the ammonite diversity increases (10 species), with several representatives of Vascoceras in the Guadarrama Area and in the WPCP; the first members of Spathites (Jeanrogericeras), specifically S. (J.) subconciliatus, in both basins; and the first representatives of Pseudaspidoceras and Rubroceras, specifically P. pseudonodosoides, R. cf. alatum and R. sp., in the WPCP, all of them with geographic distributions narrower than that of V. gamai. It continues (second) with the presence of the dark
Assemblage 5
It is recorded in the lower Turonian Choffaticeras (Choffaticeras) quaasi zone of the IT, and includes the co-occurrence of Spathites (Jeanrogericeras) 2 (sensu Barroso-Barcenilla, 2006), in the Acanthoceratidae, and the first interval of the co-occurrence of Vascoceras 2 (sensu Barroso-Barcenilla, 2006), in the Vascoceratidae, and of Choffaticeras (Choffaticeras), in the Pseudotissotiidae, with the species Spathites (Jeanrogericeras) subconciliatus, Spathites (Jeanrogericeras) tavense, Spathites (Jeanrogericeras) saenzi, Spathites (Jeanrogericeras) postsaenzi, Vascoceras durandi, Vascoceras amieirense, Vascoceras harttii, Choffaticeras (Choffaticeras) quaasi, Choffaticeras (Choffaticeras) pavillieri and, possibly, Pseudotissotia sp. This assemblage has not been recognised in the sedimentary record of the WPCP. Its taxonomic diversity is very high (9 or 10 species) and phylogenetic relationships can be established between almost all the represented species (Acanthoceratidae: Barroso-Barcenilla, 2007; Vascoceratidae: Barroso-Barcenilla and Goy, 2009, 2010; Pseudotissotiidae: Barroso-Barcenilla and Goy, 2007). Although the two endemics S. (J.) saenzi and S. (J.) postsaenzi exist in this assemblage, the groups from the Tethys, such as the vascoceratids and the pseudotissotiids (Meister et al., 1994; Courville et al., 1998), clearly predominate.
Assemblage 7
It can be recognized in the lower Turonian Choffaticeras (Leoniceras) luciae subzone of the IT, and the L level of the Thomasites rollandi zone of the WPCP, and includes the co-occurrence of Spathites (Jeanrogericeras) 3 (sensu Barroso-Barcenilla, 2006), in the Acanthoceratidae, of Choffaticeras (Leoniceras), in the Pseudotissotiidae, and of Nostoceras (Eubostrychoceras), in the Nostoceratidae, in the former region, and of Kamerunoceras, in the Acanthoceratidae, of Fagesia with Neoptychites, in the Vascoceratidae, of Choffaticeras (Leoniceras) with Thomasites, in the Pseudotissotiidae, and of Pachydesmoceras, in the Desmoceratidae, in the latter region. In it, the following taxa have been identified: Spathites (Jeanrogericeras) reveliereanus, Choffaticeras (Leoniceras) luciae, Choffaticeras (Leoniceras) barjonai and Nostoceras (Eubostrychoceras) sp., in the IT, and Kamerunoceras douvillei, Vascoceras kossmati, Vascoceras durandi, Fagesia tevesthensis, Fagesia superstes, Neoptychites cephalotus, Thomasites rollandi, Choffaticeras (Leoniceras) barjonai, Pachydesmoceras denisonianum and Parapuzosia (Austiniceras) intermedia orientalis, in the WPCP. In this assemblage, fully recognised in both the Spanish and Portuguese lower Turonian successions, the diversity is relatively higher (14 species). Nevertheless, each of the identified families is represented by a single species, with the only exception of the Pseudotissotiidae. Within this family, a progressive replacement of Ch. (L.) luciae by Ch. (L.) barjonai seems to be observed, since the latter species presents, at least in the IT, a slightly higher range than the former. Likewise, the near absence of taxa from shallow environments stands out, since the majority of the identified forms were from open or relatively deep waters, such as the oxycone Choffaticeras (Leoniceras) and the torticone Nostoceras (Eubostrychoceras) (Batt, 1989; Westermann, 1996). Within the highest interval recorded from the levels where this assemblage has been identified, Fagesia tevesthensis has been recognized. For this reason, although it can be a mere effect of the sampling detail, the lower part of the assemblage of Fagesia with Neoptychites, in the Vascoceratidae, has been placed inside the same one.
Assemblage 8
It is recorded in the lower Turonian Mammites nodosoides subzone of the IT, and includes the co-occurrence of Mammites, in the Acanthoceratidae, of Donenriquoceras, in the Pseudotissotiidae, and the middle part of Fagesia with Neoptychites, with the species Mammites nodosoides, Spathites (Jeanrogericeras) reveliereanus, Fagesia tevesthensis, Fagesia rudra, Fagesia mortzestus, Fagesia superstes, Neoptychites cephalotus, Donenriquoceras forbesiceratiforme and Pachydesmoceras linderi. In the WPCP it corresponds to a stratigraphic interval with gastropods of the species Actaeonella caucasica and of the group of the nerineids, but without ammonites. Within its notable diversity (9 species), the groups proceeding from the Protoatlantic, such as Mammites and Fagesia (Wiedmann, 1975b; Kennedy and Cobban, 1976), stand out, but those from relatively open and deep environments, such as Neoptychites (Batt, 1989; Westermann, 1996), and the endemics, such as Donenriquoceras (Wright, 1996; Barroso-Barcenilla and Goy, 2007), are also represented.
Assemblage 9
It can be recognized in the lower Turonian Wrightoceras munieri subzone of the IT, and includes the co-occurrence of Spathites with Mammites, in the Acanthoceratidae.
Depositional sequences
Both the IT and WPCP sedimentary successions of the studied interval were deposited during the course of two 3rd order depositional sequences, known by Haq et al. (1988) as UZA-2.4 and UZA-2.5, respectively.
In the IT, the first one of them, named DC-5 by Floquet (1998), UC-4/5 by Gräfe (1999), DS-5 by Alonso et al. (1993) and S-3 by Segura et al. (1999), includes the basal and middle upper Cenomanian. The second sequence extends from the higher upper Cenomanian to the middle Turonian. In detail, in the north of the IT (Outer Navarro-Cantabrian Platform and North-Castilian Sector), within this second sequence, two other lower order sequences can be differentiated, called DC-6a and DC-6b by Floquet (1998), and UC-5/6 and UC-6/7 by Gräfe (1999), respectively. However, these two intervals cannot be individualized in the centre of the IT (Central Sector), but are coincident with the lower and middle part of the sequence DS-6 of Alonso et al. (1993) and with the sequence S-4 of Segura et al. (1999), and present a more diffuse upper boundary. These differences between the sequences of the north and the centre of the IT can be caused, in part, by the inequality of the records of both areas.
In the WPCP, and despite a perceptible influence of local tectonics over the eustatic signature, the same sequences of the IT can be correlated with the 3rd order depositional sequences and subsequent sequences defined by Callapez (1998). In particular, the Portuguese sequence CD records part of the depositional sequence A proposed in this work; sequences E/I and J match the lower half of B₁, and K/L and M/O match B₂ (Text-fig. 5).
There is an obvious relationship between these two major depositional sequences and the nine cephalopod assemblages described above in the Iberian Subplate. The first of these sequences, named A, can be related to assemblages 1 to 3, and the second one, called B, to assemblages 4 to 9. Each of these depositional sequences includes different genera of cephalopods and coincides with specific worldwide 3rd order eustatic cycles observed by Haq et al. (1988), and with certain depositional sequences recognized in the IT by numerous authors, such as Floquet (1998), Gräfe (1999), Alonso et al. (1993) and Segura et al. (1999). In the same way, within the second major sequence, two minor depositional sequences have been differentiated, named B₁ and B₂, which group, respectively, assemblages 4 to 6 and 7 to 9, and which agree with some specific sequences of Floquet (1998) and Gräfe (1999), among others. All these sequences, both major and minor, can be assimilated to the palaeontological cycles defined by Fernández-López (2000).
Sequence A
It is seemingly related to the 3rd order depositional sequence UZA-2.4 of Haq et al. (1988) and the sequences DC-5 of Floquet (1998), UC-4/5 of Gräfe (1999), DS-5 of Alonso et al. (1993) and S-3 of Segura et al. (1999). It shows an extensive record that ranges from the base of the upper Cenomanian to the top of the Metoicoceras geslinianum subzone, and includes the cephalopod assemblages 1 to 3. Lithologically, this sequence is composed, in the north of the IT, of bioclastic limestones with abundant burrows and algal laminations that change upwards to nodular biomicritic limestones. In the centre of the IT this sequence is constituted by flaggy to massive dolostones or limestones with less intense burrowing and algal lamination. As a whole, this first sequence corresponds to shallow marine platform deposits that experienced a slow and complex deepening (Carenas et al., 1989).
The boundary with the following sequence is marked by an interruption in the record of the Acanthoceratidae, until then continuous, together with the complete replacement of the identified genera and the appearance of the Vascoceratidae. Lithologically, this boundary corresponds to a net surface with a marked lithological change caused by a fast eustatic fall (Carenas et al., 1989).
Discussion and conclusions
The above indicated facts suggest that assemblage 1 began after the disappearance of the cephalopods that dominated the IT during the latest middle Cenomanian, mainly Acanthoceras, caused by a marked worldwide marine regression.Seemingly, it coincided with a period in which, as consequence of a widespread and moderate eustatic ascent, the first ammonites typical of the earliest late Cenomanian, belonging to Eucalycoceras and Calycoceras and proceeding from the Protoatlantic, arrived to the Iberian Subplate and occupied some of the vacant ecologic niches.
Assemblage 2 seems to correspond to a faunal response to a marked and fast rise of the relative sea-level.This transgression both in the IT and WPCP made possible the permanency of Calycoceras, the appearance of new groups, such as Lotzeites and Euomphaloceras, and the record of several taxa of relatively deep waters in shallow platform sediments.Seemingly, the relative sea-level rise was kept the necessary time so that Neolobites could be 5/6 and UC-6/7 of Gräfe (1999), DS-6 of Alonso et al. (1993), in its lower and middle interval, and of S-4 of Segura et al. (1999).Its record, notably more expanded and rich than that of sequence A, ranges from the base of the Vascoceras gamai subzone to the top of the lower Turonian, and includes the cephalopod assemblages 4 to 9. Lithologically, this sequence is composed by biomicritic limestones that upwards change quickly to marls and, near the top, to chalky sandstones (notably dolomitized in the centre of the IT).It corresponds to an extraordinarily extensive sequence of open marine ramp, finally affected by a fast eustatic fall that generates a very prograding shallow marine platform (Segura et al., 1993).
This sequence can be divided in two others of order that can be related to the sequences DC-6a of Floquet (1998) and UC-5/6 of Gräfe (1999), and DC-6b of Floquet (1998) and UC-6/7 of Gräfe (1999), respectively.The first one, B 1 , ranges from the base of the Vascoceras gamai subzone to the top of the Spathites (Ingridella) malladae subzone and includes the assemblages 4 to 6.In its upper limit another marked faunal change takes place, though of minor magnitude.The second one, B 2 , ranges from the base of the Choffaticeras (Leoniceras) luciae subzone to the top of the lower Turonian, and includes the assemblages 7 to 9. In detail, lithologically, these two adapted to shallower environments, but not so that the representatives of other genera evolved to specialized forms of inner platform.The important taxonomic replacement observed among assemblages 2 and 3 could be related to the beginning of a drop on relative sea-level that favoured the disappearance of many of the cephalopods of the Iberian Subplate.
The interpretation of the assemblage 3 is extremely difficult, as it has numerous and complex indicators.In general, it could correspond to a fall on the relative sealevel (no known record in the WPCP and disappearance of Angulithes and Neolobites).Nevertheless, this change should be very moderate (establishment of Metoicoceras in the IT) and, even, could experience punctual ascending pulses, possibly related to the first and punctual phase of the OAE2 (sensu Barroso-Barcenilla et al ., 2011).According to Meister et al. (1992), the practically exclusive presence of a species of cephalopod that shows a wide morphologic variability in a biostratigraphic interval is usually caused by the existence of a highly unstable environment in which the occupation of several ecological niches by a unique taxon was produced.Therefore, this Later, when the second and main phase of the OAE2 reduced its intensity, the adaptative process of S. (J.) subconciliatus continued, giving place to the varieties described by Wiedmann (1960Wiedmann ( , 1964) ) and to the endemic S. (J.) robustus in the IT.Finally, when the normal marine conditions came again and the high relative sealevel was reached, an important recovery of the Vascoceratidae began, being especially abundant V. durandi, and a significant arrival or return to the Iberian Subplate of exotic ammonoids of diverse origins, such as V. cauvini, F. catinus and P. (A.) sp.P. (P.) sp. and P. denisonianum, took place.
The transition between assemblages 4 and 5 is gradual and seemingly caused by progressive processes of adaptation to the favourable palaeoceanographic conditions and by the arrival of new taxa in the region. This is valid for all the IT, but not for the WPCP, where this was an interval of increased tectonic and diapiric activity, with uplift of the present onshore sectors of the Western Iberian Margin and subaerial exposure of the upper Cenomanian levels. These structural readjustments have been interpreted as the result of rotational movements within the subplate, but they could also be related to the tilting of the overall Iberian Ranges to the east or southeast. As a consequence, and despite the contemporaneous sea-level highstand, there is no known record of marine carbonates on the WPCP below the middle part of the lower Turonian.
Assemblage 5 corresponds to the beginning of the early Turonian, with a maximum relative sea-level and very favourable marine conditions that allowed the occupation of a great variety of ecological niches, faunal exchange and the appearance of new taxa. Among others, the first Pseudotissotiidae arrived in the IT, seemingly acceding to this region directly from the Tethys. Likewise, the rise of specialized morphologies typical of restricted ecological niches, such as S. (J.) saenzi and S. (J.) postsaenzi, continued in the Acanthoceratidae, whereas the appearance of forms progressively more adapted to shallow epicontinental environments persisted in the Vascoceratidae. In the development of the latter family, the wide morphologic variability of V. durandi stood out. This seems to be an adaptive process towards gradually more diverse ecological niches, similar to that followed by S. (J.) subconciliatus during the latest Cenomanian, with the subsequent appearance of successively more specialized forms, such as V. amieirense and V. harttii. As in the previous case, there are no strong differences between the taxonomic composition of assemblages 5 and 6, so the transition between both was possibly produced as a consequence of gradual evolu- seemingly unstable environment could be the cause of the presence in this assemblage of a single group (one genus: Metoicoceras) with certain morphologic variability (two species: M. mosbyense and M. geslinianum). Between assemblages 3 and 4, a complete replacement of the genera can be inferred. This change could be caused by a significant fall of the relative sea-level that deeply affected the sedimentary processes and the marine palaeobiotas of the Iberian Subplate. However, it cannot be ruled out that the same change was also influenced by the hypoxic event indicated above.
Assemblage 4 took place during the initial and intermediate stages of the great eustatic rise associated with the second and main phase of the OAE2 (sensu Barroso-Barcenilla et al., in press) of the Cenomanian-Turonian transition. Firstly, coinciding with the beginning of the relative sea-level rise, the fast spread of the Vascoceratidae, specifically of V. gamai (one of its earliest members), took place. Secondly, these earliest Vascoceras diversified, giving rise to some species almost exclusive to the Iberian Subplate, such as V. barcoicense, and even seemingly endemic ones, such as V. charoni, and the Acanthoceratidae returned to the region, by means of the earliest S. (J.) subconciliatus. Simultaneously, the WPCP experienced a notable arrival of American taxa, such as Rubroceras.
Nevertheless, this expansive process was interrupted by the effect of the second and main phase of the OAE2, which, with varying intensity, affected all the oceans of the planet. These new and unfavourable conditions of the epicontinental waters of the Iberian Subplate, although they should not have been very marked, since they did not cause the disappearance of the inoceramids in the Outer Navarro-Cantabrian Platform and the North-Castilian Sector, were responsible for the disappearance of all cephalopod taxa present up to this moment in the Iberian Subplate, except for S. (J.) subconciliatus. In fact, the practically exclusive presence of a cephalopod species with a wide morphologic variability in a biostratigraphic interval (S. (J.) subconciliatus in this part of the homonymous zone) can be observed again and, as indicated by Meister et al. (1992), also seems to be caused by the existence of a highly unstable environment. This species, after overcoming the most intense phase of this palaeoceanographic crisis, occupied several of the vacant ecological niches, notably diversifying its morphology, especially in the IT. Likewise, the fact that the hypoxia affected the shallow marine environments more moderately than the deep ones allowed the subsistence of the Vascoceratidae in the IT and WPCP by means of V. durandi and V. gamai, among others.
to the IT, principally from the Protoatlantic (Mammites: Wiedmann, 1975b; Fagesia: Kennedy and Cobban, 1976). Some of them could adapt to this region (again with vacant shallow epicontinental environments), and even gave rise to ammonites practically exclusive to the IT (Donenriquoceras: Wright, 1996; Barroso-Barcenilla and Goy, 2007). Nevertheless, the relative sea-level must have been lower than that reached during the span of assemblage 5, since it does not seem that the region had a direct connection with the Tethys (scarce number of cephalopods from this sea). Nearly all the groups identified in assemblage 8 can be recognized in assemblage 9. This fact suggests that no significant palaeoecological changes were produced during the transition between them.
Assemblage 9 was seemingly related to the stabilization of the favourable conditions in the IT. During its development, a slight fall of the relative sea-level was possibly produced, by eustatism or even by sedimentary filling of the basin, causing a certain isolation of some areas of the region and the consequent appearance or recovery of several nearly endemic species. Nevertheless, the fall of the relative sea-level must have been very moderate, since the marine epicontinental environments did not disappear, allowing the permanence, appearance and development of some cephalopods in these spaces. The Acanthoceratidae experienced a new diversification, which made it possible for Spathites to be represented by three subgenera, whereas Fagesia continued the adaptive process initiated some time before. Wrightoceras, after passing through other basins of the Tethys, returned to the IT (in the absence of evidence of a direct connection with the Tethys, possibly around the Iberian Subplate), a region that was also reached by the earliest Coilopoceratidae. Many of the species of assemblage 9 disappear at the top of the lower Turonian, where they are replaced by middle Turonian taxa, such as Collignoniceras, Romaniceras and Coilopoceras. This notable and global taxonomic replacement (very useful to establish the lower-middle Turonian boundary: Bengtson, 1996) could have been caused by a marked fall in the relative sea-level that forced the extinction of numerous cephalopods in the IT.
Considering the changes produced during the development of the depositional sequences and their assemblages, some differences and several similarities between them can be established, which allow us to infer certain guidelines on the dynamics of the sequences. Among the differences, it stands out that the taxonomic diversity and the abundance and variety of endemics are much lower in sequence A than in sequence B. This circumstance can be explained by the shorter duration of the former tionary changes of adaptation to small modifications in the environmental conditions.
Assemblage 6 could have been caused by a small fall of the relative sea-level. This change would have been influenced not only by eustasy but also by sedimentary accommodation and infill of the basin, which produced a certain confinement of some areas of the IT, allowing the appearance of several nearly endemic taxa, among them S. (I.) malladae and S. (J.) obliquus. As a consequence of the long-term evolutionary process followed by Vascoceras, giving rise to forms increasingly adapted to the shallow platform environments, specialized cadicones arose, such as V. kossmati. Likewise, Wrightoceras arrived in the IT for the first time. Except for S. (J.) reveliereanus (a widespread acanthoceratid with a relatively long range), none of the species of assemblage 6 have been identified in assemblage 7. This fact seems to indicate that the end of assemblage 6 was probably produced by a sudden environmental change in the region. Possibly, a rapid and marked fall of the relative sea-level took place, which caused the disappearance (in some cases temporary, in others definitive) of all shallow water taxa, such as S. (Ingridella) and Vascoceras.
Assemblage 7 seems to correspond to the faunal recovery that took place after the strong fall of the relative sea-level that led to the disappearance of nearly all marine environments of the Iberian Subplate for a time. In fact, a moderate rise of the relative sea-level made possible the arrival in this region of some ammonites from pelagic spaces or higher bathymetries. Among the species identified, S. (J.) reveliereanus could seemingly survive in relatively deep waters and thus overcome the absence of shallow environments. Supposedly, Ch. (L.) luciae and Ch. (L.) barjonai could not adapt to the IT, although they often reached the region (numerous records with no or minimal signs of taphonomic resedimentation or reelaboration in the Central Sector). In fact, incursions of big predatory oxycones, such as those of Choffaticeras (Leoniceras), into very shallow and coastal waters have already been reported by Hewitt and Westermann (1989) and Kauffman (1990). N. (E.) sp. could settle in the Outer Navarro-Cantabrian Platform and the North-Castilian Sector. In the WPCP this is the single lowest Turonian assemblage recorded with diverse and abundant faunas, clearly associated with a transgressive event related to the eustatic relative sea-level rise, but limited to certain parts of the basin.
Assemblage 8 seemingly took place during the advance of the relative sea-level rise that controlled the previous assemblage. The progress of the rise of the relative sea-level favoured the arrival of new taxa dominated the region until, with the following fall of the sea-level, a new sequence began and a repetition of the described events occurred.
sequence and the lower sea-level reached. Analyzing the palaeogeographic distribution of the exotics, it can be verified that the majority of those integrated in sequences A and B2 are mainly characteristic of the relatively cold and deep Protoatlantic, whereas the majority of those included in sequence B1 are fundamentally typical of the comparatively warmer and shallower Tethys. This fact can reflect a change in the influence received, which starts as principally Protoatlantic (from the base of the upper Cenomanian to the top of the Metoicoceras geslinianum subzone), becomes mainly Tethyan (between the Vascoceras gamai and Spathites (Ingridella) malladae subzones), and ends as eminently Boreal (from the top of the Choffaticeras (Leoniceras) luciae subzone to, at least, the base of the middle Turonian).
Among the similarities, it is remarkable that, without considering the possible changes caused by geochemical oceanic variations and the evolutionary dynamics of the taxa involved, a recurrence of the same succession of biotic events followed by their respective cephalopods can be inferred in each of the three differentiated sequences. Initially, the arrival of cosmopolitan forms of pelagic or deep environments, later the appearance of species derived from the previous ones and relatively adapted to the shallow environments, and finally an increase in the proportion of endemic forms can be observed (Text-fig. 6).
Though this succession of events is more difficult to observe in the major sequence A than in the minor ones B1 and B2 (possibly as a consequence of the record being notably worse in the former), each of these intervals seems to correspond to successive periods of low, rising and high relative sea-level, in which, respectively, extinction of cephalopods from shallow environments, survival of taxa from pelagic or deep waters, settlement of new spaces, and adaptation to them occurred. Several studies relating sea-level changes and cephalopod faunal turnovers in the Mesozoic have been developed recently (O'Dogherty et al., 2000; Sandoval et al., 2002; Yacobucci, 2008). Specifically, when the sea-level drops and the epicontinental environments disappear, most of the shallow-water cephalopods become extinct and only those with a certain aptitude to survive in open or deep oceanic environments overcome the crisis. During the sea-level rise, the cosmopolitan, pelagic or deep-water forms were the first ones to occupy the spaces made available again by the marine transgression in progress, but they were soon replaced by others, derived from these cephalopods and better adapted to the shallow environments. Finally, specialized or nearly endemic taxa arose and
Fig. 3.-Possible correlation of the biostratigraphic zonations used for the Iberian Trough and the Western Portuguese Carbonate Platform with other scales. For interregional correlations, the work of Graciansky et al. (1998) has been especially useful. The oblique lines indicate intervals without record, and the arrows indicate that the zone or level continues. D. problematicum is Dunveganoceras problematicum, D. albertense is Dunveganoceras albertense and D. conditum is Dunveganoceras conditum. The Gyrostrea ouremensis and Actaeonella caucasica zones are in brackets because their index species are not ammonites. The M, N and O levels are in brackets because, to date, they have not provided ammonites.
Fig. 4.-Cephalopod assemblages identified in the Iberian Trough and the Western Portuguese Carbonate Platform. The oblique lines indicate intervals without record, and the arrows indicate that the assemblage continues.
Fig. 5.-Depositional sequences identified in the Iberian Trough and the Western Portuguese Carbonate Platform. The oblique lines indicate intervals without record, and the arrows indicate that the sequence continues. | 9,716 | 2011-10-05T00:00:00.000 | [
"Geology"
] |
Preflight Spectral Calibration of Airborne Shortwave Infrared Hyperspectral Imager with Water Vapor Absorption Characteristics
Due to the strong absorption of water vapor at wavelengths of 1350–1420 nm and 1820–1940 nm, under normal atmospheric conditions, the actual digital number (DN) response curve of a hyperspectral imager deviates from the Gaussian shape, which leads to a decrease in the calibration accuracy of an instrument’s spectral response functions (SRF). The higher the calibration uncertainty of SRF, the worse the retrieval accuracy of the spectral characteristics of the targets. In this paper, an improved spectral calibration method based on a monochromator and the spectral absorptive characteristics of water vapor in the laboratory is presented. The water vapor spectral calibration method (WVSCM) uses the difference function to calculate the intrinsic DN response functions of the spectral channels located in the absorptive wavelength range of water vapor and corrects the wavelength offset of the monochromator via the least-square procedure to achieve spectral calibration throughout the full spectral responsive range of the hyper-spectrometer. The absolute spectral calibration uncertainty is ±0.125 nm. We validated the effectiveness of the WVSCM with two tunable semiconductor lasers, and the spectral wavelength positions calibrated by lasers and the WVSCM showed a good degree of consistency.
Introduction
As an image-spectrum merging technology, hyperspectral imaging has been widely used in agriculture, ocean observations, urban planning, disaster monitoring, and many other fields [1][2][3]. The quantitative retrieval of a target's surface spectral reflection characteristics is one of the important capabilities of hyperspectral imagers, and it requires an accurate spectral position for the instrument. For an imaging spectrometer with a 10 nm full width at half maximum (FWHM), a spectral shift of 1 nm produces relative errors of up to ±25% in the measured radiance near strong atmospheric absorption valleys [4]. The Jet Propulsion Laboratory (La Cañada Flintridge and Pasadena, CA, USA) has reported that a measured radiance error of about 8% occurs when the spectral calibration accuracy approaches 5% of the FWHM in the atmospheric absorption range [5]. High-precision spectral calibration is therefore indispensable for hyperspectral remote sensing applications. The popular methods for spectral calibration fall into two main categories. The first is the characteristic spectrum calibration method (CSCM), which relies on sources with unique spectral properties, such as tunable lasers [6], filters containing rare earth oxides [7], atmospheric characteristic absorption lines [8,9], gas molecule absorption cells [10], spectrum lamps [11], etc. The other is a monochromator- and collimator-based wavelength scanning calibration method, called the monochromatic collimation light calibration method (MCLCM). The advantage of the CSCM is that it is easy to operate and can quickly detect the spectral offset of a spectrometer. Its disadvantage is that the spectral characteristic is either untunable or tunable only over a narrow range that cannot cover the whole operational spectral range of the spectrometer. The absolute uncertainty of the CSCM can approach 10% of the spectral sampling interval of the responsive channel, which can meet practical requirements.
The monochromatic collimation light calibration method can realize high-precision continuous wavelength scanning over a wide spectral range and is therefore generally adopted. One of the Moderate Resolution Imaging Spectroradiometer's (MODIS) on-orbit calibration methods is to use a spectro-radiometric calibration assembly (SRCA) to calibrate the offset of the center wavelength and the deviation of the FWHM [12,13]. Zadnik et al. successfully calibrated the spectral response function of the compact airborne spectral sensor (COMPASS) in a laboratory using a high-resolution monochromator, with an absolute calibration accuracy of ±0.5 nm [14]. The monochromatic collimated spectrum calibration method has gradually become the first choice for the spectral calibration of spectrometers, but the uncertainty of the monochromator's stability has always been the bottleneck limiting the achievable accuracy. In response to such problems, Zhang et al. [15] analyzed the relationship between the mechanical error of the monochromator system and the wavelength of the emitted light and established a mathematical model to calculate the monochromatic light's wavelength offset; the resulting calibration accuracy is ±0.3 nm. The European Space Agency calibrated its monochromator with a HeNe laser and a series of gas atomic lamps (Hg, Ne, Ar, Kr, and Xe), and the absolute spectral uncertainty of the airborne prism experiment (APEX) was improved to ±0.15 nm [16]. With the enhancement of the spectral resolution of hyperspectral imagers, the calibration accuracy of the MCLCM in the laboratory is continuously improving. However, for hyperspectral imagers with a 3-nm spectral sampling interval, the effect of water vapor absorption on the spectral calibration accuracy of the channels located in an absorptive wavelength range is gradually becoming apparent. Unlike hyperspectral imagers for space remote sensing, laboratory spectral calibration for hyperspectral imagers in aerial remote sensing applications is performed under a normal atmospheric environment. Due to the strong absorption of water vapor at 1350-1420 nm and 1820-1940 nm, the actual digital number (DN) response curves of hyperspectral imagers obtained by the MCLCM deviate from the Gaussian shape, which leads to a decrease in the calibration accuracy of the channels' spectral response functions. We studied the actual DN response curves in the wavelength ranges mentioned above and found that each absorption valley along the actual DN response curve of every spectral channel located in the absorptive range corresponds to a spectral absorption feature of water vapor. This phenomenon not only helps us to solve the problem of the decrease in the spectral calibration accuracy of these spectral channels, but also allows the wavelength offset of the monochromator to be corrected simultaneously. The water vapor spectral calibration method (WVSCM) is an improved laboratory spectral calibration method based on a monochromator and the transmittance characteristics of water vapor. The WVSCM can promote the application of hyperspectral imagers in the aerospace field.
In the second section of this paper, we introduce the laboratory spectral calibration principle and the water vapor spectrum calibration method for hyperspectral imagers. In Section 3, the experimental verification of the WVSCM and the simultaneous removal of the wavelength offset of the monochromator are described. Section 4 provides the error analysis and validates the effectiveness of the WVSCM. The experimental results in this paper are consistent with the theory, which confirms the feasibility of the water vapor spectrum calibration method.
Methods
The spectral response function of an imaging spectrometer describes the relative response of each spectral channel to monochromatic light of different wavelengths. It can be expressed as a convolution of the slit function, the line spread function of the spectrometer optical system, and the detector pixel response function [17]. The hyperspectrometer uses an optical system with a prism or grating to split light and map between objects and their images. A charge-coupled device (CCD) focal plane detector and its electronics accomplish the photoelectric conversion, amplification, sampling, quantization, and coding. Assume the spectral radiance at the monochromator's exit slit is L(λ), the atmospheric transmittance is v(λ), the energy transfer efficiency of the optical system is τ(λ), the distribution function of the dispersive system is ψ(λ), the quantum efficiency of the CCD pixels is η_d(λ), the spectral response efficiency of the electronic system is η_e(λ), the conversion coefficient from quantum-well charges to CCD output voltage is κ, the total voltage gain of the detector's matching circuits is G, the total noise voltage introduced by photoelectric conversion and voltage amplification is n, the quantization bit depth of the ADC chip is m, and the quantization reference voltage is V_REF. The DN value generated during the integration time T can then be expressed as in Equation (1), where d is the size of the detector pixel, D is the aperture of the optical system, f is the focal length of the optical system, and h and c are the Planck constant and the speed of light, respectively. If we use k and b to represent the absolute radiation transfer coefficients of the hyperspectral imager and S(λ) as the spectral response function, Equation (1) can be rewritten as Equation (2). For practical convenience, the spectral response function S(λ) is usually expressed by a Gaussian function [18], as shown in Equation (3), where the subscripts i and j denote the pixel in the j-th spatial sequence and the i-th spectral sequence of the CCD focal plane, λ_c is the center wavelength position, FWHM is the full width at half maximum, and A is the relative spectral response efficiency of pixel ij: S_{i,j}(λ) = A · exp(−4 ln 2 · (λ − λ_c)² / FWHM²). If the wavelength of a bundle of monochromatic light received by the spectrometer is recorded as λ_0, the DN response value of pixel ij can be written as in Equation (4). In the wavelength range unaffected by atmospheric absorption, the DN value is a linear gain of the spectral response function and can be accurately calibrated from the actual DN response curve. However, when the influence of the atmosphere cannot be ignored, especially in the strong absorptive ranges of 1350 nm to 1420 nm and 1820 nm to 1940 nm caused by water vapor, the actual DN response curve deviates from the Gaussian shape and no longer reflects the spectral response function accurately. Taking a pixel ij located within the strong absorptive range as an example, we use the Gaussian function Func^Ori_{i,j}(λ) of Equation (5), obtained by Gaussian fitting of the pixel's actual DN curve DN^prac_{i,j}(λ), to represent its original DN response function. Func^Ori_{i,j}(λ) is only an approximation of Func^r_{i,j}(λ), the intrinsic DN response function of pixel ij. Func^r_{i,j}(λ) can be solved by fitting the actual and the simulated (theoretical) DN response curves based on the spectral absorption characteristics of water vapor.
If we set the atmospheric transmittance as v(λ) and introduce the relative DN response height variation ∆P, the offset of the center wavelength ∆λ, and the stretch of the full width at half maximum ∆FWHM into Equation (5), we can create the difference function E(∆P, ∆λ, ∆FWHM) to represent the distance between the simulated DN response curve and the actual one, as expressed in Equation (6). The set of solutions of Equation (6) is a three-dimensional (3D) cube matrix, shown in Figure 1. By bringing the solution at the global minimum value E_min back into Equation (5), we can determine the real intrinsic DN response function Func^r_{i,j}(λ), as shown in Equation (7); the parameters P_0, FWHM_stretch, and λ_shift are the solutions at E_min. The content above provides the theoretical basis and the calculation process of the water vapor spectral calibration method (WVSCM). By repeatedly applying the WVSCM, we can calibrate the intrinsic DN response functions of all spectral channels located in the wavelength ranges of 1350 nm to 1420 nm and 1820 nm to 1940 nm, one at a time, and the spectral response functions can be determined simultaneously. We verified the WVSCM through experiments, which are described in Section 3.
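To make the minimisation concrete, the following sketch illustrates a brute-force search of the kind described above. It assumes that `wavelengths`, `dn_actual` and `transmittance` (the MODTRAN water-vapour transmittance resampled on the same grid) are NumPy arrays for one channel, and that `p0`, `lam0` and `fwhm0` are the parameters of the Gaussian fitted to the raw curve; the search ranges and step sizes are illustrative only, not the values used by the authors.

```python
import numpy as np

def gaussian(wl, peak, center, fwhm):
    """Gaussian DN response model parameterised by its FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return peak * np.exp(-((wl - center) ** 2) / (2.0 * sigma ** 2))

def wvscm_fit(wavelengths, dn_actual, transmittance, p0, lam0, fwhm0):
    """Brute-force search over (dP, dLambda, dFWHM) minimising the squared
    difference between the simulated curve (Gaussian response multiplied by the
    water-vapour transmittance) and the measured DN curve, in the spirit of the
    difference function E(dP, dLambda, dFWHM)."""
    best_err, best_params = np.inf, (0.0, 0.0, 0.0)
    for dp in np.linspace(-0.2, 0.2, 21):          # relative height variation
        for dlam in np.linspace(-1.0, 1.0, 21):    # centre wavelength offset (nm)
            for dfwhm in np.linspace(-1.0, 1.0, 21):  # FWHM stretch (nm)
                simulated = gaussian(wavelengths, p0 * (1 + dp),
                                     lam0 + dlam, fwhm0 + dfwhm) * transmittance
                err = np.sum((simulated - dn_actual) ** 2)
                if err < best_err:
                    best_err, best_params = err, (dp, dlam, dfwhm)
    dp, dlam, dfwhm = best_params
    # intrinsic (atmosphere-free) DN response function of the channel
    return gaussian(wavelengths, p0 * (1 + dp), lam0 + dlam, fwhm0 + dfwhm)
```

A finer grid, or a continuous optimiser started from the best grid point, can be substituted without changing the overall idea.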
Experiment Validation
The laboratory temperature was 18 degrees Celsius, the relative humidity was 34%, and the partial pressure of water vapor in the air was around 700 Pa. The optical path of the monochromatic light was about 3 m. The laboratory spectral calibration setup is shown in Figure 2.
The instrument for wavelength scanning was an iHR550 monochromator produced by HORIBA, Ltd. (Kyoto, Japan), and the calibration light source was an LSH-250 tungsten halogen lamp. The spectrometer for spectral calibration was a full spectral airborne hyperspectral imager (FSAHI) [19][20][21], which was developed by the Shanghai Institute of Technical Physics, Chinese Academy of Sciences (Shanghai, China). The main FSAHI parameters are shown in Table 1.
We selected the middle field of view of FSAHI to receive the monochromatic light, and the wavelength scanning step of the monochromator was 0.2 nm. The actual DN response curves covering the two water vapor absorption ranges are shown in Figures 3 and 4, respectively. The absorption of water vapor causes the actual DN response curves to deviate from the Gaussian shape, which is consistent with the theory presented in Section 2. (Figure 2. Laboratory spectral calibration structure of the full spectral airborne hyperspectral imager (FSAHI). LSH-T250 is a tungsten halogen source produced by HORIBA, Ltd. with a spectral coverage of 350 nm to 2400 nm, and iHR550 is a monochromator produced by HORIBA, Ltd.) Taking the i-th channel with a center wavelength around 1376 nm in the j-th spatial sequence as an example, we used the WVSCM to calculate the pixel's intrinsic DN response function Func^r_{i,j}(λ), shown as the solid black line in Figure 5. The red dotted and dashed line is the actual DN response curve, and the remaining curve in Figure 5 is the simulated DN response curve, which is the product of Func^r_{i,j}(λ) and v(λ) in the wavelength direction.
Although the simulated DN response curve has a high degree of similarity with the actual curve, a misalignment between them is apparent. This misalignment is mainly caused by the difference between the spectral positions of the monochromator and the MODTRAN model. Since the state of the monochromator changes with time, it is necessary to calibrate the spectral deviation of the monochromator against MODTRAN. We used the least-squares method to fit DN^prac_{i,j}(λ) and the simulated response curve, obtaining the best matching spectral position by translating the actual DN response curve, as shown in Figure 6. The translation distance of the actual DN response curve is the spectral offset of the monochromator, which we recorded as ∆λ_mono. By using the WVSCM to analyze the channels of the j-th spatial sequence located in the wavelength range of water vapor absorption, the actual DN response curves and the simulated ones were obtained, as shown in Figure 7. After completing the overall translation of the actual DN response curves by ∆λ_mono, shown in Figure 8, the simulated and actual response curves were consistent with each other, and the wavelength offset of the monochromator was removed.
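A simple numerical way to recover this offset is sketched below, assuming `dn_actual` and `dn_simulated` are sampled on the same `wavelengths` grid; the shift range, step size and plain grid search stand in for the least-squares fit described above.

```python
import numpy as np

def monochromator_offset(wavelengths, dn_actual, dn_simulated,
                         max_shift=1.0, step=0.01):
    """Estimate the wavelength offset of the monochromator by translating the
    measured DN curve and keeping the shift that best matches the simulated
    curve (intrinsic response times transmittance) in the least-squares sense."""
    shifts = np.arange(-max_shift, max_shift + step, step)
    errors = [np.sum((np.interp(wavelengths, wavelengths + s, dn_actual)
                      - dn_simulated) ** 2) for s in shifts]
    return shifts[int(np.argmin(errors))]  # an estimate of delta_lambda_mono
```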
Result Analysis
According to the wavelength position of each absorptive valley of water vapor in the wavelength range of 1350 to 1430 nm and 1800 to 1960 nm provided by the MODTRAN model, we performed statistics on the wavelength position deviations of the 132 absorptive valleys between the translated actual DN response curves and the theoretical DN response curves. The wavelength deviations of each valley are shown in Figure 9.
Among the absorptive sample points, the maximum positive offset was 0.11 nm, and the maximum negative offset was 0.14 nm. Their root mean square error was 0.07 nm. The average full width at half maximum (FWHM) of FSAHI is 4.70 nm. Therefore, it is reasonable to think that the absolute accuracy of spectral calibration is ±0.125 nm. A 4.5% level of FWHM accuracy, which refers to three times the root mean square error, was achieved.
The WVSCM takes the spectral position of MODTRAN as the benchmark to calibrate the spectrometer and the monochromator. To verify the effectiveness of MODTRAN, we performed an imaging experiment with two tunable single-frequency semiconductor lasers to examine the spectral calibration results of the WVSCM. The principle of the experiment is shown in Figure 10. The single-frequency semiconductor laser (SFSL) applied in this experiment is a Distributed Feedback (DFB) laser with an integrated Thermoelectric Cooler (TEC) module. By adjusting the magnitude of the drive current, small-range modulation of the monochromatic light's wavelength can be achieved [22,23]. Since the response efficiencies of different spectral channels to the same monochromatic light differ, the wavelength of the laser can be ascertained by the hyperspectral imager from this phenomenon. We used the WVSCM to calibrate the intrinsic DN response functions of the j-th spatial sequence, as shown in Figure 11. The DN values of the spectral channels that responded significantly to the monochromatic light λ_0, arranged from large to small, were recorded as DN_{i,j}(λ_0), DN_{i+1,j}(λ_0), DN_{i−1,j}(λ_0), DN_{i+2,j}(λ_0) and DN_{i−2,j}(λ_0). We therefore defined the single-frequency spectral scaling loss function DELT(λ) in Equation (8), according to Figure 11, where γ_{k,j} is the normalization gain coefficient of each channel's intrinsic DN response function (the purpose of normalization is to eliminate the effects of different noise levels in different channels), DN_{min,j} is the minimum DN response value among these channels, and ρ_{k,j} is the matching weight of each channel, which decreases with decreasing DN_{k,j}. The intrinsic wavelengths of the lasers were calibrated using a HighFinesse-WS8 wavelength meter produced by the HighFinesse Company (Munich, Germany).
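Equation (8) itself is not reproduced in the extracted text, so the sketch below only illustrates the general idea of such a loss (normalised channel responses, weights growing with the measured DN); it is not the authors' exact formulation, and the dictionary-based channel indexing and callable response functions are assumptions.

```python
import numpy as np

def delt(lam, response_funcs, measured_dn):
    """Illustrative loss for locating the laser line: compare the normalised
    DN pattern measured across the few responding channels with the pattern
    predicted by their intrinsic response functions at candidate wavelength lam.

    response_funcs: dict {channel index: callable intrinsic DN response}
    measured_dn:    dict {channel index: DN value measured for the laser line}
    """
    channels = sorted(measured_dn)
    dn = np.array([measured_dn[c] for c in channels], dtype=float)
    predicted = np.array([response_funcs[c](lam) for c in channels], dtype=float)
    predicted = np.maximum(predicted, 1e-12)   # avoid division by zero far from the line
    weights = dn / dn.min()                    # rho: stronger channels weigh more
    return float(np.sum(weights * (predicted / predicted.max()
                                   - dn / dn.max()) ** 2))

# The estimated laser wavelength is the candidate minimising the loss, e.g.:
# wl_grid = np.linspace(1540.0, 1560.0, 2001)
# lam_hat = wl_grid[np.argmin([delt(l, response_funcs, measured_dn) for l in wl_grid])]
```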
The loss function DELT(λ) is a function of the wavelength λ, as shown in Figure 12. The wavelength position corresponding to the minimum value of DELT(λ) is the monochromatic wavelength of the laser as calibrated by FSAHI. The measurement results for the two semiconductor lasers, measured by the HighFinesse-WS8 and by FSAHI under different driving currents, are shown in Tables 2 and 3.
Figure 11. The spectral calibration principle of single-frequency semiconductor lasers.
Table 3. Wavelength calibration results of tunable semiconductor laser 02 (columns: laser power, intrinsic wavelength, wavelength calibrated by FSAHI).
Conclusions
In this paper, we proposed a spectral calibration method based on the transmission characteristics of water vapor, which we named the water vapor spectrum calibration method (WVSCM). The method does not rely on lasers or a series of gas atomic lamps to calibrate the monochromator beforehand. It not only removes the distortions of the spectral channels in the wavelength ranges of 1350 nm to 1420 nm and 1820 nm to 1940 nm caused by water vapor, but also avoids the corresponding decrease in laboratory spectral calibration accuracy. The WVSCM is an economical and less time-consuming laboratory spectral calibration method. The absolute spectral uncertainty of the method is ±0.125 nm, and the root mean square error is 0.07 nm. We used two tunable semiconductor lasers to verify the effectiveness of the water vapor spectrum calibration method, and the calibration results of the semiconductor lasers and of the method are in accordance with each other. The WVSCM provides a new reference method for laboratory spectral calibration and is helpful to promote the application of hyperspectral imagers. | 5,980.8 | 2019-05-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Detecting Events in Aircraft Trajectories: Rule-Based and Data-Driven Approaches
: The large amount of aircraft trajectory data publicly available through open data sources like the OpenSky Network presents a wide range of possibilities for monitoring and post-operational analysis of air traffic performance. This contribution addresses the automatic identification of operational events associated with trajectories. This is a challenging task that can be tackled with both empirical, rule-based methods and statistical, data-driven approaches. In this paper, we first propose a taxonomy of significant events, including usual operations such as take-off, Instrument Landing System (ILS) landing and holding, as well as less usual operations like firefighting, in-flight refuelling and navigational calibration. Then, we introduce different rule-based and statistical methods for detecting a selection of these events. The goal is to compare candidate methods and to determine which of the approaches performs better in each situation.
Introduction
Large-scale data produced daily by the aviation world open promising perspectives for situational awareness, monitoring and decision making [1,2] systems. Massive amounts of ADS-B data available today from open sources like the OpenSky Network are valuable to academics, who, despite limited access to operational data, have been developing effective methods to automatically extract meaningful information from trajectory data. We focus in this paper on the detection of significant events in aircraft trajectories. Some methods in the literature are rule-based algorithms based on the expertise of the authors [3], whereas other methods join the growing trend around machine learning and efficiently detect events based on data-driven statistical approaches [4,5].
We first present in Section 2 a taxonomy of the events we intend to identify. These include the occurrences of specific missions (e.g., firefighting) or the detection of the different parts of a trajectory corresponding to each flight phase. Furthermore, we cover several navigational events such as holding patterns, "direct to" instructions from the ATC, or landings assisted by Instrument Landing Systems (ILSs). Section 3 reviews simple ad hoc methods applying a set of rules to trajectory data. They range from simple pattern-matching on the aircraft registration number or call sign to geodetic computations for navigational events. Section 4 compares data-driven models with rule-based alternatives for detecting runway changes in final approach [4] and identifying holding patterns.
All the event detection methods presented in this paper were implemented using the traffic library [6]. The code used to produce figures in this paper is available, together with additional interactive visualisations, from the GitHub repository of the library. Table 1 proposes a taxonomy of flight events to be found in trajectories based on three main categories: missions, flight phases and navigational events. It is crucial to keep in mind that the different levels of granularity may overlap. For instance, a flight on a zero gravity experiment will alternate many climb and descent flight phases, or an inspection flight for instrument landing systems will have many descent segments aligned with one of the runways of a given airport. External sources of information are essential to properly label trajectories: registration databases validate the identified missions as not all aircraft have the required equipment for those. Navigational events also heavily rely on the structure of the airspace where aircraft evolve. Section 3 presents the most basic methods to leverage these sources of information to identify missions, navigational events and flight phases.
Identification of Missions Based on Tail Numbers
The most basic method to select trajectories for specific missions is to make use of aircraft databases like the OpenSky aircraft database. For instance, firefighting operations are mostly conducted by aircraft and helicopters owned by specific institutions and agencies. Rescue operations, aerial surveys or flight inspections are also usually conducted by the same fleet of aircraft (including helicopters). Figure 1 shows how aircraft trajectories flown by the California Department of Forestry and Fire Protection during a severe period of wildfires are consistent with the location of wildfires. A simple selection of circling segments can produce a trajectory-based heat map of wildfires in a given region.
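A minimal sketch of this selection with pandas follows; the file names and column names (icao24, registration, owner) are assumptions about the database extract and about the state-vector data, not a documented schema.

```python
import pandas as pd

# Hypothetical extract of an aircraft database (e.g. the OpenSky aircraft
# database) with one row per airframe; column names are assumptions.
aircraft_db = pd.read_csv("aircraft_database.csv")   # icao24, registration, owner

# Keep the fleet registered to the agency of interest.
calfire = aircraft_db[
    aircraft_db.owner.str.contains("Forestry and Fire Protection",
                                   case=False, na=False)
]

# state_vectors is assumed to hold ADS-B positions with an icao24 column;
# keep only the positions broadcast by the selected fleet.
state_vectors = pd.read_parquet("state_vectors.parquet")
fire_tracks = state_vectors[state_vectors.icao24.isin(calfire.icao24)]
```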
This approach has three main limitations. Firstly, databases are difficult to maintain and easily become deprecated. Secondly, governmental aircraft trajectories are often obfuscated or only accessible through multilateration. Finally, some activities are subcontracted: during wildfire season, private aircraft and helicopters are contracted to assist tankers in their firefighting operations. Off season, such aircraft come back to their regular activities if they do not assist firefighters in the other hemispheres.
Identification of Missions Based on Call Sign Information
A call sign is an eight-character identifier used for communication with the ATC. General aviation commonly uses the aircraft registration (tail number) as a call sign; commercial flights use a unique identifier per route, starting with three letters identifying the airline operator, BAW for British Airways, AFR for Air France, etc. Call signs commonly refer to the mission operated by an aircraft, and this can help distinguish the original intention of an aircraft used for specific purposes.
For example, F-HNAV uses the CALIBRA call sign for VOR/ILS calibration operations, the JAMMING call sign during jamming investigations and a regular NAK call sign when commuting between airfields. Similarly, test flights operated by Airbus use an AIB call sign; Boeing uses a BOE call sign; ambulance helicopters often use explicit call signs: SAMU in France (it stands for Urgent Medical Aid Service) and LIFE in many European countries. Australian firefighting operations use a specific call sign depending on the role of the aircraft during the operations: BMBR for fire bombing; SPTR for fire spotters; BDOG, bird dog, for fire attack supervision (often subcontracted); and FSCN, firescan, for remote sensing fire operations.
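Call-sign prefixes lend themselves to simple pattern matching; the sketch below only encodes the examples quoted above and makes no claim of completeness, since prefix conventions change over time and per country.

```python
import re
from typing import Optional

# Illustrative mapping from call-sign prefixes to missions, using the examples
# quoted in the text (CALIBRA, BMBR, SPTR, BDOG, FSCN, SAMU, LIFE, ...).
MISSION_PATTERNS = {
    "calibration": re.compile(r"^CALIBRA"),
    "fire_bombing": re.compile(r"^BMBR"),
    "fire_spotter": re.compile(r"^SPTR"),
    "bird_dog": re.compile(r"^BDOG"),
    "fire_scan": re.compile(r"^FSCN"),
    "ambulance": re.compile(r"^(SAMU|LIFE)"),
}

def mission_from_callsign(callsign: str) -> Optional[str]:
    """Return a mission label guessed from the call sign, or None."""
    cs = callsign.strip().upper()
    for mission, pattern in MISSION_PATTERNS.items():
        if pattern.match(cs):
            return mission
    return None
```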
Detection of Take-Off and Landing
Among the navigational events listed in Table 1, some can be directly specified and implemented on ADS-B data, in spite of some tricky corner cases to keep in mind, as they can become limitations in specific contexts. Aircraft broadcast specific type codes (between 5 and 8) in ADS-B messages (DF17) when on the ground. This piece of information is commonly used to determine whether an aircraft has landed or not taken off yet. The OpenSky Impala database stores this piece of information as an onground boolean. This bit is usually sufficient to determine whether an aircraft is really on the ground. However, in some situations, aircraft, possibly subject to faulty sensors, continue to broadcast airborne positional messages while on the ground. Vertical rate information can be a good way to cross-check the validity of the onground flag.
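One way to cross-check the flag, sketched here with pandas on a state-vector DataFrame, is to accept the onground bit only when altitude and vertical rate agree with it; the column names follow the OpenSky convention but are assumptions about the input frame, and the thresholds are illustrative.

```python
import pandas as pd

def plausibly_onground(df: pd.DataFrame,
                       max_alt_ft: float = 500.0,
                       max_vrate_ftmin: float = 100.0) -> pd.Series:
    """Boolean mask keeping the broadcast onground bit only where altitude and
    vertical rate are also consistent with a grounded aircraft. A more careful
    check would use the altitude above the airfield elevation rather than a
    fixed barometric threshold."""
    low = df.altitude.fillna(0.0) < max_alt_ft
    flat = df.vertical_rate.abs().fillna(0.0) < max_vrate_ftmin
    return df.onground.fillna(False) & low & flat
```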
The determination of take-off and landing airports is commonly implemented based on positional information and aircraft databases. The OpenSky Impala database contains a flight table where take-off and landing airports are determined based on the distance of the first and last points of each trajectory. The field remains empty if no airport can be inferred.
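A bare-bones version of this inference is sketched below; the airport table and its columns are assumptions, and a real implementation would also check the onground status or the altitude of the end points before assigning an airport.

```python
import numpy as np
import pandas as pd

def infer_airport(lat: float, lon: float,
                  airports: pd.DataFrame, max_km: float = 10.0):
    """Guess the airport associated with the first (take-off) or last (landing)
    point of a trajectory: nearest entry of an airport table by haversine
    distance, with a cut-off beyond which no airport is inferred. The table
    columns (icao, latitude, longitude) are assumptions."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = np.radians(lat), np.radians(airports.latitude.values)
    dphi = phi2 - phi1
    dlmb = np.radians(airports.longitude.values - lon)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2
    dist_km = 2 * r * np.arcsin(np.sqrt(a))
    i = int(np.argmin(dist_km))
    return airports.icao.iloc[i] if dist_km[i] < max_km else None
```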
Pitfalls (Figure 2):
• The onground flag is rather unreliable; information should be cross-checked with other features;
• Aircraft may land or take off outside designated areas (gliders, helicopters).
Detection of Events around a Flight Plan
Commercial aircraft are equipped with onboard devices that are very precise for flying trajectories heading toward a particular position, regardless of its technical characteristic (VOR, NDB, FIX, GPS coordinates). Provided a flight plan, the selection of segments heading toward a defined position is direct, after a comparison of the true track angle with the bearing of the aircraft toward the defined location. Figure 3 shows the angular difference between both angles for a set of flights between Paris-Orly LFPO and Toulouse LFBO airports, the usual flight plan to be filed being ERIXU UN860 GUERE UZ365 NARAK.
This figure plots an angular difference normalised by the remaining distance to the navigational point. Rule-based detection of targeted navigational points involves setting a threshold for the selected criterion and a minimal time the target should be locked on: the figure suggests corner cases with GUERE on the green and red trajectories.
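The criterion can be computed pointwise from positions and true track as sketched below; the exact thresholds and normalisation used for the figure are not stated, so the scaling here is only indicative.

```python
import numpy as np

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dlmb = np.radians(np.asarray(lon2, dtype=float) - np.asarray(lon1, dtype=float))
    x = np.sin(dlmb) * np.cos(phi2)
    y = np.cos(phi1) * np.sin(phi2) - np.sin(phi1) * np.cos(phi2) * np.cos(dlmb)
    return (np.degrees(np.arctan2(x, y)) + 360.0) % 360.0

def navpoint_criterion(track_deg, lat, lon, fix_lat, fix_lon, dist_nm):
    """Signed angular difference between the true track and the bearing to the
    fix, normalised by the remaining distance; thresholding this value (and
    requiring it to hold for a minimal duration) gives a rule-based
    'heading toward the fix' detector."""
    diff = (bearing_deg(lat, lon, fix_lat, fix_lon)
            - np.asarray(track_deg, dtype=float) + 180.0) % 360.0 - 180.0
    return diff / np.maximum(np.asarray(dist_nm, dtype=float), 1e-3)
```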
Pitfalls:
• During a long-haul flight, aircraft sometimes target navigational points that are far ahead, pushing ad hoc criteria to their limits;
• It is difficult to know which point is targeted when they appear nearly aligned (see ERTOK and ERIXU in the red trajectory in Figure 3);
• Flight management systems are able to follow Standard Lateral Offset Procedures (SLOP) during transatlantic flights: they follow a route parallel to the next navigational point, which adds complexity to an automatic rule-based detection procedure.
Detection of Events during Final Approach
There are two direct ways to detect landing on a particular runway (a sketch of the first approach is given after the pitfalls below):
1. consider the runway thresholds as targeted navigational points (ILS modelling);
2. consider the ground trajectory, and select the part matching the footprint of a runway (taxi modelling).
Pitfalls (Figure 4):
• VFR (Visual Flight Rules) landings may be harder to detect using the ILS modelling;
• Successive runway alignments (ILS modelling) may suggest a runway change (if the aircraft continuously descends) or a go-around (if the aircraft climbs between the two segments);
• Circle-to-land manoeuvres yield a different runway with the ILS modelling approach and with the taxi modelling approach.
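A very simplified sketch of the ILS modelling follows: it only keeps segments whose true track stays close to the runway bearing for long enough. The column names and thresholds are assumptions; a complete implementation (such as the one understood to be available in the traffic library) would also check the lateral distance to the extended runway centre line and a descending profile.

```python
import pandas as pd

def aligned_segments(df: pd.DataFrame, runway_bearing: float,
                     track_tol_deg: float = 5.0,
                     min_duration: str = "60s") -> list:
    """Keep the segments of a trajectory DataFrame (columns timestamp, track
    assumed) where the true track stays within track_tol_deg of the runway
    bearing for at least min_duration."""
    diff = (df.track - runway_bearing + 180.0) % 360.0 - 180.0
    mask = diff.abs() < track_tol_deg
    groups = (mask != mask.shift()).cumsum()
    segments = []
    for _, seg in df[mask].groupby(groups[mask]):
        if seg.timestamp.iloc[-1] - seg.timestamp.iloc[0] >= pd.Timedelta(min_duration):
            segments.append(seg)
    return segments
```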
Identification of the Flight Phase with Fuzzy Logic
Commercial flights can commonly be split into five different flight phases, namely take-off, climb, cruise, descent and landing. Determining a set of rules to properly identify flight phases can be arduous as, e.g., aircraft can cruise at different altitudes or levelling may occur during climb or descent. While these varying conditions cause predefined rules on altitude, speed and vertical rate to fail with uncommon conditions, the human brain excels at understanding general trends, thereby intuitively splitting flights into distinctive flight segments.
Fuzzy logic aims at modelling this way of reasoning: instead of selecting a set of fixed thresholds, degrees of truth model the target states. Degrees of truth are computed using a combination of different parameters represented by membership functions. For example, an aircraft is likely to be in cruise if it is flying at a high speed at a high altitude with no vertical speed. With fuzzy logic, membership functions are designed to model high speed, high altitude, no vertical speed and other possible membership states, as sketched below. Figure 5 illustrates the fuzzy logic phase identification implementation from OpenAP [10] on an example flight trajectory.
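The following toy example shows what such membership functions can look like; the shapes, break points and the product combination are ours for illustration and are not taken from the OpenAP implementation.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

def cruise_degree(altitude_ft, groundspeed_kt, vertical_rate_ftmin):
    """Illustrative degree of truth for the 'cruise' state: high altitude AND
    high speed AND near-zero vertical rate, combined with a product t-norm.
    The break points are invented for the example."""
    high_alt = trapezoid(altitude_ft, 20000, 30000, 45000, 50000)
    high_spd = trapezoid(groundspeed_kt, 300, 400, 550, 600)
    level = trapezoid(vertical_rate_ftmin, -700, -200, 200, 700)
    return high_alt * high_spd * level

# The phase assigned to a sample is then the state with the highest degree of
# truth among take-off, climb, cruise, descent and landing.
```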
Runway Changes
Figure 4 suggests that rule-based methods perform well at detecting runway changes during final approach, between parallel or different-bearing runways. In a previous contribution [4], we introduced a Functional Principal Component Analysis (FPCA)-based approach to detect runway changes and assess safety-related contributors to the risk of runway excursion. FPCA is a statistical tool well suited to performing dimensionality reduction through a linear projection operator. In this section, we compare the runway change situations detected:
• with a rule-based method: segments of trajectories aligned with a runway (ILS modelling) for at least one minute are computed for each flight; we select only trajectories yielding two different alignments without a go-around;
• with a statistical method: trajectories, limited to the time series of track angle values, are selected between zero and eight nautical miles before the runway threshold and scaled down to a constant number of samples (resampled) before computing their Karhunen-Loève decomposition [4]; the first component models the alignment phase with the runway, and we select trajectories whose second and third components (the variation modes associated with a runway change) are above the 90th percentile. A sketch of this selection is given below.
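The sketch substitutes an ordinary PCA on resampled track-angle rows for the functional decomposition and treats the percentile rule loosely, so it should be read as an illustration rather than the exact pipeline of the original study.

```python
import numpy as np
from sklearn.decomposition import PCA

def runway_change_candidates(track_matrix: np.ndarray, q: float = 90.0):
    """track_matrix: one row per final approach, each row the unwrapped track
    angle resampled to a fixed number of samples over the last 8 NM.
    The first component roughly captures the aligned approach; approaches whose
    second or third component scores exceed the q-th percentile are flagged as
    runway-change candidates."""
    scores = PCA(n_components=3).fit_transform(track_matrix)
    high_2 = np.abs(scores[:, 1]) > np.percentile(np.abs(scores[:, 1]), q)
    high_3 = np.abs(scores[:, 2]) > np.percentile(np.abs(scores[:, 2]), q)
    return high_2 | high_3
```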
In our dataset, the rule-based method detected 183 runway change situations, 44 of which were not detected by the statistical method. Figure 6 plots on the left-hand side the distribution of distances to the runway threshold at the end of the first segment aligned with one runway, for situations detected by the rule-based method but not by the statistical method:
• The blue part of the distribution corresponds to runway changes occurring beyond the eight nautical miles where we clipped trajectories for the statistical method. This is not a surprise, as the dataset for the statistical method was clipped within eight nautical miles from the runway thresholds.
• The red part of the distribution corresponds to a very late runway change. This suggests that we should probably include the fourth component of the PCA in the criterion.
Conversely, the statistical method detected 23 situations that were not detected by the rule-based method. Indeed, the rule-based method missed trajectories catching the ILS for a period of time shorter than the fixed threshold. In Figure 7, the trajectory could look like the aircraft crossed the right runway before catching the right ILS; however, the PCA detected the variation mode, and the runway change can be confirmed from the altitude profile as the aircraft adapted to a lower glide profile. Figure 7. The rule-based method did not match situations that had similar characteristics to regular runway changes. This late runway alignment on 32R, with a late catching of the glide path after the aircraft attempted an alignment on 32L, is detected by the statistical method.
Holding Patterns
A holding pattern is a manoeuvre designed to delay an aircraft already in flight. Holding patterns are commonly implemented in TMAs, or en route when the crew needs to run through checklists [8]. In TMAs, holding patterns are usually designed around a holding fix, and specific rules explain how to enter and exit the racetrack pattern. A standard holding pattern uses right-hand turns and takes approximately four minutes to complete (one minute for each 180-degree turn and two one-minute straight-ahead sections), but deviations are common.
It is very tempting to use this set of rules to implement a holding pattern detection mechanism, but trajectories flying similar-looking patterns would then be flagged as false positives. The first three trajectories in Figure 8 are easily mistaken for holding patterns because of similar features: long parallel returning tracks and self-intersecting segments. In particular, large tankers used to refuel military aircraft fly (in designated areas) a variant of the classical holding pattern with longer straight legs. On the other hand, variations of holding patterns are easily missed (false negatives) by rule-based methods, and the line between a holding pattern and other sequencing procedures is thin: the trajectory in a. has been subject to sequencing actions by ATC because of heavy traffic, but the oval shape is not visible; the trajectory in b. looks like a holding pattern was initiated, but the full shape is not flown; as for c., it stacks two holding patterns, and the full 270° right-hand turn out of the pattern should probably be labelled as part of it.
A data-driven approach to identifying such patterns is to project trajectories over a particular airspace into a lower-dimensional latent space and expect them to be isolated in certain clusters or to appear as outliers [5]. In an attempt to label parts of the trajectories as holding patterns, we considered sliding windows of n = 10 min iterating over trajectories every k = 2 min. Track angles were unwrapped, resampled (here with m = 30) and rescaled so that the first sample in each window has a track angle value of 0°. Principal Component Analysis (PCA) is then used to project all samples into a lower-dimensional space holding most of the variance. Figure 9 plots the resulting latent space, where holding patterns tend to cluster on the right-hand side of the scatter plot. Holding patterns can then be properly labelled; commonly mistaken trajectories stay out of this cluster. The in-depth details of a properly validated holding pattern detection mechanism will be published in the near future. Figure 9. The 19,480 trajectories are split into sliding windows, rescaled and resampled (30 samples per window). The resulting 72,353 samples are then projected with a PCA. Holding patterns cluster in the latent space: the red part of the trajectory is well identified as a holding pattern, whereas the green trajectory, in spite of a pattern easily mistaken by rule-based models, stays in the regular cluster.
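A minimal sketch of the window construction and projection follows, assuming one track-angle sample per second; the window and step sizes follow the text (n = 10 min, k = 2 min, m = 30), everything else is illustrative.

```python
# Sliding-window featurization of track angles, then a 2D PCA projection.
import numpy as np
from sklearn.decomposition import PCA

def window_features(track_deg, window_s=600, step_s=120, m=30):
    """Cut a track-angle series into windows, unwrap the angles, resample
    each window to m samples and rescale so every window starts at 0 deg."""
    unwrapped = np.degrees(np.unwrap(np.radians(track_deg)))
    windows = []
    for start in range(0, len(unwrapped) - window_s + 1, step_s):
        w = unwrapped[start:start + window_s]
        resampled = np.interp(np.linspace(0, window_s - 1, m),
                              np.arange(window_s), w)
        windows.append(resampled - resampled[0])
    return np.asarray(windows)

# Project the windows of a (synthetic) one-hour trajectory into a latent
# space; holding-pattern windows are expected to cluster apart.
rng = np.random.default_rng(1)
samples = window_features(rng.normal(180, 20, size=3600))
latent = PCA(n_components=2).fit_transform(samples)
```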
Conclusive Remarks
Large-scale analysis of trajectories requires flight phase, pattern and procedure identification so as to bring useful insight to situational awareness, monitoring and decision making systems. In the first sections of this paper, we present how very simple rules based on aircraft or call sign identification can be enough to extract relevant information.
External structured knowledge about flight plans, airspace structure, procedures and operational practice is essential to implement relevant rule-based detection mechanisms, whereas statistical machine-learning-based methods are of great help when access to such information is sparse. Section 4 presents a more in-depth analysis of the pros and cons of both approaches for two specific use cases: runway changes and holding patterns.
Rule-based methods are easy to describe. They produce robust and explainable results, but require many parameters to be adjusted in order to capture corner cases that only partially fit the implemented definition, even though such cases are widespread in the operational world. ML-based statistical methods, on the other hand, excel at extracting knowledge when operational input is missing or poorly specified; however, significant expertise is needed to interpret their results.
Based on these observations, the authors would recommend statistical methods when a detailed definition or knowledge about context and external constraints is missing. From the results and extracted information, rule-based methods can then be specified and implemented, and the performance of both approaches should be reviewed without bias.
"Computer Science"
] |
Exploiting Marine BD to Develop MLDB and Its Application to Ship Basic Planning Support
Recently, the global marine logistics industry has changed significantly because of the global movement of goods. As the amount of available data and the attention paid to large-scale data analysis grow exponentially, vast amounts of marine Big Data (BD) can be obtained. However, these BD collections are difficult to organize and frequently redundant, which is why a database is so important. If these BD are effectively utilized, great innovation can be achieved in the marine industry. In this study, we develop a marine logistics database to support ship basic planning in the future. The database consists of BD sets, i.e. port, ship, route, international trade, and ship operation information derived from Automatic Identification System (AIS) data. As a result, a relational database was developed. The effectiveness of the database is evaluated, and the data extracted from it for ship basic planning are discussed.
I. INTRODUCTION
Big Data (BD) is more than just a word, and the business benefit of utilizing BD is widely understood. It has been found that data-driven organizations perform 5% to 6% better per year [1]. BD is already playing a significant role in shaping the maritime industry's future. By analyzing the data, marine logistics players can drive improved efficiency and quality [2]. Moreover, it is widely known that BD can aid in improving forecasts and is well suited to demand forecasting and planning processes [3] [4].
A popular definition of BD is high-volume, high-velocity, and high-variety information assets requiring new forms of processing to enable enhanced decision making, insight discovery, and process optimization [5] [6]. The characteristics of BD are defined as the three Vs (volume, velocity, and variety), as shown in Figure 1, and can be further described as follows:
• Data volume is defined as the amount of data; many factors can contribute to the increase in data volume, which can reach hundreds of tera- or petabytes of information generated everywhere [7].
• Data velocity is defined as the rapid generation of data; acquiring, processing, and analyzing it requires fast mechanisms. Velocity emphasizes the real-time processing power of BD for enterprise needs [8].
• Data variety is defined as the range of data types in BD, which includes structured and unstructured data such as text, audio, video, sensor data, posts, log files, and many more [8].
Significant potential and high value are hidden in the huge volumes of data widely used in various areas, including the maritime field. Because of the global movement of goods, global marine logistics has changed significantly. Hence, it is essential to develop ships that meet the specifications demanded by the market. Simultaneously, marine logistics BD, e.g. port, ship, and AIS data, can be acquired more efficiently. If these data are effectively harnessed, great innovation might be obtained. Since BD are frequently redundant, a database should be developed to organize them.
By considering the data available from marine BD, the objective of this study is to develop a marine logistics database (MLDB) by exploiting marine BD and to apply it to ship basic planning support by extracting data from the MLDB. Bulk carriers operating from Australia to Japan, Korea, and China are taken as an example, and the effectiveness of the database is discussed.
A. BD in the marine field
Many studies have applied BD in the marine field. A high-accuracy block component measurement method for construction applications has been developed using the point cloud data of a 3D scanner [9]. Aoyama et al. [10] proposed new methods of extracting and utilizing monitoring data by introducing two different monitoring technologies and considering the reliability of each for advanced shipbuilding construction management. Perera et al. [11] analyzed large ship performance datasets to propose a model for evaluating ship performance under various seagoing conditions in the operations field.
Ando et al. [12] and Yoshida et al. [13] proposed a data collection platform called the Ship Information Management System. They utilized the collected data for many purposes (e.g., energy efficiency determinations, ship performance monitoring, and engine monitoring). MD Arifin et al. [14] [15] [16] developed a ship allocation model using marine logistics data that can forecast the demand for bulk carriers and examine the adequate principal particulars of ships for cargo transportation. Based on [12], BD application areas in the marine field are shown in Table 1. Limited studies have employed BD to improve ship construction, operation, and performance; few have examined the use of BD for basic ship planning [14] [16] [17]. Therefore, the aim of this study is to develop a ship allocation model for ship basic planning support by using marine BD.
B. The basic concept of MLDB
The marine logistics database is developed by integrating marine BD, i.e. operation information from AIS data and ship, port, route, and international trading information, into a relational database. The illustration of the MLDB and its application for ship basic planning is shown in Figure 2. To extract valuable information from the MLDB, the following requirements should be considered.
• The data structure of the MLDB should be defined so as to relate the data (operation, ship, port, route, and trade data) integrated into the relational database.
• Error cleaning is required to remove erroneous records and ensure the quality of the data.
• The cargo volume should be estimated to confirm the effectiveness of the database in this study.
C. Data sources of the MLDB
The MLDB is developed by considering the following data:
• Automatic Identification System (AIS) data: indicated speed, indicated draft, ship position, arrival and departure dates, and arrival and departure ports, from MINT [18].
• Port data collected from Sea-web Port, e.g., port name, longitude, latitude, port dimensions, and cargo handling [19].
• Ship data collected from Sea-web Ship, e.g., ship name, DWT, IMO number, ship classification, principal dimensions, operator, shipbuilder, ship status, and build year [20].
• Route data collected from Sea-web Port and IHS-Fairplay, e.g., departure, arrival, route, and route distances between ports [19] [21].
• Trade data collected from UN Comtrade, e.g., commodity trade data, trade periods, trade value, commodity code, trade quantity, reporter, and partner [22].
D. The structure of the MLDB
The structure of the MLDB in this study is shown in Figure 3. As shown in the figure, all input data are organized and connected as a relational database. The relational concept faces challenges in handling BD and in providing the horizontal scalability, availability, and performance required by BD applications.
In this study, the effort of integration and relation among the data ensures that valuable knowledge can be found in vast amounts of BD and easily extracted. For example, by integrating ship data and route data and utilizing the data extracted from them, e.g. by considering speeds and distances, the actual speed of ships can be identified. Moreover, by integrating ship and port data with operation data, basic information on the ships' operational state can be analyzed.
A relational database effectively organizes data in tables (or relations). The relationships created between the tables enable a relational database to efficiently store vast amounts of data and to retrieve selected data effectively. The relational database design process is described in the following steps: • Step 1. Define the data structure as shown in
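A minimal sketch of how such a relational structure could be defined is shown below; the table and column names are hypothetical, chosen only to illustrate the relations among ship, port and operation data, not the actual schema of the MLDB.

```python
# Illustrative relational schema and join, using SQLite for brevity.
import sqlite3

conn = sqlite3.connect("mldb_sketch.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS ships (
    imo_number   INTEGER PRIMARY KEY,
    ship_name    TEXT,
    dwt_ton      REAL,
    build_year   INTEGER
);
CREATE TABLE IF NOT EXISTS ports (
    port_id      INTEGER PRIMARY KEY,
    port_name    TEXT,
    latitude     REAL,
    longitude    REAL
);
CREATE TABLE IF NOT EXISTS operations (   -- derived from AIS data
    operation_id   INTEGER PRIMARY KEY,
    imo_number     INTEGER REFERENCES ships(imo_number),
    departure_port INTEGER REFERENCES ports(port_id),
    arrival_port   INTEGER REFERENCES ports(port_id),
    sailing_draft  REAL,
    departure_date TEXT,
    arrival_date   TEXT
);
""")

# The kind of join described in the text: combine ship, port and operation
# data to analyse the operational state of each ship.
rows = conn.execute("""
SELECT s.ship_name, p.port_name, o.arrival_date
FROM operations o
JOIN ships s ON s.imo_number = o.imo_number
JOIN ports p ON p.port_id = o.arrival_port
""").fetchall()
```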
E. Error cleaning
In a high-density shipping area where thousands of ships may transmit AIS messages, it is challenging for the AIS system to collect, process, and download all the messages efficiently. As a result, many messages are lost, and erroneous records sometimes occur. Typical errors in the AIS data are as follows:
• The draft value (d) is zero (0).
• Null information or blank spaces.
Therefore, to ensure the quality of the data, the duplicate and NULL records should be eliminated, and the draft data should be evaluated.
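A minimal sketch of this cleaning step follows, assuming the AIS records are loaded into a pandas DataFrame; the column names are illustrative only.

```python
# Drop duplicates, NULL records and zero-draft records from AIS data.
import pandas as pd

def clean_ais(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()        # remove duplicated messages
    df = df.dropna()                 # remove NULL / blank records
    df = df[df["draft"] > 0]         # remove zero-draft records
    return df

raw = pd.DataFrame({
    "mmsi":  [1, 1, 2, 3],
    "draft": [10.5, 10.5, 0.0, None],
})
print(clean_ais(raw))   # only the first record survives
```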
F. Generating cargo information
Cargo information on an operating ship is essential for forecasting demand and understanding ship usage. However, this information is unavailable in AIS data. Therefore, the cargo type and volume of each operation are estimated. The cargo type is selected from three types: iron ore, coal, and grain & others.
• Checking the data reliability. Confirmation of the data's reliability is required for a good cargo volume estimation. In our study, data reliability was evaluated by using Equation (1), where d_i is the draft rate, d_sail(i) (m) is the sailing draft, and d_max(i) (m) is the maximum draft of the ship.
• Checking the cargo type by using port information. The cargo of each operation was estimated by analyzing the cargo type from the port data. As shown in Table 2, the cargo type was identified by checking the combination of cargoes handled at the departure and arrival ports. For example, for an operation from departure Port A to arrival Port D, the only common cargo is coal, so the cargo type was estimated as coal. However, for an operation from departure Port B to arrival Port D, the common cargoes are iron ore and coal. In such a case, the cargo type was defined as multi-cargo and has to be estimated by using the ship size.
• Checking the cargo type by using ship size information. If two or more common cargo types exist in the port data, the cargo type was estimated using the ship size. The distribution of ship sizes from Australia to Japan, Korea, and China for a fixed cargo is shown in Figure 4 to Figure 6. Since ship size and cargo type are closely related, the cargo of the remaining operations could be estimated.
• Cargo Volume Estimation
The cargo volume was estimated from the maximum draft and deadweight of the ship and the sailing draft extracted from the AIS data, by using Equation (2), where V_i (ton) is the cargo volume, DWT_i is the deadweight, and d_i is the draft rate.
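The sketch below illustrates the estimation flow; it assumes Equation (1) is the ratio of sailing draft to maximum draft and Equation (2) the product of deadweight and draft rate, and the port-cargo table and DWT threshold used to split the multi-cargo case are illustrative assumptions.

```python
# Illustrative cargo type and volume estimation from port and ship data.
PORT_CARGO = {
    "Port A": {"coal"},
    "Port B": {"iron ore", "coal"},
    "Port D": {"iron ore", "coal"},
}

def draft_rate(d_sail_m: float, d_max_m: float) -> float:
    return d_sail_m / d_max_m                 # assumed form of Equation (1)

def cargo_volume(dwt_ton: float, d_rate: float) -> float:
    return dwt_ton * d_rate                   # assumed form of Equation (2)

def cargo_type(departure: str, arrival: str, dwt_ton: float) -> str:
    common = PORT_CARGO[departure] & PORT_CARGO[arrival]
    if len(common) == 1:
        return next(iter(common))
    # Multi-cargo case: fall back on ship size (threshold is an assumption).
    return "iron ore" if dwt_ton > 150_000 else "coal"

print(cargo_type("Port B", "Port D", 180_000))        # iron ore
print(cargo_volume(180_000, draft_rate(17.0, 18.0)))  # ~170,000 t
```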
A. Evaluation of the cargo estimation
To verify the cargo estimation result in Section 2.6.4, we evaluate the estimation results by comparing them with the actual trade values from UN Comtrade, using bulk carriers operating from Australia to Japan, Korea, and China in 2014. The estimation covered around 93% of the actual trade value for Australia-Japan, 93% for Australia-Korea, and 88% for Australia-China, as shown in
B. Extracted data for ship basic planning
The relational database structure of the MLDB allows a user to obtain valuable knowledge. The operation information and other information, e.g. LOA (m), DWT (ton), design speed (knot), B (m), and d (m), can easily be extracted. By harnessing the data extracted from the MLDB, the characteristics of the bulk carriers on the Australia-Japan and Brazil-Japan routes from 2014/01/01 to 2014/12/31 could be identified, as shown in Table 3. Moreover, the critical information for predicting future ship demand can be obtained by using a ship allocation model, which is developed using the information from the MLDB and will support ship basic planning in future applications. The ship allocation model predicts the ship allocation when the user inputs the trade volume, economic situation, etc. Therefore, when the user inputs a future scenario, e.g., the state of the world economy, fuel price fluctuations, or canal and port expansions, a new ship allocation is generated so that effective ship specifications can be simulated and estimated. The future application architecture exploiting the data extracted from the MLDB is highlighted in Figure 10.
IV. CONCLUSION
In this study, an MLDB using marine logistics BD is developed. The integration of BD and the error cleaning that are essential for the MLDB are executed. Estimation methods for cargo type and volume are proposed, and valuable information important for ship basic planning support is extracted. The proposed methods are validated by comparing the estimation results with the international trade data from UN Comtrade. The critical data for basic planning support are extracted from the MLDB and discussed. The architecture of a future application extracting data from the MLDB is illustrated.
"Computer Science"
] |
A Gaussian Kernel-Based Spatiotemporal Fusion Model for Agricultural Remote Sensing Monitoring
Time series normalized difference vegetation index (NDVI) is the primary data for agricultural remote sensing monitoring. Due to the tradeoff between a single sensor's spatial and temporal resolutions and the impacts of cloud coverage, the time series NDVI data cannot serve well for precision agriculture. In this study, a Gaussian kernel-based spatiotemporal fusion model (GKSFM) was developed to fuse high-resolution NDVI (Landsat) and low-resolution NDVI (MODIS) to produce a daily NDVI product at a 30-m spatial resolution. Considering that the NDVI curve of crop in each growing season can be characterized by Gaussian function, GKSFM used the Gaussian kernel to fit the nonlinear relationship between the high-resolution NDVI and the low-resolution NDVI, to obtain a more reasonable temporal increment. The experimental results show that GKSFM outperformed the comparative models in different proportions of cropland/noncropland and different crop phenology. In addition, GKSFM was also applied for crop mapping of Mishan County by fusing the NDVI images during the crop growing season. This study demonstrates that the accuracy of the proposed method can be improved in the midseason of crops.
I. INTRODUCTION
TIME series normalized difference vegetation index (NDVI), which can effectively reflect vegetation cover, crop growth [1], [2], and crop health status [3], [4], has been widely used for crop field extraction [5], [6], crop growth monitoring [7], and yield prediction [8]-[12]. NDVI is usually derived from optical multispectral remote sensing data. However, there is a prominent tradeoff between the spatial resolution and temporal resolution of a single sensor. For example, many satellite sensors have a high temporal resolution of one day or several days, e.g., the moderate resolution imaging spectroradiometer (MODIS), but its spatial resolution ranges from 250 m to several kilometers, which is coarse and cannot meet the observation requirements of precision agricultural monitoring. On the contrary, sensors such as Landsat TM/ETM+/OLI and Sentinel 2 MSI have a high spatial resolution of 30 or 10 m, providing more spatial details, while each single sensor mentioned above has a revisit cycle of ten or more days. What is worse, optical satellite images are inevitably influenced by severe cloud coverage, which further degrades the temporal resolution and quality of the NDVI data. A previous study shows that around 35% of the Landsat and Sentinel 2A/B images are covered by cloud [13]. As a result, NDVI with high spatial and temporal resolutions is often unavailable, which limits the implementation of large-scale and full-coverage agricultural monitoring. At this point, spatiotemporal fusion is one of the effective techniques to overcome these obstacles, by integrating the high spatial resolution and high temporal resolution of different sensors feasibly and at low cost [12], [14], [15]. In recent years, many spatiotemporal fusion methods have been developed under different assumptions and for different application purposes [16]-[21]. The spatial and temporal adaptive reflectance fusion model (STARFM) [22] was the first proposed and is the most widely used spatiotemporal fusion method [23]. It utilizes a sliding window to perform temporal, spatial, and spectral weighting operations on similar pixels in the neighborhood to avoid the uncertainty caused by single-pixel prediction. STARFM performs well in homogeneous regions with stable land cover during the period of prediction. To increase the accuracy in heterogeneous regions, the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) was proposed by Zhu et al. [14], which enhances the prediction by adding a pair of images and adopting a linear conversion coefficient to characterize the relationship between the high-resolution and low-resolution images. That is, ESTARFM is based on two pairs of high-resolution and low-resolution images obtained on two dates to predict the target high-resolution image. It can reduce the system biases of different sensors and preserve more spatial details. Xie et al. [24] improved STARFM with the aid of an unmixing method, which performs better in heterogeneous regions but is still sensitive to changes in land cover [12]. Qing et al. [25] introduced the idea of nonlocal filtering to predict the target image more accurately and robustly, especially for heterogeneous regions and temporally dynamic regions, although it is based on a linear assumption, which is not accurate over a long period. Zhu et al. [15] integrated STARFM, spatial interpolation and the unmixing method into one framework, which performs better in predicting abrupt land cover changes compared with other methods.
However, the prediction accuracy largely depends on the degree of land cover change between the two dates of the input images. Luo et al. [26] developed an efficient method named STAIR, which includes filling and fusion steps to generate a daily 30 m image. In addition to fusion-based methods, time series-based methods, such as harmonic analysis of time series (HANTS), have been used to generate a continuous time series NDVI based on high spatial resolution NDVI alone [7]. This kind of method can generate time series NDVI flexibly since it does not need information from other sensors. However, such methods are limited in areas with long periods of cloud coverage.
To sum up, most of the existing methods are based on the assumption that there is a linear relationship between the low-resolution and high-resolution image pairs [27], which ignores the nonlinear changes of NDVI as the crop progresses. In this study, a Gaussian kernel-based spatiotemporal fusion model (GKSFM) was proposed to fuse high-resolution NDVI (Landsat) and low-resolution NDVI (MODIS) during the crop growing season to produce a daily NDVI product at a 30-m spatial resolution. This study verified the performance of the GKSFM for different phenological stages and different proportions of cropland/noncropland. The proposed method was also applied to produce a time series NDVI with a high spatiotemporal resolution for crop mapping in Mishan County of Heilongjiang, China, to validate its effectiveness in practical application.
II. GAUSSIAN KERNEL-BASED SPATIOTEMPORAL FUSION MODEL
There are four main steps for the implementation of GKSFM as follows (see Fig. 1).
1) Reconstruct the low-resolution time series NDVI and extract the linear parameters.
2) Search pixels that are similar to the central pixel in a local window by two high-resolution NDVI images acquired at different dates.
3) Calculate the weights and parameters of the nonlinear conversion of cropland pixels and the linear conversion of noncropland pixels.
4) Estimate the NDVI value of the center pixel on the prediction date for its cropland and noncropland.
A. Basic Principle
The NDVI of cropland changes gradually with crop growth. Existing literature and tools [28] report functional representations of the NDVI curve of the crop growing season, e.g., the Gaussian function [29], the asymmetric Gaussian function [30], the double logistic method [31], etc. Considering the tradeoff between the number of parameters that each function needs to solve and the number of input image pairs, the Gaussian function is used to characterize the temporal changes of NDVI over the crop growing season in this study, and the fusion model GKSFM is proposed.
Due to the discrepancies of the sensor systems, such as orbit pass, viewing angle, and spatial scale [32], the NDVI curves from different satellite sensors are inconsistent, mainly reflected in the inconsistency of the mean and variance of the Gaussian functions. Therefore, this study adopts the Gaussian function to represent the two kinds of NDVI data with different spatial resolutions. The high-resolution and low-resolution NDVI data can be formulated as in (1) and (2), where F represents the high-resolution NDVI data; C represents the low-resolution NDVI data; i and j are the coordinates of the image; t_k is the time; M_C, N_C, M_F, and N_F represent the linear parameters of the Gaussian function of the low-resolution NDVI data and the high-resolution NDVI data, respectively; and g(i, j, t_k, θ_F) and g(i, j, t_k, θ_C) are the Gaussian kernel functions, with the mean and variance reflecting the corresponding crop phenological changes. The kernels g(i, j, t_k, θ_F) and g(i, j, t_k, θ_C) are given in (3) and (4), where a_F and a_C are the means of the Gaussian functions, and b_F and b_C are the variances of the Gaussian functions. For noncropland pixels, we assume that the NDVI is stable with approximately linear changes over the prediction date. The Gaussian kernel functions g(i, j, t_k, θ_F) and g(i, j, t_k, θ_C) can be regarded as the same constant because the intraseason changes in NDVI of noncropland pixels, such as buildings and water bodies, are relatively small. From (1) and (2), the relationship of noncropland pixels between the high-resolution and low-resolution NDVI data can be deduced as in (5): the relationship between the high-resolution and low-resolution NDVI data of noncropland pixels is linear. Therefore, we characterize the NDVI of cropland pixels by (1) and (2), and represent the NDVI of noncropland pixels by (5).
B. Spatiotemporal Fusion for Cropland
According to the growth stage of crops, the time variable t_k in (3) and (4) can be eliminated through a logarithmic transformation, giving (6). We set α = b_C/b_F and β = (a_C − a_F)/b_F; then (6) can be written as (7). The nonlinear relationship of cropland pixels between the high-resolution and low-resolution NDVI data is thus transformed into a linear relationship through the logarithmic transformation. The coefficients α and β can be obtained by regressing two pairs of pixels at t_m and t_n from the high-resolution and the low-resolution NDVI data. To make the regression coefficients more robust, a sliding window is used to select similar pixels within the window for regression [14]. The information of similar pixels is then integrated into the high-resolution NDVI calculation as described in (8), where N is the number of similar pixels; W_i is the weight of the ith similar pixel; t_k (k = m or n) is the time of the input image; t_p is the time of the predicted image; ω is the size of the window; and (x_ω/2, y_ω/2) is the center pixel of the window. F(x_i, y_i, t_p) in (8) is formulated by (9). Equation (8) means that the high-resolution NDVI of the predicted date (t_p) equals the high-resolution NDVI obtained at one time (t_n or t_m) added to the weighted sum of the changes of all similar pixels within the window in the low-resolution NDVI. According to (8), the high-resolution NDVI at either t_n or t_m can be used as the NDVI at a base date to estimate the high-resolution NDVI at t_p; the results are marked as F_m(x_ω/2, y_ω/2, t_p) and F_n(x_ω/2, y_ω/2, t_p), respectively. In a heterogeneous region, local land cover change may lead to significant uncertainty. A reliable predicted result can be obtained by combining the two predicted results by weighting, where T_m and T_n are the weights of the two predicted results. Equation (19), which follows, is used to calculate the time weights T_m and T_n.
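A minimal sketch of the prediction step stated in words after Equation (8) is given below: the high-resolution NDVI at the prediction date is the high-resolution NDVI at a base date plus the weighted sum of the low-resolution NDVI changes of the similar pixels in the window. The uniform placeholder weights and the omission of the Gaussian-kernel conversion of the increments are simplifying assumptions, not the full GKSFM formulation.

```python
# Simplified weighted-increment prediction for one centre pixel.
import numpy as np

def predict_center_pixel(f_base, c_base_similar, c_pred_similar, weights=None):
    """f_base: high-res NDVI of the centre pixel at the base date.
    c_base_similar / c_pred_similar: low-res NDVI of the N similar pixels at
    the base date and at the prediction date."""
    increments = np.asarray(c_pred_similar) - np.asarray(c_base_similar)
    if weights is None:
        weights = np.full(len(increments), 1.0 / len(increments))
    return f_base + np.sum(weights * increments)

# Example: the low-res NDVI of the similar pixels rose by ~0.2 between dates.
print(predict_center_pixel(0.45, [0.30, 0.32, 0.31], [0.50, 0.53, 0.52]))
```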
C. Spatiotemporal Fusion for Noncropland
The low-resolution and high-resolution NDVI of noncropland pixels can be written as in (5). We assume that the land cover and other conditions do not change at the times t_m, t_n, and t_p, thus the relationship between the high-resolution and low-resolution NDVI is stable and linear. For convenience, we set a constant coefficient a. Furthermore, the relationship of the NDVI in the two phases can be written as in (11) and (12), where t_k (t_m or t_n) is the date of the input image and t_p is the predicted date. Thus, (11) and (12) can be combined into (13). The coefficient a can be obtained by regressing a pixel at t_m and t_n. Theoretically, the high-resolution NDVI at t_p can be predicted from the high-resolution NDVI at t_m or t_n. However, using only a single pair of pixels for fusion would cause part of the spatiotemporal detail to be lost and introduce great uncertainty. For noncropland pixels, the sliding window is therefore used again to search neighboring pixels with similar NDVI values for a robust regression. Then, (13) with the neighboring regression can be written as (14), where ω is the size of the sliding window and W_i is the weight of the ith pixel in the sliding window.
D. Weight and Linear Parameter Calculation
In order to compare the prediction effect of the Gaussian kernel fairly, the same weighting method as in ESTARFM was used. First, we need to select similar pixels in the sliding window. The selection criterion for similar pixels involves σ, the variance of the whole image, and z, the number of land cover types. When a pixel of t_m and t_n in the sliding window meets this condition, it is marked as a similar pixel. The weight of a similar pixel determines its contribution to the predicted result, and it is determined by the similarity between the NDVI of the similar pixel and the central pixel, and by the distance between them. The weight W_i is defined in terms of d_i, the Euclidean distance between the similar pixel and the central pixel, and R_i, the correlation coefficient between the low-resolution and high-resolution pixels for the ith similar pixel: the smaller d_i is and the larger R_i is, the larger the weight is. Finally, the weight is normalized according to (16). The temporal weight can be calculated according to the change magnitude detected by the low-resolution NDVI between the date t_k (k = m or n) and the prediction date t_p, as given in (19).
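The sketch below illustrates the weighting idea only; the inverse product of distance and (1 − R_i) is an assumed ESTARFM-style combination, not a reproduction of the exact expressions in (16)-(18).

```python
# Illustrative similar-pixel weights from correlation and distance.
import numpy as np

def similar_pixel_weights(r, d, eps=1e-6):
    """r: correlation coefficients R_i; d: Euclidean distances d_i.
    Higher correlation and shorter distance give a larger weight."""
    score = 1.0 / ((1.0 - np.asarray(r) + eps) * (np.asarray(d) + eps))
    return score / score.sum()   # normalise so the weights sum to 1

print(similar_pixel_weights(r=[0.9, 0.5, 0.8], d=[1.0, 2.0, 5.0]))
```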
There are four unknown variables in (9), i.e., M_C, M_F, N_C, and N_F, which need additional information. Due to the high temporal resolution of the low-resolution NDVI, least squares fitting was used on the low-resolution time series NDVI reconstructed by maximum value composition (MVC) [33] and harmonic analysis [34] to eliminate noise according to (2), thus M_C and N_C can be obtained. According to (5), all stable noncropland pixels (e.g., buildings) at t_m and t_n are regressed, and from the regression coefficients, M_F and N_F can be estimated.
A. Test Sites
There are three test sites in China, i.e., the southern Junggar Basin [see Fig. 2(a)], the central Jianghan Plain, and Mishan County, summarized in Table I. The first study area is mainly utilized to test the effectiveness of GKSFM for crops in different growth stages; the second study area is primarily for the validation in different proportions of cropland/noncropland; and the third study area is for crop mapping in practical applications. The details are stated in the following.
1) The first study area is located in the southern Junggar Basin (44°14′29″N, 87°8′9″E), with an area of 30 × 30 km. This region has a temperate continental climate, with rare rainfall, a large temperature difference between day and night, long and cold winters, and short and hot summers. This area mainly consists of croplands, with the major crop being cotton, which accounts for 76% of the total area, as shown in Fig. 3(a). Cotton is sown in mid April and harvested in October; it lasts about 185-200 days. This area also includes some other land covers, such as grassland and artificial surfaces.
2) The second study area is located in the center of the Jianghan Plain. We select three subregions that have different land cover characteristics, with each region having an area of 9 × 9 km. These three regions are the cropland-dominated area (30°40′45″N, 112°41′18″E), the noncropland-dominated area (30°38′16″N, 113°8′13″E), and the cropland/noncropland equivalent area (33°19′29″N, 112°24′40″E), as shown in Fig. 2(b)-(d) and Fig. 3, with details in Table I.
B. Datasets and Data Preprocessing
In this study, three different kinds of data were used, including Landsat 8 OLI, the MODIS surface reflectance product (MOD09GQ), and GlobeLand30. The Landsat 8 OLI image has nine bands, with a 16-day temporal frequency and a spatial resolution of 30 m for the multispectral bands, and was obtained from the U.S. Geological Survey (USGS). MOD09GQ is a daily reflectance product, which can be obtained from the National Aeronautics and Space Administration (NASA); it has red and near-infrared bands, with a spatial resolution of 250 m. We selected the available Landsat OLI images of good quality under cloudless conditions (cloud cover less than 5%), and the MOD09GQ images from early April to the end of October, as the data source for all study areas in 2017 (see Table I).
The MODIS images were reprojected and resampled to have the same coordinate system and spatial resolution as the Landsat 8 images. The NDVI was calculated from the Landsat images and the MODIS images. The Landsat NDVI was used as the high-resolution NDVI, and the MODIS NDVI was used as the low-resolution NDVI. GlobeLand30 is the first global comprehensive high-resolution land cover dataset [36], with a spatial resolution of 30 m and ten different land cover types, including cultivated land, water, artificial surfaces, and so on, which can be downloaded from its website. In this study, we took the GlobeLand30 data as the basis and defined the cultivated land as the cropland area, with the other regions as noncropland areas. For convenience, we refer to this cropland/noncropland map as the cropland data.
C. Evaluation Criteria
1) Evaluation of Different Phenological Periods: NDVI images of Landsat and MODIS in the Junggar Basin, with a total of nine pairs, were selected to test GKSFM. Meanwhile, STAIR, STARFM, and ESTARFM were utilized as the comparative algorithms. The root-mean-square error (RMSE) was used as the evaluation indicator, calculated as RMSE = sqrt((1/n) Σ (NDVI_i − NDVI′_i)²), where NDVI_i represents the original Landsat NDVI, NDVI′_i is the predicted NDVI value, and n is the total number of pixels. The time series NDVI of Landsat and MODIS are shown in Table I, and the real images of the predicted dates were used as the verification data, with a total of seven predicted images obtained (see Table II). Besides, because STARFM requires only one pair of high-low resolution NDVI images as input, for the sake of fairness, STARFM in this experiment also uses two pairs of images, and the same weighting method as ESTARFM is adopted to obtain the predicted NDVI image.
2) Evaluation of Different Proportions of Cropland/Noncropland: NDVI images of Landsat and MODIS in the central Jianghan Plain, with a total of three pairs, were selected to test GKSFM. To verify the algorithm's performance on different proportions of cropland/noncropland, three typical proportions with apparent differences in the central Jianghan Plain were selected: the cropland-dominated region, the noncropland-dominated region, and the cropland/noncropland-equivalent proportion. Quantitative evaluation, regression analysis, and visual analysis were utilized to compare the algorithms' performance.
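A minimal sketch of the RMSE indicator defined above follows, comparing a predicted NDVI image with the original Landsat NDVI; the sample values are placeholders.

```python
# RMSE between original and predicted NDVI values.
import numpy as np

def rmse(ndvi_true, ndvi_pred):
    ndvi_true = np.asarray(ndvi_true, dtype=float).ravel()
    ndvi_pred = np.asarray(ndvi_pred, dtype=float).ravel()
    return np.sqrt(np.mean((ndvi_true - ndvi_pred) ** 2))

print(rmse([0.60, 0.72, 0.55], [0.58, 0.75, 0.50]))  # ~0.036
```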
3) Evaluation by Crop Mapping: NDVI images of Landsat and MODIS in Mishan County, with a total of three pairs, were selected to fuse a continuous time series NDVI for crop mapping. STARFM, ESTARFM, STAIR, and GKSFM were used for the fusion to obtain continuous time series NDVI. HANTS was also used for generating a continuous time series NDVI; since there are only three Landsat NDVI images, HANTS can only use one harmonic. These methods predicted the time series NDVI images over the whole phenological period at a 30-m resolution, from DOY 60 to DOY 300. The temporal resolution of the fused NDVI is daily; thus, there are 240 days of NDVI in the growing season. Cloud noise in the original MODIS NDVI will be introduced into the fused result, so time series reconstruction of the fused NDVI should be conducted. The MVC was adopted to eliminate low-value noise first, and then the Savitzky-Golay (S-G) filter was applied. To improve the crop mapping accuracy, the fused NDVI was first masked by an existing cropland product [37]. Because most corn and soybeans are cultivated at the foot of the mountains, the planting structure is scattered; therefore, it is necessary to use the existing cropland product as a priori constraint on the crop classification result. Many classifiers have been reported in previous literature [38]-[40]. In this study, the SVM classifier was utilized, with two-thirds of the samples used for training and the rest for testing. Meanwhile, multitemporal Landsat NDVI images were also used for crop classification. By comparing the classification accuracies, the degree to which the missing phenological information is reconstructed can be evaluated.
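The sketch below illustrates this smoothing and classification pipeline on synthetic data; the S-G window length, polynomial order, SVM kernel and the random labels are illustrative assumptions, not the values or data used in the study.

```python
# S-G smoothing of daily NDVI time series followed by SVM classification.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ndvi = rng.random((300, 240))            # 300 pixels x 240 days (DOY 60-300)
labels = rng.integers(0, 3, size=300)    # 0 = rice, 1 = soybean, 2 = corn

smoothed = savgol_filter(ndvi, window_length=31, polyorder=3, axis=1)

# Two thirds of the samples for training, the rest for testing, as in the text.
x_train, x_test, y_train, y_test = train_test_split(
    smoothed, labels, train_size=2 / 3, random_state=0)
clf = SVC(kernel="rbf").fit(x_train, y_train)
print("overall accuracy:", clf.score(x_test, y_test))
```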
A. Performance for Crop Phenology
By analyzing the fusion accuracy for cropland in Table II, the RMSEs of GKSFM on the four dates DOY 123, 155, 219, and 235 are close to those of ESTARFM, while in the midseason of crops, i.e., DOY 171, 187 and 203, its RMSEs are significantly lower than those of ESTARFM, STARFM, and STAIR. That is, the performance of GKSFM is outstanding in the midseason of crops. For example, on DOY 203, the RMSE of GKSFM in cropland is 0.1649, which is the lowest among ESTARFM (RMSE of 0.1711), STARFM (RMSE of 0.2901), and STAIR (RMSE of 0.3128), indicating that the accuracy of GKSFM in the cropland area is higher than that of the other three algorithms.
For noncropland, the RMSE of GKSFM is close to that of ESTARFM and is generally lower than that of STARFM and STAIR. RMSEs of all four algorithms increase first and then decrease. This is because the pixel of MODIS was usually mixed, which transmitted the phenological information of nearby vegetation to the predicted pixel and led to a certain deviation of the predicted NDVI from the real situation. For example, the RMSEs of GKSFM and ESTARFM for noncropland in DOY 187 are higher than that of DOY 123.
The overall fusion accuracy is a combination of both accuracies from cropland and noncropland areas. Because GKSFM has a significant improvement in cropland, and is close to ESTARFM in noncropland, the overall accuracy will be higher than the other linear methods, especially in the midseason of crops. Moreover, since cropland in this study area accounts for 76%, GKSFM has a significant improvement in overall fusion accuracy due to the accuracy improvement in cropland. Fig. 4 shows the mean curve of time series NDVI of MODIS for cropland and the error bar of Landsat NDVI for cropland. The basic assumption of ESTARFM, STARFM, and STAIR is that there is a linear relationship between the high-resolution and low-resolution data. Because the change rate of cropland NDVI curve during the rising and falling phases is relatively small, the relationship between the high-resolution and low-resolution NDVI can be approximately regarded as linear (see Fig. 4). However, when the NDVI curve approaches the peak (midseason), the change rate of both high-resolution and low-resolution NDVI data is drastic, and the rate turns from positive to negative. In Fig. 4, the average NDVI change rates of Landsat cropland in the first three dates and the last three dates were relatively stable, which resulted in the performance of GKSFM being similar to that of ESTARFM, but did not affect the fusion accuracy of GKSFM for the whole growing seasons of crops. For example, RMSEs of GKSFM in DOY 123 and 235 for cropland are slightly different from that of ESTARFM, but the difference is no more than 0.003. Meanwhile, the accuracy of GKSFM in DOY 187 for cropland has a significant improvement, since the change rate of both high-resolution and low-resolution NDVI data is drastic during the midseason.
According to Fig. 4, DOY 187 is close to the peak of the NDVI curve. To facilitate the analysis, we selected a local cropland area and marked it with a red box (see Fig. 5). There is a significant difference among Fig. 5(b)-(e): especially in the red circle, GKSFM [see Fig. 5(e)] performed better than the others, and the absolute value of the difference between the real NDVI and the NDVI predicted by GKSFM tends to zero. The fused results near the peak of the NDVI curve are quite different from each other; thus, the advantage of the nonlinear method (e.g., GKSFM) can be reflected. The time interval from DOY 91 to DOY 251 is the period from the planting to the harvesting of crops in this region. The NDVI curve of cropland pixels generally shows a trend of increasing first and then decreasing, which corresponds to a growth cycle of crops. Therefore, the RMSEs of cropland pixels fused by ESTARFM, STARFM, and STAIR show an increasing-then-decreasing trend. Because the GKSFM algorithm assumes a nonlinear change of NDVI during the crop growing season and characterizes crop phenology more reasonably, its fusion accuracy for cropland is higher than that of ESTARFM, STARFM, and STAIR.
Because the theoretical basis of GKSFM for fusing noncropland is similar to ESTARFM, the fusion accuracy of GKSFM is close to ESTARFM and has been dramatically improved compared with STARFM. Since most of the MODIS pixels labeled as noncropland are mixed pixels, which contain information of many adjacent cropland pixels, the NDVI value of noncropland pixels will be higher than the expected value. Therefore, the noncropland pixels will transmit part phenological information, causing certain fluctuations of RMSE over time and certain errors.
B. Performance for Different Proportions of Cropland/Noncropland
To test the performance of GKSFM in different proportions of cropland/noncropland, the experiment selected three typical proportions with apparent differences in central Jianghan plain, which are cropland-dominant area, noncropland-dominant area, and cropland/noncropland equivalent area, respectively. This study used the three proportions of cropland/noncropland image pairs on DOY 118 and 230 as reference image pairs, and then, inputted MODIS NDVI on DOY 198 to predict Landsat NDVI on DOY 198. The absolute image of the difference between the predicted NDVI and the real NDVI is shown in Fig. 6.
In the cropland-dominated region, most of the land cover types are crops, such as rice. The predicted date is DOY 198, which is near the peak of the NDVI curve. In Fig. 7(a)-(d), the scatter plots of GKSFM fusion results and real image are more concentrated in the diagonal region, and R 2 (0.711) is also the highest, which indicates that the predicted result of GKSFM is closer to the real images. RMSEs of GKSFM for both cropland and the whole image are the lowest, as shown in Table III with values of 0.1073 and 0.1053, respectively. The prediction accuracy of GKSFM for noncropland pixels is very close to that of ESTARFM, only 0.8% lower.
In the noncropland-dominated region, most areas are noncropland pixels (buildings, roads, etc.), and cropland pixels only account for a small part. As there is no phenological change for buildings and roads, the NDVI values of such types do not change much with time. Therefore, the fusion accuracies of all methods are high. Fig. 7(e)-(h) shows the scatter plots between the predicted results of STAIR, STARFM, ESTARFM, and GKSFM and the real image, in which the scatter plot of GKSFM is more concentrated in the diagonal area, and its R2 (0.807) reaches the optimal level. GKSFM had the lowest RMSEs for cropland pixels and the whole image, as shown in Table III, which are 0.1154 and 0.1097, respectively. Its RMSE (0.0916) for noncropland pixels is close to that of STARFM (0.0915).
In the cropland/noncropland equivalent proportion, as shown in Fig. 6(k)-(o), the STARFM, ESTARFM, and GKSFM can achieve acceptable prediction results for buildings and roads, while the NDVI value is underestimated for cropland pixels. Because the cropland is too fragmented, MODIS cannot capture the subtle changes and is also affected by the adjacent buildings, roads, and other land covers, resulting in a low predicted value of some fragmented cropland. The scatter plots of Fig. 7(i)-(l) also show that NDVI is underestimated in all results, but the scatter plot of GKSFM is more concentrated in the diagonal region, and R 2 (0.797) is the highest of the four methods. In terms of RMSE, GKSFM is superior to the other three methods for both cropland pixels and noncropland pixels.
Based on the above quantitative and qualitative evaluations, it is shown that under the same weighted framework, the fusion accuracy of GKSFM optimized with the Gaussian kernel is higher than that of ESTARFM in cropland pixels, which proves the feasibility of the Gaussian kernel optimization method. In Fig. 7, GKSFM's scatter plot is more concentrated in the diagonal region, and R 2 reaches the highest among the four methods.
The accuracy of spatiotemporal fusion is different in different proportions of cropland/noncropland. Cropland-dominated region and noncropland-dominated region are relatively more homogeneous, the contribution of neighborhood similar pixels to the central pixels is consistent during the fusion. The cropland/noncropland equivalent proportions have considerable heterogeneity. A low-resolution pixel might contain building, water, cropland, etc. It is inevitable that information from other local land covers will be introduced into the central pixel during the fusion process, resulting in the underestimation on cropland NDVI. As shown in Fig. 7(i)-(l), the high-value area gathered some point below the diagonal. Therefore, it can be concluded that GKSFM performs better than STAIR, STARFM, and ESTARFM in both homogeneous and heterogeneous cropland.
C. Application for Crop Mapping
In this section, the characteristics of time series NDVI of different crops will be discussed, and the fused NDVI will be used to improve the accuracy of crop classification for crop mapping in Mishan County.
In Mishan County, rice, soybeans, and corn are all single-cropping crops. For rice, roughly DOY 110 to DOY 140 is the transplanting period. During this period, the surface is a mixture of rice and water, so NDVI shows a downward trend [41]; this is a unique feature of rice. Then, with the growth of rice, NDVI gradually increases. At about DOY 225, the NDVI of rice changes from rising to declining because rice gradually matures from the milk stage, and the rice husk and japonica gradually change from green to yellow, leading to a decline in NDVI. For soybeans, the sowing stage is about DOY 120 to DOY 150, and then NDVI gradually increases. At about DOY 225, NDVI is at the turning point from rising to falling, which is in late August, when soybeans begin to form pods and grains and green leaves gradually turn yellow; at this time NDVI is not obviously different from that of rice. For corn, the planted stage is similar to that of soybeans. Because the leaves of corn are luxuriant, NDVI is relatively high in the corn growing season; NDVI changes from a rising to a declining trend at about DOY 240, and the dates of maturing and harvesting are later than those of soybeans and rice.
According to the classification result in Fig. 9(f), most rice was planted in the middle and east of Mishan County over a large area, while corn and soybeans were mainly planted in the middle and west with a fragmented planting structure. The main reason is that, from the perspective of comprehensive planting income, rice planting is more profitable than corn and soybeans, but it requires abundant water resources. To reduce costs, rice is basically planted in contiguous areas. However, in the fragmented regions, rice cannot be grown due to the limitations of water, soil environment, electric power, and other factors; only corn, soybeans, and other crops can be planted there.
The experimental results show that the time series NDVI fused by GKSFM preserves the spatial resolution of NDVI, captures the temporal variation of crop NDVI, and restores the NDVI during the crop progress stages. Classification using the time series NDVI fused by GKSFM can fully characterize the growth of crops and achieves an improvement in OA. Meanwhile, the classification result shows the cultivation structure of Mishan County, where rice is planted in concentrated contiguous areas, while soybeans and corn are grown in fragmented fields due to the higher profit of rice planting in the local area.
V. CONCLUSION
In this study, GKSFM, which employs a Gaussian kernel model to characterize the temporal variation of NDVI in agricultural areas, was proposed to produce NDVI data with both high spatial resolution and high temporal resolution. Compared with the linear hypothesis of ESTARFM, GKSFM supposes that the NDVI changes nonlinearly over the crop growing season, which is closer to reality. The experiments verified the performance of GKSFM for different phenological stages and different proportions of cropland/noncropland, compared with STARFM, ESTARFM, and STAIR. GKSFM shows an accuracy improvement in the cropland area, especially during the midseason of crops, and its accuracy in noncropland pixels is comparable to that of ESTARFM. These results show that GKSFM has better fusion accuracy for the time series NDVI of crops, which is of great significance for predicting the missing NDVI. GKSFM was also adopted to fuse the time series NDVI in Mishan County for crop mapping. The classification accuracy measured by OA is 88.29%, an improvement of 6.09% to 23.28% over the original Landsat NDVI and the time series NDVI generated by other methods. This demonstrates that the fused time series NDVI can make up the critical missing information. More efforts are needed in the future to improve the computational efficiency and to integrate other high-resolution sensors (e.g., Sentinel 2) for agricultural remote sensing monitoring.
"Environmental Science",
"Mathematics"
] |
Effect of Glycerin on Electrical and Thermal Properties of PVA/Copper Sulphate Gel Polymer Electrolytes
Copper-ion conducting gel polymer electrolyte (GPE) systems based on the polymer poly(vinyl alcohol) (PVA) and copper sulphate salt doped with glycerin as plasticizer have been synthesized by using the solution casting technique. Differential scanning calorimetry (DSC) is used to examine the thermal effect of glycerin on the polymer electrolyte. The addition of various quantities of glycerin as a plasticizer to pure PVA and to the PVA + 20 wt% CuSO4 polymer electrolyte decreases the melting temperature, the glass transition temperature and the percentage of crystallinity. From the TGA curves, it is observed that the thermal degradation of the glycerin-doped polymer electrolyte is shifted towards lower temperature when compared to pristine PVA, and the weight loss of the polymer electrolyte increases with increasing glycerin concentration. From the DTG analysis, the temperature of maximum decomposition for PVA is 283.4 °C, and it is decreased by the addition of 20 wt% CuSO4 and upon increasing the concentration of the plasticizer from 1 to 3 mL of glycerin. For pure PVA and PVA + 20 wt% CuSO4, ε′ decreases with increasing glycerin concentration and is lowest at 3 mL glycerin concentration. The maximum ionic conductivity obtained was 9.39 × 10−4 S/cm for the PVA + 20 wt% CuSO4 + 3 mL glycerin gel polymer electrolyte. The above results suggest that the optimum conducting sample is suitable as a separator in rechargeable batteries.
Introduction
In electrochemical devices, rechargeable batteries (RBs) have become an increasingly essential energy storage system [1]. The electrolyte plays an important role in batteries, as it allows ions to travel through it to create the battery current [2]. Metal salts and organic solvents are the most common components of liquid electrolytes. However, there are several important considerations for practical applications, such as the safety of the liquid electrolyte, particularly when the batteries are subjected to thermal, mechanical, or electrical abuse [3]. Gel polymer electrolytes (GPEs), which contain plasticizers such as ethylene carbonate, propylene carbonate, starch, ionic liquids, glycerin, etc., are being explored as a viable alternative to the currently available organic liquid electrolytes [4]. GPEs offer more benefits compared with liquid electrolytes and solid polymer electrolytes, such as improved ionic conductivity at room temperature, high reliability, high flexibility, no leakage, and good performance. In GPEs, the liquid is immobilized in a polymeric matrix, which may reduce the risk of leakage as compared to commercial separators [5].
GPEs are a type of polymer electrolyte that combines the advantages of liquid and solid electrolytes in one package, and they have attracted a lot of attention because of this dual nature [6]. Combining heterogeneous (phase-separated) and homogeneous (uniform) gels results in high ionic conductivity and good interfacial characteristics in the liquid phase, as well as superior mechanical qualities in the solid phase [7]. The majority of GPEs exhibit a remarkable ionic conductivity on the order of 10−3 S/cm at ambient temperature [8]. Li et al. synthesized composite microporous gel polymer electrolytes (CMGPEs) based on poly(vinylidene fluoride) doped with SiO2(Li+); when the content of SiO2(Li+) reached 5 wt%, the ionic conductivity of the CMGPEs reached the order of 10−2 S/cm at room temperature [9]. A poly(vinyl chloride) (PVC)/poly(ethyl methacrylate) (PEMA) blend-based gel doped and plasticized with zinc triflate [Zn(OTf)2] salt and 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMIMTFSI) ionic liquid was synthesized by Prasanna et al.; this GPE showed the highest ionic conductivity of 4.27 × 10−4 S/cm at ambient temperature [10]. In general, for heterogeneous GPEs the metal ion transport occurs in the swollen gel phase or liquid phase of the polymer framework, which can support the electrochemical performance of a battery cell involving GPEs [11].
PVA, PMMA, PVP, PAN, PEO, and PVDF are some of the polymers that have been used to synthesize polymer electrolytes. Poly(vinyl alcohol) (PVA) is a well-known water-soluble polymer with high transparency and outstanding mechanical properties that has been employed in a wide range of personal, industrial, and electrical applications. PVA is a polyhydroxy polymer that is frequently used in beneficial applications due to its ease of production and biodegradability. PVA can also form films, is hydrophilic, and has a high density of reactive chemical functionalities, making it ideal for chemical, irradiation, or thermal crosslinking. The O−H bonds found in PVA aid in the formation of polymer complexes. PVA also has many good qualities, such as high mechanical strength, strong ionic conductivity, non-toxicity, biocompatibility, biodegradability, and ease of manufacture [12]. Copper-ion-conducting polymer electrolytes with no discernible electronic conductivity have been reported; they could be used in solid polymer batteries. As an anode material in solid-state batteries, copper has various benefits over metallic lithium: it is more environmentally friendly and less expensive [13]. The literature survey reveals that PVA complexes with copper salts are rare. The main problem with polymer-salt complexes is their low ionic conductivity, which can be enhanced by plasticizer addition, composite addition, polymer blending, and in-situ polymerization; these techniques lower the crystallinity of the resulting matrix through the higher chain flexibility of the polymeric backbone and have all been utilized to improve the conductivity of PVA-based polymer electrolytes [14].
Different plasticizers have been used in various polymer electrolyte systems by different research groups. Among them, glycerin is one of the better choices for increasing the performance of polymer electrolytes. In the present paper, we report gel polymer electrolytes prepared by the addition of glycerin to PVA-CuSO4 (80−20). The purpose of this work is to highlight the effects of glycerin in the PVA-CuSO4-glycerin gel polymer electrolytes. Our results demonstrate that the dispersion of glycerin in the PVA-CuSO4 matrix leads to an increase in the ionic conductivity of the gel polymer electrolytes. The resulting electrolyte films have been characterized by DSC and TGA-DTG analyses.
The conductivity of the polymer electrolytes is measured using ac impedance technique.
Materials
Poly(vinyl alcohol) (PVA), a white powder with molecular weight 14,000, was produced by Sigma-Aldrich, USA; the ionic conductor copper sulphate (CuSO4) salt was obtained from Sigma-Aldrich, USA; and glycerin (HOCH2CH(OH)CH2OH), a liquid plasticizer with density 1.26 g/cm³, was received from India.
Preparation PVA Based GPE Films
The solution casting process was used to make the gel polymer electrolyte samples. PVA (molecular weight 14,000) was used as the polymer, and CuSO4 was added accordingly. Distilled water was employed as the solvent. To obtain a homogeneous solution, the mixture was stirred at room temperature for up to 10 h. After adding the required amount of glycerin as a plasticizer, the solution was stirred for roughly 4 h. The solution mixtures were then poured into glass Petri dishes and dried for 72 h, resulting in a film of thickness about ~65 μm that was peeled off and stored in desiccators to dry further. The compositions used for making the gel polymer electrolytes are shown in Table 1.
Characterization
The melting and glass transition temperatures were measured using a differential scanning calorimeter (DSC, Mettler-Toledo, USA) over a temperature range of 40 to 90 °C, and the thermal stability was measured using a thermogravimetric analyzer (TGA-DTG, Mettler-Toledo, USA) under N2 gas at a heating rate of 10 °C/min over a temperature range of 25 to 300 °C. The electrical measurements were carried out at room temperature using a GWINSTEK LCR-6100 over the frequency range 1-100 kHz.
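Where conductivity values such as the one quoted in the abstract are derived from impedance data, the usual relation is σ = t/(R_b·A), with the bulk resistance R_b typically read off the low-frequency intercept of the Nyquist plot. The sketch below only illustrates that arithmetic; the bulk resistance and electrode area are hypothetical, not values reported in this work, and only the film thickness is of the order quoted here.

```python
def ionic_conductivity(thickness_cm: float, bulk_resistance_ohm: float, area_cm2: float) -> float:
    """sigma = t / (R_b * A), returned in S/cm."""
    return thickness_cm / (bulk_resistance_ohm * area_cm2)

# Film thickness of the order reported here (~65 um); R_b and A are hypothetical.
t_cm = 65e-4      # 65 um expressed in cm
r_bulk = 8.8      # ohm, hypothetical bulk resistance from the Nyquist intercept
area = 0.785      # cm^2, hypothetical electrode area (1 cm diameter disc)
print(f"sigma = {ionic_conductivity(t_cm, r_bulk, area):.2e} S/cm")  # of order 10^-4 S/cm
```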
Thermal Analysis
The thermal effect of glycerin on the plasticized polymer electrolyte is investigated using differential scanning calorimetry (DSC). In Fig. 1, the DSC curves of pure PVA, PVA + 20 wt% CuSO4, and PVA + 20 wt% CuSO4 plasticized by 1, 2 and 3 mL glycerin are presented. For pure PVA, an endothermic peak corresponding to the melting temperature (Tm) is observed at 73.12 °C, and the glass transition temperature (Tg) at 66.02 °C. When 20 wt% of CuSO4 salt was added to the polymer matrix, the melting temperature (Tm), glass transition temperature (Tg), and degree of crystallinity (χc) all increased.
The glass transition temperature (Tg) moved towards the lower temperature side when different amounts of glycerin were added to the prepared PVA + 20 wt% CuSO4 polymer electrolyte membrane. The decrease in Tg of the polymer electrolyte with increased glycerin content suggests a weaker intermolecular connection between the glycerin, the CuSO4 salt, and PVA, allowing greater segmental motion of the polymer network by making the polymer matrix more flexible [15,16]. The plasticizing action of glycerin thus produces a reduction in the Tm and Tg of the polymer gel electrolyte membranes. The degree of crystallinity (χc) and the melting temperature (Tm) were initially elevated when 20 wt% of CuSO4 salt was added to the polymer matrix, but this increase in crystallinity and melting temperature was successfully controlled by the addition of glycerin, reaching the lowest crystallinity (approximately 11.65%) for the membrane containing the highest amount of glycerin (3 mL).
The DSC parameters are presented in Table 2. The relative percentage of crystallinity (χc) has been calculated with Eq. (1):

χc (%) = (ΔHf / ΔHf°) × 100,  (1)

where ΔHf° = 2.65 J/g. The calculated heat of fusion (ΔHf), melting temperature (Tm), and percentage crystallinity (χc) values are shown in Table 2.
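As a quick numerical check of Eq. (1), the sketch below computes χc from a measured heat of fusion; the input value of 0.31 J/g is hypothetical and is chosen only to illustrate the arithmetic.

```python
DELTA_H_F_REF = 2.65  # J/g, reference heat of fusion quoted with Eq. (1)

def crystallinity_percent(delta_h_f: float) -> float:
    """chi_c (%) = (DeltaH_f / DeltaH_f_ref) * 100."""
    return 100.0 * delta_h_f / DELTA_H_F_REF

# A hypothetical measured heat of fusion of 0.31 J/g gives ~11.7 %,
# of the order of the lowest crystallinity quoted in the text.
print(f"chi_c = {crystallinity_percent(0.31):.2f} %")
```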
The heat of fusion (ΔHf), melting temperature (Tm), and percentage of crystallinity all decreased when plasticized with 1, 2 and 3 mL of glycerin. This reflects a decrease in the crystallinity of the PVA polymer electrolyte, which is a well-known favourable condition for boosting ionic conductivity.
TGA thermographs of pure PVA, PVA + 20 wt% CuSO 4 , and PVA + 20 wt% CuSO 4 plasticized by 1, 2, and 3 mL glycerin gel polymer electrolytes are shown in Fig. 2. The plot clearly shows two stages of weight loss, the first of which is a 5% weight loss at 54.3 °C and the second of which is a maximum weight loss at 285.8 °C for pure PVA and PVA + 20 wt% CuSO 4 , both of which can be attributed to evaporation of water and degradation of PVA by the polymer chain's dehydration reaction [16]. Water absorption, heat degradation of functional groups, and thermal oxidation of the polymer backbone are the processes in which the PVA + 20 wt% CuSO 4 /Glycerin gel polymer electrolyte loses weight [17]. All membranes lose weight before reaching 50 °C in the first phase, which can be attributed to the evaporation of bound water in the samples. The release of the quaternary ammonium group's degraded product causes weight loss in the second phase in the range 124 to 180 °C [15]. Weight loss in the third phase at 200-230 °C is assumed to be caused by the release of residual quaternary ammonium group. At 270-300 °C, the fourth and final weight loss was discovered, which was caused by polymer chain breakdown [15,17].
The weight loss of the polymer electrolyte increases as the glycerin concentration rises, which is attributed to scission of monomers and bonds in the polymeric backbone and to loss of dopant under heat energy [18]. The decomposition of the organic polymer chains, both the hard segment of the PVA linkage and the soft segment from glycerin, reflects the joint decomposition of the plasticizer and the polymer [17,19].
The DTG thermograms of pure PVA, PVA + 20 wt% CuSO4, and PVA + 20 wt% CuSO4 plasticized by 1, 2, and 3 mL glycerin are shown in Fig. 3. The maximum decomposition temperature of PVA is 283.4 °C, and this temperature drops when 20 wt% of CuSO4 is added and when the plasticizer concentration is increased from 1 to 3 mL. PVA + 20 wt% CuSO4 had a Tmax of 272 °C, which dropped to 261 °C for the electrolyte PVA + 20 wt% CuSO4 / 3 mL glycerin. This behaviour is linked to the low Tg value of the plasticized gel polymer electrolyte. The dipole-dipole interaction of the polymer chains is reduced when glycerin is added, because glycerin softens the polymer backbone and lowers the polymer's Tg. The same pattern was found by Abdulkadir et al. [20]. Figure 4 shows the dissociation of the CuSO4 salt into cations and anions in the PVA-based GPE. The GPEs were created by introducing the conducting salt (CuSO4) into the polymer host (PVA) and plasticizing it with glycerin, as described in the experimental section. PVA and glycerin, which carry hydroxyl or polar groups (−OH), form a coordinate (dative) bond with the CuSO4 cations in the electrolytes [20,21]. This is because the polymer (PVA) and the plasticizer (glycerin) have −OH groups in their macromolecular chains and three-dimensional networks that can react with various inorganic salts [22,23]. The presence of polar −OH groups enables chemical interactions (complexing processes) and physical connections (H bonding, van der Waals dipole-ion interactions, or dipole-dipole interactions) [24]. When dissolved in the solvent, CuSO4 dissociates into cations (Cu²⁺) and anions (SO₄²⁻), as shown in Eq. (2):

CuSO4 → Cu²⁺ + SO₄²⁻  (2)
The cations thus formed coordinate with the polar groups (−OH) of the plasticized polymer matrix, resulting in a complex, as shown in Fig. 4. With more coordination between the −OH groups and the cations from the salt, there are more ion-conducting sites and better interfacial contact, resulting in a higher ionic conductivity of the electrolyte [22,25]. Metal ions from the salt interact with each other and with the polar groups (−OH) of the polymer by electrostatic interactions, resulting in the formation of coordinating bonds [20,26]. The type of functional groups attached to the polymer backbone, their composition and the distances between them, the molecular weight, the degree of branching, the nature and charge of the metal cation, and the counter-ions are all important factors that can affect polymer-metal ion interactions [23,27]. The dielectric constant decreases with increasing frequency and reaches a stable value at high frequencies (100 kHz), as shown in Figs. 5 and 6. A rapid decline in the dielectric constant is seen over the frequency range 1 to 100 kHz, because the charge carriers do not have enough time to orient themselves in the field direction. At very high frequencies the periodic reversal of the field occurs so quickly that the frequency-independent ε′ behaviour is observed [24,28]. The ions are able to migrate in the direction of the electric field, but because of the blocking electrodes they cannot reach the external circuit in the low-frequency region, resulting in a dispersion with large ε′ values. As a result, ions become trapped at the electrode-electrolyte interface, forming an electrode polarization layer [25,29]. This indicates that electrode polarization and space-charge effects dominate in the low-frequency region.
The dielectric constant of the system increases when 20 wt% CuSO4 is added to pure PVA. This could be due to a high dielectric constant combined with a strong dissociation capacity that prevents the formation of ion pairs, or to a high efficiency in shielding the interionic coulombic attraction between cations and anions, resulting in a high dielectric constant [26].
The effect of glycerin concentration on ε′ of the gel polymer electrolytes was investigated in the frequency range from 1 to 100 kHz, as shown in Figs. 5 and 6. In this frequency range, the dielectric constant ε′ decreases with increasing glycerin content for pure PVA and PVA + 20 wt% CuSO4, reaching its lowest value at 3 mL glycerin concentration. A similar trend is seen in the electrical conductivity investigations (see Table 3). The decrease in ε′ is due to a reduction in the density of mobile charge carriers: although the addition of plasticizer to the polymer-salt system introduces additional ions, it lowers the density of charge carriers available for polarization, thereby lowering the dielectric constant of the gel polymer electrolyte system [24].
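For reference, ε′ at a given frequency is commonly obtained from the measured parallel-plate capacitance as ε′ = C·t/(ε0·A). The snippet below is a minimal sketch of that conversion; only the film thickness is of the order reported here, and the capacitance and electrode area are hypothetical values, not measurements from this study.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_constant(capacitance_f: float, thickness_m: float, area_m2: float) -> float:
    """epsilon' = C * t / (eps0 * A) for a parallel-plate geometry."""
    return capacitance_f * thickness_m / (EPS0 * area_m2)

# Hypothetical capacitance reading for a ~65 um film with 1 cm^2 electrodes:
print(dielectric_constant(capacitance_f=2.0e-9, thickness_m=65e-6, area_m2=1.0e-4))
```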
We can see from Table 3 that when pure PVA and PVA + 20 wt% CuSO 4 are exposed to an electric field, the cations from the salts can migrate from one coordinated site to another. This is due to the weak coordinates of the cations with sites along the polymer chain [19]. According to prior research [24], ions, primarily cations, connected to functional groups of the host polymer chains can move along the polymer backbone by recoordination. After that, the polymer chains are folded to form tunnels in which the functional groups locate and coordinate the cations. Cations can readily flow via these tunnels, which form channels [17,26].
Conducting salts have also been found to reduce the number of active centres in polymer chains, diminishing intermolecular and intramolecular interactions [17,30]. As a result, the stiffness of the host polymer is lowered, and the mechanical and thermomechanical properties of the polymer are altered [21,26]. Similarly, introducing highly conducting salts reduces the glass transition temperature (Tg) of the polymer system (as indicated by the DSC data) [17,31]. As a result, the crystallinity decreases and the salt dissociation capacity increases, resulting in enhanced charge carrier transport and greater ionic conductivity [25,30]. An increase in glycerin concentration, which promotes the formation of a complex between the polymer matrix and the conducting salt (PVA + CuSO4), raises the entropy and thus improves the segmental motion of the composite. Increased segmental motion leads to reduced crystallinity (greater flexibility) and enhanced ionic conductivity of the electrolyte [26,32].
Conclusions
In this work, PVA + 20 wt% CuSO4 + glycerin gel polymer electrolytes with high flexibility, low glass transition temperature, low dielectric constant and strong ionic conductivity have been produced. The glass transition temperature (Tg) of the GPE films shifted to the lower temperature side with the addition of glycerin as a plasticizer to pure PVA and to the PVA + 20 wt% CuSO4 gel polymer electrolyte. Compared with pristine PVA, the thermal decomposition of the glycerin-doped gel polymer electrolytes is shifted towards lower temperatures, and the weight loss of the gel polymer electrolyte increases with the glycerin concentration. By incorporating glycerin into a PVA + 20 wt% CuSO4 film, the conductivity was increased to the order of 10⁻⁴ S/cm at room temperature. These results suggest that this electrolyte would be a suitable separator for rechargeable batteries.
Funding The authors have not disclosed any funding. | 4,289.6 | 2022-07-02T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Study of the rare decays of B 0 s and B 0 mesons into muon pairs using data collected during 2015 and 2016 with the ATLAS detector
A study of the decays B0s → µ+µ− and B0 → µ+µ− has been performed using 26.3 fb−1 of 13 TeV LHC proton-proton collision data collected with the ATLAS detector in 2015 and 2016. Since the detector resolution in the µ+µ− invariant mass is comparable to the B0s–B0 mass difference, a single fit determines the signal yields for both decay modes. This results in a measurement of the branching fraction B(B0s → µ+µ−) = (3.2 +1.1 −1.0) × 10−9 and an upper limit B(B0 → µ+µ−) < 4.3 × 10−10 at 95% confidence level. The result is combined with the Run 1 ATLAS result, yielding B(B0s → µ+µ−) = (2.8 +0.8 −0.7) × 10−9 and B(B0 → µ+µ−) < 2.1 × 10−10 at 95% confidence level. The combined result is consistent with the Standard Model prediction within 2.4 standard deviations in the B(B0 → µ+µ−)–B(B0s → µ+µ−) plane. The combination makes use of both the dimuon (26.3 fb−1) and the reference (15.1 fb−1) datasets.
The notation used throughout the paper refers to the combination of processes and their charge-conjugates, unless otherwise specified. The B0s → µ+µ− and B0 → µ+µ− branching fractions are measured relative to the reference decay mode B+ → J/ψ(→ µ+µ−)K+, which is abundant and has a well-measured branching fraction B(B+ → J/ψ K+) × B(J/ψ → µ+µ−). The B0 → µ+µ− (B0s → µ+µ−) branching fraction can be extracted as

B(B0(s) → µ+µ−) = [N_{d(s)} / D_ref] × (f_u / f_{d(s)}) × B(B+ → J/ψ K+) × B(J/ψ → µ+µ−),  (1)

where N_d (N_s) is the B0 → µ+µ− (B0s → µ+µ−) signal yield, N_{J/ψK+} is the B+ → J/ψ K+ reference channel yield, ε_{µ+µ−} and ε_{J/ψK+} are the corresponding values of acceptance times efficiency (measured in fiducial regions defined in Section 9), and f_u/f_d (f_u/f_s) is the ratio of the hadronisation probabilities of a b-quark into B+ and B0 (B0s). In the quantity D_ref = N_{J/ψK+} × (ε_{µ+µ−}/ε_{J/ψK+}), the ε ratio takes into account relative differences in efficiencies, integrated luminosities and the trigger selections used for the signal and the reference modes. Signal and reference channel events are selected with similar dimuon triggers. One half of the reference channel sample is used to determine the normalisation and the other half is used to tune the kinematic distributions of simulated events.
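For concreteness, the relative normalisation of Eq. (1) can be coded directly. The following is a minimal sketch using only the quantities defined above; every numerical input (signal yield, reference yield, efficiency ratio, hadronisation-fraction ratio and reference branching fractions) is a placeholder, so the printed number is illustrative only and is not the measured result of this analysis.

```python
# Minimal sketch of the relative normalisation in Eq. (1).
# All numerical inputs below are placeholders, not the values used in the analysis.

def branching_fraction(n_sig: float, n_jpsik: float, eps_mumu_over_eps_jpsik: float,
                       fu_over_fq: float, b_ref: float) -> float:
    """B(B0(s) -> mu mu) = (N / D_ref) * (f_u / f_q) * B(B+ -> J/psi K+) * B(J/psi -> mu mu),
    with D_ref = N_JpsiK * (eps_mumu / eps_JpsiK)."""
    d_ref = n_jpsik * eps_mumu_over_eps_jpsik
    return (n_sig / d_ref) * fu_over_fq * b_ref

# Illustrative call; the output is not the measured branching fraction.
b_ref = 1.0e-3 * 6.0e-2  # placeholder B(B+ -> J/psi K+) * B(J/psi -> mu+ mu-)
print(branching_fraction(n_sig=80.0, n_jpsik=334_351.0,
                         eps_mumu_over_eps_jpsik=8.5, fu_over_fq=3.9, b_ref=b_ref))
```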
The event selection uses variables related to the B candidate decay time, thus introducing a dependence of the efficiency on the signal lifetime. The relation between the measured branching fraction and the corresponding value at production is established assuming the decay time distribution predicted in the SM, where the decay occurs mainly through the heavy eigenstate B0(s),H of the B0(s)–B̄0(s) system. Some models of new physics [16,17] predict modifications to the decay time distribution of B0s → µ+µ−, and a comparison with the experimental result requires a correction to the ratio of the time-integrated efficiencies entering D_ref.
The ATLAS inner tracking system, muon spectrometer and, for efficient identification of muons, also the calorimeters, are used to reconstruct and select the event candidates. Details of the detector, trigger, data sets, and preliminary selection criteria are discussed in Sections 2 and 3. A blind analysis was performed in which data in the dimuon invariant mass range from 5166 to 5526 MeV were removed until the procedures for event selection and the details of signal yield extraction were completely defined. Section 4 introduces the three main categories of background. Section 5 describes the strategy used to reduce the probability of hadron misidentification. The final sample of candidates is selected using a multivariate classifier, designed to enhance the signal relative to the dominant dimuon background component, as discussed in Section 6. Checks on the distributions of the variables used in the multivariate classifier are summarised in Section 7. They are based on the comparison of data and simulation for dimuon events, for B+ → J/ψ K+ candidates and for events selected as B0s → J/ψ φ → µ+µ−K+K−, which provide an additional validation of the procedures used in the analysis. Section 8 details the fit procedure used to extract the yield of B+ → J/ψ K+ events. The determination of the ratio of efficiencies in the signal and the reference channels is presented in Section 9. Section 10 describes the extraction of the signal yield, obtained with an unbinned maximum-likelihood fit performed on the dimuon invariant mass distribution. In this fit, events are separated into classifier intervals to maximise the fit sensitivity. The results for the branching fractions B(B0s → µ+µ−) and B(B0 → µ+µ−) are reported in Section 11 and combined with the full Run 1 results in Section 12.
ATLAS detector, data and simulation samples
The ATLAS detector consists of three main components: an inner detector (ID) tracking system immersed in a 2 T axial magnetic field, surrounded by electromagnetic and hadronic calorimeters and by the muon spectrometer (MS). A full description can be found in Ref. [18], complemented by Ref. [19] for details about the new innermost silicon pixel layer that was installed for Run 2. This analysis is based on the Run 2 data recorded in 2015 and 2016 from pp collisions at the LHC at √s = 13 TeV. Data used in the analysis were recorded during stable LHC beam periods. Data quality requirements were imposed, notably on the performance of the MS and ID systems. The total integrated luminosity collected by ATLAS in this period is 36.2 fb−1 with an uncertainty of 2.1%. These values are determined using a methodology similar to that detailed in Ref. [20], based on calibration of the luminosity scale using x-y beam-separation scans, and use the LUCID-2 detector [21] for the baseline luminosity measurement. The total effective integrated luminosity used in this analysis, accounting for trigger prescales, amounts to 26.3 fb−1 for the signal and 15.1 fb−1 for the reference channel.
Samples of simulated Monte Carlo (MC) events are used for training and validation of the multivariate analyses, for the determination of the efficiency ratios, and for developing the procedure used to determine the signal. Exclusive MC samples were produced for the signal channels B0s → µ+µ− and B0 → µ+µ−, the reference channel B+ → J/ψ K+ (J/ψ → µ+µ−), the control channel B0s → J/ψ φ (→ µ+µ−K+K−), B0(s) → hh′ decays with h(′) being a charged pion or kaon, and inclusive decays B → J/ψ X, as well as the exclusive B+ → J/ψ π+ decay.
Most of the dimuon candidates in the data sample originate from the decays of hadrons produced in the hadronisation of b b pairs.The inclusive b b → µ + µ − X MC sample used to describe this background requires the presence of two muons in the final state, with both muons originating from the b b decay chain.The size of this sample is equivalent to roughly three times the integrated luminosity of the data.
The MC samples were generated with Pythia 8 [22]. The ATLAS detector and its response were simulated using Geant4 [23,24]. Additional pp interactions in the same and nearby bunch crossings (pile-up) are included in the simulation. Muon reconstruction and triggering efficiencies are corrected in the simulated samples using data-driven scale factors. The scale factors for the trigger efficiencies are obtained by comparing data and simulation efficiencies determined with a J/ψ tag-and-probe method. This procedure yields scale factors as a function of the muon transverse momentum and pseudorapidity, which are applied throughout the analysis [25]. Reconstruction and selection efficiencies are obtained from simulation and similarly corrected according to data-driven comparisons. In addition to these efficiency corrections, simulated events are reweighted to reproduce the pile-up multiplicity observed in data, and according to the equivalent integrated luminosity associated with each trigger selection.
Using the iterative reweighting method described in Ref. [26], the simulated samples of the exclusive decays considered are adjusted with two-dimensional data-driven weights (DDW) to correct for the differences between simulation and data observed in the B meson transverse momentum and pseudorapidity distributions.DDW obtained from B + → J/ψ K + decays are used to correct the simulation samples in the signal and reference channels.DDW obtained from the B 0 s → J/ψ φ control channel are found to agree with those from B + → J/ψ K + , showing that the same corrections are applicable to B 0 s and B 0 decays.
Residual differences between data and simulation studied in the B + → J/ψ K + and B 0 s → J/ψ φ signals are treated as sources of systematic uncertainty in the evaluation of the signal efficiency, as discussed in Section 9.The only exception to this treatment is the B meson isolation (I 0.7 in Section 6 and Table 1), where residual differences are used to reweight the signal MC events and the corresponding uncertainties are propagated to account for residual systematic uncertainty effects.
Similarly to the exclusive decays, the kinematic distributions of the inclusive b b → µ + µ − X MC sample are reweighted with corrections obtained from the dimuon invariant mass sidebands in data.
Data selection
For data collected during the LHC Run 2, the ATLAS detector uses a two-level trigger system, consisting of a hardware-based first-level trigger and a software-based high-level trigger.A first-level dimuon trigger [27] selects events requiring that one muon has p T > 4 GeV and the other has p T > 6 GeV.A full track reconstruction of the muon candidates is performed by the high-level trigger, where an additional loose selection is imposed on the dimuon invariant mass m µµ , accepting candidates in the range 4 GeV to 8.5 GeV. Due to the increased pile-up in 2016 data, an additional selection was added at this trigger stage, requiring the vector from the primary vertex to the dimuon vertex to have a positive component (L xy ) along the dimuon's transverse momentum direction.The effect of this selection is accounted for in the analysis but has no consequence since stricter requirements are applied in the full event selection (see Section 6).
The signal channel, the reference channel B + → J/ψ K + and the control channel B 0 s → J/ψφ were selected with trigger prescale factors that vary during the data-taking period.In the 36.2fb −1 of data analysed, the prescaling of the trigger approximately averages to a reduction by a factor 1.4, giving an effective integrated luminosity for the signal sample of 26.3 fb −1 , while for the reference and control channels 15.1 fb −1 were collected due to an effective prescale of 2.4.These effects are taken into account in the extraction of the signal branching fraction, through the ε factors in Eq. ( 1).
Using information from the full offline reconstruction, a preliminary selection is performed on the B candidates. In the ID system, muon candidates are required to have at least one measured hit in the pixel detector and two measured hits in the semiconductor tracker. They are also required to be reconstructed in the MS, and to have |η| < 2.5. The offline muon pair must pass the pT > 4 GeV and pT > 6 GeV requirements imposed by the trigger. Furthermore, the muon candidates are required to fulfil tight muon quality criteria [28]; this requirement is relaxed to loose for the hadron misidentification studies in Section 5. Kaon candidates must satisfy similar requirements in the ID, except for a looser requirement of pT > 1 GeV.
The computed B meson properties are based on a decay vertex fitted to two, three or four tracks, depending on the decay process to be reconstructed.The B candidates are required to have a χ 2 per degree of freedom below 6 for the fit to the B vertex, and below 10 for the fit to the J/ψ → µ + µ − vertex.The selections 2915 < m(µ + µ − ) < 3275 MeV and 1005 < m(K + K − ) < 1035 MeV are applied to the J/ψ → µ + µ − and the φ → K + K − vertices, respectively.In the fits to the B + → J/ψ K + and B 0 s → J/ψ φ channels, the reconstructed dimuon mass is constrained to the world average J/ψ mass [29].
Reconstructed B candidates are retained if they satisfy p B T > 8.0 GeV and |η B | < 2.5.The invariant mass of each B candidate is calculated using muon trajectories measured by combining the information from the ID and MS to improve upon the mass resolution obtained from ID information only [30].
The invariant mass range considered for the B 0 (s) → µ + µ − decay starts at 4766 MeV and is 1200 MeV wide.Within this range a 360-MeV-wide signal region is defined, starting at 5166 MeV.The remainder of the range defines the upper and lower mass sidebands of the analysis.
For the reference and control channels, the mass range considered is 4930-5630 (5050-5650) MeV for B + → J/ψ K + (B 0 s → J/ψ φ), where 5180-5380 (5297-5437) MeV is the peak region and higher and lower mass ranges comprise the mass sidebands used for background subtraction.
The coordinates of primary vertices (PV) are obtained from charged-particle tracks not used in the decay vertices, and that are constrained to the luminous region of the colliding beams in the transverse plane.The matching of a B candidate to a PV is made by extrapolating the candidate trajectory to the point of closest approach to the beam axis, and choosing the PV with the smallest distance along z.Simulation shows that this method matches the correct vertex with a probability above 99% for all relevant pile-up conditions.
To reduce the large background in the B0(s) → µ+µ− channel before applying the final selection based on multivariate classifiers, a loose collinearity requirement is applied between the momentum of the B candidate (p⃗_B) and the vector from the PV to the decay vertex (∆x⃗). The absolute value of the azimuthal angle α_2D between these two vectors is required to be smaller than 1.0 radians. The combination ∆R_flight = √(α_2D² + (∆η)²), where ∆η is the difference in pseudorapidity, is required to satisfy ∆R_flight < 1.5.
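A minimal sketch of this collinearity requirement is given below, assuming simple three-component momentum and displacement vectors; the numerical values in the example call are arbitrary and the helper function is not part of the ATLAS software.

```python
import math

def passes_collinearity(p_b, delta_x, max_alpha_2d=1.0, max_dr_flight=1.5):
    """Apply the alpha_2D < 1.0 rad and DeltaR_flight < 1.5 requirements.

    p_b     : (px, py, pz) of the B candidate (any consistent units)
    delta_x : (dx, dy, dz) from the primary vertex to the decay vertex
    """
    # Azimuthal (transverse-plane) angle between the two vectors.
    dphi = math.atan2(p_b[1], p_b[0]) - math.atan2(delta_x[1], delta_x[0])
    alpha_2d = abs(math.remainder(dphi, 2.0 * math.pi))

    def eta(v):  # pseudorapidity of a direction vector
        theta = math.atan2(math.hypot(v[0], v[1]), v[2])
        return -math.log(math.tan(theta / 2.0))

    dr_flight = math.hypot(alpha_2d, eta(p_b) - eta(delta_x))
    return alpha_2d < max_alpha_2d and dr_flight < max_dr_flight

# Arbitrary example vectors (GeV for the momentum, mm for the displacement):
print(passes_collinearity((12.0, 3.0, 5.0), (0.9, 0.25, 0.4)))
```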
Background composition
The background to the B0(s) → µ+µ− signal originates from three main sources: (a) continuum background, the dominant combinatorial component, which consists of muons originating from uncorrelated hadron decays and is characterised by a weak dependence on the dimuon invariant mass; (b) partially reconstructed decays, where one or more of the final-state particles (X) in a b-hadron decay is not reconstructed, causing these candidates to accumulate in the low dimuon invariant mass sideband (this background includes a significant contribution from semileptonic decays where one of the muons is a misidentified hadron, discussed below); (c) peaking background, due to B0(s) → hh decays, with both hadrons misidentified as muons.
The continuum background consists mainly of muons produced independently in the fragmentation and decay chains of a b-quark and a b-quark.It is studied in the signal mass sidebands, and it is found to be well described by the inclusive b b → µ + µ − X MC sample.
The partially reconstructed decays comprise several topologies. The bb → µ+µ−X MC sample is used to investigate the background composition after the analysis selection. All backgrounds in this sample have a dimuon invariant mass distribution lying mainly below the mass range considered in this analysis, with a high-mass tail extending through the signal region. The simulation does not contemplate sources other than muons from bb decays: cc and prompt contributions are not included. All possible origins of two muons in the bb decay tree are, however, analysed, after classification into the mutually exclusive continuum and partially reconstructed categories described above. This sample is used only to identify suitable functional models for the corresponding background components, and as a benchmark for these models. No shape or normalisation constraints are derived from this simulation. This makes the analysis largely insensitive to mismatches between background simulation and data.
The semileptonic decays with final-state hadrons misidentified as muons consist mainly of three-body charmless decays B 0 → π − µ + ν, B 0 s → K − µ + ν and Λ b → pµ − ν in which the tail of the invariant mass distribution extends into the signal region.Due to branching fractions of the order of 10 −6 , this background is not large, and is further reduced by the muon identification requirements, discussed in Section 5.The MC invariant mass distributions of these partially reconstructed decay topologies are shown together with the SM signal predictions in Figure 1(a) after applying the preliminary selection criteria described in Section 3. Finally, the peaking background is due to B 0 (s) → hh decays containing two hadrons misidentified as muons.The distributions in Figure 1(b), obtained from simulation, show that these decays populate the signal region.This component is further discussed in Section 5.
Hadron misidentification
In the preliminary selection, muon candidates are formed from the combination of tracks reconstructed independently in the ID and MS.The performance of the muon reconstruction in ATLAS is presented in Ref. [28].Additional studies were performed to evaluate the amount of background related to hadrons erroneously identified as muons.
Detailed simulation studies were performed for the B0(s) → hh channel with a full Geant4-based simulation [23] of all systems of the ATLAS detector. The vast majority of background events from particle misidentification are due to decays in flight of kaons and pions, in which the muon receives most of the energy of the parent meson. Hence this background is generally related to true muons measured in the MS, but not produced promptly in the decay of a B meson.
The muon candidate is required to pass tight muon requirements in the preliminary selection, which are based on the profile of energy deposits in the calorimeters as well as on tighter ID-MS matching criteria than those used for the loose requirements.Two-body B decays in control regions show that tight selections have, relative to the loose counterpart, an average hadron misidentification probability reduced by a factor 0.39 with a muon reconstruction efficiency of 90%.The resulting final value of the misidentification probability is 0.08% for kaons and 0.1% for pions.Efficiencies and fake rates are relative to the analysis preselections, including tracking but excluding any muon requirement.
The background due to B0(s) → hh, with double misidentification of hh as µ+µ−, has a reconstructed invariant mass distribution that peaks at 5240 MeV, close to the B0 mass, and is effectively indistinguishable from the B0 signal (see Figure 1(b)). The expected number of peaking-background events can be estimated in a way analogous to that for the signal, from the number of observed B+ → J/ψ K+ events using Eq. (1), after taking into account the expected differences from muon identification variables and trigger selections. World-average values [29] for the branching fractions of B0 and B0s into Kπ, KK and ππ are used, together with the hadron misidentification probabilities obtained from simulation. This results in 2.7 ± 1.3 total expected peaking-background events after the reference multivariate selection. When selecting loose muons and inverting the additional requirements imposed in the tight muon selection, the number of events containing real muons is substantially reduced, while the number of peaking-background events is approximately two times larger than in the sample obtained with the nominal selection. A fit to data for this background-enhanced sample returns 6.8 ± 3.7 events, which translates into a peaking-background yield in the signal region of 2.9 ± 2.0 events, in good agreement with the simulation.
Besides the peaking background, the tight muon selection also reduces the semileptonic contributions with a single misidentified hadron.Simulation yields 30 ± 3 events expected from B 0 → π − µ + ν and B 0 s → K − µ + ν in the final sample, with a distribution kinematically constrained to be mostly below the signal region.The Λ b → pµ − ν contribution is negligible due to the smaller production cross section and the low rate at which protons fake muons.
Continuum background reduction
A multivariate analysis, implemented as a boosted decision tree (BDT), is employed to enhance the signal relative to the continuum background.This BDT is based on the 15 variables described in Table 1.The discriminating variables can be classified into three groups: (a) B meson variables, related to the reconstruction of the decay vertex and to the collinearity between − → p B and the flight vector between the production and decay vertices − → ∆x; (b) variables describing the muons that form the B meson candidate; and (c) variables related to the rest of the event.The selection of the variables aims to maximise the discrimination power of the classifier without introducing significant dependence on the invariant mass of the muon pair.
The same discriminating variables were used in the previous analysis based on the full Run 1 dataset [15].The removal of individual variables was explored to simplify the BDT input, however, this results inevitably in a significant reduction of the BDT separation power.To minimise the dependence of the classifier on the effects of pile-up, the additional tracks considered to compute the variables I 0.7 , DOCA xtrk and N close xtrk are required to be compatible with the primary vertex matched to the dimuon candidate.
The correlations among the discriminating variables were studied in the MC samples for signal and continuum background discussed in Section 2, and in data from the sidebands of the µ+µ− invariant mass distribution.

Table 1: Description of the 15 input variables used in the BDT classifier to discriminate between signal and continuum background. When the BDT classifier is applied to B+ → J/ψ K+ and B0s → J/ψ φ candidates, the variables related to the decay products of the B mesons refer only to the muons from the decay of the J/ψ. Horizontal lines separate the classifications into groups (a), (b) and (c) respectively, as described in the text. For category (c), additional tracks are required to have pT > 500 MeV.
- IP3D_B: three-dimensional impact parameter of the B candidate to the associated PV.
- DOCAµµ: distance of closest approach (DOCA) of the two tracks forming the B candidate (three-dimensional).
- ∆φµµ: azimuthal angle between the momenta of the two tracks forming the B candidate.
- Significance of the larger absolute value of the impact parameters to the PV of the tracks forming the B candidate, in the transverse plane.
- Significance of the smaller absolute value of the impact parameters to the PV of the tracks forming the B candidate, in the transverse plane.

The simulated signal sample and the data from the dimuon invariant mass sideband regions are used for training and testing the classifier. As discussed in Section 2, simulated signal samples are corrected for muon reconstruction efficiency differences between simulation and data, and reweighted according to the distributions of pT and |η| of the dimuon and of the pile-up observed in data. The BDT training is done using the TMVA toolkit [31]. Sideband data are used for the BDT training and optimisation. The sample is subdivided into three randomly selected, separate and equally populated subsamples, which are used in turn to train and validate three independent BDTs. The resulting BDTs are found to produce statistically compatible results, and are combined into a single classifier in such a way that each BDT is applied only to the part of the data sample not involved in its training.
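The cross-training scheme can be illustrated with a short stand-in: the sketch below uses scikit-learn's gradient-boosted trees on toy data instead of the TMVA configuration actually used in the paper, and assumes a three-fold split in which each classifier is trained on two folds and applied to the third, which is one common way to realise the scheme described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Toy stand-in for the training sample: 15 input variables per candidate,
# label 1 = signal MC, label 0 = sideband data.
X = rng.normal(size=(3000, 15))
y = rng.integers(0, 2, size=3000)

fold = np.arange(len(X)) % 3          # assign every candidate to one of three folds
scores = np.empty(len(X))
for k in range(3):
    train = fold != k                 # train on the two folds not containing this candidate
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=k)
    clf.fit(X[train], y[train])
    scores[fold == k] = clf.predict_proba(X[fold == k])[:, 1]

# 'scores' plays the role of the BDT output: each candidate is scored by a
# classifier that never saw it during training, and a threshold cut can be applied.
print(scores[:5])
```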
Figure 2 shows the distribution of the BDT output variable for simulated signal and backgrounds, separately for the continuum background and partially reconstructed events. Also shown is the BDT distribution for dimuon candidates from the sidebands of the invariant mass distribution in data. The BDT output was found not to have any significant correlation with the dimuon invariant mass. The final selection requires a BDT output value larger than 0.1439, corresponding to signal and continuum background efficiencies of 72% and 0.3% respectively. The analysis uses all candidates passing this selection; however, accepted events with BDT values close to the selection threshold effectively only constrain the background models. For this reason, signal and reference channel yields and efficiencies are measured relative to the signal reference selection discussed in Section 9, while the events in the final selection with lower BDT values are used to improve the background modelling.
Data-simulation comparisons
Figure caption: The points correspond to the sideband data, while the continuous-line histogram corresponds to the continuum MC distribution, normalised to the number of data events. The filled-area histogram shows the signal MC distribution for comparison. The bottom insets report the data/MC ratio, zoomed in to highlight discrepancies in the region most relevant for the analysis.
The distributions of the discriminating variables are also used to compare simulation and data in the B + → J/ψ K + and B 0 s → J/ψ φ samples.To perform these comparisons, for each variable the contribution of the background is subtracted from the B + → J/ψ K + (B 0 s → J/ψ φ) signal.For this purpose, a maximum-likelihood fit is performed to the invariant mass distribution, separately in bins of rapidity and transverse momentum.The fit model used is simpler than the one employed for the extraction of the B + signal for normalisation as described in Section 8, but is sufficient for the purpose discussed here.
Figure 4 shows examples of the distributions of the discriminating variables obtained from data and simulation for the reference samples.Observed differences are used to estimate systematic uncertainties with the procedure described in Section 9.The discrepancy visible for the isolation variable I 0.7 in the B + → J/ψ K + channel is the most significant among all variables and both reference channels.
B + → J/ψK + yield extraction
The reference channel yield is extracted with an unbinned extended maximum-likelihood fit to the J/ψ K+ invariant mass distribution. The functional forms used to model both the signal and the backgrounds are obtained from studies of MC samples. All the yields are extracted from the fit to data, while the shape parameters are determined from a simultaneous fit to data and MC samples. Free parameters are introduced for the mass scale and mass resolution to accommodate data-MC differences. The best-fit values indicate a negligibly poorer resolution and a mass shift at the level of 2 MeV.

Figure caption: The variable I0.7 is also shown in (d) for B0s → J/ψ φ events. The points correspond to the sideband-subtracted data, while the line corresponds to the MC distribution, normalised to the number of data events. The highest bin in (c) and (d) accounts for the events with I0.7 = 1. The bottom insets report the data/MC ratio, zoomed in to highlight discrepancies in the region most relevant for the analysis.
The fit includes four components: B+ → J/ψ K+ decays, Cabibbo-suppressed B+ → J/ψ π+ decays in the right tail of the main peak, partially reconstructed B decays (PRD) where one or more of the final-state particles are missing, and the non-resonant background composed mostly of bb → J/ψ X decays. All components other than the last one have shapes constrained by MC simulation as described below, with the data fit including an additional Gaussian convolution to account for possible data-MC discrepancies in mass scale and resolution. The shape of the B+ → J/ψ K+ mass distribution is parameterised using a Johnson S_U function [32,33]. The final B+ → J/ψ K+ yield includes the contribution from radiative effects (i.e. where photons are emitted from the B decay products). The B+ → J/ψ π+ decays are modelled by the sum of a Johnson S_U function and a Gaussian function, where all parameters except the normalisation are determined from the simulation. The decay modes contributing to the PRD are classified in simulation on the basis of their mass dependence. Each of the three resulting categories contributes to the overall PRD shape with combinations of Fermi-Dirac and exponential functions, contributing differently in the low-mass region. Their shape parameters are determined from simulation. Finally, the non-resonant background is modelled with an exponential function with the shape parameter extracted from the fit. The normalisation of each component is unconstrained in the fit, which is therefore mostly independent of external inputs for the branching fractions. The residual dependence of the PRD model shapes on the relative branching fractions of the contributing decays is considered as a source of systematic uncertainty. The resulting fit, shown in Figure 5, yields 334 351 B+ → J/ψ K+ decays with a statistical uncertainty of 0.3%. The ratio of the yields of B+ → J/ψ π+ and B+ → J/ψ K+ is (3.71 ± 0.09)% (where the uncertainty reported is statistical only), in agreement with the expectation from the world average [29] of (3.84 ± 0.16)%.

Figure caption: The various components of the spectrum are described in the text. The inset at the bottom of the plot shows the bin-by-bin pulls for the fit, where the pull is defined as the difference between the data point and the value obtained from the fit function, divided by the error from the fit.

Some systematic uncertainties are included by design in the fit. For example, the effect of the limited MC sample size is included by performing a simultaneous fit to data and MC samples. Scaling factors determined in the fit to data account for the differences in mass scale and resolution between data and simulation. Additional systematic uncertainties are evaluated by varying the default fit model described above. They take into account the kinematic differences between data and the MC samples used in the fit, differences in efficiency between B+ and B− decays, and uncertainties in the relative fractions and shapes of the PRD and in the shapes of the various fit components. The stability of this large-sample fit is verified by repeating the fit with different initial parameter values. In each case, the change relative to the default fit is recorded, symmetrised and used as an estimate of the systematic uncertainty. The main contributions to the systematic uncertainty come from the functional models of the background components, the composition of the PRD and the signal charge asymmetry. The total systematic uncertainty in the B+ yield amounts to 4.8%.

Evaluation of the B+ → J/ψ K+ to B0(s) → µ+µ− efficiency ratio

The ratio of efficiencies entering D_ref is determined from simulated events. The signal reference BDT selection, defined as BDT > 0.2455, has an efficiency of about 54% (51%) in the signal (reference) channel. The overall efficiency ratio R_ε is 0.1176 ± 0.0009 (stat.) ± 0.0047 (syst.), with uncertainties determined as described below.
The ratio R ε is computed using the mean lifetime of B 0 s [29,34] in the MC generator.The same efficiency ratios apply to the B 0 s → µ + µ − and B 0 → µ + µ − decays, within the MC statistical uncertainty of 0.8%.The statistical uncertainties in the efficiency ratios come from the finite number of events available for the simulated samples.The systematic uncertainty affecting R ε comes from five sources.
The first contribution is due to the uncertainties in the data-driven weights introduced in Section 2, and amounts to 0.8%.This term is assessed by creating alternative datasets using correction factors that are randomly sampled in accord with their nominal values and uncertainties.The RMS value of the distribution of R ε obtained from these datasets is taken as the systematic uncertainty.
A second contribution of 1.0% is related to the muon trigger and reconstruction efficiencies.The effect of the uncertainties in the data-driven efficiencies is evaluated using random sampling, as above.A 3.2% systematic uncertainty contribution arises from the differences between data and simulation observed in the modelling of the discriminating variables used in the BDT classifier (Table 1).For each of the 15 variables, the MC samples for B 0 (s) → µ + µ − and B + → J/ψ K + are reweighted with the ratio of the B + → J/ψ K + event distributions in sideband-subtracted data and the MC simulation.The isolation variable I 0.7 is computed using charged-particle tracks only, and differences between B + and B 0 s are expected and were observed in previous studies [26].Hence for this variable the reweighting procedure for the B 0 s → µ + µ − MC sample is based on B 0 s → J/ψ φ data.For all discriminating variables except I 0.7 , the value of the efficiency ratio is modified by less than 2% by the reweighting procedure and each variation is taken as an independent contribution to the systematic uncertainty in the efficiency ratio.For I 0.7 the reweighting procedure changes the efficiency ratio by about 6%.Because of the significant mis-modelling, the MC samples obtained after reweighting on the distribution of I 0.7 are taken as a reference, thus correcting the central value of the efficiency ratio.The 1% uncertainty in the I 0.7 correction is added to the sum in quadrature of the uncertainties assigned to the other discriminating variables.The total uncertainty in the modelling of the discriminating variables is the dominant contribution to the systematic uncertainty in R ε .
A fourth source of systematic uncertainty arises from differences between the B 0 s → µ + µ − and the B + → J/ψ K + channel related to the reconstruction efficiency of the kaon track and of the B + decay vertex.These uncertainties are mainly due to inaccuracy in the modelling of passive material in the ID.
The corresponding systematic uncertainty is estimated by varying the detector model in simulations, which results in changes between 0.4% and 1.5% depending on the η range considered.The largest value is used in the full eta range.
Finally, the uncertainty associated with reweighting the simulated events as a function of the pile-up multiplicity distribution contributes 0.6%. A correction to the efficiency ratio for B0s → µ+µ− is needed because of the width difference ∆Γ_s between the B0s eigenstates. According to the SM, the decay B0s → µ+µ− proceeds mainly through the heavy state B_s,H [1,16], whose width Γ_s,H = Γ_s − ∆Γ_s/2 is 6.6% smaller than the average Γ_s [29]. The variation in the value of the B0s → µ+µ− mean lifetime was tested with simulation and found to change the B0s efficiency, and consequently the B0s to B+ efficiency ratio, by +3.3%. This correction is applied to the central value of D_ref used in Section 11 for the determination of B(B0s → µ+µ−) (see footnote 4). Due to the small value of ∆Γ_d, no correction needs to be applied to the B0 → µ+µ− decay.
Extraction of the signal yield
Dimuon candidates passing the preliminary selection and the selections against hadron misidentification and continuum background are classified according to four intervals (with boundaries at 0.1439, 0.2455, 0.3312, 0.4163 and 1) in the BDT output.Repeating the Run 1 analysis approach, each interval is chosen to give an equal efficiency of 18% for signal MC events, and they are ordered according to increasing signal-to-background ratio.
An unbinned extended maximum-likelihood fit is performed on the dimuon invariant mass distribution simultaneously across the four BDT intervals. The first two bins contribute very little to the signal determination and are included for background modelling. They were verified with MC pseudo-experiments to have negligible relevance for the signal extraction. The result of the fit is the total yield of B0s → µ+µ− and B0 → µ+µ− events in the three most sensitive BDT intervals. The parameters describing the background are allowed to vary freely and are determined by the fit. The normalisations of the individual fit components, including the signals, are completely unconstrained and allowed to take negative values. The ratios of the signal yields in different BDT bins are constrained to equal the ratios of the signal efficiencies in those same bins, as discussed in Section 10.1, where the signal and background fit models are described. The systematic uncertainties due to variations in the relative signal and background efficiencies between BDT intervals, to the signal parameterisation and to the background model are discussed in Sections 10.1 and 10.2. Each is modelled in the likelihood as a multiplicative Gaussian distribution whose width is equal to the corresponding systematic uncertainty.

Footnote 4: The decay time distribution of B0s → µ+µ− is predicted to be different from the one of B_s,H in scenarios of new physics, with the effect related to the observable A^{µµ}_{∆Γ} [16,17]. The maximum possible deviation from the SM prediction of A^{µµ}_{∆Γ} = +1 is for A^{µµ}_{∆Γ} = −1, for which the decay time distribution of B0s → µ+µ− corresponds to the distribution of the B_s,L eigenstate. In the comparison with new-physics predictions, the value of B(B0s → µ+µ−) obtained from this analysis should be corrected by +3.6% or +7.8% respectively for A^{µµ}_{∆Γ} = 0 and −1.
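As an illustration of the kind of extended unbinned maximum-likelihood fit described here, the sketch below fits a single mass spectrum with one Gaussian signal and one exponential background on toy data. The signal mass, resolution and background shape are assumed values, and the real analysis instead uses double-Gaussian signals, several background components and a simultaneous fit over BDT intervals with constrained yield ratios.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

LO, HI = 4766.0, 5966.0   # dimuon mass fit range in MeV, as quoted in the text

def nll(params, m):
    """Extended unbinned negative log-likelihood for one category."""
    n_sig, n_bkg, bkg_scale = params
    if n_bkg < 0 or bkg_scale <= 0:
        return np.inf
    # Signal: Gaussian near the Bs mass (assumed mean/width), truncated to the fit range.
    sig = norm.pdf(m, 5366.0, 60.0) / (norm.cdf(HI, 5366.0, 60.0) - norm.cdf(LO, 5366.0, 60.0))
    # Background: falling exponential, truncated to the fit range.
    bkg = expon.pdf(m - LO, scale=bkg_scale) / expon.cdf(HI - LO, scale=bkg_scale)
    dens = n_sig * sig + n_bkg * bkg
    return (n_sig + n_bkg) - np.sum(np.log(np.clip(dens, 1e-300, None)))

# Toy dataset: 30 signal-like and ~500 background-like masses.
rng = np.random.default_rng(1)
m_obs = np.concatenate([rng.normal(5366.0, 60.0, 30), LO + rng.exponential(2000.0, 500)])
m_obs = m_obs[(m_obs > LO) & (m_obs < HI)]

res = minimize(nll, x0=[20.0, 400.0, 1500.0], args=(m_obs,), method="Nelder-Mead")
print("fitted (signal yield, background yield):", res.x[0], res.x[1])
```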
Signal and background model
The signal and background models are derived from simulations and from data collected in the mass sidebands of the search region.
The invariant mass distribution of the B 0 (s) → µ + µ − signal is described with two double-Gaussian distributions, centred respectively at the B 0 or B 0 s mass.The shape parameters are extracted from simulation, where they are found to be uncorrelated with the BDT output.Systematic uncertainties in the mass scale and resolutions are considered separately.Figure 6 shows the invariant mass distributions for B 0 and B 0 s , obtained from MC events and normalised to the SM expectations.Section 9 explains how systematic uncertainties affect the overall selection efficiency for signal candidates.The separation of the candidates according to BDT bins introduces an additional dependence on the relative efficiencies in each BDT bin, and systematic uncertainties in these relative efficiencies must be accounted for.Two different procedures are explored.First, the distribution of the BDT output is compared between MC simulation and background-subtracted data for the reference and control channels.The differences observed in the ratio of data to simulation are described with a linear dependence on the BDT output.The linear dependencies observed for B + → J/ψ K + and B 0 s → J/ψ φ are in turn used to reweight the BDT-output distribution in the B 0 (s) → µ + µ − MC sample.The maximum corresponding absolute variations in the efficiencies are equal to +1.7% and −2.3% respectively in the second and fourth BDT intervals, with the third interval basically unaffected.A second assessment of the systematic uncertainties in the relative efficiency of the BDT intervals is obtained with a procedure similar to the one used for the event selection (Section 9).For each discriminating variable, the MC sample is reweighted according to the difference between simulation and data observed in the reference channels.The variation in the efficiency of each BDT interval is taken as the contribution to the systematic uncertainty due to mis-modelling of that variable.The sum in quadrature of the variations due to all discriminating variables is found to be similar in the B + → J/ψ K + and B 0 s → J/ψ φ channels.Absolute variations of ±1.0%, ±2.4% and ±4.4% are found in the second, third and fourth BDT intervals respectively.The first of these procedures is used as a baseline for inclusion of Gaussian terms in the signal extraction likelihood to account for the uncertainty in the relative signal efficiency in the three most sensitive BDT bins.Care is taken in constraining the sum of the efficiencies of the three intervals sensitive to the signal, since that absolute efficiency and the corresponding uncertainty is parameterised with the R ε term.
Figure 7 shows the distribution of the BDT output from data and simulation for the reference channels, after reweighting the MC sample.The MC distribution for B 0 (s) → µ + µ − events is also shown, illustrating how the linear deviation obtained from the reference channels affects the simulated signal BDT output.When studying these effects, the linear fits to the ratios in Figures 7(a) and 7(b) are performed in the range corresponding to the three BDT bins with the highest signal-to-background ratio, since the remaining bin is insensitive to the signal contribution.
The background in the signal fit is composed of the types of events described in Section 4: (a) the continuum background; (b) the background from partially reconstructed b → µ + µ − X events, which is present mainly in the low mass sideband; (c) the peaking background.
The non-peaking contributions have a common mass shape model, with parameters constrained across the BDT bins in the fit as described below, and independent yields across BDT bins and components.
Both in simulation and sideband data, the continuum background has a small linear dependence on the dimuon invariant mass.In the simulation, the slope parameter has a roughly linear dependence versus BDT interval; the mass sidebands in data confirm this trend, albeit with large statistical uncertainty.This dependence is included in the fit model.The small systematic uncertainties due to deviations from this assumption are discussed below in Section 10.2.
The b → µ + µ − X background has a dimuon invariant mass distribution that falls monotonically with increasing dimuon mass. The mass dependence is derived from data in the low-mass sideband and described with an exponential function with the same shape in each BDT interval. The value of the shape parameter is extracted from the fit to data.
The invariant mass distribution of the peaking background is very similar to that of the B 0 signal, as shown in Figure 1(b). The description of this component is obtained from MC simulation, which indicates that the shape and normalisation are the same for all BDT bins. In the fit, this contribution is included with a fixed mass shape and a normalisation of 2.9 ± 2.0 events, as discussed in Section 5. This contribution is distributed equally among the three highest intervals of the BDT output.
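A minimal sketch of the per-BDT-bin background mass model described above is given below. It is illustrative only, not the fit implementation used in the analysis: the parameter names and the `peaking_pdf` callable are placeholders, and it simply combines a linear continuum term, an exponential b → µ + µ − X term whose shape is shared across bins, and a fixed peaking component.

```python
import numpy as np

def background_model(m, n_cont, slope, n_semi, k, n_peak, peaking_pdf,
                     m_lo=4766.0, m_hi=5966.0):
    """Expected background density at dimuon mass m (MeV) for one BDT bin."""
    width = m_hi - m_lo
    # Linear continuum, normalised on [m_lo, m_hi] (assumed non-negative there).
    cont = (1.0 + slope * (m - 0.5 * (m_lo + m_hi))) / width
    # Falling exponential for partially reconstructed b -> mu mu X decays,
    # truncated and normalised on the fit range.
    semi = k * np.exp(-k * (m - m_lo)) / (1.0 - np.exp(-k * width))
    # Fixed peaking component (shape supplied externally, e.g. from MC).
    return n_cont * cont + n_semi * semi + n_peak * peaking_pdf(m)
```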
The fitting procedure is tested with MC pseudo-experiments, as discussed below.
Systematic uncertainties in the fit
Studies based on MC pseudo-experiments are used to assess the sensitivity of the fit to the input assumptions. Variations in the description of the signal and background components are used in the generation of these samples. The corresponding changes in the average numbers, N s and N d , of B 0 s and B 0 events determined by the fit, run in the nominal configuration, are taken as systematic uncertainties. The size of the variations used in the generation of the MC pseudo-experiments is determined in some cases by known characteristics of the ATLAS detector (reconstructed momentum scale and momentum resolution), in others using MC evaluation (background due to semileptonic three-body B 0 (s) decays and to B c → J/ψ µ + ), and in others from uncertainties determined from data in the sidebands or from simulation (shapes of the background components and their variation across the BDT intervals).
The MC pseudo-experiments were generated with the normalisation of the continuum and b → µµX components obtained from the fit to the data in the sidebands of the invariant mass distribution, and with the peaking background taken from the expectation discussed in Section 5. The signal was generated with different configurations, roughly covering the range between zero and twice the expected SM yield.
For all variations of the assumptions and all configurations of the signal amplitudes, the distributions of the differences between fit results and generated values are used to evaluate systematic uncertainties. In addition, distributions obtained from MC pseudo-experiments generated and fitted according to the nominal fit model are used to study systematic biases deriving from the fit procedure. For both signal yields, the bias is smaller than 15% of the fit error for true values of the B 0 s → µ + µ − branching fraction above 5 × 10 −10 .
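The toy study below sketches the idea of such a bias test under heavily simplified assumptions (a single-Gaussian signal on a flat background and a one-parameter scan in place of the full fit); it is illustrative only and none of the numbers correspond to the analysis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def generate(n_sig, n_bkg, mu=5367.0, sigma=60.0, lo=4766.0, hi=5966.0):
    """One pseudo-experiment: Gaussian signal plus flat background."""
    sig = rng.normal(mu, sigma, rng.poisson(n_sig))
    bkg = rng.uniform(lo, hi, rng.poisson(n_bkg))
    return np.concatenate([sig, bkg])

def fit_yield(masses, mu=5367.0, sigma=60.0, lo=4766.0, hi=5966.0):
    """Scan the signal yield and minimise the extended negative log-likelihood,
    profiling the background yield as (observed events - signal yield)."""
    best, best_nll = 0.0, np.inf
    for n_sig in np.linspace(0.0, 200.0, 401):
        n_bkg = max(len(masses) - n_sig, 1e-3)
        dens = n_sig * norm.pdf(masses, mu, sigma) + n_bkg / (hi - lo)
        nll = (n_sig + n_bkg) - np.sum(np.log(dens))
        if nll < best_nll:
            best, best_nll = n_sig, nll
    return best

fitted = [fit_yield(generate(80.0, 400.0)) for _ in range(200)]
print("mean fitted yield:", np.mean(fitted), " injected: 80")
```

Comparing the mean fitted yield with the injected value, in units of the fit error, gives the kind of bias estimate quoted in the text.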
The shifts in N s or N d are combined by considering separately the sums in quadrature of the positive and negative shifts and taking the larger as the symmetric systematic uncertainty. The total systematic uncertainty is found to increase with the assumed size of the signal, with a dependence σ syst (N s ) = 3 + 0.05 N s and σ syst (N d ) = 2.9 + 0.05 N s + 0.05 N d . Most of the observed shifts have opposite signs for N s and N d , resulting in a correlation coefficient between the systematic uncertainties of ρ syst = −0.83.
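As a worked example (illustrative only), for a signal at roughly the SM expectation quoted in the next section (N s ≈ 91 and N d ≈ 10), these parameterisations give σ syst (N s ) ≈ 3 + 0.05 × 91 ≈ 7.6 events and σ syst (N d ) ≈ 2.9 + 0.05 × 91 + 0.05 × 10 ≈ 8.0 events.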
The systematic uncertainties discussed in this section are included in the fit to the µ + µ − candidates in data. The fit for the yields of B 0 s and B 0 events is modified by including in the likelihood two smearing parameters for N s and N d , constrained by a two-dimensional Gaussian distribution parameterised by σ syst (N s ), σ syst (N d ) and ρ syst .
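For reference, such a correlated constraint has the standard bivariate-Gaussian form; a minimal sketch (not the analysis code) of its contribution to −2 ln L in terms of the two smearing parameters is:

```python
def constraint_2d(delta_ns, delta_nd, sig_ns, sig_nd, rho):
    """-2 ln L contribution of a correlated bivariate Gaussian constraint on
    the two smearing (nuisance) parameters, up to an additive constant."""
    z1, z2 = delta_ns / sig_ns, delta_nd / sig_nd
    return (z1 * z1 - 2.0 * rho * z1 * z2 + z2 * z2) / (1.0 - rho * rho)
```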
Results of the signal yield extraction
The numbers of background events contained in the signal region (5166-5526 MeV) are computed from the interpolation of the data observed in the sidebands. This procedure yields 2685 ± 37, 330 ± 14, 51 ± 6 and 7.9 ± 2.6 events in the four intervals of BDT output, respectively. For comparison, the total expected numbers of signal events according to the SM prediction are 91 and 10 for N s and N d respectively, equally distributed among the three intervals with the highest signal-to-background ratio.
In those three BDT intervals, in the unblinded signal region, a total of 1951 events in the full mass range of 4766-5966 MeV are used in the likelihood fit to signal and background. Without applying any bounds on the values of the fitted parameters, the values determined by the fit are N s = 80 ± 22 and N d = −12 ± 20, where the uncertainties correspond to likelihood variations satisfying −2∆ln(L) = 1. The likelihood includes the systematic uncertainties discussed above, but statistical uncertainties largely dominate. The result is consistent with the expectation from simulation. The uncertainties in the result of the fit are discussed in Section 11, where the measured values of the branching fractions are presented.
Figure 8 shows the dimuon invariant mass distributions in the four BDT intervals, together with the projections of the likelihood. A modified Kolmogorov-Smirnov (KS) test [35] is used to estimate the fit quality: the p-value is estimated by comparing the maximum of the KS distance across the four histograms of Figure 8 with the distribution of the same quantity in pseudo-experiments generated with the shape resulting from the fit to data. This procedure yields a compatibility probability of 84%.
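A schematic version of this goodness-of-fit test is sketched below (not the ATLAS implementation; `data_masses`, `fitted_cdf` and `sample_from_fit` are hypothetical placeholders): the test statistic is the largest KS distance over the four BDT intervals, and its p-value is estimated from pseudo-experiments drawn from the fitted model.

```python
import numpy as np

def ks_distance(sample, cdf):
    """Kolmogorov-Smirnov distance between a sample and a model CDF."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    c = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - c),
               np.max(c - np.arange(0, n) / n))

def max_ks(masses_per_bin, cdf_per_bin):
    """Largest KS distance over the four BDT intervals."""
    return max(ks_distance(m, f) for m, f in zip(masses_per_bin, cdf_per_bin))

def ks_p_value(data_masses, fitted_cdf, sample_from_fit, n_toys=1000):
    """Fraction of pseudo-experiments with a larger maximum KS distance."""
    observed = max_ks(data_masses, fitted_cdf)
    toys = [max_ks(sample_from_fit(), fitted_cdf) for _ in range(n_toys)]
    return float(np.mean(np.array(toys) >= observed))
```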
Branching fraction extraction
The branching fractions for the decays B 0 s → µ + µ − and B 0 → µ + µ − are extracted from data using a maximum-likelihood fit. The likelihood is obtained from the one used for N s and N d by replacing the fit parameters with the corresponding branching fractions divided by the normalisation terms in Eq. (1), and by including Gaussian multiplicative factors for the normalisation uncertainties. All results are obtained by profiling the fit likelihood with respect to all parameters other than the branching fraction(s) of interest.
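Eq. (1) itself is not reproduced in this excerpt. For orientation only, and taking R ε to be the ratio of reference-channel to signal-channel efficiency so as to be consistent with the relation D ref = N J/ψK + /R ε quoted later in the text, the normalisation has the generic single-event-sensitivity form sketched below; the exact form and the numerical inputs are those of Eq. (1) in the full paper.

```latex
\mathcal{B}\!\left(B^{0}_{(s)}\to\mu^{+}\mu^{-}\right)
  = \mathcal{B}\!\left(B^{+}\to J/\psi\,K^{+}\right)\,
    \mathcal{B}\!\left(J/\psi\to\mu^{+}\mu^{-}\right)\,
    \frac{f_{u}}{f_{d(s)}}\,
    \frac{N_{d(s)}}{D_{\mathrm{ref}}},
\qquad
D_{\mathrm{ref}} = \frac{N_{J/\psi K^{+}}}{R_{\varepsilon}},
\qquad
R_{\varepsilon} = \frac{\varepsilon_{J/\psi K^{+}}}{\varepsilon_{\mu^{+}\mu^{-}}},
```

where N d(s) is the fitted signal yield and f u /f d(s) the ratio of hadronisation probabilities.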
The normalisation terms include external inputs for the B + branching fraction and the relative hadronisation probability. The branching fraction is obtained from world averages [29] as the product of B(B + → J/ψ K + ) and B(J/ψ → µ + µ − ).

Table 3: Breakdown of the expected systematic uncertainties in B(B 0 (s) → µ + µ − ). The measurements are dominated by statistical uncertainty, followed by the systematic uncertainty from the fit. The latter is dominated by contributions from the mass scale uncertainty and the parameterisation of the b → µ + µ − X background. The statistical uncertainties reported here are obtained from the maximisation of the fit likelihood and are meant only as a reference for the relative scale uncertainties.
Table 3 gives a breakdown of the estimated contributions of systematic and statistical uncertainties. The results are dominated by statistical uncertainties, with the most prominent source of systematic uncertainty coming from the fit, where the largest contributors are the mass scale and the b → µ + µ − X background parameterisation.
Given the statistical regime of the analysis, the likelihood contours of Figure 9(a) cannot be immediately translated into contours with the conventional coverage of one, two and three Gaussian standard deviations. Moreover, the contours extend into regions of negative branching fractions, which are unphysical. In order to address these points, a Neyman construction [36] is employed to obtain the 68.3%, 95.5% and 99.7% confidence intervals in the B(B 0 s → µ + µ − ) - B(B 0 → µ + µ − ) plane. This construction yields the contours shown in Figure 9. The branching fractions obtained from the unconstrained likelihood maximum are in all cases the inputs to the Neyman construction, which, by design, results in physically allowed values for the resulting branching fractions.
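The following is a deliberately simplified, one-dimensional illustration of a Neyman construction (not the ATLAS procedure, which is built on the full fit likelihood): for each assumed true branching fraction in the physical region, a central acceptance band of a Gaussian estimator is computed, and the bands are inverted at the observed value. The Gaussian resolution and all numbers are placeholders.

```python
import numpy as np
from scipy.stats import norm

def acceptance_band(true_bf, sigma, cl=0.683):
    """Central interval of the estimator for a given true branching fraction."""
    alpha = 0.5 * (1.0 - cl)
    return (norm.ppf(alpha, loc=true_bf, scale=sigma),
            norm.ppf(1.0 - alpha, loc=true_bf, scale=sigma))

def neyman_interval(observed, sigma, cl=0.683, bf_max=2.0e-8, n_scan=4000):
    """Invert the acceptance bands: keep the true values whose band contains
    the observed (possibly negative, i.e. unphysical) estimate."""
    accepted = [bf for bf in np.linspace(0.0, bf_max, n_scan)
                if acceptance_band(bf, sigma, cl)[0] <= observed
                <= acceptance_band(bf, sigma, cl)[1]]
    return (min(accepted), max(accepted)) if accepted else (0.0, 0.0)

# Example: an unconstrained estimate below zero still maps onto a physical interval.
print(neyman_interval(observed=-0.4e-9, sigma=1.0e-9))
```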
Combination with the Run 1 result
The likelihood function from the current result is combined with the likelihood function from the Run 1 result [15]. The only parameters common to the combination are the fitted B(B 0 (s) → µ + µ − ) and the combination of external inputs, F ext . Except for F ext , all nuisance parameters are treated as uncorrelated between the two likelihoods, with each likelihood including its individual parameterisation of systematic uncertainty effects. A negligible change in the results, corresponding to shifts in central values and uncertainties between 1% and 4%, is found when all sources of systematic uncertainty are assumed to be fully correlated.
The maximum of the combined likelihood is unconstrained and allowed to access the unphysical (negative) region. Applying the one-dimensional Neyman construction described in Section 11 to this combined likelihood, the 68.3% confidence interval obtained for B(B 0 s → µ + µ − ) is 2.8 +0.8 −0.7 × 10 −9 . The upper limit at 95% CL on B(B 0 → µ + µ − ) is determined with the same Neyman procedure, yielding B(B 0 → µ + µ − ) < 2.1 × 10 −10 . Using the predicted SM branching fractions from Section 1, the analysis is expected to yield on average a measurement of 3.6 +0.9 −0.8 × 10 −9 for B(B 0 s → µ + µ − ) and an upper limit of 5.6 × 10 −10 for B(B 0 → µ + µ − ).
The Run 1 and Run 2 results are found to be 1.2 standard deviations apart. Using both runs, the combined significance of the B 0 s → µ + µ − signal is estimated to be 4.6 standard deviations, and the combined branching fraction measurements differ by 2.4 standard deviations from the SM values in the B(B 0 → µ + µ − )-B(B 0 s → µ + µ − ) plane. These significances are assessed purely from the evaluation of likelihood ratios.
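The text does not spell out the test statistic; in the standard asymptotic treatment of such likelihood-ratio tests, the signal significance follows from Z = sqrt( −2 ln [ L(B(B 0 s → µ + µ − ) = 0) / L(B̂) ] ), where B̂ denotes the unconstrained maximum. Under this approximation, the quoted 4.6 standard deviations corresponds to a value of the log-likelihood-ratio statistic of about 4.6² ≈ 21.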
Conclusions
A study of the rare decays of B 0 s and B 0 mesons into oppositely charged muon pairs is presented, based on 36.2 fb −1 of 13 TeV LHC proton-proton collision data collected by the ATLAS experiment in 2015 and 2016.
For the B 0 s meson the branching fraction is determined to be B(B 0 s → µ + µ − ) = 3.2 +1.1 −1.0 × 10 −9 , where the uncertainty includes both the statistical and systematic contributions. The result is consistent with the analysis expectation of 3.6 +1.1 −1.0 × 10 −9 under the SM hypothesis. For the B 0 meson an upper limit B(B 0 → µ + µ − ) < 4.3 × 10 −10 is placed at the 95% confidence level, with an expected upper bound of 7.1 × 10 −10 under the SM hypothesis. The limit is compatible with the SM prediction.
The result presented in this paper is combined with the ATLAS result from the full Run 1 dataset to obtain B(B 0 s → µ + µ − ) = 2.8 +0.8 −0.7 × 10 −9 and B(B 0 → µ + µ − ) < 2.1 × 10 −10 . All the results presented are compatible with the branching fractions predicted by the SM as well as with currently available experimental results.
Figure 1: (a) Dimuon invariant mass distribution for the partially reconstructed background (as categorised in Section 4), from simulation, before the final selection against continuum is applied but after all other requirements. The different components are shown as stacked histograms, normalised according to world-averaged measured branching fractions. The SM expectations for the B 0 (s) → µ + µ − signals are also shown for comparison. Continuum background is not included here. (b) Invariant mass distribution of the B 0 (s) → hh peaking background components after the complete signal selection is applied. The B 0 s → π + π − and B 0 → K + K − contributions are negligible on this scale. In both plots the vertical dashed lines indicate the blinded analysis region. Distributions are normalised to the expected yield for the integrated luminosity of 26.3 fb −1 .
... the B candidate transverse momentum − → p T B .
χ 2 PV,DV xy : compatibility of the separation − → ∆x between production (i.e. associated PV) and decay (DV) vertices in the transverse projection.
P min L : the smaller of the projected values of the muon momenta along − → p T B .
I 0.7 : isolation variable defined as the ratio of | − → p T B | to the sum of | − → p T B | and the transverse momenta of all additional tracks contained within a cone of size ∆R = √((∆φ) 2 + (∆η) 2 ) = 0.7 around the B direction. Only tracks matched to the same PV as the B candidate are included in the sum.
DOCA xtrk : DOCA of the closest additional track to the decay vertex of the B candidate. Only tracks matched to the same PV as the B candidate are considered.
N close xtrk : number of additional tracks compatible with the decay vertex (DV) of the B candidate with ln(χ 2 xtrk,DV ) < 1. Only tracks matched to the same PV as the B candidate are considered.
χ 2 µ,xPV : minimum χ 2 for the compatibility of a muon in the B candidate with any PV reconstructed in the event.
There are significant linear correlations among the variables χ 2 PV,DV xy , L xy , |d 0 | max -sig., |d 0 | min -sig. and χ 2 µ,xPV . The variables IP 3D B , DOCA µµ and I 0.7 have negligible correlation with any of the others used in the classifier.
Figure 2: BDT output distribution for the signal and background events after the preliminary selection and before applying any reweighting to the BDT input variables: (a) simulation distributions for B 0 s → µ + µ − signal, continuum, partially reconstructed b → µ + µ − X events and B c decays; (b) dimuon sideband candidates (which also include prompt contributions, mainly at lower BDT values and not simulated in the continuum MC sample), compared with the continuum MC sample and the simulated signal. All distributions are normalised to unity in (a) and to the data sidebands in (b).
Figure 3 compares the distributions of two discriminating variables in the continuum background MC sample with data in the dimuon mass sidebands. Agreement with the sideband data is fair.
Figure 4: Data and MC distributions in B + → J/ψ K + events for the discriminating variables: (a) |α 2D |, (b) ln χ 2 PV,DV xy and (c) I 0.7 . The variable I 0.7 is also shown in (d) for B 0 s → J/ψ φ events. The points correspond to the sideband-subtracted data, while the line corresponds to the MC distribution, normalised to the number of data events. The highest bin in (c) and (d) accounts for the events with I 0.7 = 1. The bottom insets report the data/MC ratio, zoomed in to highlight discrepancies in the region that is most relevant for the analysis.
Figure 5: Result of the fit to the J/ψ K + invariant mass distribution for all B + candidates in half of the data events. The various components of the spectrum are described in the text. The inset at the bottom of the plot shows the bin-by-bin pulls for the fit, where the pull is defined as the difference between the data point and the value obtained from the fit function, divided by the error from the fit.
... enters the D ref term defined in Section 1: D ref = N J/ψK + /R ε . Both channels are measured in the fiducial acceptance for the B meson, defined as p B T > 8.0 GeV and |η B | < 2.5. Correspondingly, ε(B + → J/ψ K + ) and ε(B 0 (s) → µ + µ − ) are measured within the B-meson fiducial acceptance and include the acceptance of the additional final-state particles as well as trigger, reconstruction and selection efficiencies. The final-state particle acceptance is defined by the selection placed on the particles in the final state: |η µ | < 2.5 and p µ T > 6.0 (4.0) GeV for the leading (trailing) muon, p K T > 1.0 GeV and |η K | < 2.5 for kaons.
Figure 6: Dimuon invariant mass distribution for the B 0 s and B 0 signals from simulation. The results of the double-Gaussian fits are overlaid. The two distributions are normalised to the SM prediction for the expected yield with an integrated luminosity of 26.3 fb −1 .
Figure 7: BDT value distributions in data and MC simulation for (a) B + → J/ψ K + and (b) B 0 s → J/ψ φ. The MC samples are normalised to the number of data events passing the signal reference BDT selection (Section 6). Figure (c) illustrates the BDT output for the B 0 s → µ + µ − signal, with the dashed histogram illustrating the effect of the linear reweighting of the BDT output discussed in the text. The vertical dashed lines correspond to the boundaries of the BDT intervals used in the B 0 (s) → µ + µ − signal fit.
Figure 8: Dimuon invariant mass distributions in the unblinded data, in the four intervals of BDT output. Superimposed is the result of the maximum-likelihood fit. The total fit is shown as a continuous line, with the dashed lines corresponding to the observed signal component, the b → µµX background, and the continuum background. The signal components are grouped in one single curve, including both the B 0 s → µ + µ − and the (negative) B 0 → µ + µ − component. The curve representing the peaking B 0 (s) → hh background lies very close to the horizontal axis in all BDT bins.
Figure 10 shows the likelihood contours for the combined Run 1 and Run 2 result for B(B 0 s → µ + µ − ) and B(B 0 → µ + µ − ), for values of −2∆ln(L) equal to 2.3, 6.2 and 11.8, relative to the maximum of the likelihood. The contours for the result from the 2015-2016 Run 2 data are overlaid for comparison.
Table 2 summarises these systematic uncertainties.

Table 2: Summary of the uncertainties in R ε .
The efficiency ratio enters in Eq. (1) with the D ref term defined in Section 1, multiplied by the number of observed B ± candidates.The total uncertainty in D ref is ±6.3%.
[12] LHCb Collaboration, Measurement of the B 0 s → µ + µ − Branching Fraction and Search for B 0 → µ + µ − Decays at the LHCb Experiment, Phys. Rev. Lett. 111 (2013) 101805, arXiv: 1307.5024 [hep-ex].
[13] CMS and LHCb Collaborations, Observation of the rare B 0 s → µ + µ − decay from the combined analysis of CMS and LHCb data, Nature 522 (2015) 68, arXiv: 1411.4413 [hep-ex].
[14] LHCb Collaboration, Measurement of the B 0 s → µ + µ − Branching Fraction and Effective Lifetime and Search for B 0 → µ + µ − Decays, Phys. Rev. Lett. 118 (2017) 191801, arXiv: 1703.05747 [hep-ex].
[15] ATLAS Collaboration, Study of the rare decays of B 0 s and B 0 into muon pairs from data collected during the LHC Run 1 with the ATLAS detector, Eur. Phys. J. C 76 (2016) 513, arXiv: 1604.04263 [hep-ex].